public class PigInputFormatSpark extends PigInputFormat
Nested classes/interfaces inherited from class PigInputFormat: PigInputFormat.RecordReaderFactory

Fields inherited from class PigInputFormat: log, PIG_INPUT_LIMITS, PIG_INPUT_SIGNATURES, PIG_INPUT_TARGETS, PIG_LOADS

| Constructor and Description |
|---|
| PigInputFormatSpark() |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.Text,Tuple> | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
| java.util.List<org.apache.hadoop.mapreduce.InputSplit> | getSplits(org.apache.hadoop.mapreduce.JobContext jobcontext): This is where we have to wrap PigSplits into SparkPigSplits |
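For orientation, the sketch below shows how the two public methods fit together when the input format is driven directly through the Hadoop mapreduce API: getSplits(JobContext) produces the splits and createRecordReader(InputSplit, TaskAttemptContext) yields a RecordReader<Text,Tuple> over one of them. This is only an illustrative sketch, not how the Spark backend actually wires the class in; it assumes the Configuration already carries the Pig-specific input settings (input targets, signatures, and so on) that PigInputFormat requires, and it leaves out the import of PigInputFormatSpark itself.

```java
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.TaskAttemptID;
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;
import org.apache.pig.data.Tuple;
// plus the import for PigInputFormatSpark from Pig's Spark backend package

public class PigInputFormatSparkUsageSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: conf already contains the serialized Pig input
        // information (PIG_INPUT_TARGETS, PIG_INPUT_SIGNATURES, ...) that
        // PigInputFormat reads; a bare Configuration would fail here.
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        PigInputFormatSpark inputFormat = new PigInputFormatSpark();

        // Splits come back already wrapped for Spark (see getSplits below).
        List<InputSplit> splits = inputFormat.getSplits(job);

        // Read one split as (Text, Tuple) records.
        TaskAttemptContext context =
                new TaskAttemptContextImpl(conf, new TaskAttemptID());
        RecordReader<Text, Tuple> reader =
                inputFormat.createRecordReader(splits.get(0), context);
        reader.initialize(splits.get(0), context);
        while (reader.nextKeyValue()) {
            Tuple tuple = reader.getCurrentValue();
            System.out.println(tuple);
        }
        reader.close();
    }
}
```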
Methods inherited from class PigInputFormat: getPigSplits

createRecordReader

public org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.Text,Tuple> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws java.io.IOException, java.lang.InterruptedException

Overrides:
createRecordReader in class PigInputFormat
Throws:
java.io.IOException
java.lang.InterruptedException

getSplits

public java.util.List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext jobcontext) throws java.io.IOException, java.lang.InterruptedException

This is where we have to wrap PigSplits into SparkPigSplits.
Overrides:
getSplits in class PigInputFormat
Parameters:
jobcontext -
Throws:
java.io.IOException
java.lang.InterruptedException
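The note on getSplits ("wrap PigSplits into SparkPigSplits") describes a delegate-and-wrap pattern: compute the splits via the parent PigInputFormat, then wrap each one so Spark can serialize it and ship it to executors. The sketch below illustrates that pattern only; WrappingInputFormatSketch and SerializableSplitWrapper are hypothetical names, not Pig's actual SparkPigSplit classes, and the real wrappers also take care of (de)serializing the wrapped Writable split, which this sketch glosses over.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat;

// Hypothetical sketch of the delegate-and-wrap pattern described above.
public class WrappingInputFormatSketch extends PigInputFormat {

    @Override
    public List<InputSplit> getSplits(JobContext jobcontext)
            throws IOException, InterruptedException {
        // Let the parent PigInputFormat compute the PigSplits ...
        List<InputSplit> pigSplits = super.getSplits(jobcontext);

        // ... then wrap each one so the Spark runtime can serialize it.
        List<InputSplit> wrapped = new ArrayList<>(pigSplits.size());
        for (InputSplit split : pigSplits) {
            wrapped.add(new SerializableSplitWrapper(split));
        }
        return wrapped;
    }

    /** Hypothetical Spark-serializable holder around a Hadoop InputSplit. */
    public static class SerializableSplitWrapper extends InputSplit
            implements java.io.Serializable {

        // Real code would need custom (de)serialization here, because the
        // wrapped split is a Hadoop Writable rather than java.io.Serializable.
        private final InputSplit wrappedSplit;

        public SerializableSplitWrapper(InputSplit wrappedSplit) {
            this.wrappedSplit = wrappedSplit;
        }

        public InputSplit getWrappedSplit() {
            return wrappedSplit;
        }

        @Override
        public long getLength() throws IOException, InterruptedException {
            return wrappedSplit.getLength();
        }

        @Override
        public String[] getLocations() throws IOException, InterruptedException {
            return wrappedSplit.getLocations();
        }
    }
}
```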