Modifier and Type | Method and Description |
---|---|
protected PigStats | PigServer.launchPlan(LogicalPlan lp, String jobName) A common method for launching the jobs according to the logical plan. |
Constructor and Description |
---|
PigServer(ExecType execType) |
PigServer(ExecType execType, org.apache.hadoop.conf.Configuration conf) |
PigServer(ExecType execType, Properties properties) |
PigServer(PigContext context) |
PigServer(PigContext context, boolean connect) |
PigServer(Properties properties) |
PigServer(String execTypeString) |
PigServer(String execTypeString, Properties properties) |
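
For orientation (not part of the original Javadoc): a minimal sketch of a PigServer session built on the constructors above. Local mode and the file name input.txt are illustrative assumptions.

```java
import java.util.Iterator;
import java.util.Properties;

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.data.Tuple;

public class PigServerExample {
    public static void main(String[] args) throws Exception {
        // Local mode keeps the sketch self-contained; a cluster run would
        // typically use ExecType.MAPREDUCE plus cluster Properties.
        PigServer pigServer = new PigServer(ExecType.LOCAL, new Properties());

        // Register queries; 'input.txt' is a hypothetical input file.
        pigServer.registerQuery("A = LOAD 'input.txt' AS (name:chararray, age:int);");
        pigServer.registerQuery("B = FILTER A BY age > 18;");

        // Pull the results of alias B back to the client.
        Iterator<Tuple> it = pigServer.openIterator("B");
        while (it.hasNext()) {
            System.out.println(it.next());
        }
        pigServer.shutdown();
    }
}
```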
Modifier and Type | Method and Description |
---|---|
String | ExecJob.getAlias() Returns the alias of the relation generated by this job. |
void | ExecJob.getLogs(OutputStream log) Collects various forms of output. |
Iterator<Tuple> | ExecJob.getResults() If the query has executed successfully, retrieves the results by iterating over them. |
void | ExecJob.getSTDError(OutputStream error) |
void | ExecJob.getSTDOut(OutputStream out) |
boolean | ExecJob.hasCompleted() Returns true if the physical plan has executed successfully and results are ready to be retrieved. |
void | ExecutionEngine.init() This method is responsible for the initialization of the ExecutionEngine. |
void | ExecJob.kill() Kills the current job. |
PigStats | ExecutionEngine.launchPig(LogicalPlan lp, String grpName, PigContext pc) This method is responsible for the actual execution of a LogicalPlan. |
void | ExecutionEngine.setConfiguration(Properties newConfiguration) Responsible for updating the properties for the ExecutionEngine. |
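
A hedged sketch of driving ExecJob through PigServer, whose store method returns an ExecJob; the alias and output path are illustrative:

```java
import java.util.Iterator;

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;
import org.apache.pig.backend.executionengine.ExecJob;
import org.apache.pig.data.Tuple;

public class ExecJobExample {
    public static void main(String[] args) throws Exception {
        PigServer pigServer = new PigServer(ExecType.LOCAL);
        pigServer.registerQuery("A = LOAD 'input.txt' AS (name:chararray);");

        // store() launches execution and hands back an ExecJob.
        ExecJob job = pigServer.store("A", "output_dir");

        if (job.hasCompleted()) {
            // Results are ready to be retrieved, per hasCompleted() above.
            Iterator<Tuple> results = job.getResults();
            while (results.hasNext()) {
                System.out.println(results.next());
            }
        } else {
            job.kill();  // abandon a job that did not complete
        }
    }
}
```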
Modifier and Type | Method and Description |
---|---|
static byte | HDataType.findTypeFromClassName(String className) |
static byte | HDataType.findTypeFromNullableWritable(PigNullableWritable o) |
static PigNullableWritable | HDataType.getNewWritableComparable(byte keyType) |
static PigNullableWritable | HDataType.getWritableComparable(String className) |
static PigNullableWritable | HDataType.getWritableComparableTypes(byte type) |
static PigNullableWritable | HDataType.getWritableComparableTypes(Object o, byte keyType) |
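
A hedged sketch of HDataType.getWritableComparableTypes, which wraps a Pig value in the PigNullableWritable form used for shuffle keys (the Integer value is illustrative):

```java
import org.apache.pig.backend.hadoop.HDataType;
import org.apache.pig.data.DataType;
import org.apache.pig.impl.io.PigNullableWritable;

public class HDataTypeExample {
    public static void main(String[] args) throws Exception {
        // Wrap a Java Integer as the writable used for an int-typed key.
        PigNullableWritable key =
                HDataType.getWritableComparableTypes(Integer.valueOf(42), DataType.INTEGER);
        System.out.println(key.getValueAsPigType());  // 42
    }
}
```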
Modifier and Type | Method and Description |
---|---|
protected Collection<org.apache.accumulo.core.data.Mutation> | AccumuloStorage.getMutations(Tuple tuple) |
protected abstract Collection<org.apache.accumulo.core.data.Mutation> | AbstractAccumuloStorage.getMutations(Tuple tuple) |
void | AbstractAccumuloStorage.putNext(Tuple tuple) |
Modifier and Type | Method and Description |
---|---|
String | HJob.getAlias() |
org.apache.hadoop.mapred.JobConf | HExecutionEngine.getExecConf(Properties properties) |
void | HJob.getLogs(OutputStream log) |
Iterator<Tuple> | HJob.getResults() |
org.apache.hadoop.mapred.JobConf | HExecutionEngine.getS3Conf() |
void | HJob.getSTDError(OutputStream error) |
void | HJob.getSTDOut(OutputStream out) |
boolean | HJob.hasCompleted() |
void | HExecutionEngine.init() |
void | HJob.kill() |
PigStats | HExecutionEngine.launchPig(LogicalPlan lp, String grpName, PigContext pc) |
void | HExecutionEngine.setConfiguration(Properties newConfiguration) |
Modifier and Type | Method and Description |
---|---|
PigStats | MapReduceLauncher.launchPig(PhysicalPlan php, String grpName, PigContext pc) |
Constructor and Description |
---|
MergeJoinIndexer(String funcSpec, String innerPlan, String serializedPhyPlan, String udfCntxtSignature, String scope, String ignoreNulls) |
Modifier and Type | Method and Description |
---|---|
protected float[] | WeightedRangePartitioner.getProbVec(Tuple values) |
Modifier and Type | Method and Description |
---|---|
Result | PhysicalOperator.getNext(byte dataType) Implementations that call into the different versions of getNext are often identical, differing only in the signature of the getNext() call they make. |
Result | PhysicalOperator.getNextBigDecimal() |
Result | PhysicalOperator.getNextBigInteger() |
Result | PhysicalOperator.getNextBoolean() |
Result | PhysicalOperator.getNextDataBag() |
Result | PhysicalOperator.getNextDataByteArray() |
Result | PhysicalOperator.getNextDateTime() |
Result | PhysicalOperator.getNextDouble() |
Result | PhysicalOperator.getNextFloat() |
Result | PhysicalOperator.getNextInteger() |
Result | PhysicalOperator.getNextLong() |
Result | PhysicalOperator.getNextMap() |
Result | PhysicalOperator.getNextString() |
Result | PhysicalOperator.getNextTuple() |
Result | PhysicalOperator.processInput() A generic method for parsing input that either returns the attached input if it exists or fetches it from its predecessor. |
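
A hedged sketch of the pull contract behind these getNext methods: a consumer repeatedly calls getNextTuple() and branches on Result.returnStatus until end-of-processing. The operator op is assumed to be already configured.

```java
import org.apache.pig.backend.hadoop.executionengine.physicalLayer.POStatus;
import org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator;
import org.apache.pig.backend.hadoop.executionengine.physicalLayer.Result;
import org.apache.pig.data.Tuple;

public class OperatorDrain {
    // Drain a physical operator in the usual pull loop.
    static void drain(PhysicalOperator op) throws Exception {
        while (true) {
            Result res = op.getNextTuple();
            if (res.returnStatus == POStatus.STATUS_EOP) {
                break;                      // end of processing
            }
            if (res.returnStatus == POStatus.STATUS_NULL) {
                continue;                   // nothing this round; keep pulling
            }
            if (res.returnStatus == POStatus.STATUS_ERR) {
                throw new Exception("operator reported an error");
            }
            // STATUS_OK: res.result holds a real tuple.
            Tuple t = (Tuple) res.result;
            System.out.println(t);
        }
    }
}
```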
Modifier and Type | Method and Description |
---|---|
protected Result | POCounter.addCounterValue(Result input) Adds the current task id and local counter value. |
Result | PORank.addRank(Result input) Reads the output tuple from POCounter and the cumulative sum previously calculated. |
void | Packager.attachInput(Object key, DataBag[] bags, boolean[] readOnce) |
void | JoinPackager.attachInput(Object key, DataBag[] bags, boolean[] readOnce) |
protected Tuple | POLocalRearrange.constructLROutput(List<Result> resLst, List<Result> secondaryResLst, Tuple value) |
protected Tuple | POPreCombinerLocalRearrange.constructLROutput(List<Result> resLst, Tuple value) |
protected Tuple | POCollectedGroup.constructOutput(List<Result> resLst, Tuple value) |
protected DataBag | POPartitionRearrange.constructPROutput(List<Result> resLst, Tuple value) |
protected Tuple | POForEach.createTuple(Object[] data) |
Object | Packager.getKey(PigNullableWritable key) |
protected Object | POLocalRearrange.getKeyFromResult(List<Result> resLst, byte type) |
Result | Packager.getNext() |
Result | MultiQueryPackager.getNext() Constructs the output tuple from the inputs. |
Result | LitePackager.getNext() Similar to POPackage.getNext, except that only one input is expected, with index 0, and ReadOnceBag is used instead of DefaultDataBag. |
Result | JoinPackager.getNext() Calls getNext to get the next ForEach result. |
Result | CombinerPackager.getNext() |
Result | POStream.getNextHelper(Tuple t) |
Result | POUnion.getNextTuple() The code below tries to follow our single-threaded shared execution model, with execution being passed around each non-drained input. |
Result | POStream.getNextTuple() |
Result | POStore.getNextTuple() |
Result | POSplit.getNextTuple() |
Result | POSortedDistinct.getNextTuple() |
Result | POSort.getNextTuple() |
Result | POReservoirSample.getNextTuple() |
Result | PORank.getNextTuple() |
Result | POPreCombinerLocalRearrange.getNextTuple() Calls getNext on the generate operator inside the nested physical plan. |
Result | POPoissonSample.getNextTuple() |
Result | POPartitionRearrange.getNextTuple() Calls getNext on the generate operator inside the nested physical plan. |
Result | POPartialAgg.getNextTuple() |
Result | POPackage.getNextTuple() From the inputs, constructs the output tuple for this co-group in the required format, which is (key, {bag of tuples from input 1}, {bag of tuples from input 2}, ...). |
Result | POOptimizedForEach.getNextTuple() Calls getNext on the generate operator inside the nested physical plan and returns its result, maintaining additional state to denote the beginning and end of the nested plan processing. |
Result | POMergeJoin.getNextTuple() |
Result | POMergeCogroup.getNextTuple() |
Result | POLocalRearrange.getNextTuple() Calls getNext on the generate operator inside the nested physical plan. |
Result | POLoad.getNextTuple() The main method used by this operator's successor to read tuples from the specified file using the specified load function. |
Result | POLimit.getNextTuple() Counts the number of tuples processed into the static variable soFar; if the number of tuples processed reaches the limit, returns EOP, otherwise returns the tuple. |
Result | POForEach.getNextTuple() Calls getNext on the generate operator inside the nested physical plan and returns its result, maintaining additional state to denote the beginning and end of the nested plan processing. |
Result | POFilter.getNextTuple() Attaches the processed input tuple to the expression plan and checks whether the comparison operator returns true. |
Result | POFRJoin.getNextTuple() |
Result | PODistinct.getNextTuple() |
Result | PODemux.getNextTuple() |
Result | POCross.getNextTuple() |
Result | POCounter.getNextTuple() |
Result | POCollectedGroup.getNextTuple() |
Tuple | Packager.getValueTuple(PigNullableWritable keyWritable, NullableTuple ntup, int index) |
Tuple | MultiQueryPackager.getValueTuple(PigNullableWritable keyWritable, NullableTuple ntup, int origIndex) |
Tuple | LitePackager.getValueTuple(PigNullableWritable keyWritable, NullableTuple ntup, int index) Makes use of the superclass method, but this requires an additional parameter, key, passed by ReadOnceBag. |
Tuple | CombinerPackager.getValueTuple(PigNullableWritable keyWritable, NullableTuple ntup, int index) |
protected Tuple | POFRJoin.getValueTuple(POLocalRearrange lr, Tuple tuple) |
protected boolean | POFRJoin.isKeyNull(Object key) |
boolean | POForEach.needEndOfAllInputProcessing() |
protected Result | POForEach.processPlan() |
void | POLocalRearrange.setIndex(int index) Sets the co-group index of this operator. |
void | POLocalRearrange.setMultiQueryIndex(int index) Sets the multi-query index of this operator. |
protected void | POFRJoin.setUpHashMap() Builds the HashMaps by reading each replicated input from the DFS using a Load operator. |
void | POMergeJoin.throwProcessingException(boolean withCauseException, Exception e) |
Constructor and Description |
---|
POFRJoin(OperatorKey k, int rp, List<PhysicalOperator> inp, List<List<PhysicalPlan>> ppLists, List<List<Byte>> keyTypes, FileSpec[] replFiles, int fragment, boolean isLeftOuter, Tuple nullTuple) |
POFRJoin(OperatorKey k, int rp, List<PhysicalOperator> inp, List<List<PhysicalPlan>> ppLists, List<List<Byte>> keyTypes, FileSpec[] replFiles, int fragment, boolean isLeftOuter, Tuple nullTuple, Schema[] inputSchemas, Schema[] keySchemas) |
POFRJoin(POFRJoin copy) |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.mapred.JobConf | TezExecutionEngine.getExecConf(Properties properties) |
Modifier and Type | Method and Description |
---|---|
void | POValueInputTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POSimpleTezLoad.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POShuffledValueInputTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POShuffleTezLoad.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | PORankTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POIdentityInOutTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POFRJoinTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POCounterStatsTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
void | POValueOutputTez.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
void | POStoreTez.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
void | POLocalRearrangeTez.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
void | POIdentityInOutTez.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
void | POCounterTez.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
void | POCounterStatsTez.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
protected DataBag | POPartitionRearrangeTez.constructPROutput(List<Result> resLst, Tuple value) |
Result | POValueOutputTez.getNextTuple() |
Result | POValueInputTez.getNextTuple() |
Result | POStoreTez.getNextTuple() |
Result | POSimpleTezLoad.getNextTuple() Previously, we reused the same Result object for all results, but we found certain operators (e.g. ...) |
Result | POShuffledValueInputTez.getNextTuple() |
Result | POShuffleTezLoad.getNextTuple() |
Result | PORankTez.getNextTuple() |
Result | POPartitionRearrangeTez.getNextTuple() Calls getNext on the generate operator inside the nested physical plan. |
Result | POLocalRearrangeTez.getNextTuple() |
Result | POIdentityInOutTez.getNextTuple() |
Result | POCounterTez.getNextTuple() |
Result | POCounterStatsTez.getNextTuple() |
void | POValueOutputTez.initialize(org.apache.tez.runtime.api.ProcessorContext processorContext) |
void | POStoreTez.initialize(org.apache.tez.runtime.api.ProcessorContext processorContext) |
void | POSimpleTezLoad.initialize(org.apache.tez.runtime.api.ProcessorContext processorContext) |
void | POCounterTez.initialize(org.apache.tez.runtime.api.ProcessorContext processorContext) |
protected void | POFRJoinTez.setUpHashMap() Builds the HashMaps by reading replicated inputs from broadcast edges. |
Constructor and Description |
---|
POFRJoinTez(POFRJoin copy, List<String> inputKeys) |
Modifier and Type | Method and Description |
---|---|
void | ReadScalarsTez.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) |
Modifier and Type | Method and Description |
---|---|
void | TezInput.attachInputs(Map<String,org.apache.tez.runtime.api.LogicalInput> inputs, org.apache.hadoop.conf.Configuration conf) Attach the inputs to the operator. |
void | TezOutput.attachOutputs(Map<String,org.apache.tez.runtime.api.LogicalOutput> outputs, org.apache.hadoop.conf.Configuration conf) |
void | TezTaskConfigurable.initialize(org.apache.tez.runtime.api.ProcessorContext processorContext) |
Modifier and Type | Method and Description |
---|---|
static boolean | TezCompilerUtil.bagDataTypeInCombinePlan(PhysicalPlan combinePlan) |
Modifier and Type | Method and Description |
---|---|
static FileSpec | MapRedUtil.checkLeafIsStore(PhysicalPlan plan, PigContext pigContext) |
Modifier and Type | Method and Description |
---|---|
void | HadoopExecutableManager.configure(POStream stream) |
Modifier and Type | Method and Description |
---|---|
protected static Tuple | LongAvg.combine(DataBag values) |
protected static Tuple | IntAvg.combine(DataBag values) |
protected static Tuple | FloatAvg.combine(DataBag values) |
protected static Tuple | DoubleAvg.combine(DataBag values) |
protected static Tuple | BigIntegerAvg.combine(DataBag values) |
protected static Tuple | BigDecimalAvg.combine(DataBag values) |
protected static Tuple | AVG.combine(DataBag values) |
static void | CubeDimensions.convertNullToUnknown(Tuple tuple) |
protected static long | LongAvg.count(Tuple input) |
protected static long | IntAvg.count(Tuple input) |
protected static long | FloatAvg.count(Tuple input) |
protected static long | DoubleAvg.count(Tuple input) |
protected static BigInteger | BigIntegerAvg.count(Tuple input) |
protected static BigDecimal | BigDecimalAvg.count(Tuple input) |
protected static long | AVG.count(Tuple input) |
protected static Long | AlgebraicLongMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider) |
protected static Integer | AlgebraicIntMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider) |
protected static Float | AlgebraicFloatMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider) |
protected static Double | AlgebraicDoubleMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider) |
protected static BigInteger | AlgebraicBigIntegerMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider) |
protected static BigDecimal | AlgebraicBigDecimalMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider) |
protected static Double | AlgebraicByteArrayMathBase.doTupleWork(Tuple input, AlgebraicMathBase.KnownOpProvider opProvider, byte expectedType) |
protected static String | StringMax.max(Tuple input) |
protected static org.joda.time.DateTime | DateTimeMax.max(Tuple input) |
protected static String | StringMin.min(Tuple input) |
protected static org.joda.time.DateTime | DateTimeMin.min(Tuple input) |
protected static Long | LongAvg.sum(Tuple input) |
protected static Long | IntAvg.sum(Tuple input) |
protected static Double | FloatAvg.sum(Tuple input) |
protected static Double | DoubleAvg.sum(Tuple input) |
protected static Long | COUNT_STAR.sum(Tuple input) |
protected static Long | COUNT.sum(Tuple input) |
protected static BigInteger | BigIntegerAvg.sum(Tuple input) |
protected static BigDecimal | BigDecimalAvg.sum(Tuple input) |
protected static Double | AVG.sum(Tuple input) |
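
A hedged sketch of invoking one of these builtins directly: AVG is an EvalFunc whose input tuple carries a bag of single-field tuples (the values are illustrative):

```java
import org.apache.pig.builtin.AVG;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.TupleFactory;

public class AvgExample {
    public static void main(String[] args) throws Exception {
        TupleFactory tf = TupleFactory.getInstance();

        // Build the bag {(1.0), (2.0), (3.0)}.
        DataBag bag = BagFactory.getInstance().newDefaultBag();
        for (double d : new double[] {1.0, 2.0, 3.0}) {
            bag.add(tf.newTuple(Double.valueOf(d)));
        }

        // AVG.exec expects a tuple whose first field is the bag.
        Double avg = new AVG().exec(tf.newTuple(bag));
        System.out.println(avg);  // 2.0
    }
}
```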
Modifier and Type | Class and Description |
---|---|
class | FieldIsNullException |
Modifier and Type | Method and Description |
---|---|
static Schema.FieldSchema | DataType.determineFieldSchema(Object o) Determine the field schema of an object. |
static Schema.FieldSchema | DataType.determineFieldSchema(ResourceSchema.ResourceFieldSchema rcFieldSchema) Determine the field schema of a ResourceFieldSchema. |
protected abstract BigDecimal | SchemaTuple.generatedCodeGetBigDecimal(int fieldNum) |
protected abstract BigInteger | SchemaTuple.generatedCodeGetBigInteger(int fieldNum) |
protected abstract boolean | SchemaTuple.generatedCodeGetBoolean(int fieldNum) |
protected abstract byte[] | SchemaTuple.generatedCodeGetBytes(int fieldNum) |
protected abstract DataBag | SchemaTuple.generatedCodeGetDataBag(int fieldNum) |
protected abstract org.joda.time.DateTime | SchemaTuple.generatedCodeGetDateTime(int fieldNum) |
protected abstract double | SchemaTuple.generatedCodeGetDouble(int fieldNum) |
abstract Object | SchemaTuple.generatedCodeGetField(int fieldNum) |
protected abstract float | SchemaTuple.generatedCodeGetFloat(int fieldNum) |
protected abstract int | SchemaTuple.generatedCodeGetInt(int fieldNum) |
protected abstract long | SchemaTuple.generatedCodeGetLong(int fieldNum) |
protected abstract Map<String,Object> | SchemaTuple.generatedCodeGetMap(int fieldNum) |
protected abstract String | SchemaTuple.generatedCodeGetString(int fieldNum) |
protected abstract Tuple | SchemaTuple.generatedCodeGetTuple(int fieldNum) |
protected abstract SchemaTuple<T> | SchemaTuple.generatedCodeSet(SchemaTuple<?> t, boolean checkType) |
protected abstract void | SchemaTuple.generatedCodeSetBigDecimal(int fieldNum, BigDecimal val) |
protected abstract void | SchemaTuple.generatedCodeSetBigInteger(int fieldNum, BigInteger val) |
protected abstract void | SchemaTuple.generatedCodeSetBoolean(int fieldNum, boolean val) |
protected abstract void | SchemaTuple.generatedCodeSetBytes(int fieldNum, byte[] val) |
protected abstract void | SchemaTuple.generatedCodeSetDataBag(int fieldNum, DataBag val) |
protected abstract void | SchemaTuple.generatedCodeSetDateTime(int fieldNum, org.joda.time.DateTime val) |
protected abstract void | SchemaTuple.generatedCodeSetDouble(int fieldNum, double val) |
abstract void | SchemaTuple.generatedCodeSetField(int fieldNum, Object val) |
protected abstract void | SchemaTuple.generatedCodeSetFloat(int fieldNum, float val) |
protected abstract void | SchemaTuple.generatedCodeSetInt(int fieldNum, int val) |
protected abstract void | SchemaTuple.generatedCodeSetIterator(Iterator<Object> l) |
protected abstract void | SchemaTuple.generatedCodeSetLong(int fieldNum, long val) |
protected abstract void | SchemaTuple.generatedCodeSetMap(int fieldNum, Map<String,Object> val) |
protected abstract void | SchemaTuple.generatedCodeSetString(int fieldNum, String val) |
protected abstract void | SchemaTuple.generatedCodeSetTuple(int fieldNum, Tuple val) |
Object | UnlimitedNullTuple.get(int fieldNum) |
Object | Tuple.get(int fieldNum) Get the value in a given field. |
Object | TargetedTuple.get(int fieldNum) |
Object | SchemaTuple.get(int fieldNum) |
Object | DefaultTuple.get(int fieldNum) Get the value in a given field. |
Object | AppendableSchemaTuple.get(int fieldNum) |
protected Object | AppendableSchemaTuple.getAppendedField(int i) |
BigDecimal | TypeAwareTuple.getBigDecimal(int idx) |
BigDecimal | SchemaTuple.getBigDecimal(int fieldNum) |
BigInteger | TypeAwareTuple.getBigInteger(int idx) |
BigInteger | SchemaTuple.getBigInteger(int fieldNum) |
boolean | TypeAwareTuple.getBoolean(int idx) |
boolean | SchemaTuple.getBoolean(int fieldNum) |
byte[] | TypeAwareTuple.getBytes(int idx) |
byte[] | SchemaTuple.getBytes(int fieldNum) |
DataBag | TypeAwareTuple.getDataBag(int idx) |
DataBag | SchemaTuple.getDataBag(int fieldNum) |
org.joda.time.DateTime | TypeAwareTuple.getDateTime(int idx) |
org.joda.time.DateTime | SchemaTuple.getDateTime(int fieldNum) |
double | TypeAwareTuple.getDouble(int idx) |
double | SchemaTuple.getDouble(int fieldNum) |
float | TypeAwareTuple.getFloat(int idx) |
float | SchemaTuple.getFloat(int fieldNum) |
abstract byte | SchemaTuple.getGeneratedCodeFieldType(int fieldNum) |
int | TypeAwareTuple.getInt(int idx) |
int | SchemaTuple.getInt(int fieldNum) |
long | TypeAwareTuple.getLong(int idx) |
long | SchemaTuple.getLong(int fieldNum) |
Map<String,Object> | TypeAwareTuple.getMap(int idx) |
Map<String,Object> | SchemaTuple.getMap(int fieldNum) |
String | TypeAwareTuple.getString(int idx) |
String | SchemaTuple.getString(int fieldNum) |
Tuple | TypeAwareTuple.getTuple(int idx) |
Tuple | SchemaTuple.getTuple(int fieldNum) |
byte | Tuple.getType(int fieldNum) Find the type of a given field. |
byte | TargetedTuple.getType(int fieldNum) |
byte | SchemaTuple.getType(int fieldNum) |
byte | AppendableSchemaTuple.getType(int fieldNum) |
byte | AbstractTuple.getType(int fieldNum) Find the type of a given field. |
protected Object | SchemaTuple.getTypeAwareBase(int fieldNum, String type) |
protected Object | AppendableSchemaTuple.getTypeAwareBase(int fieldNum, String type) |
abstract boolean | SchemaTuple.isGeneratedCodeFieldNull(int fieldNum) |
boolean | Tuple.isNull(int fieldNum) Find out if a given field is null. |
boolean | SchemaTuple.isNull(int fieldNum) |
boolean | AppendableSchemaTuple.isNull(int fieldNum) |
boolean | AbstractTuple.isNull(int fieldNum) Find out if a given field is null. |
Object | InterSedes.readDatum(DataInput in) Get the next object from the DataInput in. |
static Object | DataReaderWriter.readDatum(DataInput in) |
Object | BinInterSedes.readDatum(DataInput in) |
Object | InterSedes.readDatum(DataInput in, byte type) Get the next object of the given type from the DataInput in; the type information has already been read from the DataInput. |
static Object | DataReaderWriter.readDatum(DataInput in, byte type) |
Object | BinInterSedes.readDatum(DataInput in, byte type) Expects binInterSedes data types (NOT DataType types!) |
void | UnlimitedNullTuple.set(int fieldNum, Object val) |
void | Tuple.set(int fieldNum, Object val) Set the value in a given field. |
void | TargetedTuple.set(int fieldNum, Object val) |
void | SchemaTuple.set(int fieldNum, Object val) |
void | DefaultTuple.set(int fieldNum, Object val) Set the value in a given field. |
void | AppendableSchemaTuple.set(int fieldNum, Object val) |
SchemaTuple<T> | SchemaTuple.set(List<Object> l) |
SchemaTuple<T> | AppendableSchemaTuple.set(List<Object> l) |
SchemaTuple<T> | SchemaTuple.set(SchemaTuple<?> t) |
protected SchemaTuple<T> | SchemaTuple.set(SchemaTuple<?> t, boolean checkType) |
protected SchemaTuple<T> | AppendableSchemaTuple.set(SchemaTuple<?> t, boolean checkType) |
SchemaTuple<T> | SchemaTuple.set(Tuple t) |
protected SchemaTuple<T> | SchemaTuple.set(Tuple t, boolean checkType) |
void | TypeAwareTuple.setBigDecimal(int idx, BigDecimal val) |
void | SchemaTuple.setBigDecimal(int fieldNum, BigDecimal val) |
void | TypeAwareTuple.setBigInteger(int idx, BigInteger val) |
void | SchemaTuple.setBigInteger(int fieldNum, BigInteger val) |
void | TypeAwareTuple.setBoolean(int idx, boolean val) |
void | SchemaTuple.setBoolean(int fieldNum, boolean val) |
void | TypeAwareTuple.setBytes(int idx, byte[] val) |
void | SchemaTuple.setBytes(int fieldNum, byte[] val) |
void | TypeAwareTuple.setDataBag(int idx, DataBag val) |
void | SchemaTuple.setDataBag(int fieldNum, DataBag val) |
void | TypeAwareTuple.setDateTime(int idx, org.joda.time.DateTime val) |
void | SchemaTuple.setDateTime(int fieldNum, org.joda.time.DateTime val) |
void | TypeAwareTuple.setDouble(int idx, double val) |
void | SchemaTuple.setDouble(int fieldNum, double val) |
void | TypeAwareTuple.setFloat(int idx, float val) |
void | SchemaTuple.setFloat(int fieldNum, float val) |
void | TypeAwareTuple.setInt(int idx, int val) |
void | SchemaTuple.setInt(int fieldNum, int val) |
void | TypeAwareTuple.setLong(int idx, long val) |
void | SchemaTuple.setLong(int fieldNum, long val) |
void | TypeAwareTuple.setMap(int idx, Map<String,Object> val) |
void | SchemaTuple.setMap(int fieldNum, Map<String,Object> val) |
void | TypeAwareTuple.setString(int idx, String val) |
void | SchemaTuple.setString(int fieldNum, String val) |
void | TypeAwareTuple.setTuple(int idx, Tuple val) |
void | SchemaTuple.setTuple(int fieldNum, Tuple val) |
protected void | SchemaTuple.setTypeAwareBase(int fieldNum, Object val, String type) |
protected void | AppendableSchemaTuple.setTypeAwareBase(int fieldNum, Object val, String type) |
static DataBag | DataType.toBag(Object o) If this object is a bag, return it as a bag. |
static BigDecimal | DataType.toBigDecimal(Object o) |
static BigDecimal | DataType.toBigDecimal(Object o, byte type) |
static BigInteger | DataType.toBigInteger(Object o) |
static BigInteger | DataType.toBigInteger(Object o, byte type) |
static Boolean | DataType.toBoolean(Object o) |
static Boolean | DataType.toBoolean(Object o, byte type) Force a data object to a Boolean, if possible. |
static byte[] | DataType.toBytes(Object o) |
static byte[] | DataType.toBytes(Object o, byte type) |
static org.joda.time.DateTime | DataType.toDateTime(Object o) |
static org.joda.time.DateTime | DataType.toDateTime(Object o, byte type) Force a data object to a DateTime, if possible. |
String | Tuple.toDelimitedString(String delim) Write a tuple of values into a string. |
String | AbstractTuple.toDelimitedString(String delim) Write a tuple of values into a string. |
static Double | DataType.toDouble(Object o) Force a data object to a Double, if possible. |
static Double | DataType.toDouble(Object o, byte type) Force a data object to a Double, if possible. |
static Float | DataType.toFloat(Object o) Force a data object to a Float, if possible. |
static Float | DataType.toFloat(Object o, byte type) Force a data object to a Float, if possible. |
static Integer | DataType.toInteger(Object o) Force a data object to an Integer, if possible. |
static Integer | DataType.toInteger(Object o, byte type) Force a data object to an Integer, if possible. |
static Long | DataType.toLong(Object o) Force a data object to a Long, if possible. |
static Long | DataType.toLong(Object o, byte type) Force a data object to a Long, if possible. |
static Map<String,Object> | DataType.toMap(Object o) If this object is a map, return it as a map. |
static String | DataType.toString(Object o) Force a data object to a String, if possible. |
static String | DataType.toString(Object o, byte type) Force a data object to a String, if possible. |
static Tuple | DataType.toTuple(Object o) If this object is a tuple, return it as a tuple. |
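
A hedged sketch tying the Tuple accessors and DataType coercions above together (the field values are illustrative; the chararray-to-Integer coercion relies on the documented "force a data object ... if possible" behavior):

```java
import org.apache.pig.data.DataType;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class TupleDataTypeExample {
    public static void main(String[] args) throws Exception {
        Tuple t = TupleFactory.getInstance().newTuple(2);
        t.set(0, "42");    // field 0: a chararray
        t.set(1, 3.14);    // field 1: a double

        // Inspect field types and nullness.
        System.out.println(DataType.findTypeName(t.getType(0)));  // chararray
        System.out.println(t.isNull(1));                          // false

        // Coerce field values with the DataType helpers.
        Integer i = DataType.toInteger(t.get(0));  // "42" -> 42
        Long l = DataType.toLong(t.get(1));        // 3.14 -> 3
        System.out.println(i + " " + l);

        // Render the whole tuple with a delimiter.
        System.out.println(t.toDelimitedString(","));  // 42,3.14
    }
}
```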
Modifier and Type | Method and Description |
---|---|
void | PigContext.connect() |
ExecutableManager | PigContext.createExecutableManager() Create a new ExecutableManager depending on the ExecType. |
static <T> T | PigContext.instantiateObjectFromParams(org.apache.hadoop.conf.Configuration conf, String classParamKey, String argParamKey, Class<T> clazz) A common Pig pattern for initializing objects via system properties is to support passing something like this on the command line: -Dpig.notification.listener=MyClass -Dpig.notification.listener.arg=myConstructorStringArg. This method will properly initialize the class with the args, if they exist. |
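
A hedged sketch of the instantiateObjectFromParams pattern described above; com.example.MyListener is hypothetical and must be on the classpath with a single-String constructor:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.pig.impl.PigContext;

public class ListenerBootstrap {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Normally these arrive as -D flags on the command line.
        conf.set("pig.notification.listener", "com.example.MyListener");
        conf.set("pig.notification.listener.arg", "myConstructorStringArg");

        // Instantiates the configured class, passing the arg to its
        // String constructor when the arg key is present.
        Object listener = PigContext.instantiateObjectFromParams(
                conf,
                "pig.notification.listener",
                "pig.notification.listener.arg",
                Object.class);
        System.out.println(listener.getClass().getName());
    }
}
```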
Modifier and Type | Method and Description |
---|---|
void | SampleLoader.computeSamples(ArrayList<Pair<FileSpec,Boolean>> inputs, PigContext pc) |
Object | IdentityColumn.exec(Tuple input) |
Constructor and Description |
---|
StreamingUDF(String language, String filePath, String funcName, String outputSchemaString, String schemaLineNumber, String execType, String isIllustrate) |
Modifier and Type | Class and Description |
---|---|
class | StreamingUDFException |
Modifier and Type | Method and Description |
---|---|
void | ExecutableManager.configure(POStream stream) Configure and initialize the ExecutableManager. |
static InputHandler | HandlerFactory.createInputHandler(StreamingCommand command) Create an InputHandler for the given input specification of the StreamingCommand. |
static OutputHandler | HandlerFactory.createOutputHandler(StreamingCommand command) Create an OutputHandler for the given output specification of the StreamingCommand. |
Constructor and Description |
---|
FileInputHandler(StreamingCommand.HandleSpec handleSpec) |
FileOutputHandler(StreamingCommand.HandleSpec handleSpec) |
Modifier and Type | Method and Description |
---|---|
Object | AvroTupleWrapper.get(int pos) |
static byte | AvroStorageSchemaConversionUtilities.getPigType(org.apache.avro.Schema s) Determines the Pig object type of the Avro schema. |
byte | AvroTupleWrapper.getType(int arg0) |
boolean | AvroTupleWrapper.isNull(int arg0) |
void | AvroTupleWrapper.set(int arg0, Object arg1) |
String | AvroTupleWrapper.toDelimitedString(String arg0) |
Modifier and Type | Method and Description |
---|---|
Map<LOLoad,DataBag> | AugmentBaseDataVisitor.getNewBaseData() |
Modifier and Type | Method and Description |
---|---|
Object | ExampleTuple.get(int fieldNum) |
byte | ExampleTuple.getType(int fieldNum) |
boolean | ExampleTuple.isNull(int fieldNum) |
void | ExampleTuple.set(int fieldNum, Object val) |
Modifier and Type | Method and Description |
---|---|
protected static Tuple | ExtremalTupleByNthField.extreme(int pind, int psign, Tuple input, PigProgressable reporter) |
protected static Tuple | MaxTupleBy1stField.max(Tuple input, PigProgressable reporter) |
protected static int | ExtremalTupleByNthField.parseFieldIndex(String inputFieldIndex) |
Constructor and Description |
---|
ExtremalTupleByNthField.HelperClass() |
ExtremalTupleByNthField.HelperClass(String fieldIndexString) |
ExtremalTupleByNthField.HelperClass(String fieldIndexString, String order) |
ExtremalTupleByNthField() Constructors. |
ExtremalTupleByNthField(String fieldIndexString) |
ExtremalTupleByNthField(String fieldIndexString, String order) |
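
A hedged sketch of using this UDF from Java, following the constructor signatures above; the field index is assumed to be 1-based (per parseFieldIndex) and "max" is assumed to select the maximum:

```java
import java.util.Arrays;

import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;
import org.apache.pig.piggybank.evaluation.ExtremalTupleByNthField;

public class ExtremeExample {
    public static void main(String[] args) throws Exception {
        TupleFactory tf = TupleFactory.getInstance();
        DataBag bag = BagFactory.getInstance().newDefaultBag();
        bag.add(tf.newTuple(Arrays.asList((Object) "a", 3)));
        bag.add(tf.newTuple(Arrays.asList((Object) "b", 7)));

        // Pick the tuple whose 2nd field is largest.
        ExtremalTupleByNthField maxBy2nd = new ExtremalTupleByNthField("2", "max");
        Tuple winner = maxBy2nd.exec(tf.newTuple(bag));
        System.out.println(winner);  // (b,7)
    }
}
```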
Modifier and Type | Method and Description |
---|---|
static org.joda.time.DateTime | ISOHelper.parseDateTime(Tuple input) |
Modifier and Type | Method and Description |
---|---|
Map<String,List<PigStats>> | ScriptEngine.run(PigContext pigContext, String scriptFile) Runs a script file. |
Modifier and Type | Method and Description |
---|---|
static Object | GroovyUtils.groovyToPig(Object groovyObject) Converts an object created on the Groovy side to its Pig counterpart. |
static Object | GroovyUtils.pigToGroovy(Object pigObject) Converts an object created on the Pig side to its Groovy counterpart. |
Modifier and Type | Method and Description |
---|---|
void | RubyDataBag.add(org.jruby.runtime.ThreadContext context, org.jruby.runtime.builtin.IRubyObject[] args) The add method accepts a varargs argument; each argument can be an arbitrary object, a DataBag, or a RubyArray. |
org.jruby.runtime.builtin.IRubyObject | RubyDataBag.each(org.jruby.runtime.ThreadContext context, org.jruby.runtime.Block block) This is an implementation of the each method, which opens up the Enumerable interface and makes it very convenient to iterate over the elements of a DataBag. |
org.jruby.runtime.builtin.IRubyObject | RubyDataBag.flatten(org.jruby.runtime.ThreadContext context, org.jruby.runtime.Block block) This is a convenience method which will run the given block on the first element of each tuple contained. |
static <T> org.jruby.RubyHash | PigJrubyLibrary.pigToRuby(org.jruby.Ruby ruby, Map<T,?> object) A type-specific conversion routine for Pig Maps. |
static org.jruby.runtime.builtin.IRubyObject | PigJrubyLibrary.pigToRuby(org.jruby.Ruby ruby, Object object) This is the method which provides conversion from Pig to Ruby. |
static org.jruby.RubyArray | PigJrubyLibrary.pigToRuby(org.jruby.Ruby ruby, Tuple object) A type-specific conversion routine. |
static Object | PigJrubyLibrary.rubyToPig(org.jruby.runtime.builtin.IRubyObject rbObject) This method facilitates conversion from Ruby objects to Pig objects. |
static Tuple | PigJrubyLibrary.rubyToPig(org.jruby.RubyArray rbObject) A type-specific conversion routine. |
static Map<String,?> | PigJrubyLibrary.rubyToPig(org.jruby.RubyHash rbObject) A type-specific conversion routine. |
Modifier and Type | Method and Description |
---|---|
static Object | JythonUtils.pythonToPig(org.python.core.PyObject pyObject) |
Modifier and Type | Method and Description |
---|---|
List<ExecJob> | ToolsPigServer.runPlan(LogicalPlan newPlan, String jobName) Given a (modified) new logical plan, run the script. |
Constructor and Description |
---|
ToolsPigServer(ExecType execType, Properties properties) |
ToolsPigServer(PigContext ctx) |
ToolsPigServer(String execTypeString) |
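
A hedged sketch of runPlan; how the LogicalPlan was obtained and modified is out of scope here, and the job name is illustrative:

```java
import java.util.List;

import org.apache.pig.backend.executionengine.ExecJob;
import org.apache.pig.newplan.logical.relational.LogicalPlan;
import org.apache.pig.tools.ToolsPigServer;

public class RunModifiedPlan {
    // Runs an already-modified logical plan and reports per-job status.
    static void run(ToolsPigServer server, LogicalPlan modifiedPlan) throws Exception {
        List<ExecJob> jobs = server.runPlan(modifiedPlan, "rewrittenJob");
        for (ExecJob job : jobs) {
            System.out.println(job.getAlias() + " completed: " + job.hasCompleted());
        }
    }
}
```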
Constructor and Description |
---|
Grunt(BufferedReader in, PigContext pigContext) |
Copyright © 2007-2012 The Apache Software Foundation