org.apache.hadoop.zebra.pig
Class TableLoader

java.lang.Object
  extended by org.apache.pig.LoadFunc
      extended by org.apache.hadoop.zebra.pig.TableLoader
All Implemented Interfaces:
CollectableLoadFunc, IndexableLoadFunc, LoadMetadata, LoadPushDown, OrderedLoadFunc

public class TableLoader
extends LoadFunc
implements LoadMetadata, LoadPushDown, IndexableLoadFunc, CollectableLoadFunc, OrderedLoadFunc

A Pig IndexableLoadFunc and Slicer for Zebra tables.


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown
LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse
 
Constructor Summary
TableLoader()
          default constructor
TableLoader(String projectionStr)
          Constructor with a projection string passed from the Pig query.
TableLoader(String projectionStr, String sorted)
          Constructor with a projection string and a flag indicating whether sorted table(s) are required.
 
Method Summary
 void close()
          A method called by the Pig runtime to give an opportunity for implementations to perform cleanup actions like closing the underlying input stream.
 void ensureAllKeyInstancesInSameSplit()
          When this method is called, Pig is communicating to the Loader that it must load data such that all instances of a key are in the same split.
 List<LoadPushDown.OperatorSet> getFeatures()
          Determine the operators that can be pushed to the loader.
 org.apache.hadoop.mapreduce.InputFormat getInputFormat()
          This will be called during planning on the front end.
 Tuple getNext()
          Retrieves the next tuple to be processed.
 String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job)
          Find what columns are partition keys for this input.
 ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job)
          Get a schema for the data to be loaded.
 org.apache.hadoop.io.WritableComparable<?> getSplitComparable(org.apache.hadoop.mapreduce.InputSplit split)
          The WritableComparable object returned will be used to compare the position of different splits in an ordered stream.
 ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job)
          Get statistics about the data to be loaded.
 void initialize(org.apache.hadoop.conf.Configuration conf)
          This method is called by the Pig runtime to allow the IndexableLoadFunc to perform any initialization actions.
 void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split)
          Initializes LoadFunc for reading data.
 LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
          Indicate to the loader fields that will be needed.
 void seekNear(Tuple tuple)
          Positions the underlying input stream near the supplied key tuple; this method is called only once.
 void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
          This method is called by Pig on both the frontend and the backend.
 void setPartitionFilter(Expression partitionFilter)
          Set the filter for partitioning.
 void setUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc.
 
Methods inherited from class org.apache.pig.LoadFunc
getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

TableLoader

public TableLoader()
default constructor


TableLoader

public TableLoader(String projectionStr)
Parameters:
projectionStr - projection string passed from pig query.

TableLoader

public TableLoader(String projectionStr,
                   String sorted)
            throws IOException
Parameters:
projectionStr - projection string passed from pig query.
sorted - whether sorted table(s) are required
Throws:
IOException
Method Detail

initialize

public void initialize(org.apache.hadoop.conf.Configuration conf)
                throws IOException
Description copied from interface: IndexableLoadFunc
This method is called by the Pig runtime to allow the IndexableLoadFunc to perform any initialization actions.

Specified by:
initialize in interface IndexableLoadFunc
Parameters:
conf - The job configuration object
Throws:
IOException

seekNear

public void seekNear(Tuple tuple)
              throws IOException
Positions the underlying input stream near the supplied key tuple; this method is called only once.

Specified by:
seekNear in interface IndexableLoadFunc
Parameters:
tuple - Tuple with join keys (which are a prefix of the sort keys of the input data). For example, if the data is sorted on the columns at positions 2, 4, and 5, any of the following Tuples are valid argument values:
  (fieldAt(2))
  (fieldAt(2), fieldAt(4))
  (fieldAt(2), fieldAt(4), fieldAt(5))
The following are some invalid cases:
  (fieldAt(4))
  (fieldAt(2), fieldAt(5))
  (fieldAt(4), fieldAt(5))
Throws:
IOException - When the loadFunc is unable to position to the required point in its input stream
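The prefix rule above can be sketched in plain Java. This is an illustrative helper, not part of the Zebra or Pig API; column positions stand in for the fieldAt(...) tuples:

```java
// Illustrative helper (hypothetical, not part of the Zebra API): checks
// that the positions of the join keys in a seekNear() tuple form a
// prefix of the table's sort-key positions, per the rule above.
public class SeekKeyPrefixCheck {
    public static boolean isValidKeyPrefix(int[] sortKeyPositions, int[] joinKeyPositions) {
        // A valid argument supplies the first k sort-key columns, in order.
        if (joinKeyPositions.length == 0 || joinKeyPositions.length > sortKeyPositions.length) {
            return false;
        }
        for (int i = 0; i < joinKeyPositions.length; i++) {
            if (joinKeyPositions[i] != sortKeyPositions[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 4, 5};                                          // data sorted on columns 2, 4, 5
        System.out.println(isValidKeyPrefix(sorted, new int[]{2}));        // valid prefix
        System.out.println(isValidKeyPrefix(sorted, new int[]{2, 4}));     // valid prefix
        System.out.println(isValidKeyPrefix(sorted, new int[]{4}));        // not a prefix
        System.out.println(isValidKeyPrefix(sorted, new int[]{2, 5}));     // skips a sort key
    }
}
```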

getNext

public Tuple getNext()
              throws IOException
Description copied from class: LoadFunc
Retrieves the next tuple to be processed. Implementations should NOT reuse tuple objects (or inner member objects) they return across calls and should return a different tuple object in each call.

Specified by:
getNext in class LoadFunc
Returns:
the next tuple to be processed or null if there are no more tuples to be processed.
Throws:
IOException - if there is an exception while retrieving the next tuple
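The no-reuse contract can be illustrated with a minimal stand-in reader (a hypothetical class, not the Zebra implementation), which returns a fresh object on every call rather than recycling an internal buffer:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch (not the Zebra implementation): a reader that
// honors the getNext() contract by allocating a fresh row object on
// every call instead of reusing an internal one.
public class FreshTupleReader {
    private final Iterator<Object[]> rows;

    public FreshTupleReader(List<Object[]> data) {
        this.rows = data.iterator();
    }

    // Returns a new array per call, or null when exhausted; callers may
    // therefore safely hold on to previously returned rows.
    public Object[] getNext() {
        if (!rows.hasNext()) {
            return null;
        }
        Object[] source = rows.next();
        return Arrays.copyOf(source, source.length); // fresh object each call
    }
}
```

A consumer such as a join operator can buffer rows across calls without them being silently overwritten.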

close

public void close()
           throws IOException
Description copied from interface: IndexableLoadFunc
A method called by the Pig runtime to give implementations an opportunity to perform cleanup actions, such as closing the underlying input stream. This is necessary since, while performing a join, the Pig runtime may determine that no further join is possible with the remaining records and may indicate to the IndexableLoadFunc to clean up by calling this method.

Specified by:
close in interface IndexableLoadFunc
Throws:
IOException - if the loadfunc is unable to perform its close actions.

prepareToRead

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                          PigSplit split)
                   throws IOException
Description copied from class: LoadFunc
Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.

Specified by:
prepareToRead in class LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process
Throws:
IOException - if there is an exception during initialization

setLocation

public void setLocation(String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException
This method is called by Pig on both the frontend and the backend.

Specified by:
setLocation in class LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; used to store into, or retrieve earlier stored information from, the UDFContext
Throws:
IOException - if the location is not valid.

getInputFormat

public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
                                                       throws IOException
Description copied from class: LoadFunc
This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.

Specified by:
getInputFormat in class LoadFunc
Returns:
the InputFormat associated with this loader.
Throws:
IOException - if there is an exception during InputFormat construction

getPartitionKeys

public String[] getPartitionKeys(String location,
                                 org.apache.hadoop.mapreduce.Job job)
                          throws IOException
Description copied from interface: LoadMetadata
Find what columns are partition keys for this input.

Specified by:
getPartitionKeys in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys
Throws:
IOException - if an exception occurs while retrieving partition keys

getSchema

public ResourceSchema getSchema(String location,
                                org.apache.hadoop.mapreduce.Job job)
                         throws IOException
Description copied from interface: LoadMetadata
Get a schema for the data to be loaded.

Specified by:
getSchema in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection, i.e. getSchema should always return the original schema even after pushProjection.
Throws:
IOException - if an exception occurs while determining the schema

getStatistics

public ResourceStatistics getStatistics(String location,
                                        org.apache.hadoop.mapreduce.Job job)
                                 throws IOException
Description copied from interface: LoadMetadata
Get statistics about the data to be loaded. If no statistics are available, then null should be returned.

Specified by:
getStatistics in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
statistics about the data to be loaded. If no statistics are available, then null should be returned.
Throws:
IOException - if an exception occurs while retrieving statistics

setPartitionFilter

public void setPartitionFilter(Expression partitionFilter)
                        throws IOException
Description copied from interface: LoadMetadata
Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. So if the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

Specified by:
setPartitionFilter in interface LoadMetadata
Parameters:
partitionFilter - that describes filter for partitioning
Throws:
IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.

getFeatures

public List<LoadPushDown.OperatorSet> getFeatures()
Description copied from interface: LoadPushDown
Determine the operators that can be pushed to the loader. Note that by indicating a loader can accept a certain operator (such as selection) the loader is not promising that it can handle all selections. When it is passed the actual operators to push down it will still have a chance to reject them.

Specified by:
getFeatures in interface LoadPushDown
Returns:
list of all features that the loader can support

pushProjection

public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
                                                  throws FrontendException
Description copied from interface: LoadPushDown
Indicate to the loader fields that will be needed. This can be useful for loaders that access data stored in a columnar format, where indicating the columns to be accessed ahead of time will save scans. This method will not be invoked by the Pig runtime if all fields are required, so implementations should assume that if this method is not invoked, then all fields from the input are required. If the loader function cannot make use of this information, it is free to ignore it by returning an appropriate Response.

Specified by:
pushProjection in interface LoadPushDown
Parameters:
requiredFieldList - RequiredFieldList indicating which columns will be needed.
Returns:
Indicates which fields will be returned
Throws:
FrontendException
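For a columnar store such as Zebra, the benefit described above is that unused columns are never scanned. A minimal sketch, using hypothetical names and plain arrays in place of Zebra's column groups:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (hypothetical, not the Zebra implementation) of
// why pushProjection helps a columnar format: when the required column
// indices are known up front, only those columns are ever read.
public class ColumnarProjection {
    // columns[c][r] holds row r of column c.
    public static List<Object[]> readRows(Object[][] columns, int[] requiredColumns) {
        int numRows = columns[0].length;
        List<Object[]> result = new ArrayList<>();
        for (int r = 0; r < numRows; r++) {
            Object[] row = new Object[requiredColumns.length];
            for (int i = 0; i < requiredColumns.length; i++) {
                row[i] = columns[requiredColumns[i]][r]; // columns not listed are never touched
            }
            result.add(row);
        }
        return result;
    }
}
```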

setUDFContextSignature

public void setUDFContextSignature(String signature)
Description copied from class: LoadFunc
This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store the LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

Overrides:
setUDFContextSignature in class LoadFunc
Parameters:
signature - a unique signature to identify this LoadFunc
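The pattern described above can be sketched with a simplified stand-in for Pig's UDFContext (a hypothetical class; the real UDFContext API differs): state saved under a signature on the front end is looked up under the same signature on the back end.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Illustrative sketch (not Pig's actual UDFContext): a per-signature
// Properties bag. A LoadFunc instance on the back end retrieves state
// stored by its front-end counterpart using the shared signature.
public class SignatureScopedContext {
    private static final Map<String, Properties> CONTEXT = new HashMap<>();

    public static Properties getProperties(String signature) {
        return CONTEXT.computeIfAbsent(signature, s -> new Properties());
    }
}
```

For example, the front end might store the pushed projection under its signature, and getNext on the back end would read it back before materializing tuples.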

getSplitComparable

public org.apache.hadoop.io.WritableComparable<?> getSplitComparable(org.apache.hadoop.mapreduce.InputSplit split)
                                                              throws IOException
Description copied from interface: OrderedLoadFunc
The WritableComparable object returned will be used to compare the position of different splits in an ordered stream.

Specified by:
getSplitComparable in interface OrderedLoadFunc
Parameters:
split - An InputSplit from the InputFormat underlying this loader.
Returns:
WritableComparable representing the position of the split in input
Throws:
IOException
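One natural "position" for a file-backed split is its (path, start offset) pair, compared lexicographically. The sketch below is illustrative only; the actual WritableComparable Zebra returns is implementation-specific.

```java
// Illustrative sketch (hypothetical class): ordering splits by source
// path, then by byte offset within the file, which is one plausible
// position for an ordered stream of FileSplit-like splits.
public class SplitPosition implements Comparable<SplitPosition> {
    private final String path;
    private final long startOffset;

    public SplitPosition(String path, long startOffset) {
        this.path = path;
        this.startOffset = startOffset;
    }

    @Override
    public int compareTo(SplitPosition other) {
        int byPath = path.compareTo(other.path);
        return byPath != 0 ? byPath : Long.compare(startOffset, other.startOffset);
    }
}
```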

ensureAllKeyInstancesInSameSplit

public void ensureAllKeyInstancesInSameSplit()
                                      throws IOException
Description copied from interface: CollectableLoadFunc
When this method is called, Pig is communicating to the Loader that it must load data such that all instances of a key are in the same split. Pig will make no further checks at runtime to ensure that the contract is honored.

Specified by:
ensureAllKeyInstancesInSameSplit in interface CollectableLoadFunc
Throws:
IOException
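The contract above can be sketched as a toy splitter that never divides a key-group across splits. Zebra's real split logic works on sorted table regions; this simplified, hypothetical version groups by key first and then packs whole groups:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch (not the Zebra implementation): assign records to
// splits so that every record sharing a key lands in the same split.
public class KeyedSplitter {
    // Each record is a String[] whose element 0 is the key.
    public static List<List<String[]>> split(List<String[]> records, int maxKeysPerSplit) {
        // Group records by key while preserving encounter order.
        Map<String, List<String[]>> byKey = new LinkedHashMap<>();
        for (String[] rec : records) {
            byKey.computeIfAbsent(rec[0], k -> new ArrayList<>()).add(rec);
        }
        // Pack whole key-groups into splits; a group is never divided.
        List<List<String[]>> splits = new ArrayList<>();
        List<String[]> current = new ArrayList<>();
        int keysInCurrent = 0;
        for (List<String[]> group : byKey.values()) {
            if (keysInCurrent == maxKeysPerSplit) {
                splits.add(current);
                current = new ArrayList<>();
                keysInCurrent = 0;
            }
            current.addAll(group);
            keysInCurrent++;
        }
        if (!current.isEmpty()) {
            splits.add(current);
        }
        return splits;
    }
}
```

Because a group is never divided, a downstream group-by or collected join can process each split independently.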


Copyright © ${year} The Apache Software Foundation