org.apache.pig.piggybank.storage
Class HiveColumnarLoader

java.lang.Object
  extended by org.apache.pig.LoadFunc
      extended by org.apache.pig.FileInputLoadFunc
          extended by org.apache.pig.piggybank.storage.HiveColumnarLoader
All Implemented Interfaces:
LoadMetadata, LoadPushDown, OrderedLoadFunc

public class HiveColumnarLoader
extends FileInputLoadFunc
implements LoadMetadata, LoadPushDown

Loader for Hive RC (Record Columnar) files.
Supports the following type mappings:

 Hive Type        Pig Type (from DataType)
 string           CHARARRAY
 int              INTEGER
 bigint or long   LONG
 float            FLOAT
 double           DOUBLE
 boolean          BOOLEAN
 byte             BYTE
 array            TUPLE
 map              MAP

Partitions
The loader scans the input paths for [partition name]=[value] patterns in the subdirectories.
If detected, these partitions are appended to the table schema.
For example, if you have the directory structure:

 /user/hive/warehouse/mytable/year=2010/month=02/day=01
The mytable schema is (id int,name string).
The final schema returned in Pig will be (id:int, name:chararray, year:chararray, month:chararray, day:chararray).
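
Once loaded, the partition columns behave like ordinary fields. A minimal sketch, assuming the mytable example above:

 a = LOAD '/user/hive/warehouse/mytable'
     USING org.apache.pig.piggybank.storage.HiveColumnarLoader('id int,name string');
 -- year, month and day are the appended partition columns, all chararray
 b = FOREACH a GENERATE id, name, year, month, day;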

Usage 1:

To load a Hive table (uid bigint, ts long, arr ARRAY, m MAP):

 a = LOAD 'file' USING HiveColumnarLoader('uid bigint,ts long,arr array,m map');
 -- to reference the fields
 b = FOREACH a GENERATE uid, ts, arr, m;
 

Usage 2:

To load a Hive table (uid bigint, ts long, arr ARRAY, m MAP), processing only dates 2009-10-01 to 2009-10-02 in a
date-partitioned Hive table.
Old Usage
Note: this usage is deprecated; partitions can now be filtered with Pig's FILTER operator, as shown under New Usage below.

 a = LOAD 'file' USING HiveColumnarLoader('uid bigint,ts long,arr array,m map', '2009-10-01:2009-10-02');
 -- to reference the fields
 b = FOREACH a GENERATE uid, ts, arr, m;
 

New Usage
 a = LOAD 'file' USING HiveColumnarLoader('uid bigint,ts long,arr array,m map');
 f = FILTER a BY daydate >= '2009-10-01' AND daydate <= '2009-10-02';
 

Usage 3:

To load a Hive table (uid bigint, ts long, arr ARRAY, m MAP), reading only columns uid and ts for dates 2009-10-01 to 2009-10-02.
Note: this behaviour is now supported in Pig through LoadPushDown. Explicitly listing the columns to load (as the deprecated three-argument constructor allowed) is ignored; Pig automatically sends the columns used by the script to the loader:

 a = LOAD 'file' USING HiveColumnarLoader('uid bigint,ts long,arr array,m map');
 f = FILTER a BY daydate >= '2009-10-01' AND daydate <= '2009-10-02';
 -- to reference the fields
 b = FOREACH a GENERATE uid, ts;
 

Issues

Table schema definition
The schema definition must be a column name, followed by a space, followed by the type; columns are separated by a comma with no space after it.
So column1 string, column2 string will not work; it must be column1 string,column2 string.

Partitioning
Partitions must be in the format [partition name]=[partition value].
Only strings are supported as partition values.
Partitions must follow the same naming convention for all subdirectories in a table.
For example, the following is not valid:

     mytable/hour=00
     mytable/day=01/hour=00
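
By contrast, a layout in which every leaf directory follows the same partition path is valid; a hypothetical example:

     mytable/day=01/hour=00
     mytable/day=02/hour=00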
 


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown
LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse
 
Field Summary
static String DATE_RANGE
           
protected static org.apache.commons.logging.Log LOG
           
protected static Pattern pcols
          Regex to filter out column names
static String PROJECTION_ID
           
protected  TupleFactory tupleFactory
           
 
Constructor Summary
HiveColumnarLoader(String table_schema)
          Table schema should be a comma-separated string of column name/type pairs (name and type separated by a space) describing the Hive schema.
For example, uid BIGINT,pid long means one column uid of type BIGINT and one column pid of type LONG.
The types are not case sensitive.
HiveColumnarLoader(String table_schema, String dateRange)
          This constructor is for backward compatibility.
HiveColumnarLoader(String table_schema, String dateRange, String columns)
          This constructor is for backward compatibility.
 
Method Summary
 List<LoadPushDown.OperatorSet> getFeatures()
          Determine the operators that can be pushed to the loader.
 org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.LongWritable,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable> getInputFormat()
          This will be called during planning on the front end.
 Tuple getNext()
          Retrieves the next tuple to be processed.
 String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job)
          Find what columns are partition keys for this input.
 ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job)
          Get a schema for the data to be loaded.
 ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job)
          Get statistics about the data to be loaded.
 void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split)
          Initializes LoadFunc for reading data.
 LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
          Indicate to the loader fields that will be needed.
 void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the loader the location of the object(s) being loaded.
 void setPartitionFilter(Expression partitionFilter)
          Set the filter for partitioning.
 void setUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc.
 
Methods inherited from class org.apache.pig.FileInputLoadFunc
getSplitComparable
 
Methods inherited from class org.apache.pig.LoadFunc
getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

PROJECTION_ID

public static final String PROJECTION_ID

DATE_RANGE

public static final String DATE_RANGE

pcols

protected static final Pattern pcols
Regex to filter out column names


LOG

protected static final org.apache.commons.logging.Log LOG

tupleFactory

protected TupleFactory tupleFactory
Constructor Detail

HiveColumnarLoader

public HiveColumnarLoader(String table_schema)
Table schema should be a comma-separated string of column name/type pairs (name and type separated by a space) describing the Hive schema.
For example, uid BIGINT,pid long means one column uid of type BIGINT and one column pid of type LONG.
The types are not case sensitive.

Parameters:
table_schema - This property cannot be null
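
In a Pig script the loader is typically made available by registering the piggybank jar first. A minimal sketch; the jar path is an assumption, and the table location reuses the mytable example above:

 REGISTER /path/to/piggybank.jar; -- jar location is an assumption
 a = LOAD '/user/hive/warehouse/mytable'
     USING org.apache.pig.piggybank.storage.HiveColumnarLoader('id int,name string');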

HiveColumnarLoader

public HiveColumnarLoader(String table_schema,
                          String dateRange,
                          String columns)
This constructor is for backward compatibility. Table schema should be a comma-separated string of column name/type pairs (name and type separated by a space) describing the Hive schema.
For example, uid BIGINT,pid long means one column uid of type BIGINT and one column pid of type LONG.
The types are not case sensitive.

Parameters:
table_schema - This property cannot be null
dateRange - date range in the form [start date]:[end date], e.g. 2009-10-01:2009-10-02
columns - no longer used; column projection is handled through LoadPushDown

HiveColumnarLoader

public HiveColumnarLoader(String table_schema,
                          String dateRange)
This constructor is for backward compatibility. Table schema should be a comma-separated string of column name/type pairs (name and type separated by a space) describing the Hive schema.
For example, uid BIGINT,pid long means one column uid of type BIGINT and one column pid of type LONG.
The types are not case sensitive.

Parameters:
table_schema - This property cannot be null
dateRange - date range in the form [start date]:[end date], e.g. 2009-10-01:2009-10-02
Method Detail

getInputFormat

public org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.LongWritable,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable> getInputFormat()
                                                                                                                                                       throws IOException
Description copied from class: LoadFunc
This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.

Specified by:
getInputFormat in class LoadFunc
Returns:
the InputFormat associated with this loader.
Throws:
IOException - if there is an exception during InputFormat construction

getNext

public Tuple getNext()
              throws IOException
Description copied from class: LoadFunc
Retrieves the next tuple to be processed. Implementations should NOT reuse tuple objects (or inner member objects) they return across calls and should return a different tuple object in each call.

Specified by:
getNext in class LoadFunc
Returns:
the next tuple to be processed or null if there are no more tuples to be processed.
Throws:
IOException - if there is an exception while retrieving the next tuple

prepareToRead

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                          PigSplit split)
                   throws IOException
Description copied from class: LoadFunc
Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.

Specified by:
prepareToRead in class LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process
Throws:
IOException - if there is an exception during initialization

setLocation

public void setLocation(String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException
Description copied from class: LoadFunc
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

Specified by:
setLocation in class LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; used to store or retrieve earlier stored information from the UDFContext
Throws:
IOException - if the location is not valid.

getPartitionKeys

public String[] getPartitionKeys(String location,
                                 org.apache.hadoop.mapreduce.Job job)
                          throws IOException
Description copied from interface: LoadMetadata
Find what columns are partition keys for this input.

Specified by:
getPartitionKeys in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys
Throws:
IOException - if an exception occurs while retrieving partition keys
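
At the Pig Latin level the effect of partition keys is visible in the loaded schema. A minimal sketch, assuming the year/month/day layout from the example above:

 a = LOAD '/user/hive/warehouse/mytable'
     USING org.apache.pig.piggybank.storage.HiveColumnarLoader('id int,name string');
 -- getPartitionKeys reports year, month and day for this layout;
 -- DESCRIBE shows them appended to the schema as chararray columns
 DESCRIBE a;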

getSchema

public ResourceSchema getSchema(String location,
                                org.apache.hadoop.mapreduce.Job job)
                         throws IOException
Description copied from interface: LoadMetadata
Get a schema for the data to be loaded.

Specified by:
getSchema in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection, i.e. getSchema should always return the original schema even after pushProjection.
Throws:
IOException - if an exception occurs while determining the schema

getStatistics

public ResourceStatistics getStatistics(String location,
                                        org.apache.hadoop.mapreduce.Job job)
                                 throws IOException
Description copied from interface: LoadMetadata
Get statistics about the data to be loaded. If no statistics are available, then null should be returned.

Specified by:
getStatistics in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
statistics about the data to be loaded. If no statistics are available, then null should be returned.
Throws:
IOException - if an exception occurs while retrieving statistics

setPartitionFilter

public void setPartitionFilter(Expression partitionFilter)
                        throws IOException
Description copied from interface: LoadMetadata
Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. So if the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

Specified by:
setPartitionFilter in interface LoadMetadata
Parameters:
partitionFilter - that describes filter for partitioning
Throws:
IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.
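
This method is not called directly from a script; instead, a FILTER on partition columns placed immediately after the LOAD may be pushed down by Pig as a partition filter. A minimal sketch, assuming the year/month/day partitions from the example above:

 a = LOAD '/user/hive/warehouse/mytable'
     USING org.apache.pig.piggybank.storage.HiveColumnarLoader('id int,name string');
 -- Pig may translate this condition into a call to setPartitionFilter,
 -- pruning all subdirectories other than year=2010 before any data is read
 f = FILTER a BY year == '2010';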

getFeatures

public List<LoadPushDown.OperatorSet> getFeatures()
Description copied from interface: LoadPushDown
Determine the operators that can be pushed to the loader. Note that by indicating a loader can accept a certain operator (such as selection) the loader is not promising that it can handle all selections. When it is passed the actual operators to push down it will still have a chance to reject them.

Specified by:
getFeatures in interface LoadPushDown
Returns:
list of all features that the loader can support

pushProjection

public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
                                                  throws FrontendException
Description copied from interface: LoadPushDown
Indicate to the loader fields that will be needed. This can be useful for loaders that access data that is stored in a columnar format where indicating columns to be accessed ahead of time will save scans. This method will not be invoked by the Pig runtime if all fields are required. So implementations should assume that if this method is not invoked, then all fields from the input are required. If the loader function cannot make use of this information, it is free to ignore it by returning an appropriate Response.

Specified by:
pushProjection in interface LoadPushDown
Parameters:
requiredFieldList - RequiredFieldList indicating which columns will be needed. This structure is read-only; users cannot make changes to it inside pushProjection.
Returns:
Indicates which fields will be returned
Throws:
FrontendException
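
Projection push-down is likewise driven by the script itself: when only some columns are used, Pig passes just those fields to the loader. A minimal sketch, assuming the Usage 1 schema:

 a = LOAD 'file' USING HiveColumnarLoader('uid bigint,ts long,arr array,m map');
 -- only uid and ts appear in the script, so Pig sends a RequiredFieldList
 -- containing just these two fields to pushProjection
 b = FOREACH a GENERATE uid, ts;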

setUDFContextSignature

public void setUDFContextSignature(String signature)
Description copied from class: LoadFunc
This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

Overrides:
setUDFContextSignature in class LoadFunc
Parameters:
signature - a unique signature to identify this LoadFunc

