org.apache.pig.builtin
Class PigStorage

java.lang.Object
  extended by org.apache.pig.LoadFunc
      extended by org.apache.pig.FileInputLoadFunc
          extended by org.apache.pig.builtin.PigStorage
All Implemented Interfaces:
LoadMetadata, LoadPushDown, OrderedLoadFunc, StoreFuncInterface, StoreMetadata
Direct Known Subclasses:
CSVExcelStorage, IndexedStorage, PigStorageSchema

public class PigStorage
extends FileInputLoadFunc
implements StoreFuncInterface, LoadPushDown, LoadMetadata, StoreMetadata

A load function that parses a line of input into fields using a character delimiter. The default delimiter is a tab. You can specify any character as a literal ("a"), a known escape character ("\\t"), or a decimal or hexadecimal value ("\\u001", "\\x0A").
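For illustration, a minimal Pig Latin sketch of the delimiter forms (paths and relation names are hypothetical):

    A = LOAD 'data' USING PigStorage();       -- default: tab
    B = LOAD 'data' USING PigStorage(',');    -- literal character
    C = LOAD 'data' USING PigStorage('\\t');  -- escape form, equivalent to the default tab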

An optional second constructor argument allows one to customize advanced behaviors. The available options are described below:

Schemas

If -schema is specified, a hidden ".pig_schema" file is created in the output directory when storing data. It is used by PigStorage (with or without -schema) during loading to determine the field names and types of the data without the need for the user to explicitly provide the schema in an as clause, unless -noschema is specified. No attempt to merge conflicting schemas is made during loading; the first schema encountered during a file system scan is used. If the '-schema' option is used during loading but the schema file is not present, an error results.

In addition, using -schema drops a ".pig_headers" file in the output directory. This file simply lists the delimited aliases. This is intended to make export to tools that can read files with header lines easier (just cat the header to your data).
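A minimal sketch of the round trip (paths hypothetical): storing with -schema writes the hidden files, and a later load picks the schema up without an as clause:

    STORE A INTO 'out' USING PigStorage(',', '-schema');
    -- 'out' now also contains .pig_schema and .pig_headers
    B = LOAD 'out' USING PigStorage(',');  -- field names and types come from .pig_schema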

Source tagging

If -tagsource is specified, PigStorage will prepend the input split path to each Tuple/row. The user needs to ensure pig.splitCombination is set to false. Usage:

    A = LOAD 'input' USING PigStorage(',', '-tagsource');
    B = FOREACH A GENERATE INPUT_FILE_NAME;

The first field in each Tuple will contain the input path (INPUT_FILE_NAME).

Note that regardless of whether or not you store the schema, you always need to specify the correct delimiter to read your data. If you store using delimiter "#" and then load using the default delimiter, your data will not be parsed correctly.
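For example (relation A hypothetical), the stored schema does not record the delimiter, so the load must repeat it:

    STORE A INTO 'out' USING PigStorage('#', '-schema');
    B = LOAD 'out' USING PigStorage('#');  -- parses correctly
    C = LOAD 'out' USING PigStorage();     -- default tab: each row comes back as one field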

Compression

Storing to a directory whose name ends in ".bz2" or ".gz" or ".lzo" (if you have installed support for LZO compression in Hadoop) will automatically use the corresponding compression codec.
The output.compression.enabled and output.compression.codec job properties also work.

Loading from directories ending in .bz2 or .bz works automatically; other compression formats are not auto-detected on loading.
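A sketch of both routes (paths hypothetical; the codec class is the standard Hadoop gzip codec):

    STORE A INTO 'out.gz' USING PigStorage();  -- directory suffix selects the gzip codec

    SET output.compression.enabled 'true';
    SET output.compression.codec 'org.apache.hadoop.io.compress.GzipCodec';
    STORE B INTO 'out2' USING PigStorage();    -- compressed via the job properties

    C = LOAD 'archive.bz2' USING PigStorage(); -- bzip2 input is handled automatically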


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown
LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse
 
Field Summary
protected  LoadCaster caster
           
protected  org.apache.hadoop.mapreduce.RecordReader in
           
protected  org.apache.commons.logging.Log mLog
           
protected  boolean[] mRequiredColumns
           
protected  ResourceSchema schema
           
protected  String signature
           
protected  org.apache.hadoop.mapreduce.RecordWriter writer
           
 
Constructor Summary
PigStorage()
           
PigStorage(String delimiter)
          Constructs a Pig loader that uses the specified character as a field delimiter.
PigStorage(String delimiter, String options)
          Constructs a Pig loader that uses the specified character as a field delimiter.
 
Method Summary
 void checkSchema(ResourceSchema s)
          Set the schema for data to be stored.
 void cleanupOnFailure(String location, org.apache.hadoop.mapreduce.Job job)
          This method will be called by Pig if the job which contains this store fails.
 boolean equals(Object obj)
           
 boolean equals(PigStorage other)
           
 List<LoadPushDown.OperatorSet> getFeatures()
          Determine the operators that can be pushed to the loader.
 org.apache.hadoop.mapreduce.InputFormat getInputFormat()
          This will be called during planning on the front end.
 Tuple getNext()
          Retrieves the next tuple to be processed.
 org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
          Return the OutputFormat associated with StoreFuncInterface.
 String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job)
          Find what columns are partition keys for this input.
 ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job)
          Get a schema for the data to be loaded.
 ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job)
          Get statistics about the data to be loaded.
 int hashCode()
           
 void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split)
          Initializes LoadFunc for reading data.
 void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
          Initialize StoreFuncInterface to write data.
 LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
          Indicate to the loader fields that will be needed.
 void putNext(Tuple f)
          Write a tuple to the data store.
 String relToAbsPathForStoreLocation(String location, org.apache.hadoop.fs.Path curDir)
          This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative.
 void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the loader the location of the object(s) being loaded.
 void setPartitionFilter(Expression partitionFilter)
          Set the filter for partitioning.
 void setStoreFuncUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end.
 void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the storer the location where the data needs to be stored.
 void setUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc.
 void storeSchema(ResourceSchema schema, String location, org.apache.hadoop.mapreduce.Job job)
          Store the schema of the data being written
 void storeStatistics(ResourceStatistics stats, String location, org.apache.hadoop.mapreduce.Job job)
          Store statistics about the data being written.
 
Methods inherited from class org.apache.pig.FileInputLoadFunc
getSplitComparable
 
Methods inherited from class org.apache.pig.LoadFunc
getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath, warn
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

in

protected org.apache.hadoop.mapreduce.RecordReader in

writer

protected org.apache.hadoop.mapreduce.RecordWriter writer

mLog

protected final org.apache.commons.logging.Log mLog

signature

protected String signature

schema

protected ResourceSchema schema

caster

protected LoadCaster caster

mRequiredColumns

protected boolean[] mRequiredColumns
Constructor Detail

PigStorage

public PigStorage()

PigStorage

public PigStorage(String delimiter)
Constructs a Pig loader that uses the specified character as a field delimiter.

Parameters:
delimiter - the single byte character that is used to separate fields. ("\t" is the default.)
Throws:
org.apache.commons.cli.ParseException

PigStorage

public PigStorage(String delimiter,
                  String options)
Constructs a Pig loader that uses the specified character as a field delimiter.

Understands the following options, which can be specified in the second parameter:

    -schema     Loads/stores the schema of the relation using a hidden ".pig_schema" file.
    -noschema   Ignores a stored schema during loading.
    -tagsource  Prepends the input split path to each Tuple/row.

Parameters:
delimiter - the single byte character that is used to separate fields.
options - a list of options that can be used to modify PigStorage behavior
Throws:
org.apache.commons.cli.ParseException
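For instance, a sketch of the two-argument form combining a comma delimiter with an option (path hypothetical):

    A = LOAD 'out' USING PigStorage(',', '-noschema');  -- ignore any stored .pig_schema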
Method Detail

getNext

public Tuple getNext()
              throws IOException
Description copied from class: LoadFunc
Retrieves the next tuple to be processed. Implementations should NOT reuse tuple objects (or inner member objects) they return across calls and should return a different tuple object in each call.

Specified by:
getNext in class LoadFunc
Returns:
the next tuple to be processed or null if there are no more tuples to be processed.
Throws:
IOException - if there is an exception while retrieving the next tuple

putNext

public void putNext(Tuple f)
             throws IOException
Description copied from interface: StoreFuncInterface
Write a tuple to the data store.

Specified by:
putNext in interface StoreFuncInterface
Parameters:
f - the tuple to store.
Throws:
IOException - if an exception occurs during the write

pushProjection

public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
                                                  throws FrontendException
Description copied from interface: LoadPushDown
Indicate to the loader fields that will be needed. This can be useful for loaders that access data stored in a columnar format, where indicating the columns to be accessed ahead of time will save scans. This method will not be invoked by the Pig runtime if all fields are required; implementations should therefore assume that if this method is not invoked, all fields from the input are required. If the loader function cannot make use of this information, it is free to ignore it by returning an appropriate Response.

Specified by:
pushProjection in interface LoadPushDown
Parameters:
requiredFieldList - RequiredFieldList indicating which columns will be needed. This structure is read-only; users cannot make changes to it inside pushProjection.
Returns:
Indicates which fields will be returned
Throws:
FrontendException
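PigStorage supports this push-down itself (see the mRequiredColumns field above). A Pig Latin sketch of a script that lets the runtime invoke it (path hypothetical):

    A = LOAD 'wide_data' USING PigStorage(',');
    B = FOREACH A GENERATE $0, $3;  -- only columns 0 and 3 are needed, so Pig can
                                    -- push this projection down into the loader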

equals

public boolean equals(Object obj)
Overrides:
equals in class Object

equals

public boolean equals(PigStorage other)

getInputFormat

public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
Description copied from class: LoadFunc
This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.

Specified by:
getInputFormat in class LoadFunc
Returns:
the InputFormat associated with this loader.

prepareToRead

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                          PigSplit split)
Description copied from class: LoadFunc
Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.

Specified by:
prepareToRead in class LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process

setLocation

public void setLocation(String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException
Description copied from class: LoadFunc
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

Specified by:
setLocation in class LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object to store or retrieve earlier stored information from the UDFContext
Throws:
IOException - if the location is not valid.

getOutputFormat

public org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
Description copied from interface: StoreFuncInterface
Return the OutputFormat associated with StoreFuncInterface. This will be called on the front end during planning and on the backend during execution.

Specified by:
getOutputFormat in interface StoreFuncInterface
Returns:
the OutputFormat associated with StoreFuncInterface

prepareToWrite

public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
Description copied from interface: StoreFuncInterface
Initialize StoreFuncInterface to write data. This will be called during execution before the call to putNext.

Specified by:
prepareToWrite in interface StoreFuncInterface
Parameters:
writer - RecordWriter to use.

setStoreLocation

public void setStoreLocation(String location,
                             org.apache.hadoop.mapreduce.Job job)
                      throws IOException
Description copied from interface: StoreFuncInterface
Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFuncInterface here is the return value of StoreFuncInterface.relToAbsPathForStoreLocation(String, Path). This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls. StoreFuncInterface.checkSchema(ResourceSchema) will be called before any call to StoreFuncInterface.setStoreLocation(String, Job).

Specified by:
setStoreLocation in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object
Throws:
IOException - if the location is not valid.

checkSchema

public void checkSchema(ResourceSchema s)
                 throws IOException
Description copied from interface: StoreFuncInterface
Set the schema for data to be stored. This will be called on the front end during planning if the store is associated with a schema. A Store function should implement this function to check that a given schema is acceptable to it. For example, it can check that the correct partition keys are included; a storage function to be written directly to an OutputFormat can make sure the schema will translate in a well defined way.

Specified by:
checkSchema in interface StoreFuncInterface
Parameters:
s - to be checked
Throws:
IOException - if this schema is not acceptable. It should include a detailed error message indicating what is wrong with the schema.

relToAbsPathForStoreLocation

public String relToAbsPathForStoreLocation(String location,
                                           org.apache.hadoop.fs.Path curDir)
                                    throws IOException
Description copied from interface: StoreFuncInterface
This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. The StoreFuncInterface implementation is free to choose how it converts a relative location to an absolute location, since this may depend on what the location string represents (an hdfs path or some other data source). The static method LoadFunc.getAbsolutePath(java.lang.String, org.apache.hadoop.fs.Path) provides a default implementation for hdfs and the hadoop local file system, and it can be used to implement this method.

Specified by:
relToAbsPathForStoreLocation in interface StoreFuncInterface
Parameters:
location - location as provided in the "store" statement of the script
curDir - the current working directory based on any "cd" statements in the script before the "store" statement. If there are no "cd" statements in the script, this would be the home directory -
/user/<username>
Returns:
the absolute location based on the arguments passed
Throws:
IOException - if the conversion is not possible

hashCode

public int hashCode()
Overrides:
hashCode in class Object

setUDFContextSignature

public void setUDFContextSignature(String signature)
Description copied from class: LoadFunc
This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store the LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

Overrides:
setUDFContextSignature in class LoadFunc
Parameters:
signature - a unique signature to identify this LoadFunc

getFeatures

public List<LoadPushDown.OperatorSet> getFeatures()
Description copied from interface: LoadPushDown
Determine the operators that can be pushed to the loader. Note that by indicating a loader can accept a certain operator (such as selection) the loader is not promising that it can handle all selections. When it is passed the actual operators to push down it will still have a chance to reject them.

Specified by:
getFeatures in interface LoadPushDown
Returns:
list of all features that the loader can support

setStoreFuncUDFContextSignature

public void setStoreFuncUDFContextSignature(String signature)
Description copied from interface: StoreFuncInterface
This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. This is necessary because in a Pig Latin script with multiple stores, the different instances of store functions need to be able to find their (and only their) data in the UDFContext object.

Specified by:
setStoreFuncUDFContextSignature in interface StoreFuncInterface
Parameters:
signature - a unique signature to identify this StoreFuncInterface

cleanupOnFailure

public void cleanupOnFailure(String location,
                             org.apache.hadoop.mapreduce.Job job)
                      throws IOException
Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store fails. Implementations can clean up output locations in this method to ensure that no incorrect/incomplete results are left in the output location

Specified by:
cleanupOnFailure in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Throws:
IOException

getSchema

public ResourceSchema getSchema(String location,
                                org.apache.hadoop.mapreduce.Job job)
                         throws IOException
Description copied from interface: LoadMetadata
Get a schema for the data to be loaded.

Specified by:
getSchema in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection, i.e. getSchema should always return the original schema even after pushProjection.
Throws:
IOException - if an exception occurs while determining the schema

getStatistics

public ResourceStatistics getStatistics(String location,
                                        org.apache.hadoop.mapreduce.Job job)
                                 throws IOException
Description copied from interface: LoadMetadata
Get statistics about the data to be loaded. If no statistics are available, then null should be returned.

Specified by:
getStatistics in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
statistics about the data to be loaded. If no statistics are available, then null should be returned.
Throws:
IOException - if an exception occurs while retrieving statistics

setPartitionFilter

public void setPartitionFilter(Expression partitionFilter)
                        throws IOException
Description copied from interface: LoadMetadata
Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. So if the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

Specified by:
setPartitionFilter in interface LoadMetadata
Parameters:
partitionFilter - that describes filter for partitioning
Throws:
IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.

getPartitionKeys

public String[] getPartitionKeys(String location,
                                 org.apache.hadoop.mapreduce.Job job)
                          throws IOException
Description copied from interface: LoadMetadata
Find what columns are partition keys for this input.

Specified by:
getPartitionKeys in interface LoadMetadata
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Returns:
array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys
Throws:
IOException - if an exception occurs while retrieving partition keys

storeSchema

public void storeSchema(ResourceSchema schema,
                        String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException
Description copied from interface: StoreMetadata
Store the schema of the data being written

Specified by:
storeSchema in interface StoreMetadata
Parameters:
schema - Schema to be recorded
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Throws:
IOException

storeStatistics

public void storeStatistics(ResourceStatistics stats,
                            String location,
                            org.apache.hadoop.mapreduce.Job job)
                     throws IOException
Description copied from interface: StoreMetadata
Store statistics about the data being written.

Specified by:
storeStatistics in interface StoreMetadata
Parameters:
stats - statistics to be recorded
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Throws:
IOException


Copyright © 2007-2012 The Apache Software Foundation