org.apache.pig.builtin
Class AvroStorage

java.lang.Object
  extended by org.apache.pig.LoadFunc
      extended by org.apache.pig.builtin.AvroStorage
All Implemented Interfaces:
LoadMetadata, LoadPushDown, StoreFuncInterface
Direct Known Subclasses:
TrevniStorage

public class AvroStorage
extends LoadFunc
implements StoreFuncInterface, LoadMetadata, LoadPushDown

Pig UDF for reading and writing Avro data.
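
A minimal Pig Latin sketch of typical use; the paths and relation name are illustrative assumptions, not part of this API:

  -- Load Avro records; the schema is read from the data files themselves.
  events = LOAD '/data/events' USING org.apache.pig.builtin.AvroStorage();

  -- Write the relation back out as Avro, letting AvroStorage derive the output schema.
  STORE events INTO '/data/events_copy' USING org.apache.pig.builtin.AvroStorage();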


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown
LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse
 
Field Summary
protected  boolean allowRecursive
           
protected  boolean doubleColonsToDoubleUnderscores
           
static String INPUT_AVRO_SCHEMA
          Pig property name for the input avro schema.
protected  org.apache.commons.logging.Log log
           
static String OUTPUT_AVRO_SCHEMA
          Pig property name for the output avro schema.
protected  LoadPushDown.RequiredFieldList requiredFieldList
          List of required fields passed by pig in a push down projection.
protected  org.apache.avro.Schema schema
           
protected  String udfContextSignature
          Context signature for this UDF instance.
protected static org.apache.hadoop.fs.PathFilter VISIBLE_FILES
          A PathFilter that filters out invisible files.
 
Constructor Summary
AvroStorage()
          Creates a new instance of AvroStorage without specifying a schema.
AvroStorage(String sn)
          Creates a new instance of AvroStorage with the given schema or record name.
AvroStorage(String sn, String opts)
          Creates a new instance of AvroStorage, specifying the schema (or record name) and additional options.
 
Method Summary
 void checkSchema(ResourceSchema rs)
          Set the schema for data to be stored.
 void cleanupOnFailure(String location, org.apache.hadoop.mapreduce.Job job)
          This method will be called by Pig if the job which contains this store fails.
 void cleanupOnSuccess(String location, org.apache.hadoop.mapreduce.Job job)
          This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required.
protected  org.apache.hadoop.fs.Path depthFirstSearchForFile(org.apache.hadoop.fs.FileStatus[] statusArray, org.apache.hadoop.fs.FileSystem fileSystem)
          Finds a valid path for a file from an array of FileStatus objects.
 org.apache.avro.Schema getAvroSchema(org.apache.hadoop.fs.Path[] p, org.apache.hadoop.mapreduce.Job job)
          Reads the avro schemas at the specified location.
protected  org.apache.avro.Schema getAvroSchema(String location, org.apache.hadoop.mapreduce.Job job)
          Reads the avro schema at the specified location.
 List<LoadPushDown.OperatorSet> getFeatures()
          Determine the operators that can be pushed to the loader.
 org.apache.avro.Schema getInputAvroSchema()
          Helper function that reads the input avro schema from the UDF Properties.
 org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.avro.generic.GenericData.Record> getInputFormat()
          This will be called during planning on the front end.
 Tuple getNext()
          Retrieves the next tuple to be processed.
protected  org.apache.avro.Schema getOutputAvroSchema()
          Utility function that gets the output schema from the udf properties for this instance of the store function.
 org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.NullWritable,Object> getOutputFormat()
          Return the OutputFormat associated with StoreFuncInterface.
 String[] getPartitionKeys(String location, org.apache.hadoop.mapreduce.Job job)
          Find what columns are partition keys for this input.
protected  Properties getProperties()
          Internal function for getting the Properties object associated with this UDF instance.
protected  Properties getProperties(Class c, String signature)
          Internal function for getting the Properties object associated with this UDF instance.
 ResourceSchema getSchema(String location, org.apache.hadoop.mapreduce.Job job)
          Get a schema for the data to be loaded.
 ResourceStatistics getStatistics(String location, org.apache.hadoop.mapreduce.Job job)
          Get statistics about the data to be loaded.
 void prepareToRead(org.apache.hadoop.mapreduce.RecordReader r, PigSplit s)
          Initializes LoadFunc for reading data.
 void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter w)
          Initialize StoreFuncInterface to write data.
 LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList rfl)
          Indicate to the loader fields that will be needed.
 void putNext(Tuple t)
          Write a tuple to the data store.
 String relToAbsPathForStoreLocation(String location, org.apache.hadoop.fs.Path curDir)
          This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative.
protected  void setInputAvroSchema(org.apache.avro.Schema s)
          Sets the input avro schema to the specified schema.
 void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the loader the location of the object(s) being loaded.
protected  void setOutputAvroSchema(org.apache.avro.Schema s)
          Sets the output avro schema to the specified schema.
 void setPartitionFilter(Expression partitionFilter)
          Set the filter for partitioning.
 void setStoreFuncUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end.
 void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the storer the location where the data needs to be stored.
 void setUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc.
 
Methods inherited from class org.apache.pig.LoadFunc
getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath, warn
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

allowRecursive

protected boolean allowRecursive

doubleColonsToDoubleUnderscores

protected boolean doubleColonsToDoubleUnderscores

schema

protected org.apache.avro.Schema schema

log

protected final org.apache.commons.logging.Log log

udfContextSignature

protected String udfContextSignature
Context signature for this UDF instance.


VISIBLE_FILES

protected static final org.apache.hadoop.fs.PathFilter VISIBLE_FILES
A PathFilter that filters out invisible files.


OUTPUT_AVRO_SCHEMA

public static final String OUTPUT_AVRO_SCHEMA
Pig property name for the output avro schema.

See Also:
Constant Field Values

INPUT_AVRO_SCHEMA

public static final String INPUT_AVRO_SCHEMA
Pig property name for the input avro schema.

See Also:
Constant Field Values

requiredFieldList

protected LoadPushDown.RequiredFieldList requiredFieldList
List of required fields passed by pig in a push down projection.

Constructor Detail

AvroStorage

public AvroStorage()
Creates a new instance of AvroStorage without specifying a schema. Useful for just loading data.


AvroStorage

public AvroStorage(String sn)
Creates a new instance of AvroStorage with the given schema or record name.

Parameters:
sn - Specifies the input/output schema or record name.

AvroStorage

public AvroStorage(String sn,
                   String opts)
Creates a new instance of AvroStorage, specifying the schema (or record name) and additional options.

Parameters:
sn - Specifies the input/output schema or record name.
opts - Options for AvroStorage (a usage sketch follows this list):
  • -namespace Namespace for an automatically generated output schema.
  • -schemafile Specifies URL for avro schema file from which to read the input schema (can be local file, hdfs, url, etc).
  • -examplefile Specifies URL for avro data file from which to copy the input schema (can be local file, hdfs, url, etc).
  • -allowrecursive Option to allow recursive schema definitions (default is false).
  • -doublecolons Option to translate Pig schema names with double colons to names with double underscores (default is false).
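
A short Pig Latin sketch of the two-argument form, using only options documented above; the record name, namespace, and paths are illustrative assumptions, and each option is assumed to be passed with its value inside a single quoted string:

  -- Read input whose schema is kept in a separate .avsc file.
  raw = LOAD '/incoming/events' USING AvroStorage('Event', '-schemafile /schemas/event.avsc');

  -- Write output under an explicit record name and namespace.
  STORE raw INTO '/warehouse/events' USING AvroStorage('Event', '-namespace com.example');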

Method Detail

    setUDFContextSignature

    public final void setUDFContextSignature(String signature)
    Description copied from class: LoadFunc
    This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

    Overrides:
    setUDFContextSignature in class LoadFunc
    Parameters:
    signature - a unique signature to identify this LoadFunc

    getProperties

    protected final Properties getProperties()
    Internal function for getting the Properties object associated with this UDF instance.

    Returns:
    The Properties object associated with this UDF instance

    getProperties

    protected final Properties getProperties(Class c,
                                             String signature)
    Internal function for getting the Properties object associated with this UDF instance.

    Parameters:
    c - Class of this UDF
    signature - Signature string
    Returns:
    The Properties object associated with this UDF instance

    getSchema

    public final ResourceSchema getSchema(String location,
                                          org.apache.hadoop.mapreduce.Job job)
                                   throws IOException
    Description copied from interface: LoadMetadata
    Get a schema for the data to be loaded.

    Specified by:
    getSchema in interface LoadMetadata
    Parameters:
    location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
    job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
    Returns:
    schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection, i.e. getSchema should always return the original schema even after pushProjection.
    Throws:
    IOException - if an exception occurs while determining the schema
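
    The schema this method returns is what the script sees at planning time; a minimal Pig Latin sketch, assuming an illustrative path:

      events = LOAD '/data/events' USING AvroStorage();
      -- DESCRIBE prints the Pig schema that getSchema() supplied for this loader.
      DESCRIBE events;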

    getAvroSchema

    protected final org.apache.avro.Schema getAvroSchema(String location,
                                                         org.apache.hadoop.mapreduce.Job job)
                                                  throws IOException
    Reads the avro schema at the specified location.

    Parameters:
    location - Location of file
    job - Hadoop job object
    Returns:
    an Avro Schema object derived from the specified file
    Throws:
    IOException

    getAvroSchema

    public org.apache.avro.Schema getAvroSchema(org.apache.hadoop.fs.Path[] p,
                                                org.apache.hadoop.mapreduce.Job job)
                                         throws IOException
    Reads the avro schemas at the specified location.

    Parameters:
    p - Locations (paths) of the files
    job - Hadoop job object
    Returns:
    an Avro Schema object derived from the specified file
    Throws:
    IOException

    depthFirstSearchForFile

    protected org.apache.hadoop.fs.Path depthFirstSearchForFile(org.apache.hadoop.fs.FileStatus[] statusArray,
                                                                org.apache.hadoop.fs.FileSystem fileSystem)
                                                         throws IOException
    Finds a valid path for a file from an array of FileStatus objects.

    Parameters:
    statusArray - Array of FileStatus objects in which to search for the file.
    fileSystem - FileSystem in which to search for the first file.
    Returns:
    The first file found.
    Throws:
    IOException

    getStatistics

    public final ResourceStatistics getStatistics(String location,
                                                  org.apache.hadoop.mapreduce.Job job)
                                           throws IOException
    Description copied from interface: LoadMetadata
    Get statistics about the data to be loaded. If no statistics are available, then null should be returned. If the implementing class also extends LoadFunc, then LoadFunc.setLocation(String, org.apache.hadoop.mapreduce.Job) is guaranteed to be called before this method.

    Specified by:
    getStatistics in interface LoadMetadata
    Parameters:
    location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
    job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
    Returns:
    statistics about the data to be loaded. If no statistics are available, then null should be returned.
    Throws:
    IOException - if an exception occurs while retrieving statistics

    getPartitionKeys

    public final String[] getPartitionKeys(String location,
                                           org.apache.hadoop.mapreduce.Job job)
                                    throws IOException
    Description copied from interface: LoadMetadata
    Find what columns are partition keys for this input.

    Specified by:
    getPartitionKeys in interface LoadMetadata
    Parameters:
    location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
    job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
    Returns:
    array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys
    Throws:
    IOException - if an exception occurs while retrieving partition keys

    setPartitionFilter

    public void setPartitionFilter(Expression partitionFilter)
                            throws IOException
    Description copied from interface: LoadMetadata
    Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. So if the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

    Specified by:
    setPartitionFilter in interface LoadMetadata
    Parameters:
    partitionFilter - that describes filter for partitioning
    Throws:
    IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.

    relToAbsPathForStoreLocation

    public final String relToAbsPathForStoreLocation(String location,
                                                     org.apache.hadoop.fs.Path curDir)
                                              throws IOException
    Description copied from interface: StoreFuncInterface
    This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. The StoreFuncInterface implementation is free to choose how it converts a relative location to an absolute location since this may depend on what the location string represent (hdfs path or some other data source). The static method LoadFunc.getAbsolutePath(java.lang.String, org.apache.hadoop.fs.Path) provides a default implementation for hdfs and hadoop local file system and it can be used to implement this method.

    Specified by:
    relToAbsPathForStoreLocation in interface StoreFuncInterface
    Parameters:
    location - location as provided in the "store" statement of the script
    curDir - the current working directory based on any "cd" statements in the script before the "store" statement. If there are no "cd" statements in the script, this would be the home directory - /user/<username>
    Returns:
    the absolute location based on the arguments passed
    Throws:
    IOException - if the conversion is not possible
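
    A small Pig Latin sketch of the relative-location case this method handles; the directories, paths, and relation name are illustrative assumptions, and cd is the Grunt command the description above refers to:

      results = LOAD '/data/events' USING AvroStorage();
      -- After cd, the relative location 'out' below resolves against the new
      -- working directory before setStoreLocation() receives it.
      cd /warehouse/jobs
      STORE results INTO 'out' USING AvroStorage();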

    getOutputFormat

    public org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.NullWritable,Object> getOutputFormat()
                                                                                                       throws IOException
    Description copied from interface: StoreFuncInterface
    Return the OutputFormat associated with StoreFuncInterface. This will be called on the front end during planning and on the backend during execution.

    Specified by:
    getOutputFormat in interface StoreFuncInterface
    Returns:
    the OutputFormat associated with StoreFuncInterface
    Throws:
    IOException - if an exception occurs while constructing the OutputFormat

    setStoreLocation

    public final void setStoreLocation(String location,
                                       org.apache.hadoop.mapreduce.Job job)
                                throws IOException
    Description copied from interface: StoreFuncInterface
    Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFuncInterface here is the return value of StoreFuncInterface.relToAbsPathForStoreLocation(String, Path) This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls. StoreFuncInterface.checkSchema(ResourceSchema) will be called before any call to StoreFuncInterface.setStoreLocation(String, Job).

    Specified by:
    setStoreLocation in interface StoreFuncInterface
    Parameters:
    location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
    job - The Job object
    Throws:
    IOException - if the location is not valid.

    checkSchema

    public final void checkSchema(ResourceSchema rs)
                           throws IOException
    Description copied from interface: StoreFuncInterface
    Set the schema for data to be stored. This will be called on the front end during planning if the store is associated with a schema. A Store function should implement this function to check that a given schema is acceptable to it. For example, it can check that the correct partition keys are included; a storage function to be written directly to an OutputFormat can make sure the schema will translate in a well defined way.

    Specified by:
    checkSchema in interface StoreFuncInterface
    Parameters:
    rs - to be checked
    Throws:
    IOException - if this schema is not acceptable. It should include a detailed error message indicating what is wrong with the schema.
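
    A short Pig Latin sketch of when this check runs; the field names, paths, and record name are illustrative assumptions:

      -- The AS clause gives this relation a Pig schema; checkSchema() is handed
      -- that schema when the STORE statement below is planned.
      users = LOAD '/data/users.tsv' AS (name:chararray, age:int);
      STORE users INTO '/data/users_avro' USING AvroStorage('User');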

    setOutputAvroSchema

    protected final void setOutputAvroSchema(org.apache.avro.Schema s)
    Sets the output avro schema to the specified schema.

    Parameters:
    s - An Avro schema

    getOutputAvroSchema

    protected final org.apache.avro.Schema getOutputAvroSchema()
    Utility function that gets the output schema from the udf properties for this instance of the store function.

    Returns:
    the output schema associated with this UDF

    prepareToWrite

    public final void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter w)
                              throws IOException
    Description copied from interface: StoreFuncInterface
    Initialize StoreFuncInterface to write data. This will be called during execution before the call to putNext.

    Specified by:
    prepareToWrite in interface StoreFuncInterface
    Parameters:
    w - RecordWriter to use.
    Throws:
    IOException - if an exception occurs during initialization

    putNext

    public final void putNext(Tuple t)
                       throws IOException
    Description copied from interface: StoreFuncInterface
    Write a tuple to the data store.

    Specified by:
    putNext in interface StoreFuncInterface
    Parameters:
    t - the tuple to store.
    Throws:
    IOException - if an exception occurs during the write

    setStoreFuncUDFContextSignature

    public final void setStoreFuncUDFContextSignature(String signature)
    Description copied from interface: StoreFuncInterface
    This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. This is necessary because in a Pig Latin script with multiple stores, the different instances of store functions need to be able to find their (and only their) data in the UDFContext object.

    Specified by:
    setStoreFuncUDFContextSignature in interface StoreFuncInterface
    Parameters:
    signature - a unique signature to identify this StoreFuncInterface

    cleanupOnFailure

    public final void cleanupOnFailure(String location,
                                       org.apache.hadoop.mapreduce.Job job)
                                throws IOException
    Description copied from interface: StoreFuncInterface
    This method will be called by Pig if the job which contains this store fails. Implementations can clean up output locations in this method to ensure that no incorrect/incomplete results are left in the output location

    Specified by:
    cleanupOnFailure in interface StoreFuncInterface
    Parameters:
    location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
    job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
    Throws:
    IOException

    setLocation

    public void setLocation(String location,
                            org.apache.hadoop.mapreduce.Job job)
                     throws IOException
    Description copied from class: LoadFunc
    Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

    Specified by:
    setLocation in class LoadFunc
    Parameters:
    location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
    job - the Job object; implementations can use it to store information in, or retrieve earlier stored information from, the UDFContext
    Throws:
    IOException - if the location is not valid.

    setInputAvroSchema

    protected final void setInputAvroSchema(org.apache.avro.Schema s)
    Sets the input avro schema to the specified schema.

    Parameters:
    s - The specified schema

    getInputAvroSchema

    public final org.apache.avro.Schema getInputAvroSchema()
    Helper function that reads the input avro schema from the UDF Properties.

    Returns:
    The input avro schema

    getInputFormat

    public org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.avro.generic.GenericData.Record> getInputFormat()
                                                                                                                                         throws IOException
    Description copied from class: LoadFunc
    This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.

    Specified by:
    getInputFormat in class LoadFunc
    Returns:
    the InputFormat associated with this loader.
    Throws:
    IOException - if there is an exception during InputFormat construction
    See Also:
    LoadFunc.getInputFormat()

    prepareToRead

    public final void prepareToRead(org.apache.hadoop.mapreduce.RecordReader r,
                                    PigSplit s)
                             throws IOException
    Description copied from class: LoadFunc
    Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.

    Specified by:
    prepareToRead in class LoadFunc
    Parameters:
    r - RecordReader to be used by this instance of the LoadFunc
    s - The input PigSplit to process
    Throws:
    IOException - if there is an exception during initialization

    getNext

    public final Tuple getNext()
                        throws IOException
    Description copied from class: LoadFunc
    Retrieves the next tuple to be processed. Implementations should NOT reuse tuple objects (or inner member objects) they return across calls and should return a different tuple object in each call.

    Specified by:
    getNext in class LoadFunc
    Returns:
    the next tuple to be processed or null if there are no more tuples to be processed.
    Throws:
    IOException - if there is an exception while retrieving the next tuple

    cleanupOnSuccess

    public void cleanupOnSuccess(String location,
                                 org.apache.hadoop.mapreduce.Job job)
                          throws IOException
    Description copied from interface: StoreFuncInterface
    This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required. Implementations can clean up output locations in this method to ensure that no incorrect/incomplete results are left in the output location

    Specified by:
    cleanupOnSuccess in interface StoreFuncInterface
    Parameters:
    location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
    job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
    Throws:
    IOException

    getFeatures

    public List<LoadPushDown.OperatorSet> getFeatures()
    Description copied from interface: LoadPushDown
    Determine the operators that can be pushed to the loader. Note that by indicating a loader can accept a certain operator (such as selection) the loader is not promising that it can handle all selections. When it is passed the actual operators to push down it will still have a chance to reject them.

    Specified by:
    getFeatures in interface LoadPushDown
    Returns:
    list of all features that the loader can support

    pushProjection

    public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList rfl)
                                                      throws FrontendException
    Description copied from interface: LoadPushDown
    Indicate to the loader fields that will be needed. This can be useful for loaders that access data that is stored in a columnar format, where indicating the columns to be accessed ahead of time will save scans. This method will not be invoked by the Pig runtime if all fields are required, so implementations should assume that if this method is not invoked, then all fields from the input are required. If the loader function cannot make use of this information, it is free to ignore it by returning an appropriate response.

    Specified by:
    pushProjection in interface LoadPushDown
    Parameters:
    rfl - RequiredFieldList indicating which columns will be needed. This structure is read only; users cannot modify it inside pushProjection.
    Returns:
    Indicates which fields will be returned
    Throws:
    FrontendException
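
    A brief Pig Latin sketch of a script shape that lets Pig push a projection into this loader; the path and field names are illustrative assumptions:

      events = LOAD '/data/events' USING AvroStorage();
      -- Only two fields are referenced, so Pig may push this projection down and
      -- AvroStorage can skip deserializing the remaining Avro columns.
      small = FOREACH events GENERATE user_id, event_time;
      DUMP small;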

