org.apache.pig
Class StoreFuncWrapper

java.lang.Object
  extended by org.apache.pig.StoreFuncWrapper
All Implemented Interfaces:
StoreFuncInterface
Direct Known Subclasses:
StoreFuncMetadataWrapper

public class StoreFuncWrapper
extends Object
implements StoreFuncInterface

Convenience class to extend when decorating a StoreFunc. It is deliberately not abstract, so that it will fail to compile if new methods are added to StoreFuncInterface. Subclasses must call setStoreFunc with an instance of StoreFuncInterface before any other method is called; failing to do so results in an IllegalArgumentException when that method is invoked.
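
For example, a decorating subclass might look like the following minimal sketch (the CountingStoreFunc name and the choice of PigStorage as the wrapped delegate are illustrative assumptions, not part of this class):

import java.io.IOException;
import org.apache.pig.StoreFuncWrapper;
import org.apache.pig.builtin.PigStorage;
import org.apache.pig.data.Tuple;

public class CountingStoreFunc extends StoreFuncWrapper {
    private long count = 0;

    public CountingStoreFunc() {
        // The wrapped StoreFuncInterface must be installed before Pig
        // invokes any other method on this wrapper.
        setStoreFunc(new PigStorage());
    }

    @Override
    public void putNext(Tuple tuple) throws IOException {
        count++;                // decoration: count every tuple stored
        super.putNext(tuple);   // delegate to the wrapped store function
    }
}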


Constructor Summary
protected StoreFuncWrapper()
           
 
Method Summary
 void checkSchema(ResourceSchema resourceSchema)
          Set the schema for data to be stored.
 void cleanupOnFailure(String location, org.apache.hadoop.mapreduce.Job job)
          This method will be called by Pig if the job which contains this store fails.
 void cleanupOnSuccess(String location, org.apache.hadoop.mapreduce.Job job)
          This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required.
protected  String getMethodName(int depth)
          Returns the name of a method in the call stack at the given depth.
 org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
          Return the OutputFormat associated with StoreFuncInterface.
 void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter recordWriter)
          Initialize StoreFuncInterface to write data.
 void putNext(Tuple tuple)
          Write a tuple to the data store.
 String relToAbsPathForStoreLocation(String location, org.apache.hadoop.fs.Path path)
          This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative.
protected  void setStoreFunc(StoreFuncInterface storeFunc)
          The wrapped StoreFuncInterface object must be set before method calls are made on this object.
 void setStoreFuncUDFContextSignature(String signature)
          This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface, which it can use to store, in the UDFContext, any information it needs to retain between method invocations in the front end and back end.
 void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the storer the location where the data needs to be stored.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

StoreFuncWrapper

protected StoreFuncWrapper()

Method Detail

setStoreFunc

protected void setStoreFunc(StoreFuncInterface storeFunc)
The wrapped StoreFuncInterface object must be set before method calls are made on this object. Typically this is done via the constructor, but sometimes the wrapped object cannot be properly initialized until later in the lifecycle of the wrapper object.

Parameters:
storeFunc - the StoreFuncInterface instance to wrap
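
When construction-time initialization is not possible, a subclass can install the delegate later in its lifecycle, as in this hypothetical sketch (the LazyStoreFunc name and initialize method are illustrative):

import org.apache.pig.StoreFuncInterface;
import org.apache.pig.StoreFuncWrapper;

public class LazyStoreFunc extends StoreFuncWrapper {
    // Install the delegate after construction, but before Pig invokes
    // any StoreFuncInterface method on this object.
    public void initialize(StoreFuncInterface delegate) {
        setStoreFunc(delegate);
    }
}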

relToAbsPathForStoreLocation

public String relToAbsPathForStoreLocation(String location,
                                           org.apache.hadoop.fs.Path path)
                                    throws IOException
Description copied from interface: StoreFuncInterface
This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. The StoreFuncInterface implementation is free to choose how it converts a relative location to an absolute location, since this may depend on what the location string represents (an HDFS path or some other data source). The static method LoadFunc.getAbsolutePath(java.lang.String, org.apache.hadoop.fs.Path) provides a default implementation for HDFS and the Hadoop local file system, and can be used to implement this method.

Specified by:
relToAbsPathForStoreLocation in interface StoreFuncInterface
Parameters:
location - location as provided in the "store" statement of the script
path - the current working directory, based on any "cd" statements in the script before the "store" statement. If there are no "cd" statements in the script, this would be the home directory - /user/<username>
Returns:
the absolute location based on the arguments passed
Throws:
IOException - if the conversion is not possible
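
A direct (non-wrapper) implementation might simply delegate to that default, as in this minimal sketch:

@Override
public String relToAbsPathForStoreLocation(String location, org.apache.hadoop.fs.Path curDir)
        throws IOException {
    // LoadFunc.getAbsolutePath handles HDFS and the Hadoop local file system.
    return org.apache.pig.LoadFunc.getAbsolutePath(location, curDir);
}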

getOutputFormat

public org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
                                                         throws IOException
Description copied from interface: StoreFuncInterface
Return the OutputFormat associated with StoreFuncInterface. This will be called on the front end during planning and on the backend during execution.

Specified by:
getOutputFormat in interface StoreFuncInterface
Returns:
the OutputFormat associated with StoreFuncInterface
Throws:
IOException - if an exception occurs while constructing the OutputFormat
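
As an illustrative sketch, a plain-text storer could return Hadoop's TextOutputFormat here; the specific format is an assumption, not something this wrapper mandates:

@Override
public org.apache.hadoop.mapreduce.OutputFormat getOutputFormat() throws IOException {
    // Raw type, matching the signature above; a real storer would
    // typically parameterize this.
    return new org.apache.hadoop.mapreduce.lib.output.TextOutputFormat();
}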

setStoreLocation

public void setStoreLocation(String location,
                             org.apache.hadoop.mapreduce.Job job)
                      throws IOException
Description copied from interface: StoreFuncInterface
Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFuncInterface here is the return value of StoreFuncInterface.relToAbsPathForStoreLocation(String, Path). This method will be called in the front end and back end multiple times; implementations should ensure that the repeated calls produce no inconsistent side effects. StoreFuncInterface.checkSchema(ResourceSchema) will be called before any call to StoreFuncInterface.setStoreLocation(String, Job).

Specified by:
setStoreLocation in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object
Throws:
IOException - if the location is not valid.
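
Because Pig may invoke this method repeatedly on both the front end and the back end, the body should be idempotent. A minimal sketch, assuming the output goes to a Hadoop FileOutputFormat:

@Override
public void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
        throws IOException {
    // Setting the output path is safe to repeat across multiple calls.
    org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(
            job, new org.apache.hadoop.fs.Path(location));
}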

checkSchema

public void checkSchema(ResourceSchema resourceSchema)
                 throws IOException
Description copied from interface: StoreFuncInterface
Set the schema for data to be stored. This will be called on the front end during planning if the store is associated with a schema. A Store function should implement this function to check that a given schema is acceptable to it. For example, it can check that the correct partition keys are included; a storage function to be written directly to an OutputFormat can make sure the schema will translate in a well-defined way.

Specified by:
checkSchema in interface StoreFuncInterface
Parameters:
resourceSchema - to be checked
Throws:
IOException - if this schema is not acceptable. It should include a detailed error message indicating what is wrong with the schema.
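
For example, a storer that depends on a particular field might validate the schema as in this hedged sketch (the required field name "id" is made up for illustration):

@Override
public void checkSchema(ResourceSchema resourceSchema) throws IOException {
    for (ResourceSchema.ResourceFieldSchema field : resourceSchema.getFields()) {
        if ("id".equals(field.getName())) {
            return;  // schema is acceptable
        }
    }
    throw new IOException("Expected a field named 'id' but the schema has none: "
            + resourceSchema);
}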

prepareToWrite

public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter recordWriter)
                    throws IOException
Description copied from interface: StoreFuncInterface
Initialize StoreFuncInterface to write data. This will be called during execution before the call to putNext.

Specified by:
prepareToWrite in interface StoreFuncInterface
Parameters:
recordWriter - RecordWriter to use.
Throws:
IOException - if an exception occurs during initialization

putNext

public void putNext(Tuple tuple)
             throws IOException
Description copied from interface: StoreFuncInterface
Write a tuple to the data store.

Specified by:
putNext in interface StoreFuncInterface
Parameters:
tuple - the tuple to store.
Throws:
IOException - if an exception occurs during the write
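
Taken together with prepareToWrite above, the write path of a typical storer looks roughly like the following sketch (serializing the tuple via toString is purely illustrative):

private org.apache.hadoop.mapreduce.RecordWriter writer;

@Override
public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter recordWriter) {
    // Keep the RecordWriter handed over before any putNext call.
    this.writer = recordWriter;
}

@Override
public void putNext(Tuple tuple) throws IOException {
    try {
        writer.write(null, new org.apache.hadoop.io.Text(tuple.toString()));
    } catch (InterruptedException e) {
        // Store functions commonly rewrap this as an IOException.
        throw new IOException(e);
    }
}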

setStoreFuncUDFContextSignature

public void setStoreFuncUDFContextSignature(String signature)
Description copied from interface: StoreFuncInterface
This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface. The signature can be used to store, in the UDFContext, any information that the StoreFuncInterface needs to retain between method invocations in the front end and back end. This is necessary because in a Pig Latin script with multiple stores, the different instances of store functions need to be able to find their (and only their) data in the UDFContext object.

Specified by:
setStoreFuncUDFContextSignature in interface StoreFuncInterface
Parameters:
signature - a unique signature to identify this StoreFuncInterface
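
A typical pattern, shown here as a minimal sketch (the helper name is illustrative), is to remember the signature and use it to key this storer's slice of the UDFContext:

private String signature;

@Override
public void setStoreFuncUDFContextSignature(String signature) {
    this.signature = signature;
}

// Hypothetical helper: properties scoped to this instance only, so
// multiple stores in one script do not collide.
private java.util.Properties getUdfProperties() {
    return org.apache.pig.impl.util.UDFContext.getUDFContext()
            .getUDFProperties(this.getClass(), new String[] { signature });
}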

cleanupOnFailure

public void cleanupOnFailure(String location,
                             org.apache.hadoop.mapreduce.Job job)
                      throws IOException
Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store fails. Implementations can clean up output locations in this method to ensure that no incorrect/incomplete results are left in the output location.

Specified by:
cleanupOnFailure in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Throws:
IOException
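
A minimal sketch of such cleanup, assuming the location is a file system path:

@Override
public void cleanupOnFailure(String location, org.apache.hadoop.mapreduce.Job job)
        throws IOException {
    org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(location);
    org.apache.hadoop.fs.FileSystem fs = path.getFileSystem(job.getConfiguration());
    fs.delete(path, true);  // recursively remove any partial output
}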

cleanupOnSuccess

public void cleanupOnSuccess(String location,
                             org.apache.hadoop.mapreduce.Job job)
                      throws IOException
Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required. Implementations can clean up output locations in this method to ensure that no incorrect/incomplete results are left in the output location.

Specified by:
cleanupOnSuccess in interface StoreFuncInterface
Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
Throws:
IOException

getMethodName

protected String getMethodName(int depth)
Returns the name of a method in the call stack at the given depth. Depth 0 returns the method that called getMethodName, depth 1 the method that called that one, and so on.

Parameters:
depth - how many frames up the call stack to look
Returns:
method name as String
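
One plausible way to implement such a helper (an assumption; the actual implementation may differ) is via the current thread's stack trace:

protected String getMethodName(int depth) {
    StackTraceElement[] stack = Thread.currentThread().getStackTrace();
    // stack[0] is getStackTrace itself and stack[1] is this method, so
    // the caller (depth 0) starts at index 2.
    return stack[2 + depth].getMethodName();
}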


Copyright © 2007-2012 The Apache Software Foundation