org.apache.pig.piggybank.storage
Class MultiStorage

java.lang.Object
  extended by org.apache.pig.StoreFunc
      extended by org.apache.pig.piggybank.storage.MultiStorage
All Implemented Interfaces:
StoreFuncInterface

public class MultiStorage
extends StoreFunc

This UDF is useful for dynamically splitting the output data into a set of directories and files based on a user-specified key field in the output tuple.

Sample usage:

A = LOAD 'mydata' USING PigStorage() AS (a, b, c);
STORE A INTO '/my/home/output' USING MultiStorage('/my/home/output', '0', 'bz2', '\\t');

Parameter details:
'/my/home/output' (Required) : The DFS path where output directories and files will be created.
'0' (Required) : Index of the field whose values should be used to create directories and files (field 'a' in this case).
'bz2' (Optional) : The compression type. Default is 'none'. Supported types are 'none', 'gz' and 'bz2'.
'\\t' (Optional) : Output field separator.

Let 'a1' and 'a2' be the unique values of field 'a'. The output may then look like this:

/my/home/output/a1/a1-0000
/my/home/output/a1/a1-0001
/my/home/output/a1/a1-0002
...
/my/home/output/a2/a2-0000
/my/home/output/a2/a2-0001
/my/home/output/a2/a2-0002

The suffix '0000*' is the task id of the mapper/reducer task executing this store. If the user does a GROUP BY on the key field before the STORE, all tuples for a particular group go to exactly one reducer, so in the case above there would be only one file each under the 'a1' and 'a2' directories. If the output is compressed, the subdirectories and output files carry the corresponding extension; for example, with bz2 one file would look like /my/home/output.bz2/a1.bz2/a1-0000.bz2.
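For an end-to-end picture, here is a minimal Pig Latin sketch that combines the GROUP BY pattern described above with the sample parameters. The jar path and the 'mydata' input are placeholders, not part of this API:

REGISTER piggybank.jar;  -- jar location is an assumption

A = LOAD 'mydata' USING PigStorage() AS (a, b, c);

-- Grouping on the split field first sends all tuples for a key to one
-- reducer, so each key directory ends up with a single output file.
B = GROUP A BY a;
C = FOREACH B GENERATE FLATTEN(A);

STORE C INTO '/my/home/output' USING
    org.apache.pig.piggybank.storage.MultiStorage('/my/home/output', '0', 'bz2', '\\t');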


Nested Class Summary
static class MultiStorage.MultiStorageOutputFormat
           
 
Constructor Summary
MultiStorage(String parentPathStr, String splitFieldIndex)
           
MultiStorage(String parentPathStr, String splitFieldIndex, String compression)
           
MultiStorage(String parentPathStr, String splitFieldIndex, String compression, String fieldDel)
          Constructor
 
Method Summary
 org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
          Return the OutputFormat associated with StoreFunc.
 void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
          Initialize StoreFunc to write data.
 void putNext(Tuple tuple)
          Write a tuple to the data store.
 void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the storer the location where the data needs to be stored.
 
Methods inherited from class org.apache.pig.StoreFunc
checkSchema, cleanupOnFailure, cleanupOnFailureImpl, relToAbsPathForStoreLocation, setStoreFuncUDFContextSignature, warn
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

MultiStorage

public MultiStorage(String parentPathStr,
                    String splitFieldIndex)

MultiStorage

public MultiStorage(String parentPathStr,
                    String splitFieldIndex,
                    String compression)

MultiStorage

public MultiStorage(String parentPathStr,
                    String splitFieldIndex,
                    String compression,
                    String fieldDel)
Constructor

Parameters:
parentPathStr - Parent output directory path
splitFieldIndex - Index of the key field used to split the output
compression - Compression type: 'bz2', 'bz', 'gz' or 'none'
fieldDel - Output record field delimiter
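
The shorter constructors fall back to defaults; in the piggybank source the compression default is 'none' and the field delimiter defaults to a tab, though this is worth verifying against the jar in use. A hedged Pig Latin sketch of the alternative forms (output paths are placeholders):

-- Two arguments: no compression, tab delimiter (defaults per the source).
STORE A INTO '/my/home/output' USING
    org.apache.pig.piggybank.storage.MultiStorage('/my/home/output', '0');

-- Three arguments: gzip compression, default delimiter.
STORE A INTO '/my/home/out_gz' USING
    org.apache.pig.piggybank.storage.MultiStorage('/my/home/out_gz', '0', 'gz');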
Method Detail

putNext

public void putNext(Tuple tuple)
             throws IOException
Description copied from class: StoreFunc
Write a tuple to the data store.

Specified by:
putNext in interface StoreFuncInterface
Specified by:
putNext in class StoreFunc
Parameters:
tuple - the tuple to store.
Throws:
IOException - if an exception occurs during the write

getOutputFormat

public org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
                                                         throws IOException
Description copied from class: StoreFunc
Return the OutputFormat associated with StoreFunc. This will be called on the frontend during planning and on the backend during execution.

Specified by:
getOutputFormat in interface StoreFuncInterface
Specified by:
getOutputFormat in class StoreFunc
Returns:
the OutputFormat associated with StoreFunc
Throws:
IOException - if an exception occurs while constructing the OutputFormat

prepareToWrite

public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
                    throws IOException
Description copied from class: StoreFunc
Initialize StoreFunc to write data. This will be called during execution on the backend before the call to putNext.

Specified by:
prepareToWrite in interface StoreFuncInterface
Specified by:
prepareToWrite in class StoreFunc
Parameters:
writer - RecordWriter to use.
Throws:
IOException - if an exception occurs during initialization

setStoreLocation

public void setStoreLocation(String location,
                             org.apache.hadoop.mapreduce.Job job)
                      throws IOException
Description copied from class: StoreFunc
Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFunc here is the return value of StoreFunc.relToAbsPathForStoreLocation(String, Path). This method will be called in the frontend and backend multiple times; implementations should bear this in mind and ensure there are no inconsistent side effects due to the repeated calls. StoreFunc.checkSchema(ResourceSchema) will be called before any call to StoreFunc.setStoreLocation(String, Job).

Specified by:
setStoreLocation in interface StoreFuncInterface
Specified by:
setStoreLocation in class StoreFunc
Parameters:
location - Location returned by StoreFunc.relToAbsPathForStoreLocation(String, Path)
job - The Job object
Throws:
IOException - if the location is not valid.
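
In script terms, the location handed to this method is derived from the STORE ... INTO clause; the class-level example passes the same path as the parentPathStr constructor argument, which is the pattern to follow. A minimal sketch (paths are placeholders):

-- The INTO path (made absolute by relToAbsPathForStoreLocation) is what
-- setStoreLocation receives; keep it in sync with parentPathStr.
STORE A INTO '/my/home/output' USING
    org.apache.pig.piggybank.storage.MultiStorage('/my/home/output', '0');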


Copyright © 2007-2012 The Apache Software Foundation