public class Storage extends LoadFunc implements StoreFuncInterface, LoadMetadata, StoreMetadata
```java
PigServer pigServer = new PigServer(ExecType.LOCAL);
Data data = resetData(pigServer);

data.set("foo",
    tuple("a"),
    tuple("b"),
    tuple("c")
    );

pigServer.registerQuery("A = LOAD 'foo' USING mock.Storage();");
pigServer.registerQuery("STORE A INTO 'bar' USING mock.Storage();");

List<Tuple> out = data.get("bar");

assertEquals(tuple("a"), out.get(0));
assertEquals(tuple("b"), out.get(1));
assertEquals(tuple("c"), out.get(2));
```

With Schema:

```java
PigServer pigServer = new PigServer(ExecType.LOCAL);
Data data = resetData(pigServer);

data.set("foo", "blah:chararray",
    tuple("a"),
    tuple("b"),
    tuple("c")
    );

pigServer.registerQuery("A = LOAD 'foo' USING mock.Storage();");
pigServer.registerQuery("B = FOREACH A GENERATE blah as a, blah as b;");
pigServer.registerQuery("STORE B INTO 'bar' USING mock.Storage();");

assertEquals(schema("a:chararray,b:chararray"), data.getSchema("bar"));

List<Tuple> out = data.get("bar");
assertEquals(tuple("a", "a"), out.get(0));
assertEquals(tuple("b", "b"), out.get(1));
assertEquals(tuple("c", "c"), out.get(2));
```
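A further illustrative sketch, not part of the original javadoc, showing the bag(...) helper used to check the output of a GROUP BY; the static imports and JUnit assertions are assumed to be the same as in the examples above:

```java
PigServer pigServer = new PigServer(ExecType.LOCAL);
Data data = resetData(pigServer);

// seed a keyed input; field names come from the schema string
data.set("input", "k:chararray, v:chararray",
    tuple("a", "1")
    );

pigServer.registerQuery("A = LOAD 'input' USING mock.Storage();");
pigServer.registerQuery("B = GROUP A BY k;");
pigServer.registerQuery("STORE B INTO 'output' USING mock.Storage();");

// GROUP yields (group, bag of the original tuples); the bag() helper builds the expected value
List<Tuple> out = data.get("output");
assertEquals(tuple("a", bag(tuple("a", "1"))), out.get(0));
```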
Modifier and Type | Class and Description
---|---
static class | Storage.Data: An isolated data store to avoid side effects
Constructor and Description
---
Storage()
Modifier and Type | Method and Description
---|---
static DataBag | bag(Tuple... tuples)
void | checkSchema(ResourceSchema s): Set the schema for data to be stored.
void | cleanupOnFailure(java.lang.String location, org.apache.hadoop.mapreduce.Job job): This method will be called by Pig if the job which contains this store fails.
void | cleanupOnSuccess(java.lang.String location, org.apache.hadoop.mapreduce.Job job): This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required.
org.apache.hadoop.mapreduce.InputFormat<?,?> | getInputFormat(): This will be called during planning on the front end.
LoadCaster | getLoadCaster(): This will be called on the front end during planning and not on the back end during execution.
Tuple | getNext(): Retrieves the next tuple to be processed.
org.apache.hadoop.mapreduce.OutputFormat<?,?> | getOutputFormat(): Return the OutputFormat associated with StoreFuncInterface.
java.lang.String[] | getPartitionKeys(java.lang.String location, org.apache.hadoop.mapreduce.Job job): Find what columns are partition keys for this input.
ResourceSchema | getSchema(java.lang.String location, org.apache.hadoop.mapreduce.Job job): Get a schema for the data to be loaded.
ResourceStatistics | getStatistics(java.lang.String location, org.apache.hadoop.mapreduce.Job job): Get statistics about the data to be loaded.
void | prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split): Initializes LoadFunc for reading data.
void | prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer): Initialize StoreFuncInterface to write data.
void | putNext(Tuple t): Write a tuple to the data store.
java.lang.String | relativeToAbsolutePath(java.lang.String location, org.apache.hadoop.fs.Path curDir): This method is called by the Pig runtime in the front end to convert the input location to an absolute path if the location is relative.
java.lang.String | relToAbsPathForStoreLocation(java.lang.String location, org.apache.hadoop.fs.Path curDir): This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative.
static Storage.Data | resetData(PigContext context): Reset the store and get the Data object to access it.
static Storage.Data | resetData(PigServer pigServer): Reset the store and get the Data object to access it.
static Schema | schema(java.lang.String schema)
void | setLocation(java.lang.String location, org.apache.hadoop.mapreduce.Job job): Communicate to the loader the location of the object(s) being loaded.
void | setPartitionFilter(Expression partitionFilter): Set the filter for partitioning.
void | setStoreFuncUDFContextSignature(java.lang.String signature): This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface, which it can use to store information in the UDFContext that it needs to keep between various method invocations in the front end and back end.
void | setStoreLocation(java.lang.String location, org.apache.hadoop.mapreduce.Job job): Communicate to the storer the location where the data needs to be stored.
void | setUDFContextSignature(java.lang.String signature): This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc.
void | storeSchema(ResourceSchema schema, java.lang.String location, org.apache.hadoop.mapreduce.Job job): Store the schema of the data being written.
void | storeStatistics(ResourceStatistics stats, java.lang.String location, org.apache.hadoop.mapreduce.Job job): Store statistics about the data being written.
static Tuple | tuple(java.lang.Object... objects)
Methods inherited from class org.apache.pig.LoadFunc: getAbsolutePath, getCacheFiles, getPathStrings, getShipFiles, join, warn
public static Tuple tuple(java.lang.Object... objects)

Parameters:
objects -

public static DataBag bag(Tuple... tuples)

Parameters:
tuples -

public static Schema schema(java.lang.String schema) throws ParserException

Parameters:
schema -

Throws:
ParserException - if the schema is invalid

public static Storage.Data resetData(PigServer pigServer)

Reset the store and get the Data object to access it.

Parameters:
pigServer -

public static Storage.Data resetData(PigContext context)

Reset the store and get the Data object to access it.

Parameters:
context -
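A minimal sketch of these static helpers in isolation, assuming the static import of org.apache.pig.builtin.mock.Storage.* used by the examples above:

```java
Tuple t = tuple("a", 1);                        // a tuple built from arbitrary objects
DataBag b = bag(tuple("a"), tuple("b"));        // a bag built from tuples
Schema s = schema("name:chararray, age:int");   // parsed with Pig's schema syntax;
                                                // throws ParserException if the string is invalid
```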
public java.lang.String relativeToAbsolutePath(java.lang.String location, org.apache.hadoop.fs.Path curDir) throws java.io.IOException

Description copied from class: LoadFunc
This method is called by the Pig runtime in the front end to convert the input location to an absolute path if the location is relative.

Overrides:
relativeToAbsolutePath in class LoadFunc

Parameters:
location - location as provided in the "load" statement of the script
curDir - the current working directory based on any "cd" statements in the script before the "load" statement. If there are no "cd" statements in the script, this would be the home directory - /user/<username>

Throws:
java.io.IOException - if the conversion is not possible

public void setLocation(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException
Description copied from class: LoadFunc
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

Overrides:
setLocation in class LoadFunc

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object; store or retrieve earlier stored information from the UDFContext

Throws:
java.io.IOException - if the location is not valid.

public org.apache.hadoop.mapreduce.InputFormat<?,?> getInputFormat() throws java.io.IOException
Description copied from class: LoadFunc
This will be called during planning on the front end.

Overrides:
getInputFormat in class LoadFunc

Throws:
java.io.IOException - if there is an exception during InputFormat construction

public LoadCaster getLoadCaster() throws java.io.IOException
Description copied from class: LoadFunc
This will be called on the front end during planning and not on the back end during execution.

Overrides:
getLoadCaster in class LoadFunc

Returns:
the LoadCaster associated with this loader. Returning null indicates that casts from byte array are not supported for this loader.

Throws:
java.io.IOException - if there is an exception during LoadCaster construction

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split) throws java.io.IOException
Description copied from class: LoadFunc
Initializes LoadFunc for reading data.

Overrides:
prepareToRead in class LoadFunc

Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process

Throws:
java.io.IOException - if there is an exception during initialization

public Tuple getNext() throws java.io.IOException

Description copied from class: LoadFunc
Retrieves the next tuple to be processed.

Overrides:
getNext in class LoadFunc
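A hedged sketch, not from the original javadoc, that exercises only the load path of mock.Storage; PigServer.openIterator drives getInputFormat(), prepareToRead(...) and getNext() without any STORE statement. Imports from org.apache.pig and java.util are assumed:

```java
PigServer pigServer = new PigServer(ExecType.LOCAL);
Data data = resetData(pigServer);
data.set("foo", tuple("a"), tuple("b"));

pigServer.registerQuery("A = LOAD 'foo' USING mock.Storage();");

// openIterator pulls the seeded tuples back through the loader
Iterator<Tuple> it = pigServer.openIterator("A");
while (it.hasNext()) {
    System.out.println(it.next());
}
```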
public void setUDFContextSignature(java.lang.String signature)

Description copied from class: LoadFunc
This method will be called by Pig both in the front end and back end to pass a unique signature to the LoadFunc. The signature can be used to store into the UDFContext any information which the LoadFunc needs to store between various method invocations in the front end and back end. A use case is to store LoadPushDown.RequiredFieldList passed to it in LoadPushDown.pushProjection(RequiredFieldList) for use in the back end before returning tuples in LoadFunc.getNext(). This method will be called before other methods in LoadFunc.

Overrides:
setUDFContextSignature in class LoadFunc

Parameters:
signature - a unique signature to identify this LoadFunc

public ResourceSchema getSchema(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException
Description copied from interface: LoadMetadata
Get a schema for the data to be loaded.

Specified by:
getSchema in interface LoadMetadata

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException - if an exception occurs while determining the schema

public ResourceStatistics getStatistics(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException
Description copied from interface: LoadMetadata
Get statistics about the data to be loaded. If the implementing class also extends LoadFunc, then LoadFunc.setLocation(String, org.apache.hadoop.mapreduce.Job) is guaranteed to be called before this method.

Specified by:
getStatistics in interface LoadMetadata

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException - if an exception occurs while retrieving statistics

public java.lang.String[] getPartitionKeys(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException
Description copied from interface: LoadMetadata
Find what columns are partition keys for this input.

Specified by:
getPartitionKeys in interface LoadMetadata

Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException - if an exception occurs while retrieving partition keys

public void setPartitionFilter(Expression partitionFilter) throws java.io.IOException
Description copied from interface: LoadMetadata
Set the filter for partitioning. If the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by the Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.

Specified by:
setPartitionFilter in interface LoadMetadata

Parameters:
partitionFilter - that describes the filter for partitioning

Throws:
java.io.IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.

public java.lang.String relToAbsPathForStoreLocation(java.lang.String location, org.apache.hadoop.fs.Path curDir) throws java.io.IOException
Description copied from interface: StoreFuncInterface
This method is called by the Pig runtime in the front end to convert the output location to an absolute path if the location is relative. LoadFunc.getAbsolutePath(java.lang.String, org.apache.hadoop.fs.Path) provides a default implementation for hdfs and hadoop local file system and it can be used to implement this method.

Specified by:
relToAbsPathForStoreLocation in interface StoreFuncInterface

Parameters:
location - location as provided in the "store" statement of the script
curDir - the current working directory based on any "cd" statements in the script before the "store" statement. If there are no "cd" statements in the script, this would be the home directory - /user/<username>

Throws:
java.io.IOException - if the conversion is not possible

public org.apache.hadoop.mapreduce.OutputFormat<?,?> getOutputFormat() throws java.io.IOException
Description copied from interface: StoreFuncInterface
Return the OutputFormat associated with StoreFuncInterface.

Specified by:
getOutputFormat in interface StoreFuncInterface

Returns:
the OutputFormat associated with StoreFuncInterface

Throws:
java.io.IOException - if an exception occurs while constructing the OutputFormat

public void setStoreLocation(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException
Description copied from interface: StoreFuncInterface
Communicate to the storer the location where the data needs to be stored. The location string passed to the StoreFuncInterface here is the return value of StoreFuncInterface.relToAbsPathForStoreLocation(String, Path). This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls. StoreFuncInterface.checkSchema(ResourceSchema) will be called before any call to StoreFuncInterface.setStoreLocation(String, Job).

Specified by:
setStoreLocation in interface StoreFuncInterface

Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object

Throws:
java.io.IOException - if the location is not valid.

public void checkSchema(ResourceSchema s) throws java.io.IOException
Description copied from interface: StoreFuncInterface
Set the schema for data to be stored.

Specified by:
checkSchema in interface StoreFuncInterface

Parameters:
s - to be checked

Throws:
java.io.IOException - if this schema is not acceptable. It should include a detailed error message indicating what is wrong with the schema.

public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer) throws java.io.IOException
Description copied from interface: StoreFuncInterface
Initialize StoreFuncInterface to write data.

Specified by:
prepareToWrite in interface StoreFuncInterface

Parameters:
writer - RecordWriter to use

Throws:
java.io.IOException - if an exception occurs during initialization

public void putNext(Tuple t) throws java.io.IOException
Description copied from interface: StoreFuncInterface
Write a tuple to the data store.

Specified by:
putNext in interface StoreFuncInterface

Parameters:
t - the tuple to store

Throws:
java.io.IOException - if an exception occurs during the write

public void setStoreFuncUDFContextSignature(java.lang.String signature)
Description copied from interface: StoreFuncInterface
This method will be called by Pig both in the front end and back end to pass a unique signature to the StoreFuncInterface which it can use to store information in the UDFContext which it needs to store between various method invocations in the front end and back end. This is necessary because in a Pig Latin script with multiple stores, the different instances of store functions need to be able to find their (and only their) data in the UDFContext object.

Specified by:
setStoreFuncUDFContextSignature in interface StoreFuncInterface

Parameters:
signature - a unique signature to identify this StoreFuncInterface

public void cleanupOnFailure(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException
Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store fails.

Specified by:
cleanupOnFailure in interface StoreFuncInterface

Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException
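Because a single script can contain several stores (see the setStoreFuncUDFContextSignature note above), each output location is keyed independently in the mock store. An illustrative sketch, not from the original javadoc, using the hypothetical locations 'out1' and 'out2':

```java
PigServer pigServer = new PigServer(ExecType.LOCAL);
Data data = resetData(pigServer);
data.set("in", tuple("a"), tuple("b"));

pigServer.registerQuery("A = LOAD 'in' USING mock.Storage();");
pigServer.registerQuery("STORE A INTO 'out1' USING mock.Storage();");
pigServer.registerQuery("STORE A INTO 'out2' USING mock.Storage();");

// each STORE location can be inspected separately under its own name
assertEquals(data.get("out1"), data.get("out2"));
```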
public void cleanupOnSuccess(java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException

Description copied from interface: StoreFuncInterface
This method will be called by Pig if the job which contains this store is successful, and some cleanup of intermediate resources is required.

Specified by:
cleanupOnSuccess in interface StoreFuncInterface

Parameters:
location - Location returned by StoreFuncInterface.relToAbsPathForStoreLocation(String, Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException
public void storeStatistics(ResourceStatistics stats, java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException

Description copied from interface: StoreMetadata
Store statistics about the data being written.

Specified by:
storeStatistics in interface StoreMetadata

Parameters:
stats - statistics to be recorded
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException
public void storeSchema(ResourceSchema schema, java.lang.String location, org.apache.hadoop.mapreduce.Job job) throws java.io.IOException

Description copied from interface: StoreMetadata
Store the schema of the data being written.

Specified by:
storeSchema in interface StoreMetadata

Parameters:
schema - Schema to be recorded
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.

Throws:
java.io.IOException
Copyright © 2007-2012 The Apache Software Foundation