org.apache.pig.piggybank.storage
Class XMLLoader

java.lang.Object
  extended by org.apache.pig.LoadFunc
      extended by org.apache.pig.piggybank.storage.XMLLoader

public class XMLLoader
extends LoadFunc

The load function to load an XML file. This implements the LoadFunc interface, which is used to parse records from a dataset. The helper conversion functions are inherited from loader.Utf8StorageConverter, which provides functions to cast raw byte data into various datatypes; other sections of the code can call back to the loader to perform those casts. The loader takes an xmlTag as its argument and uses it to split the input dataset into multiple records. For example, if the input XML (input.xml) is:

<configuration>
  <property>
    <name>foobar</name>
    <value>barfoo</value>
  </property>
  <ignoreProperty>
    <name>foo</name>
  </ignoreProperty>
  <property>
    <name>justname</name>
  </property>
</configuration>

and your Pig script is:

-- load the jar files
register /homes/aloks/pig/udfLib/loader.jar;
-- load the dataset using XMLLoader
-- A is the bag containing the tuple which contains one atom, i.e. doc; see output
A = load '/user/aloks/pig/input.xml' using loader.XMLLoader('property') as (doc:chararray);
-- dump the result
dump A;

then you will get the output:

(<property>
  <name>foobar</name>
  <value>barfoo</value>
</property>)
(<property>
  <name>justname</name>
</property>)

where each () indicates one record.
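The tag-based record splitting described above can be sketched in plain Java. This is an illustrative stand-in only: the real loader streams over Hadoop input splits via XMLLoader.XMLFileRecordReader rather than matching a regex against an in-memory string, and the class and method names below are invented for the sketch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative stand-in for XMLLoader's record splitting: pull out every
// <tag>...</tag> span as one record. The real loader does this while
// streaming over Hadoop input splits, not with a regex over a String.
public class TagRecordSplitter {
    public static List<String> split(String xml, String tag) {
        // DOTALL lets a record span multiple lines; the reluctant .*?
        // stops at the first matching close tag.
        Pattern p = Pattern.compile("<" + tag + ">.*?</" + tag + ">", Pattern.DOTALL);
        Matcher m = p.matcher(xml);
        List<String> records = new ArrayList<>();
        while (m.find()) {
            records.add(m.group());
        }
        return records;
    }
}
```

Calling split(xml, "property") on an input shaped like the example yields one record per <property>...</property> span and skips spans wrapped in any other tag.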


Nested Class Summary
static class XMLLoader.XMLFileInputFormat
           
static class XMLLoader.XMLFileRecordReader
           
 
Field Summary
protected  org.apache.commons.logging.Log mLog
          Logger from Pig.
 String recordIdentifier
          The record separator tag.
 
Constructor Summary
XMLLoader()
           
XMLLoader(String recordIdentifier)
          Constructs a Pig loader that uses the specified string as the record separator.
 
Method Summary
 Tuple createTuple(byte[] content)
           
 boolean equals(Object obj)
          Checks for equality with the given object.
 boolean equals(XMLLoader other)
          Checks for equality with another XMLLoader.
 org.apache.hadoop.mapreduce.InputFormat getInputFormat()
          This will be called during planning on the front end.
 Tuple getNext()
          Retrieves the next tuple to be processed.
 void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader, PigSplit split)
          Initializes LoadFunc for reading data.
 void setLocation(String location, org.apache.hadoop.mapreduce.Job job)
          Communicate to the loader the location of the object(s) being loaded.
 
Methods inherited from class org.apache.pig.LoadFunc
getAbsolutePath, getLoadCaster, getPathStrings, join, relativeToAbsolutePath, setUDFContextSignature
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

mLog

protected final org.apache.commons.logging.Log mLog
Logger from Pig.


recordIdentifier

public String recordIdentifier
The record separator tag. The default value is 'document'.

Constructor Detail

XMLLoader

public XMLLoader()

XMLLoader

public XMLLoader(String recordIdentifier)
Constructs a Pig loader that uses the specified string as the record separator. For example, if the recordIdentifier is document, it will treat each <document>.*</document> span as one record.

Parameters:
recordIdentifier - the xml tag which is used to pull records
Method Detail

getNext

public Tuple getNext()
              throws IOException
Retrieves the next tuple to be processed.

Specified by:
getNext in class LoadFunc
Returns:
the next tuple to be processed or null if there are no more tuples to be processed.
Throws:
IOException
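The getNext contract (return the next tuple on each call, null once the input is exhausted) is what drives Pig's read loop. A minimal pure-Java sketch of that contract, where String stands in for Pig's Tuple and the class name is invented for the sketch:

```java
import java.util.Iterator;
import java.util.List;

// Sketch of the LoadFunc.getNext() contract: hand back one record per
// call, then null once the input is exhausted. String stands in for
// Pig's Tuple; the class name is invented for this sketch.
public class SketchLoader {
    private final Iterator<String> records;

    public SketchLoader(List<String> records) {
        this.records = records.iterator();
    }

    // Mirrors getNext(): the next record, or null when there are no more.
    public String getNext() {
        return records.hasNext() ? records.next() : null;
    }
}
```

The caller's pattern is the usual `while ((t = loader.getNext()) != null)` loop; the null return, not an exception, signals end of input.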

createTuple

public Tuple createTuple(byte[] content)
                  throws Exception
Throws:
Exception
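The behavior of createTuple(byte[]) is not documented above; a plausible sketch, assuming it wraps the raw bytes of one matched record into a single-field chararray tuple. A List<String> stands in for Pig's Tuple here; the real method would build the tuple through Pig's TupleFactory.

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

// Sketch of createTuple(byte[]): wrap the raw bytes of one matched XML
// record into a one-field tuple. A List<String> stands in for Pig's
// Tuple; the real method would build the tuple via Pig's TupleFactory.
public class TupleSketch {
    public static List<String> createTuple(byte[] content) {
        // Field 0 corresponds to the single chararray column, e.g.
        // "as (doc:chararray)" in the Pig script.
        return Collections.singletonList(new String(content, StandardCharsets.UTF_8));
    }
}
```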

equals

public boolean equals(Object obj)
Checks for equality with the given object.

Overrides:
equals in class Object
Parameters:
obj - the object to compare against

equals

public boolean equals(XMLLoader other)
Checks for equality with another XMLLoader.

Parameters:
other - the XMLLoader to compare against
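A sketch of the equality semantics suggested by these two methods, assuming two loaders compare equal when they split records on the same recordIdentifier. Whether the real XMLLoader compares exactly this field is an assumption, and the class name is invented for the sketch.

```java
import java.util.Objects;

// Sketch of the equality semantics suggested by the two equals methods:
// two loaders are interchangeable when they split records on the same
// tag. Whether the real XMLLoader compares exactly this field is an
// assumption of this sketch.
public class LoaderIdentity {
    private final String recordIdentifier;

    public LoaderIdentity(String recordIdentifier) {
        this.recordIdentifier = recordIdentifier;
    }

    @Override
    public boolean equals(Object obj) {
        return obj instanceof LoaderIdentity
                && Objects.equals(recordIdentifier, ((LoaderIdentity) obj).recordIdentifier);
    }

    // Keep hashCode consistent with equals.
    @Override
    public int hashCode() {
        return Objects.hashCode(recordIdentifier);
    }
}
```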

getInputFormat

public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
                                                       throws IOException
Description copied from class: LoadFunc
This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.

Specified by:
getInputFormat in class LoadFunc
Returns:
the InputFormat associated with this loader.
Throws:
IOException - if there is an exception during InputFormat construction

prepareToRead

public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                          PigSplit split)
                   throws IOException
Description copied from class: LoadFunc
Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.

Specified by:
prepareToRead in class LoadFunc
Parameters:
reader - RecordReader to be used by this instance of the LoadFunc
split - The input PigSplit to process
Throws:
IOException - if there is an exception during initialization

setLocation

public void setLocation(String location,
                        org.apache.hadoop.mapreduce.Job job)
                 throws IOException
Description copied from class: LoadFunc
Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.

Specified by:
setLocation in class LoadFunc
Parameters:
location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
job - the Job object used to store or retrieve earlier stored information from the UDFContext
Throws:
IOException - if the location is not valid.
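The "called multiple times" caveat can be made concrete with a small stand-in: repeated setLocation calls with the same location must not accumulate duplicate state. This class is not the real implementation, which forwards the location to its underlying InputFormat through the Job object; the names here are invented for the sketch.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Stand-in illustrating the setLocation caveat: the backend calls the
// method multiple times, so repeated calls must not pile up duplicate
// state. A Set models whatever the loader stores in the Job; blindly
// appending to a List here would be the classic non-idempotent bug.
public class LocationSketch {
    private final Set<String> inputPaths = new LinkedHashSet<>();

    public void setLocation(String location) {
        inputPaths.add(location); // idempotent for a repeated location
    }

    public int pathCount() {
        return inputPaths.size();
    }
}
```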


Copyright © ${year} The Apache Software Foundation