org.apache.pig.piggybank.storage

Class AllLoader

    • Constructor Detail

      • AllLoader

        public AllLoader()
      • AllLoader

        public AllLoader(java.lang.String partitionFilter)
    • Method Detail

      • setLocation

        public void setLocation(java.lang.String location,
                       org.apache.hadoop.mapreduce.Job job)
                         throws java.io.IOException
        Description copied from class: LoadFunc
        Communicate to the loader the location of the object(s) being loaded. The location string passed to the LoadFunc here is the return value of LoadFunc.relativeToAbsolutePath(String, Path). Implementations should use this method to communicate the location (and any other information) to its underlying InputFormat through the Job object. This method will be called in the frontend and backend multiple times. Implementations should bear in mind that this method is called multiple times and should ensure there are no inconsistent side effects due to the multiple calls.
        Specified by:
        setLocation in class LoadFunc
        Parameters:
        location - Location as returned by LoadFunc.relativeToAbsolutePath(String, Path)
        job - the Job object used to store or retrieve earlier stored information from the UDFContext
        Throws:
        java.io.IOException - if the location is not valid.
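
        The requirement above that setLocation tolerate repeated calls can be made concrete with a small sketch. This is plain Java with hypothetical names (IdempotentLocation is not a Pig class); it only illustrates the idempotency expectation, assuming the loader stores the location rather than accumulating it:

        ```java
        import java.util.Objects;

        // Conceptual sketch (plain Java, no Hadoop/Pig dependency): because
        // setLocation may be called many times on both front end and back end,
        // implementations should be idempotent -- repeated calls with the same
        // location must leave the object in the same state as a single call.
        public class IdempotentLocation {
            private String location;

            public void setLocation(String location) {
                // Overwrite rather than append: calling this twice with the
                // same value has no extra side effects.
                this.location = Objects.requireNonNull(location);
            }

            public String getLocation() { return location; }

            public static void main(String[] args) {
                IdempotentLocation l = new IdempotentLocation();
                l.setLocation("/data/input");
                l.setLocation("/data/input"); // second call is harmless
                System.out.println(l.getLocation()); // prints /data/input
            }
        }
        ```

        A real loader would forward the location to its InputFormat via the Job configuration instead of a field, but the overwrite-not-append discipline is the same.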
      • getLoadCaster

        public LoadCaster getLoadCaster()
                                 throws java.io.IOException
        Description copied from class: LoadFunc
        This will be called on the front end during planning and not on the back end during execution.
        Overrides:
        getLoadCaster in class LoadFunc
        Returns:
        the LoadCaster associated with this loader. Returning null indicates that casts from byte array are not supported for this loader.
        Throws:
        java.io.IOException - if there is an exception during LoadCaster construction
      • getInputFormat

        public AllLoader.AllLoaderInputFormat getInputFormat()
                                                      throws java.io.IOException
        Description copied from class: LoadFunc
        This will be called during planning on the front end. This is the instance of InputFormat (rather than the class name) because the load function may need to instantiate the InputFormat in order to control how it is constructed.
        Specified by:
        getInputFormat in class LoadFunc
        Returns:
        the InputFormat associated with this loader.
        Throws:
        java.io.IOException - if there is an exception during InputFormat construction
      • prepareToRead

        public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
                         PigSplit split)
                           throws java.io.IOException
        Description copied from class: LoadFunc
        Initializes LoadFunc for reading data. This will be called during execution before any calls to getNext. The RecordReader needs to be passed here because it has been instantiated for a particular InputSplit.
        Specified by:
        prepareToRead in class LoadFunc
        Parameters:
        reader - RecordReader to be used by this instance of the LoadFunc
        split - The input PigSplit to process
        Throws:
        java.io.IOException - if there is an exception during initialization
      • getNext

        public Tuple getNext()
                      throws java.io.IOException
        Description copied from class: LoadFunc
        Retrieves the next tuple to be processed. Implementations should NOT reuse tuple objects (or inner member objects) they return across calls and should return a different tuple object in each call.
        Specified by:
        getNext in class LoadFunc
        Returns:
        the next tuple to be processed or null if there are no more tuples to be processed.
        Throws:
        java.io.IOException - if there is an exception while retrieving the next tuple
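
        The "do not reuse tuple objects" contract above can be illustrated without Pig on the classpath. The sketch below is plain Java with hypothetical names (FreshTupleSketch is not a Pig class, and a List stands in for a Tuple); it shows a reader that returns a fresh object per call and null at end of input:

        ```java
        import java.util.*;

        // Conceptual sketch (plain Java, no Pig dependency): getNext must
        // return a fresh tuple object on every call. Returning one mutable
        // object and overwriting it would corrupt downstream operators that
        // hold references to earlier tuples.
        public class FreshTupleSketch {
            private final Iterator<String[]> rows;

            FreshTupleSketch(List<String[]> data) { this.rows = data.iterator(); }

            // Mimics the getNext contract: a new List per call, null when
            // there are no more rows.
            List<String> getNext() {
                if (!rows.hasNext()) return null;
                return new ArrayList<>(Arrays.asList(rows.next())); // fresh object each call
            }

            public static void main(String[] args) {
                FreshTupleSketch r = new FreshTupleSketch(
                    Arrays.asList(new String[]{"a", "1"}, new String[]{"b", "2"}));
                List<String> t1 = r.getNext();
                List<String> t2 = r.getNext();
                System.out.println(t1 != t2);    // prints true: distinct objects
                System.out.println(r.getNext()); // prints null: end of input
            }
        }
        ```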
      • getFeatures

        public java.util.List<LoadPushDown.OperatorSet> getFeatures()
        Description copied from interface: LoadPushDown
        Determine the operators that can be pushed to the loader. Note that by indicating a loader can accept a certain operator (such as selection) the loader is not promising that it can handle all selections. When it is passed the actual operators to push down it will still have a chance to reject them.
        Specified by:
        getFeatures in interface LoadPushDown
        Returns:
        list of all features that the loader can support
      • pushProjection

        public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
                                                          throws FrontendException
        Description copied from interface: LoadPushDown
        Indicate to the loader the fields that will be needed. This can be useful for loaders that access data stored in a columnar format, where indicating the columns to be accessed ahead of time will save scans. This method will not be invoked by the Pig runtime if all fields are required, so implementations should assume that if this method is not invoked, all fields from the input are required. If the load function cannot make use of this information, it is free to ignore it by returning an appropriate RequiredFieldResponse.
        Specified by:
        pushProjection in interface LoadPushDown
        Parameters:
        requiredFieldList - RequiredFieldList indicating which columns will be needed. This structure is read-only; implementations cannot modify it inside pushProjection.
        Returns:
        Indicates which fields will be returned
        Throws:
        FrontendException
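
        What a loader does after accepting a projection push-down amounts to materializing only the requested columns. The sketch below is plain Java with hypothetical names (ProjectionSketch and its index list are not Pig API); it shows the column selection a columnar loader would perform internally:

        ```java
        import java.util.*;

        // Conceptual sketch (plain Java, no Pig dependency): after accepting
        // a RequiredFieldList, a loader emits only the requested columns of
        // each row, skipping reads of the others entirely.
        public class ProjectionSketch {
            static List<String> project(List<String> row, List<Integer> requiredIndexes) {
                List<String> out = new ArrayList<>();
                for (int i : requiredIndexes) {
                    out.add(row.get(i)); // copy only the needed columns
                }
                return out;
            }

            public static void main(String[] args) {
                List<String> row = Arrays.asList("a", "b", "c", "d");
                System.out.println(project(row, Arrays.asList(0, 2))); // prints [a, c]
            }
        }
        ```

        In a real columnar store the saving happens before this point, by never scanning the unrequested column files at all; the index selection is just the simplest way to show the contract.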
      • getSchema

        public ResourceSchema getSchema(java.lang.String location,
                               org.apache.hadoop.mapreduce.Job job)
                                 throws java.io.IOException
        Description copied from interface: LoadMetadata
        Get a schema for the data to be loaded.
        Specified by:
        getSchema in interface LoadMetadata
        Parameters:
        location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
        job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
        Returns:
        schema for the data to be loaded. This schema should represent all tuples of the returned data. If the schema is unknown or it is not possible to return a schema that represents all returned data, then null should be returned. The schema should not be affected by pushProjection; i.e., getSchema should always return the original schema even after pushProjection is called.
        Throws:
        java.io.IOException - if an exception occurs while determining the schema
      • getStatistics

        public ResourceStatistics getStatistics(java.lang.String location,
                                       org.apache.hadoop.mapreduce.Job job)
                                         throws java.io.IOException
        Description copied from interface: LoadMetadata
        Get statistics about the data to be loaded. If no statistics are available, then null should be returned. If the implementing class also extends LoadFunc, then LoadFunc.setLocation(String, org.apache.hadoop.mapreduce.Job) is guaranteed to be called before this method.
        Specified by:
        getStatistics in interface LoadMetadata
        Parameters:
        location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
        job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
        Returns:
        statistics about the data to be loaded. If no statistics are available, then null should be returned.
        Throws:
        java.io.IOException - if an exception occurs while retrieving statistics
      • storeStatistics

        public void storeStatistics(ResourceStatistics stats,
                           java.lang.String location,
                           org.apache.hadoop.mapreduce.Job job)
                             throws java.io.IOException
        Description copied from interface: StoreMetadata
        Store statistics about the data being written.
        Specified by:
        storeStatistics in interface StoreMetadata
        Parameters:
        stats - statistics to be recorded
        location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
        job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
        Throws:
        java.io.IOException - if an exception occurs while storing statistics
      • storeSchema

        public void storeSchema(ResourceSchema schema,
                       java.lang.String location,
                       org.apache.hadoop.mapreduce.Job job)
                         throws java.io.IOException
        Description copied from interface: StoreMetadata
        Store the schema of the data being written.
        Specified by:
        storeSchema in interface StoreMetadata
        Parameters:
        schema - Schema to be recorded
        location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
        job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
        Throws:
        java.io.IOException - if an exception occurs while storing the schema
      • getPartitionKeys

        public java.lang.String[] getPartitionKeys(java.lang.String location,
                                          org.apache.hadoop.mapreduce.Job job)
                                            throws java.io.IOException
        Description copied from interface: LoadMetadata
        Find what columns are partition keys for this input.
        Specified by:
        getPartitionKeys in interface LoadMetadata
        Parameters:
        location - Location as returned by LoadFunc.relativeToAbsolutePath(String, org.apache.hadoop.fs.Path)
        job - The Job object - this should be used only to obtain cluster properties through JobContext.getConfiguration() and not to set/query any runtime job information.
        Returns:
        array of field names of the partition keys. Implementations should return null to indicate that there are no partition keys
        Throws:
        java.io.IOException - if an exception occurs while retrieving partition keys
      • setPartitionFilter

        public void setPartitionFilter(Expression partitionFilter)
                                throws java.io.IOException
        Description copied from interface: LoadMetadata
        Set the filter for partitioning. It is assumed that this filter will only contain references to fields given as partition keys in getPartitionKeys. So if the implementation returns null in LoadMetadata.getPartitionKeys(String, Job), then this method is not called by Pig runtime. This method is also not called by the Pig runtime if there are no partition filter conditions.
        Specified by:
        setPartitionFilter in interface LoadMetadata
        Parameters:
        partitionFilter - the Expression that describes the filter for partitioning
        Throws:
        java.io.IOException - if the filter is not compatible with the storage mechanism or contains non-partition fields.
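
        The effect of a partition filter is that files under pruned partitions are never opened. The sketch below is plain Java with hypothetical names (PartitionPruneSketch and its string-keyed layout are not Pig API, and a simple lower-bound comparison stands in for a full filter Expression); it shows the pruning a loader would apply to a date-partitioned layout:

        ```java
        import java.util.*;

        // Conceptual sketch (plain Java, no Pig dependency): given partition
        // key values mapped to their files, keep only the files in partitions
        // whose key satisfies the filter, so the rest are never read.
        public class PartitionPruneSketch {
            // partitions: partition-key value -> files in that partition
            // (hypothetical layout); minKey stands in for a filter expression
            // such as "daydate >= minKey".
            static List<String> prune(SortedMap<String, List<String>> partitions, String minKey) {
                List<String> kept = new ArrayList<>();
                for (Map.Entry<String, List<String>> e : partitions.entrySet()) {
                    if (e.getKey().compareTo(minKey) >= 0) {
                        kept.addAll(e.getValue()); // partition passes the filter
                    }
                }
                return kept;
            }

            public static void main(String[] args) {
                SortedMap<String, List<String>> parts = new TreeMap<>();
                parts.put("2010-01-01", Arrays.asList("f1"));
                parts.put("2010-02-01", Arrays.asList("f2", "f3"));
                System.out.println(prune(parts, "2010-02-01")); // prints [f2, f3]
            }
        }
        ```

        Because the filter may reference only partition keys, this pruning can happen entirely from directory metadata, before any record is deserialized.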

Copyright © 2007-2012 The Apache Software Foundation