public class TrevniStorage extends AvroStorage implements LoadPushDown
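TrevniStorage plugs into Pig as both a load and a store function, so the most common way to exercise it is from a Pig Latin script. The sketch below drives such a script through PigServer in local mode; the input/output paths and the fully qualified class name org.apache.pig.builtin.TrevniStorage are assumptions made for illustration, not values taken from this page.

```java
import java.io.IOException;

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class TrevniStorageUsageSketch {
    public static void main(String[] args) throws IOException {
        // Local mode keeps the sketch self-contained; a production script would
        // normally run on a cluster instead.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Load Trevni data through TrevniStorage. The path and the fully
        // qualified class name are assumptions for this sketch.
        pig.registerQuery(
            "records = LOAD 'input/part-*.trv' USING org.apache.pig.builtin.TrevniStorage();");

        // Write the relation back out through the same storage function.
        pig.store("records", "output/trevni_copy", "org.apache.pig.builtin.TrevniStorage()");
    }
}
```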
Nested classes/interfaces inherited from interface org.apache.pig.LoadPushDown:
LoadPushDown.OperatorSet, LoadPushDown.RequiredField, LoadPushDown.RequiredFieldList, LoadPushDown.RequiredFieldResponse

Fields inherited from class AvroStorage:
allowRecursive, doubleColonsToDoubleUnderscores, INPUT_AVRO_SCHEMA, log, OUTPUT_AVRO_SCHEMA, requiredFieldList, schema, udfContextSignature

| Constructor and Description |
|---|
| TrevniStorage() Create new instance of TrevniStorage with no arguments (useful for loading files without specifying parameters). |
| TrevniStorage(java.lang.String sn, java.lang.String opts) Create new instance of TrevniStorage. |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.avro.Schema | getAvroSchema(org.apache.hadoop.fs.Path[] p, org.apache.hadoop.mapreduce.Job job) Reads the avro schemas at the specified location. |
| org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.avro.generic.GenericData.Record> | getInputFormat() This will be called during planning on the front end. |
| org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.NullWritable,java.lang.Object> | getOutputFormat() Return the OutputFormat associated with StoreFuncInterface. |
Methods inherited from class AvroStorage:
checkSchema, cleanupOnFailure, cleanupOnSuccess, getAvroSchema, getFeatures, getInputAvroSchema, getNext, getOutputAvroSchema, getPartitionKeys, getProperties, getProperties, getSchema, getShipFiles, getStatistics, prepareToRead, prepareToWrite, pushProjection, putNext, relToAbsPathForStoreLocation, setInputAvroSchema, setLocation, setOutputAvroSchema, setPartitionFilter, setStoreFuncUDFContextSignature, setStoreLocation, setUDFContextSignature, supportsParallelWriteToStoreLocation

Methods inherited from class org.apache.pig.LoadFunc:
addCredentials, getAbsolutePath, getCacheFiles, getGlobPaths, getLoadCaster, getPathStrings, join, relativeToAbsolutePath, warn

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.pig.LoadPushDown:
getFeatures, pushProjection

Methods inherited from the other implemented interfaces:
addCredentials, getCacheFiles

public TrevniStorage()
Create new instance of TrevniStorage with no arguments (useful for loading files without specifying parameters).
public TrevniStorage(java.lang.String sn,
                     java.lang.String opts)
Create new instance of TrevniStorage.
Parameters:
sn - Specifies the input/output schema or record name.
opts - Options for AvroStorage:
-namespace Namespace for an automatically generated output schema.
-schemafile Specifies URL for avro schema file from which to read the input schema (can be local file, hdfs, url, etc).
-examplefile Specifies URL for avro data file from which to copy the input schema (can be local file, hdfs, url, etc).
-allowrecursive Option to allow recursive schema definitions (default is false).
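A minimal construction sketch for the two-argument constructor follows, assuming the options are passed as a single space-separated string and that the class lives in org.apache.pig.builtin; the record name and namespace value are placeholders, not values from this page.

```java
import org.apache.pig.builtin.TrevniStorage;

public class TrevniStorageConstructorSketch {
    public static void main(String[] args) throws Exception {
        // "MyRecord" and the namespace are placeholders; only the option name
        // -namespace comes from the documentation above.
        String schemaName = "MyRecord";

        // The options are assumed to be passed as one space-separated string.
        String opts = "-namespace com.example.generated";

        TrevniStorage storage = new TrevniStorage(schemaName, opts);
        System.out.println("Created store/load func: " + storage.getClass().getName());
    }
}
```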
public org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.avro.generic.GenericData.Record> getInputFormat()
                                              throws java.io.IOException
Description copied from class: LoadFunc
This will be called during planning on the front end.
Overrides:
getInputFormat in class AvroStorage
Throws:
java.io.IOException - if there is an exception during InputFormat construction
See Also:
LoadFunc.getInputFormat()
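As a rough illustration of the front-end flow described above, the sketch below calls the inherited setLocation(...) (which Pig normally invokes itself with the path from the LOAD statement) before asking for the InputFormat; the path and the package name are assumptions for this sketch.

```java
import org.apache.avro.generic.GenericData;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.pig.builtin.TrevniStorage;

public class GetInputFormatSketch {
    public static void main(String[] args) throws Exception {
        TrevniStorage storage = new TrevniStorage();
        Job job = Job.getInstance();

        // Pig calls setLocation(...) during planning; the path here is a
        // placeholder and must point at real Trevni data to succeed.
        storage.setLocation("hdfs:///data/trevni/input", job);

        InputFormat<NullWritable, GenericData.Record> inputFormat = storage.getInputFormat();
        System.out.println("InputFormat class: " + inputFormat.getClass().getName());
    }
}
```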
public org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.NullWritable,java.lang.Object> getOutputFormat()
                                             throws java.io.IOException
Description copied from interface: StoreFuncInterface
Return the OutputFormat associated with StoreFuncInterface.
Specified by:
getOutputFormat in interface StoreFuncInterface
Overrides:
getOutputFormat in class AvroStorage
Returns:
the OutputFormat associated with StoreFuncInterface
Throws:
java.io.IOException - if an exception occurs while constructing the OutputFormat
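A similar hedged sketch for the store side: setStoreLocation(...) is normally called by Pig itself with the path from the STORE statement before getOutputFormat(); the record name, option value, output path, and package name below are placeholders.

```java
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.pig.builtin.TrevniStorage;

public class GetOutputFormatSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder schema name and options; see the constructor documentation above.
        TrevniStorage storage = new TrevniStorage("MyRecord", "-namespace com.example.generated");
        Job job = Job.getInstance();

        // Pig normally supplies the output location from the STORE statement.
        storage.setStoreLocation("hdfs:///data/trevni/output", job);

        OutputFormat<NullWritable, Object> outputFormat = storage.getOutputFormat();
        System.out.println("OutputFormat class: " + outputFormat.getClass().getName());
    }
}
```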
public org.apache.avro.Schema getAvroSchema(org.apache.hadoop.fs.Path[] p,
                                            org.apache.hadoop.mapreduce.Job job)
                                     throws java.io.IOException
Description copied from class: AvroStorage
Reads the avro schemas at the specified location.
Overrides:
getAvroSchema in class AvroStorage
Parameters:
p - Location of file
job - Hadoop job object
Throws:
java.io.IOException
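To show how the getAvroSchema signature above is typically exercised, here is a small hedged sketch; the input path is a placeholder and must point at existing Trevni data for the call to succeed, and the package name is again an assumption.

```java
import org.apache.avro.Schema;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.pig.builtin.TrevniStorage;

public class GetAvroSchemaSketch {
    public static void main(String[] args) throws Exception {
        TrevniStorage storage = new TrevniStorage();
        Job job = Job.getInstance();

        // Read the Avro schema of the data at the given (placeholder) location.
        Schema schema = storage.getAvroSchema(
                new Path[] { new Path("hdfs:///data/trevni/input") }, job);
        System.out.println(schema.toString(true));
    }
}
```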