public class HiveRCOutputFormat
extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.Writable>
Nested Class Summary

Modifier and Type | Class and Description |
---|---|
protected static class | HiveRCOutputFormat.Writer: RecordWriter wrapper around an RCFile.Writer |
Field Summary

Modifier and Type | Field and Description |
---|---|
static java.lang.String | COMPRESSION_CODEC_CONF |
static java.lang.String | DEFAULT_EXTENSION |
static java.lang.String | EXTENSION_OVERRIDE_CONF |
Constructor Summary

Constructor and Description |
---|
HiveRCOutputFormat() |
Method Summary

Modifier and Type | Method and Description |
---|---|
protected org.apache.hadoop.hive.ql.io.RCFile.Writer | createRCFileWriter(org.apache.hadoop.mapreduce.TaskAttemptContext job, org.apache.hadoop.io.Text columnMetadata) |
static int | getColumnNumber(org.apache.hadoop.conf.Configuration conf): Returns the number of columns set in the conf for writers. |
org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.Writable> | getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext job) |
static void | setColumnNumber(org.apache.hadoop.conf.Configuration conf, int columnNum): Sets the number of columns in the given configuration. |
Methods inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat:
checkOutputSpecs, getCompressOutput, getDefaultWorkFile, getOutputCommitter, getOutputCompressorClass, getOutputName, getOutputPath, getPathForWorkFile, getUniqueFile, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputName, setOutputPath
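Most of the inherited FileOutputFormat members above are static configuration helpers. A minimal sketch of the ones commonly set up front (output path, compression on/off, codec class) follows; the path and the codec choice are illustrative only.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InheritedHelpersSketch {
    // Configures output location and compression via the inherited FileOutputFormat helpers.
    public static void configure(Job job) {
        FileOutputFormat.setOutputPath(job, new Path("/tmp/rc-output")); // illustrative path
        FileOutputFormat.setCompressOutput(job, true);                   // turn on output compression
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class); // pick a codec
    }
}
```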
Field Detail

public static java.lang.String COMPRESSION_CODEC_CONF

public static java.lang.String DEFAULT_EXTENSION

public static java.lang.String EXTENSION_OVERRIDE_CONF
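This page does not say what values these configuration keys expect. The sketch below is an assumption-heavy illustration: it treats COMPRESSION_CODEC_CONF as naming a CompressionCodec class and EXTENSION_OVERRIDE_CONF as replacing DEFAULT_EXTENSION on the written file names; verify both against the HiveRCOutputFormat source before relying on them.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.GzipCodec;

public class HiveRCConfSketch {
    // HiveRCOutputFormat's package is not shown on this page, so its import is omitted.
    public static void apply(Configuration conf) {
        // Assumption: the key expects a CompressionCodec class name.
        conf.set(HiveRCOutputFormat.COMPRESSION_CODEC_CONF, GzipCodec.class.getName());
        // Assumption: the key overrides DEFAULT_EXTENSION for output file names.
        conf.set(HiveRCOutputFormat.EXTENSION_OVERRIDE_CONF, ".rc");
    }
}
```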
Method Detail

public static void setColumnNumber(org.apache.hadoop.conf.Configuration conf, int columnNum)

Sets the number of columns in the given configuration.

Parameters:
conf - configuration instance on which to set the column number
columnNum - number of columns for RCFile's Writer

public static int getColumnNumber(org.apache.hadoop.conf.Configuration conf)

Returns the number of columns set in the conf for writers.

Parameters:
conf - configuration instance from which to read the column number
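A minimal sketch of the column-count round trip using the two static methods above; the count of 4 is illustrative, and HiveRCOutputFormat's package is not shown on this page, so its import is omitted.

```java
import org.apache.hadoop.conf.Configuration;

public class ColumnCountSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Declare that each row written to the RCFile carries 4 columns.
        HiveRCOutputFormat.setColumnNumber(conf, 4);

        // Read the value back, e.g. when assembling column metadata for a writer.
        int columns = HiveRCOutputFormat.getColumnNumber(conf);
        System.out.println("columns = " + columns);
    }
}
```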
protected org.apache.hadoop.hive.ql.io.RCFile.Writer createRCFileWriter(org.apache.hadoop.mapreduce.TaskAttemptContext job, org.apache.hadoop.io.Text columnMetadata) throws java.io.IOException

Throws:
java.io.IOException
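Because createRCFileWriter is protected, it can only be reached from a subclass. The class below is a hypothetical sketch that intercepts the call and then delegates to the superclass implementation; the class name and the log line are invented for illustration.

```java
import java.io.IOException;

import org.apache.hadoop.hive.ql.io.RCFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical subclass; HiveRCOutputFormat's package/import is not shown on this page.
public class LoggingHiveRCOutputFormat extends HiveRCOutputFormat {
    @Override
    protected RCFile.Writer createRCFileWriter(TaskAttemptContext job, Text columnMetadata)
            throws IOException {
        // Observe the column metadata before handing off to the default implementation.
        System.out.println("Creating RCFile writer with column metadata: " + columnMetadata);
        return super.createRCFileWriter(job, columnMetadata);
    }
}
```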
public org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.Writable> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext job) throws java.io.IOException, java.lang.InterruptedException

Specified by:
getRecordWriter in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<org.apache.hadoop.io.NullWritable,org.apache.hadoop.io.Writable>

Throws:
java.io.IOException
java.lang.InterruptedException
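getRecordWriter is invoked by the MapReduce framework once HiveRCOutputFormat is installed as the job's output format. A sketch of that wiring follows, assuming a Hadoop 2 style Job API; BytesRefArrayWritable as the value class and the column count of 4 are assumptions, and HiveRCOutputFormat's import is omitted because its package is not shown on this page.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WriteRCFileJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The writers must know the column count before tasks start.
        HiveRCOutputFormat.setColumnNumber(conf, 4);

        Job job = Job.getInstance(conf, "write-rcfile");
        job.setOutputFormatClass(HiveRCOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        // Assumption: RCFile writers consume BytesRefArrayWritable rows.
        job.setOutputValueClass(BytesRefArrayWritable.class);

        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        // The framework calls getRecordWriter(TaskAttemptContext) per task;
        // user code never invokes it directly.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```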
Copyright © 2007-2012 The Apache Software Foundation