public class MapReducePartitionerWrapper
extends Partitioner
Spark Partitioner that wraps a custom partitioner implementing the
org.apache.hadoop.mapreduce.Partitioner interface.
Since Spark's shuffle API takes a different partitioner class
(see org.apache.spark.Partitioner) than MapReduce does, custom partitioners
written for MapReduce must be wrapped inside this Spark Partitioner.
Custom MapReduce partitioners are expected to implement getPartition() with
the following signature:
public int getPartition(PigNullableWritable key, Writable value, int numPartitions)
For an example of such a partitioner, see
org.apache.pig.test.utils.SimpleCustomPartitioner.
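The adaptation the wrapper performs can be sketched as follows. This is an illustrative, self-contained sketch using stand-in types, not the real Pig/Spark classes: MRPartitioner and SparkStylePartitioner stand in for org.apache.hadoop.mapreduce.Partitioner and org.apache.spark.Partitioner respectively, and the MRWrapper name is hypothetical. The key point it shows is the signature mismatch: Spark's getPartition() receives only the key, so the wrapper must call the MapReduce-style getPartition(key, value, numPartitions) without a real value.

```java
// Stand-in for the MapReduce-style partitioner contract
// (illustrative; the real interface is org.apache.hadoop.mapreduce.Partitioner).
interface MRPartitioner<K, V> {
    int getPartition(K key, V value, int numPartitions);
}

// Stand-in for Spark's Partitioner, which only sees the key at shuffle time
// (illustrative; the real class is org.apache.spark.Partitioner).
abstract class SparkStylePartitioner {
    abstract int numPartitions();
    abstract int getPartition(Object key);
}

// Hypothetical wrapper: adapts the three-argument MapReduce signature to
// Spark's key-only signature by passing null for the unavailable value.
class MRWrapper extends SparkStylePartitioner {
    private final MRPartitioner<Object, Object> mrPartitioner;
    private final int parts;

    MRWrapper(MRPartitioner<Object, Object> mrPartitioner, int parts) {
        this.mrPartitioner = mrPartitioner;
        this.parts = parts;
    }

    @Override
    int numPartitions() {
        return parts;
    }

    @Override
    int getPartition(Object key) {
        // The value is not available in Spark's shuffle API, so pass null;
        // a wrapped MapReduce partitioner must therefore not depend on it.
        return mrPartitioner.getPartition(key, null, parts);
    }
}

public class Demo {
    public static void main(String[] args) {
        // A simple hash-based MapReduce-style partitioner.
        MRPartitioner<Object, Object> byHash =
                (key, value, n) -> Math.floorMod(key.hashCode(), n);
        SparkStylePartitioner wrapped = new MRWrapper(byHash, 4);
        System.out.println(wrapped.getPartition("some-key"));
    }
}
```

The wrapped partitioner can then be handed to Spark's shuffle machinery like any native Partitioner; the limitation, visible above, is that value-dependent MapReduce partitioners cannot be supported, since Spark partitions on the key alone.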