Reading and Writing Custom-Formatted HDFS Data with gphdfs (Deprecated)

Note: The gphdfs external table protocol is deprecated and will be removed in the next major release of Greenplum Database.

Use MapReduce and the CREATE EXTERNAL TABLE command to read and write data with custom formats on HDFS.

To read custom-formatted data:

  1. Author and run a MapReduce job that creates a copy of the data in a format accessible to Greenplum Database.
  2. Use CREATE EXTERNAL TABLE to read the data into Greenplum Database.

See Example 1 - Read Custom-Formatted Data from HDFS.

To write custom-formatted data:

  1. Write the data from Greenplum Database to HDFS in Greenplum Database format, using a writable external table.
  2. Author and run a MapReduce program to convert the data to the custom format and place it on the Hadoop Distributed File System.

See Example 2 - Write Custom-Formatted Data from Greenplum Database to HDFS.

MapReduce code is written in Java. Greenplum provides Java APIs for use in the MapReduce code. The Javadoc is available in the $GPHOME/docs directory. To view the Javadoc, expand the file gnet-1.2-javadoc.tar and open index.html. The Javadoc documents the following packages:

  • com.emc.greenplum.gpdb.hadoop.io
  • com.emc.greenplum.gpdb.hadoop.mapred
  • com.emc.greenplum.gpdb.hadoop.mapreduce.lib.input
  • com.emc.greenplum.gpdb.hadoop.mapreduce.lib.output

The HDFS cross-connect packages contain a Java library with the classes GPDBWritable, GPDBInputFormat, and GPDBOutputFormat. The library jar files are available in $GPHOME/lib/hadoop. Compile and run the MapReduce job with the cross-connect jar that matches your Hadoop distribution; for example, use hdp-gnet-1.2.0.0.jar if you use the HDP distribution of Hadoop.

To make the Java library available to all Hadoop users, the Hadoop cluster administrator should place the corresponding gphdfs connector jar in the $HADOOP_HOME/lib directory and restart the job tracker. If this is not done, a Hadoop user can still use the gphdfs connector jar, but must distribute it to the task nodes with the distributed cache technique.
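If the connector jar is not installed cluster-wide, the job driver can ship it to the task nodes through the distributed cache. The following is a minimal, hypothetical sketch: the HDFS path of the jar and the class name are assumptions (copy the jar from $GPHOME/lib/hadoop to an HDFS location first), and newer Hadoop releases offer Job.addFileToClassPath() as an equivalent.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;

public class DemoDriver {
  public static Job createJob() throws Exception {
    Configuration conf = new Configuration(true);
    // Hypothetical HDFS location of the gphdfs connector jar; adjust for your cluster.
    DistributedCache.addFileToClassPath(
        new Path("/user/hadoop/lib/hdp-gnet-1.2.0.0.jar"), conf);
    // The jar is now on the classpath of every map and reduce task for this job.
    return new Job(conf, "test1");
  }
}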


Example 1 - Read Custom-Formatted Data from HDFS

The sample code makes the following assumptions.

  • The data is contained in HDFS directory /demo/data/temp and the name node is running on port 8081.
  • This code writes the data in Greenplum Database format to /demo/data/MRTest1 on HDFS.
  • The data contains the following columns, in order.
    1. A long integer
    2. A Boolean
    3. A text string

Sample MapReduce Code

import com.emc.greenplum.gpdb.hadoop.io.GPDBWritable;
import com.emc.greenplum.gpdb.hadoop.mapreduce.lib.input.GPDBInputFormat;
import com.emc.greenplum.gpdb.hadoop.mapreduce.lib.output.GPDBOutputFormat;
import java.io.*;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.util.*;

public class demoMR {

  /*
   * Helper routine to create our generic record. This section shows the
   * format of the data. Modify as necessary.
   */
  public static GPDBWritable generateGenericRecord() throws IOException {
    int[] colType = new int[3];
    colType[0] = GPDBWritable.BIGINT;
    colType[1] = GPDBWritable.BOOLEAN;
    colType[2] = GPDBWritable.VARCHAR;

    /*
     * This section passes the values of the data. Modify as necessary.
     */
    GPDBWritable gw = new GPDBWritable(colType);
    gw.setLong   (0, (long)12345);
    gw.setBoolean(1, true);
    gw.setString (2, "abcdef");
    return gw;
  }

  /*
   * DEMO Map/Reduce class test1
   * -- Regardless of the input, this section dumps the generic record
   *    into GPDBFormat.
   */
  public static class Map_test1
      extends Mapper<LongWritable, Text, LongWritable, GPDBWritable> {
    private LongWritable word = new LongWritable(1);

    public void map(LongWritable key, Text value, Context context)
        throws IOException {
      try {
        GPDBWritable gw = generateGenericRecord();
        context.write(word, gw);
      }
      catch (Exception e) {
        throw new IOException(e.getMessage());
      }
    }
  }

  public static void runTest1() throws Exception {
    Configuration conf = new Configuration(true);
    Job job = new Job(conf, "test1");
    job.setJarByClass(demoMR.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(GPDBWritable.class);
    job.setOutputFormatClass(GPDBOutputFormat.class);
    job.setMapperClass(Map_test1.class);
    FileInputFormat.setInputPaths(job, new Path("/demo/data/temp"));
    GPDBOutputFormat.setOutputPath(job, new Path("/demo/data/MRTest1"));
    job.waitForCompletion(true);
  }
}

Run CREATE EXTERNAL TABLE

The Hadoop location corresponds to the output path in the MapReduce job. The column definitions match the record that generateGenericRecord() creates; the column names here are arbitrary.

=# CREATE EXTERNAL TABLE demodata (id bigint, flag boolean, txt varchar)
   LOCATION ('gphdfs://hdfshost-1:8081/demo/data/MRTest1')
   FORMAT 'custom' (formatter='gphdfs_import');

Example 2 - Write Custom-Formatted Data from Greenplum Database to HDFS

The sample code makes the following assumptions.

  • The data in Greenplum Database format is located on the Hadoop Distributed File System in /demo/data/writeFromGPDB_42, and the name node is running on port 8081.
  • This code reads that data and writes it as text to /demo/data/MRTest2 on HDFS.

  1. Run a SQL command to create the writable external table. Greenplum Database writes the table data to HDFS in Greenplum Database format; the column definitions here are illustrative.

     =# CREATE WRITABLE EXTERNAL TABLE demodata (id bigint, flag boolean, txt varchar)
        LOCATION ('gphdfs://hdfshost-1:8081/demo/data/writeFromGPDB_42')
        FORMAT 'custom' (formatter='gphdfs_export');
  2. Author and run code for a MapReduce job. Use the same import statements shown in Example 1 - Read Custom-Formatted Data from HDFS.

Sample MapReduce Code

/*
 * DEMO Map/Reduce class test2
 * -- Convert GPDBFormat back to TEXT
 */
public static class Map_test2
    extends Mapper<LongWritable, GPDBWritable, Text, NullWritable> {
  public void map(LongWritable key, GPDBWritable value, Context context)
      throws IOException {
    try {
      context.write(new Text(value.toString()), NullWritable.get());
    } catch (Exception e) { throw new IOException(e.getMessage()); }
  }
}

public static void runTest2() throws Exception {
  Configuration conf = new Configuration(true);
  Job job = new Job(conf, "test2");
  job.setJarByClass(demoMR.class);
  job.setInputFormatClass(GPDBInputFormat.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(NullWritable.class);
  job.setOutputFormatClass(TextOutputFormat.class);
  job.setMapperClass(Map_test2.class);
  GPDBInputFormat.setInputPaths(job, new Path("/demo/data/writeFromGPDB_42"));
  GPDBOutputFormat.setOutputPath(job, new Path("/demo/data/MRTest2"));
  job.waitForCompletion(true);
}
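
Neither listing includes a main entry point for submitting the jobs with the hadoop command. Assuming Map_test2 and runTest2 are added to the same demoMR class as Example 1, a minimal, hypothetical main method could dispatch between the two jobs; the jar name and argument handling below are illustrative only.

// Hypothetical entry point -- not part of the original samples.
// Example: hadoop jar demo.jar demoMR test2
public static void main(String[] args) throws Exception {
  if (args.length > 0 && args[0].equals("test2")) {
    runTest2();   // convert GPDB-format data on HDFS back to text
  } else {
    runTest1();   // write a generic GPDB-format record to HDFS
  }
}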