Phoenix Map Reduce

Phoenix provides support for reading from and writing to Phoenix tables from within MapReduce jobs. The framework provides two custom classes for this, PhoenixInputFormat and PhoenixOutputFormat.

PhoenixMapReduceUtil provides several utility methods for setting the input and output configuration parameters on the job, as sketched below.
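
As a rough sketch of the two configuration paths (the exact overloads vary somewhat across Phoenix versions, so treat these signatures as illustrative; StockWritable is the DBWritable implementation defined later on this page):

  // Input, option 1: table name, optional WHERE-clause conditions, and explicit columns
  PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK",
      null /* conditions */, "STOCK_NAME", "RECORDING_YEAR");

  // Input, option 2: table name plus a full SELECT query
  PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK",
      "SELECT STOCK_NAME, RECORDING_YEAR FROM STOCK");

  // Output: the target table and a comma-separated list of columns to upsert
  PhoenixMapReduceUtil.setOutput(job, "STOCK_STATS", "STOCK_NAME,MAX_RECORDING");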

When a Phoenix table is the source for the MapReduce job, we can either provide a SELECT query or pass a table name and the specific columns to import. To retrieve data from the table within the mapper class, we need a class that implements DBWritable and pass it as an argument to the PhoenixMapReduceUtil.setInput method. The custom DBWritable class provides an implementation of readFields(ResultSet rs) that allows us to retrieve the columns for each row. This custom DBWritable class forms the input value to the mapper class (see the StockWritable class below).

Note: The SELECT query must not perform any aggregation or use DISTINCT, as these are not supported by the map-reduce integration. For example:
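
  -- Supported: a plain projection, optionally filtered
  SELECT STOCK_NAME, RECORDING_YEAR FROM STOCK WHERE RECORDING_YEAR > 2007;

  -- Not supported as a MapReduce input query: aggregation
  SELECT STOCK_NAME, MAX(RECORDING_YEAR) FROM STOCK GROUP BY STOCK_NAME;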

Similarly, when writing to a Phoenix table, we use the PhoenixMapReduceUtil.setOutput method to set the output table and its columns.

Note: Phoenix internally builds the UPSERT query for you.
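
For the STOCK_STATS output configured in this example, the generated statement is roughly equivalent to the following, with its parameters bound by the DBWritable's write(PreparedStatement pstmt) method:

  UPSERT INTO STOCK_STATS (STOCK_NAME, MAX_RECORDING) VALUES (?, ?);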

The job's output key class should always be NullWritable, and its output value class should be the custom DBWritable class, whose write(PreparedStatement pstmt) method binds the values to upsert.

Let’s dive into an example where we have a table, STOCK, that holds the master data of quarterly recordings in a double array for each year, and we would like to find the maximum price of each stock across all years. Let’s store the output in STOCK_STATS, another Phoenix table.

Note: you can also configure a job to read from HDFS and load into a Phoenix table; a sketch of that variant follows this note.
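
A minimal sketch, assuming a CSV file of stock_name,max_recording lines on HDFS (the job class, mapper, and input path are hypothetical; StockWritable is the class defined below and is assumed to be on the classpath):

  import java.io.IOException;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.NullWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
  import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
  import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

  // Hypothetical map-only job: parse each CSV line and upsert it into STOCK_STATS.
  public class CsvToPhoenixJob {

    public static class CsvMapper
        extends Mapper<LongWritable, Text, NullWritable, StockWritable> {

      private final StockWritable stock = new StockWritable();

      @Override
      protected void map(LongWritable key, Text line, Context context)
          throws IOException, InterruptedException {
        final String[] fields = line.toString().split(",");
        stock.setStockName(fields[0]);
        stock.setMaxPrice(Double.parseDouble(fields[1]));
        context.write(NullWritable.get(), stock);
      }
    }

    public static void main(String[] args) throws Exception {
      final Job job = Job.getInstance(HBaseConfiguration.create(), "hdfs-to-phoenix");
      job.setJarByClass(CsvToPhoenixJob.class);

      // Read plain text from HDFS instead of a Phoenix table
      job.setInputFormatClass(TextInputFormat.class);
      FileInputFormat.addInputPath(job, new Path("/data/stock_stats"));

      // Write to the Phoenix table exactly as in the main example
      PhoenixMapReduceUtil.setOutput(job, "STOCK_STATS", "STOCK_NAME,MAX_RECORDING");
      job.setOutputFormatClass(PhoenixOutputFormat.class);

      job.setMapperClass(CsvMapper.class);
      job.setNumReduceTasks(0); // map-only
      job.setOutputKeyClass(NullWritable.class);
      job.setOutputValueClass(StockWritable.class);
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }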

a) stock

  CREATE TABLE IF NOT EXISTS STOCK (STOCK_NAME VARCHAR NOT NULL, RECORDING_YEAR INTEGER NOT NULL, RECORDINGS_QUARTER DOUBLE ARRAY[] CONSTRAINT pk PRIMARY KEY (STOCK_NAME, RECORDING_YEAR));

b) stock_stats

  CREATE TABLE IF NOT EXISTS STOCK_STATS (STOCK_NAME VARCHAR NOT NULL, MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY (STOCK_NAME));

Sample Data

  UPSERT into STOCK values ('AAPL',2009,ARRAY[85.88,91.04,88.5,90.3]);
  UPSERT into STOCK values ('AAPL',2008,ARRAY[199.27,200.26,192.55,194.84]);
  UPSERT into STOCK values ('AAPL',2007,ARRAY[86.29,86.58,81.90,83.80]);
  UPSERT into STOCK values ('CSCO',2009,ARRAY[16.41,17.00,16.25,16.96]);
  UPSERT into STOCK values ('CSCO',2008,ARRAY[27.00,27.30,26.21,26.54]);
  UPSERT into STOCK values ('CSCO',2007,ARRAY[27.46,27.98,27.33,27.73]);
  UPSERT into STOCK values ('CSCO',2006,ARRAY[17.21,17.49,17.18,17.45]);
  UPSERT into STOCK values ('GOOG',2009,ARRAY[308.60,321.82,305.50,321.32]);
  UPSERT into STOCK values ('GOOG',2008,ARRAY[692.87,697.37,677.73,685.19]);
  UPSERT into STOCK values ('GOOG',2007,ARRAY[466.00,476.66,461.11,467.59]);
  UPSERT into STOCK values ('GOOG',2006,ARRAY[422.52,435.67,418.22,435.23]);
  UPSERT into STOCK values ('MSFT',2009,ARRAY[19.53,20.40,19.37,20.33]);
  UPSERT into STOCK values ('MSFT',2008,ARRAY[35.79,35.96,35.00,35.22]);
  UPSERT into STOCK values ('MSFT',2007,ARRAY[29.91,30.25,29.40,29.86]);
  UPSERT into STOCK values ('MSFT',2006,ARRAY[26.25,27.00,26.10,26.84]);
  UPSERT into STOCK values ('YHOO',2009,ARRAY[12.17,12.85,12.12,12.85]);
  UPSERT into STOCK values ('YHOO',2008,ARRAY[23.80,24.15,23.60,23.72]);
  UPSERT into STOCK values ('YHOO',2007,ARRAY[25.85,26.26,25.26,25.61]);
  UPSERT into STOCK values ('YHOO',2006,ARRAY[39.69,41.22,38.79,40.91]);

Below is a simple job configuration.

Job Configuration

  final Configuration configuration = HBaseConfiguration.create();
  final Job job = Job.getInstance(configuration, "phoenix-mr-job");

  // We can either specify a selectQuery or skip it when we would like to retrieve all the columns
  final String selectQuery = "SELECT STOCK_NAME,RECORDING_YEAR,RECORDINGS_QUARTER FROM STOCK";

  // StockWritable is the DBWritable class that enables us to process the result of the above query;
  // setInput also registers PhoenixInputFormat as the job's input format
  PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK", selectQuery);

  // Set the target Phoenix table and the columns to upsert
  PhoenixMapReduceUtil.setOutput(job, "STOCK_STATS", "STOCK_NAME,MAX_RECORDING");

  job.setMapperClass(StockMapper.class);
  job.setReducerClass(StockReducer.class);
  job.setOutputFormatClass(PhoenixOutputFormat.class);

  job.setMapOutputKeyClass(Text.class);
  job.setMapOutputValueClass(DoubleWritable.class);
  job.setOutputKeyClass(NullWritable.class);
  job.setOutputValueClass(StockWritable.class);
  TableMapReduceUtil.addDependencyJars(job);
  job.waitForCompletion(true);

StockWritable

  public class StockWritable implements DBWritable, Writable {

    private String stockName;

    private int year;

    private double[] recordings;

    private double maxPrice;

    @Override
    public void readFields(DataInput input) throws IOException {
      // no-op here: in this job StockWritable is never shuffled between map and
      // reduce, so its Hadoop Writable serialization is not exercised
    }

    @Override
    public void write(DataOutput output) throws IOException {
      // no-op here: see readFields(DataInput) above
    }

    @Override
    public void readFields(ResultSet rs) throws SQLException {
      stockName = rs.getString("STOCK_NAME");
      year = rs.getInt("RECORDING_YEAR");
      final Array recordingsArray = rs.getArray("RECORDINGS_QUARTER");
      recordings = (double[]) recordingsArray.getArray();
    }

    @Override
    public void write(PreparedStatement pstmt) throws SQLException {
      // binds the parameters of the UPSERT that Phoenix generates for STOCK_STATS
      pstmt.setString(1, stockName);
      pstmt.setDouble(2, maxPrice);
    }

    // getters / setters for the fields
    ...
  }
Stock Mapper

  public static class StockMapper extends Mapper<NullWritable, StockWritable, Text, DoubleWritable> {

    private Text stock = new Text();
    private DoubleWritable price = new DoubleWritable();

    @Override
    protected void map(NullWritable key, StockWritable stockWritable, Context context) throws IOException, InterruptedException {
      double[] recordings = stockWritable.getRecordings();
      final String stockName = stockWritable.getStockName();
      double maxPrice = Double.MIN_VALUE;
      for (double recording : recordings) {
        if (maxPrice < recording) {
          maxPrice = recording;
        }
      }
      stock.set(stockName);
      price.set(maxPrice);
      context.write(stock, price);
    }
  }

Stock Reducer

  public static class StockReducer extends Reducer<Text, DoubleWritable, NullWritable, StockWritable> {

    @Override
    protected void reduce(Text key, Iterable<DoubleWritable> recordings, Context context) throws IOException, InterruptedException {
      double maxPrice = Double.MIN_VALUE;
      for (DoubleWritable recording : recordings) {
        if (maxPrice < recording.get()) {
          maxPrice = recording.get();
        }
      }
      final StockWritable stock = new StockWritable();
      stock.setStockName(key.toString());
      stock.setMaxPrice(maxPrice);
      context.write(NullWritable.get(), stock);
    }
  }

Packaging & Running

  • Ensure phoenix-[version]-client.jar is in the classpath of your MapReduce job jar.
  • To run the job, use the hadoop jar command with the necessary arguments, for example:
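
For example (the jar and driver class names here are placeholders):

  HADOOP_CLASSPATH=/path/to/phoenix-[version]-client.jar \
      hadoop jar stock-stats-mr.jar com.example.StockStatsJob

Once the job completes, the results can be checked from the Phoenix client. Given the sample data above, the maximum price per stock works out to:

  SELECT * FROM STOCK_STATS;
  -- AAPL=200.26, CSCO=27.98, GOOG=697.37, MSFT=35.96, YHOO=41.22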

Source: http://phoenix.apache.org/phoenix_mr.html