ElasticDL on SQLFlow

Overview

This is a design doc on integrating SQLFlow with ElasticDL.

User Interface

Training Job Submission

  SELECT
      c1, c2, c3, c4, c5 AS class
  FROM training_data
  TO TRAIN ElasticDLKerasClassifier
  WITH
      optimizer = "optimizer",
      loss = "loss",
      eval_metrics = "eval_metrics_fn",
      num_classes = 10
  COLUMN
      c1,
      DENSE(c2, 10),
      BUCKET(c3, [0, 10, 100]),
      c4
  LABEL class
  INTO trained_elasticdl_keras_classifier;

Prediction Job Submission

  SELECT
      c1, c2, c3, c4
  FROM prediction_data
  TO PREDICT prediction_results_table
  WITH
      num_classes = 10
  USING trained_elasticdl_keras_classifier;

Run-time Configurations

Users can provide run-time configurations to the ElasticDL job via additional parameters prefixed with "runtime." in the WITH clause, for example:

  SELECT
      c1, c2, c3, c4, c5 AS class
  FROM training_data
  TO TRAIN ElasticDLKerasClassifier
  WITH
      optimizer = "optimizer",
      loss = "loss",
      eval_metrics = "eval_metrics_fn",
      num_classes = 10,
      runtime.num_epochs = 2,
      runtime.master_resource_request = "cpu=400m,memory=1024Mi",
      runtime.master_resource_limit = "cpu=400m,memory=1024Mi",
      runtime.worker_resource_request = "cpu=400m,memory=2048Mi",
      runtime.worker_resource_limit = "cpu=1,memory=3072Mi",
      runtime.num_minibatches_per_task = 10,
      runtime.num_workers = 2
  COLUMN
      c1, c2, c3, c4
  LABEL class
  INTO trained_elasticdl_keras_classifier;

Implementation Details

Training Job

Steps:

  1. Based on SELECT ... FROM ..., read the ODPS table and write it to RecordIO files, including both features and labels. These files will be stored in Kubernetes Persistent Volumes. In the future, we will support reading the ODPS table directly without first converting it to RecordIO files.
  2. Generate the model definition file (e.g. cifar10_functional_api.py) referenced in the TO TRAIN clause, which includes:

    • In the model definition function, e.g. custom_model(), we need to configure the model's input and output shapes correctly in inputs = tf.keras.layers.Input(shape=<input_shape>) (only when the model is defined using tf.keras functional APIs) and outputs = tf.keras.layers.Dense(<num_classes>) (based on COLUMN ... LABEL ...). For this MVP, users can provide <input_shape> and <num_classes> in the WITH clause, which then get passed to the model constructor custom_model(input_shape, num_classes) via the --params argument in the ElasticDL high-level API. In the future, these will be inferred from the ODPS table.
    • Pass additional parameters from the WITH clause, such as optimizer and loss, to the instantiation of custom_model().
    • Skip support for feature transformation functions such as DENSE or BUCKET in the COLUMN clause for now, as this requires additional design details and discussion on the use of feature column APIs.
    • Pass column names, shapes, and types for features and labels to dataset_fn's feature description, which will be used in tf.io.parse_single_example(). For this MVP, column names can be obtained from SELECT ... LABEL .... Each feature column will be of shape [1] and type tf.float32, while the label column is of shape [1] and of type tf.int64 for classification problems or tf.float32 for regression problems. In the future, these will be inferred from the ODPS table. An example dataset_fn() looks like the following:
    def dataset_fn(dataset, mode):
        def _parse_data(record):
            if mode == Mode.PREDICTION:
                feature_description = {
                    "f1": tf.io.FixedLenFeature([1], tf.float32),
                    "f2": tf.io.FixedLenFeature([1], tf.float32),
                }
            else:
                feature_description = {
                    "f1": tf.io.FixedLenFeature([1], tf.float32),
                    "f2": tf.io.FixedLenFeature([1], tf.float32),
                    "label": tf.io.FixedLenFeature([1], tf.int64),
                }
            r = tf.io.parse_single_example(record, feature_description)
            features = {
                "f1": tf.math.divide(tf.cast(r["f1"], tf.float32), 255.0),
                "f2": tf.math.divide(tf.cast(r["f2"], tf.float32), 255.0),
            }
            if mode == Mode.PREDICTION:
                return features
            else:
                return features, tf.cast(r["label"], tf.int32)

        dataset = dataset.map(_parse_data)
        if mode != Mode.PREDICTION:
            dataset = dataset.shuffle(buffer_size=1024)
        return dataset
    • Pass the INTO clause to the --output argument in the ElasticDL high-level API.
  3. Submit the ElasticDL training job via the generated ElasticDL high-level API or CLI. Below is an example:
  elasticdl train \
      --image_base=elasticdl:ci \
      --model_zoo=model_zoo \
      --model_def=ElasticDLKerasClassifier \
      --training_data=training_table_name \
      --evaluation_data=evaluation_table_name \
      --num_epochs=2 \
      --master_resource_request="cpu=400m,memory=1024Mi" \
      --master_resource_limit="cpu=1,memory=2048Mi" \
      --worker_resource_request="cpu=400m,memory=2048Mi" \
      --worker_resource_limit="cpu=1,memory=3072Mi" \
      --minibatch_size=64 \
      --num_minibatches_per_task=10 \
      --num_workers=2 \
      --checkpoint_steps=10 \
      --evaluation_steps=15 \
      --grads_to_wait=2 \
      --job_name=test-mnist \
      --log_level=INFO \
      --image_pull_policy=Never \
      --output=model_output
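The feature-description generation in step 2 above could be sketched as emitting one tf.io.FixedLenFeature entry per column. The helper below is hypothetical (not SQLFlow's actual code generator) and follows the MVP convention stated above: every feature is shape [1] / tf.float32, the classification label is tf.int64:

```python
def gen_feature_description(feature_columns, label_column, for_prediction):
    """Emit the source of the feature_description dict used by
    dataset_fn: one tf.io.FixedLenFeature entry per feature column,
    plus the label entry when not in prediction mode."""
    lines = ["feature_description = {"]
    for col in feature_columns:
        lines.append('    "%s": tf.io.FixedLenFeature([1], tf.float32),' % col)
    if not for_prediction:
        # Classification label; regression would use tf.float32 instead.
        lines.append('    "%s": tf.io.FixedLenFeature([1], tf.int64),' % label_column)
    lines.append("}")
    return "\n".join(lines)


snippet = gen_feature_description(["f1", "f2"], "label", for_prediction=False)
```

The generated snippet matches the else-branch of the dataset_fn() example above.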
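Step 3's submission could be sketched as assembling the CLI invocation from the parsed attributes. The helper below is illustrative only; the flag names mirror the example invocation above, but build_train_command itself is a hypothetical name, not part of SQLFlow or ElasticDL:

```python
def build_train_command(model_def, training_table, runtime_params, output):
    """Assemble an `elasticdl train` command line from parsed SQLFlow
    attributes. Flag names mirror the example invocation above."""
    cmd = [
        "elasticdl", "train",
        "--model_zoo=model_zoo",
        "--model_def=%s" % model_def,        # from TO TRAIN
        "--training_data=%s" % training_table,  # from FROM
    ]
    # "runtime."-prefixed WITH attributes map one-to-one onto CLI flags.
    for key, value in sorted(runtime_params.items()):
        cmd.append("--%s=%s" % (key, value))
    cmd.append("--output=%s" % output)       # from INTO
    return cmd


cmd = build_train_command(
    "ElasticDLKerasClassifier",
    "training_table_name",
    {"num_epochs": 2, "num_workers": 2},
    "trained_elasticdl_keras_classifier",
)
```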

Prediction Job

This is similar to training except that prediction results will be written back to an ODPS table specified in the PREDICT clause. An additional PredictionOutputsProcessor class will be generated in the model definition file for writing the prediction results to ODPS:

  class PredictionOutputsProcessor(BasePredictionOutputsProcessor):
      def __init__(self):
          self.odps_writer = ODPSWriter(
              os.environ[ODPSConfig.PROJECT_NAME],
              os.environ[ODPSConfig.ACCESS_ID],
              os.environ[ODPSConfig.ACCESS_KEY],
              os.environ[ODPSConfig.ENDPOINT],
              <prediction_results_table>,
              columns=["f" + str(i) for i in range(<num_classes>)],
              column_types=["double" for _ in range(<num_classes>)],
          )

      def process(self, predictions, worker_id):
          self.odps_writer.from_iterator(...)

where an ODPSWriter is instantiated with the necessary information on ODPS access and the prediction output columns. <prediction_results_table> above is inferred from the PREDICT clause and <num_classes> is provided in the WITH clause.
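This generated class could be produced by simple template substitution during code generation. The sketch below is a minimal illustration, assuming a string template whose $table and $num_classes placeholders are filled in from the PREDICT and WITH clauses; the template mechanism is hypothetical, not SQLFlow's actual generator:

```python
from string import Template

# Template for the generated PredictionOutputsProcessor constructor;
# $table and $num_classes come from the PREDICT and WITH clauses.
PROCESSOR_TEMPLATE = Template('''\
class PredictionOutputsProcessor(BasePredictionOutputsProcessor):
    def __init__(self):
        self.odps_writer = ODPSWriter(
            os.environ[ODPSConfig.PROJECT_NAME],
            os.environ[ODPSConfig.ACCESS_ID],
            os.environ[ODPSConfig.ACCESS_KEY],
            os.environ[ODPSConfig.ENDPOINT],
            "$table",
            columns=["f" + str(i) for i in range($num_classes)],
            column_types=["double" for _ in range($num_classes)],
        )
''')

code = PROCESSOR_TEMPLATE.substitute(
    table="prediction_results_table", num_classes=10
)
```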

The USING clause contains the name of the trained model to be used to make predictions.

Differentiate Run-time Configurations

We need to differentiate between run-time configuration parameters (e.g. num_workers, num_epochs) and model construction parameters (e.g. optimizer, loss, num_classes). In this MVP, we can add different prefixes to different types of parameters, such as prefixing run-time configuration parameters with "runtime.".
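The prefix-based split could be implemented as a simple partition of the WITH-clause attributes. The helper below is a minimal sketch; split_attributes and the flat attribute dict are hypothetical names, not SQLFlow internals:

```python
def split_attributes(attrs):
    """Partition WITH-clause attributes into run-time configuration
    parameters (prefixed with "runtime.") and model construction
    parameters (everything else)."""
    runtime_params = {}
    model_params = {}
    for key, value in attrs.items():
        if key.startswith("runtime."):
            # Strip the prefix; the remainder names an ElasticDL flag.
            runtime_params[key[len("runtime."):]] = value
        else:
            model_params[key] = value
    return runtime_params, model_params


runtime_params, model_params = split_attributes({
    "optimizer": "optimizer",
    "loss": "loss",
    "num_classes": 10,
    "runtime.num_epochs": 2,
    "runtime.num_workers": 2,
})
```

The run-time parameters then map onto CLI flags of the generated elasticdl command, while the model parameters are forwarded to custom_model() via --params.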