MNIST Demo

This tutorial shows you how to use MLeap and Bundle.ML components to export a trained Spark ML Pipeline and use MLeap to transform new data without any dependencies on the Spark Context.

We will construct an ML Pipeline made up of a Vector Assembler, a Binarizer, PCA, and a Random Forest model for handwritten digit classification on the MNIST dataset. The goal of this exercise is not to train the optimal model, but rather to demonstrate how simple it is to train a pipeline in Spark and then deploy that same pipeline (data processing + the algorithm) outside of Spark.

The code for this tutorial is split up into two parts:

  • Spark ML Pipeline Code: Vanilla/out-of-the-box Spark code to train the ML Pipeline, which we serialize to Bundle.ML
  • MLeap Code: Load the serialized Bundle into MLeap and transform Leap Frames

Some terms before we begin:

Nouns

  • Estimator: The actual learning algorithm that trains/fits against the data frame and produces a Model
  • Model: In Spark, the model is the code and metadata needed to score against an already trained algorithm
  • Transformer: Anything that transforms a data frame; it does not necessarily need to be trained by an estimator (e.g. a Binarizer); see the short sketch after this list
  • LeapFrame: A dataframe structure used for storing your data and the associated schema
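
To make the first three terms concrete, here is a minimal, hedged sketch in plain Spark (the toy DataFrame and its column names are invented for illustration and are not part of the MNIST pipeline): an estimator is fit to produce a model, which then acts as a transformer, while a Binarizer is a transformer that never needs fitting.

    // Hedged illustration only; toy data, not part of the MNIST pipeline
    import org.apache.spark.ml.feature.{StringIndexer, Binarizer}

    val toy = spark.createDataFrame(Seq((0, "a", 0.1), (1, "b", 0.9))).toDF("id", "category", "score")

    // StringIndexer is an Estimator: fit() learns a category -> index mapping and returns a Model
    val indexerModel = new StringIndexer().setInputCol("category").setOutputCol("category_index").fit(toy)

    // The fitted Model is a Transformer: it can now transform any DataFrame with a "category" column
    val indexed = indexerModel.transform(toy)

    // Binarizer is a plain Transformer: it transforms directly, with no fitting step
    val binarized = new Binarizer().setInputCol("score").setThreshold(0.5).setOutputCol("score_bin").transform(indexed)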

Train a Spark Pipeline

Load the data

    // Note that we are taking advantage of the com.databricks:spark-csv package to load the data
    import org.apache.spark.ml.feature.{VectorAssembler, StringIndexer, IndexToString, Binarizer}
    import org.apache.spark.ml.classification.{RandomForestClassificationModel, RandomForestClassifier}
    import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
    import org.apache.spark.ml.{Pipeline, PipelineModel}
    import org.apache.spark.ml.feature.PCA

    // MLeap/Bundle.ML serialization libraries
    import ml.combust.mleap.spark.SparkSupport._
    import resource._
    import ml.combust.bundle.BundleFile
    import org.apache.spark.ml.bundle.SparkBundleContext

    val datasetPath = "./mleap-demo/data/mnist/mnist_train.csv"
    var dataset = spark.sqlContext.read.format("com.databricks.spark.csv").
      option("header", "true").
      option("inferSchema", "true").
      load(datasetPath)

    val testDatasetPath = "./mleap-demo/data/mnist/mnist_test.csv"
    var test = spark.sqlContext.read.format("com.databricks.spark.csv").
      option("inferSchema", "true").
      option("header", "true").
      load(testDatasetPath)
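
Before building the pipeline, it can help to sanity-check what was loaded. This optional snippet assumes the CSV header names the pixel columns x0 … x783 and the label column label, which is what the pipeline below expects.

    // Optional sanity check on the loaded data (not required for the rest of the tutorial)
    dataset.printSchema()   // expect a label column plus pixel columns x0 ... x783
    println(s"train rows: ${dataset.count()}, test rows: ${test.count()}")
    dataset.select("label", "x0", "x1").show(3)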

You can download the training and test datasets (gzipped, from s3), and of course you’ll have to adjust datasetPath and testDatasetPath to point at wherever you put them.

The original data is hosted on Yann LeCun’s website.

Build the ML Data Pipeline

    // Define dependent and independent features
    val predictionCol = "label"
    val labels = Seq("0","1","2","3","4","5","6","7","8","9")
    val pixelFeatures = (0 until 784).map(x => s"x$x").toArray
    // Defined here but not used in the pipeline below
    val layers = Array[Int](pixelFeatures.length, 784, 800, labels.length)

    // Assemble the 784 pixel columns into a single feature vector
    val vector_assembler = new VectorAssembler()
      .setInputCols(pixelFeatures)
      .setOutputCol("features")

    // Index the label column (0-9) into label_index
    val stringIndexer = { new StringIndexer()
      .setInputCol(predictionCol)
      .setOutputCol("label_index")
      .fit(dataset)
    }

    // Binarize the pixel intensities around the mid-point of the 0-255 range
    val binarizer = new Binarizer()
      .setInputCol(vector_assembler.getOutputCol)
      .setThreshold(127.5)
      .setOutputCol("binarized_features")

    // Reduce the binarized features to 10 principal components
    val pca = new PCA().
      setInputCol(binarizer.getOutputCol).
      setOutputCol("pcaFeatures").
      setK(10)

    val featurePipeline = new Pipeline().setStages(Array(vector_assembler, stringIndexer, binarizer, pca))

    // Transform the raw data with the feature pipeline
    val featureModel = featurePipeline.fit(dataset)
    val datasetWithFeatures = featureModel.transform(dataset)

    // Select only the data needed for training and persist it
    val datasetPcaFeaturesOnly = datasetWithFeatures.select(stringIndexer.getOutputCol, pca.getOutputCol)
    val datasetPcaFeaturesOnlyPersisted = datasetPcaFeaturesOnly.persist()
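
Optionally, you can confirm that the persisted training frame now contains only the indexed label and the PCA features that the random forest will consume:

    // Optional check: the training frame should contain just label_index and pcaFeatures
    datasetPcaFeaturesOnlyPersisted.printSchema()
    datasetPcaFeaturesOnlyPersisted.show(3, truncate = false)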

We could make the random forest model part of the same pipeline; however, an existing bug (SPARK-16845) prevents us from doing that (it will be fixed in 2.2.0).

Train a Random Forest Model

    // You can optionally experiment with CrossValidator and MulticlassClassificationEvaluator to determine optimal
    // settings for the random forest (see the sketch below)
    val rf = new RandomForestClassifier().
      setFeaturesCol(pca.getOutputCol).
      setLabelCol(stringIndexer.getOutputCol).
      setPredictionCol("prediction").
      setProbabilityCol("probability").
      setRawPredictionCol("raw_prediction")

    val rfModel = rf.fit(datasetPcaFeaturesOnlyPersisted)
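
The comment above mentions CrossValidator and MulticlassClassificationEvaluator; as a rough, hedged sketch (the parameter grid values below are purely illustrative, not tuned settings), that tuning could look like this:

    // Hedged sketch of hyperparameter tuning; grid values are illustrative only
    import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

    val evaluator = new MulticlassClassificationEvaluator().
      setLabelCol(stringIndexer.getOutputCol).
      setPredictionCol("prediction").
      setMetricName("accuracy")

    val paramGrid = new ParamGridBuilder().
      addGrid(rf.numTrees, Array(20, 50)).
      addGrid(rf.maxDepth, Array(5, 10)).
      build()

    val cv = new CrossValidator().
      setEstimator(rf).
      setEvaluator(evaluator).
      setEstimatorParamMaps(paramGrid).
      setNumFolds(3)

    // bestModel is the RandomForestClassificationModel that scored best across the folds
    val cvModel = cv.fit(datasetPcaFeaturesOnlyPersisted)
    val tunedRfModel = cvModel.bestModel.asInstanceOf[RandomForestClassificationModel]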

Serialize the ML Data Pipeline and RF Model to Bundle.ML

    import org.apache.spark.ml.mleap.SparkUtil

    // Combine the feature pipeline and the random forest model into a single PipelineModel
    val pipeline = SparkUtil.createPipelineModel(uid = "pipeline", Array(featureModel, rfModel))

    // The SparkBundleContext needs a transformed dataset so MLeap can capture the schema
    val sbc = SparkBundleContext().withDataset(rfModel.transform(datasetWithFeatures))

    for(bf <- managed(BundleFile("jar:file:/tmp/mnist-spark-pipeline.zip"))) {
      pipeline.writeBundle.save(bf)(sbc).get
    }
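
As an optional sanity check (the MLeap section below does not depend on it), the bundle can be read back into Spark; this is a hedged sketch that reuses the BundleFile and managed imports from above and assumes loadSparkBundle is available via SparkSupport:

    // Optional: round-trip check that the serialized bundle loads back into Spark
    val loadedPipeline = (for(bf <- managed(BundleFile("jar:file:/tmp/mnist-spark-pipeline.zip"))) yield {
      bf.loadSparkBundle().get.root
    }).tried.get

    // The loaded PipelineModel runs the full feature pipeline + random forest on raw rows
    loadedPipeline.transform(dataset).select("prediction").show(3)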

Deserialize to MLeap and Score New Data

The goal of this step is to show how to deserialize a bundle and use it to score LeapFrames without any Spark dependencies. You can download the mnist.json from our s3 bucket.

    import ml.combust.mleap.runtime.MleapSupport._
    import ml.combust.mleap.runtime.MleapContext.defaultContext
    import java.io.File

    // Load the Spark pipeline we saved in the previous section
    val mleapPipeline = (for(bf <- managed(BundleFile("jar:file:/tmp/mnist-spark-pipeline.zip"))) yield {
      bf.loadMleapBundle().get.root
    }).tried.get

Load the sample LeapFrame from the mleap-demo git repo (data/mnist.json)

    import ml.combust.mleap.runtime.serialization.FrameReader

    // Read the sample LeapFrame from JSON (adjust the path to wherever you saved mnist.json)
    val s = scala.io.Source.fromFile("./mleap-demo/data/mnist.json").mkString
    val bytes = s.getBytes("UTF-8")
    val frame = FrameReader("ml.combust.mleap.json").fromBytes(bytes)

    // Transform the LeapFrame using our pipeline
    val frame2 = mleapPipeline.transform(frame).get
    val data = frame2.dataset
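
From here you can pull individual values out of the transformed frame; the sketch below is hedged and assumes a recent MLeap runtime in which dataset behaves like a Seq[Row] and the prediction column holds Doubles:

    // Hedged sketch: keep only the prediction column and read the values from each Row.
    // Row/Dataset accessors can differ slightly between MLeap runtime versions.
    val predictions = frame2.select("prediction").get.dataset.map(_.getDouble(0))
    println(predictions.take(5))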

What next? You can find more examples and notebooks here.