Deeplearning4j on Spark: How To Build Data Pipelines

This page provides some guides on how to create data pipelines for both training and evaluation when using Deeplearning4j on Spark.

This page assumes some familiarity with Spark (RDDs, master vs. workers, etc.) and Deeplearning4j (networks, DataSet, etc.).

As with training on a single machine, the final step of a data pipeline should be to produce a DataSet (single features array, single label array) or MultiDataSet (one or more feature arrays, one or more label arrays). In the case of DL4J on Spark, the final step of a data pipeline is data in one of the following formats:

  • an RDD<DataSet>/JavaRDD<DataSet>
  • an RDD<MultiDataSet>/JavaRDD<MultiDataSet>
  • a directory of serialized DataSet/MultiDataSet (minibatch) objects on network storage such as HDFS, S3 or Azure blob storage
  • a directory of minibatches in some other format

Once data is in one of those four formats, it can be used for training or evaluation.

Note: When training multiple models on a single dataset, it is best practice to preprocess your data once, and save it to network storage such as HDFS. Then, when training the network you can call SparkDl4jMultiLayer.fit(String path) or SparkComputationGraph.fit(String path), where path is the directory where you saved the files.

Spark Data Preparation: How-To Guides

How to prepare an RDD[DataSet] from CSV data for classification or regression

This guide shows how to load data contained in one or more CSV files and produce a JavaRDD<DataSet> for export, training or evaluation on Spark.

The process is fairly straightforward. Note that the DataVecDataSetFunction is very similar to the RecordReaderDataSetIterator that is often used for single machine training.

For example, suppose the CSV has the following format: 6 total columns, consisting of 5 features followed by an integer class index for classification, with 10 possible classes:

    1.0,3.2,4.5,1.1,6.3,0
    1.6,2.4,5.9,0.2,2.2,1
    ...

We could load this data for classification using the following code:

    String filePath = "hdfs:///your/path/some_csv_file.csv";
    JavaSparkContext sc = new JavaSparkContext();
    JavaRDD<String> rddString = sc.textFile(filePath);
    RecordReader recordReader = new CSVRecordReader(0, ',');    //Skip no lines, comma delimited
    JavaRDD<List<Writable>> rddWritables = rddString.map(new StringToWritablesFunction(recordReader));

    int labelIndex = 5;         //Labels: a single integer representing the class index in column number 5
    int numLabelClasses = 10;   //10 classes for the label
    JavaRDD<DataSet> rddDataSetClassification = rddWritables.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, false));

However, if this dataset were instead for regression, again with 6 total columns but with 3 feature columns (positions 0, 1 and 2 in each row) and 3 label columns (positions 3, 4 and 5), we could load it using the same process as above, changing only the last 3 lines to:

    int firstLabelColumn = 3;   //First column index for the labels
    int lastLabelColumn = 5;    //Last column index for the labels
    JavaRDD<DataSet> rddDataSetRegression = rddWritables.map(new DataVecDataSetFunction(firstLabelColumn, lastLabelColumn, true, null, null));

How to create an RDD[MultiDataSet] from one or more RDD[List[Writable]]

RecordReaderMultiDataSetIterator (RRMDSI) is the most common way to create MultiDataSet instances for single-machine training data pipelines. It is possible to use RRMDSI for Spark data pipelines, where data is coming from one or more of RDD<List<Writable>> (for ‘standard’ data) or RDD<List<List<Writable>>> (for sequence data).

Case 1: Single RDD<List<Writable>> to RDD<MultiDataSet>

Consider the following single-node (non-Spark) data pipeline for a CSV classification task:

    RecordReader recordReader = new CSVRecordReader(numLinesToSkip, delimiter);
    recordReader.initialize(new FileSplit(new ClassPathResource("iris.txt").getFile()));

    int batchSize = 32;
    int labelColumn = 4;
    int numClasses = 3;
    MultiDataSetIterator iter = new RecordReaderMultiDataSetIterator.Builder(batchSize)
            .addReader("data", recordReader)
            .addInput("data", 0, labelColumn-1)
            .addOutputOneHot("data", labelColumn, numClasses)
            .build();

This is equivalent to the following Spark data pipeline:

    JavaRDD<List<Writable>> rdd = sc.textFile(f.getPath()).map(new StringToWritablesFunction(new CSVRecordReader()));

    RecordReaderMultiDataSetIterator rrmdsi = new RecordReaderMultiDataSetIterator.Builder(batchSize)
            .addReader("data", new SparkSourceDummyReader(0))   //Note the use of the "SparkSourceDummyReader"
            .addInput("data", 0, labelColumn-1)
            .addOutputOneHot("data", labelColumn, numClasses)
            .build();

    JavaRDD<MultiDataSet> mdsRdd = IteratorUtils.mapRRMDSI(rdd, rrmdsi);

For Sequence data (List<List<Writable>>) you can use SparkSourceDummySeqReader instead.

Case 2: Multiple RDD<List<Writable>> or RDD<List<List<Writable>>> to RDD<MultiDataSet>

For this case, the process is much the same. However, internally, a join is used.

    JavaRDD<List<Writable>> rdd1 = ...
    JavaRDD<List<Writable>> rdd2 = ...

    RecordReaderMultiDataSetIterator rrmdsi = new RecordReaderMultiDataSetIterator.Builder(batchSize)
            .addReader("rdd1", new SparkSourceDummyReader(0))   //0 = use first rdd in list
            .addReader("rdd2", new SparkSourceDummyReader(1))   //1 = use second rdd in list
            .addInput("rdd1", 1, 2)
            .addOutput("rdd2", 1, 2)
            .build();

    List<JavaRDD<List<Writable>>> list = Arrays.asList(rdd1, rdd2);
    int[] keyIdxs = new int[]{0,0};     //Column 0 in rdd1 and rdd2 is the 'key' used for joining
    boolean filterMissing = false;      //If true: filter out any records that don't have matching keys in all RDDs
    JavaRDD<MultiDataSet> mdsRdd = IteratorUtils.mapRRMDSI(list, null, keyIdxs, null, filterMissing, rrmdsi);

How to save an RDD[DataSet] or RDD[MultiDataSet] to network storage and use it for training

As noted at the start of this page, it is considered a best practice to preprocess and export your data once (i.e., save to network storage such as HDFS and reuse), rather than fitting from an RDD<DataSet> or RDD<MultiDataSet> directly in each training job.

There are a number of reasons for this:

  • Better performance (avoid redundant loading/calculation): When fitting multiple models from the same dataset, it is faster to preprocess this data once and save to disk rather than preprocessing it again for every single training run.
  • Minimizing memory and other resources: By exporting and fitting from disk, we only need to keep the DataSets we are currently using (plus a small async prefetch buffer) in memory, rather than also keeping many unused DataSet objects in memory. Exporting results in lower total memory use and hence we can use larger networks, larger minibatch sizes, or allocate fewer resources to our job.
  • Avoiding recomputation: When an RDD is too large to fit into memory, some parts of it may need to be recomputed before it can be used (depending on the cache settings). When this occurs, Spark will recompute parts of the data pipeline multiple times, costing us both time and memory. A pre-export step avoids this recomputation entirely.

Step 1: Saving

Saving the DataSet objects once you have an RDD<DataSet> is quite straightforward:

    JavaRDD<DataSet> rddDataSet = ...
    int minibatchSize = 32;     //Minibatch size of the saved DataSet objects
    String exportPath = "hdfs:///path/to/export/data";
    JavaRDD<String> paths = rddDataSet.mapPartitionsWithIndex(new BatchAndExportDataSetsFunction(minibatchSize, exportPath), true);

Keep in mind that this is a lazy transformation, so no data will be saved until an action is executed on the paths RDD - i.e., you should follow it with an operation such as:

    paths.saveAsTextFile("hdfs:///path/to/text/file.txt");  //Specified file will contain paths/URIs of all saved DataSet objects

or

    List<String> pathList = paths.collect();    //Collection of paths/URIs of all saved DataSet objects

or

    paths.foreach(new VoidFunction<String>() {
        @Override
        public void call(String path) {
            //Some operation on each path
        }
    });

Saving an RDD<MultiDataSet> can be done in the same way using BatchAndExportMultiDataSetsFunction instead, which takes the same arguments.
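
For reference, a minimal sketch of the MultiDataSet variant (assuming an existing JavaRDD<MultiDataSet>), mirroring the code above:

    JavaRDD<MultiDataSet> rddMultiDataSet = ...
    int minibatchSize = 32;     //Minibatch size of the saved MultiDataSet objects
    String exportPath = "hdfs:///path/to/export/data";
    JavaRDD<String> paths = rddMultiDataSet.mapPartitionsWithIndex(new BatchAndExportMultiDataSetsFunction(minibatchSize, exportPath), true);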

Step 2: Loading and Fitting

The exported data can be used in a few ways. First, it can be used to fit a network directly:

    String exportPath = "hdfs:///path/to/export/data";
    SparkDl4jMultiLayer net = ...
    net.fit(exportPath);    //Loads the serialized DataSet objects found in the 'exportPath' directory

Similarly, we can use SparkComputationGraph.fitMultiDataSet(String path) if we saved an RDD<MultiDataSet> instead.
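
For example (a minimal sketch, assuming an already-configured SparkComputationGraph):

    String exportPath = "hdfs:///path/to/export/data";
    SparkComputationGraph graph = ...
    graph.fitMultiDataSet(exportPath);  //Loads the serialized MultiDataSet objects found in the 'exportPath' directory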

Alternatively, we can load up the paths in a few different ways, depending on whether and how we saved them:

    JavaSparkContext sc = new JavaSparkContext();
    JavaRDD<String> paths = ...     //The paths RDD produced by the export step above

    //If we used saveAsTextFile:
    String saveTo = "hdfs:///path/to/text/file.txt";
    paths.saveAsTextFile(saveTo);                                       //Save
    JavaRDD<String> pathsFromTextFile = sc.textFile(saveTo);            //Load

    //If we used collect:
    List<String> pathList = paths.collect();                            //Collect
    JavaRDD<String> pathsFromCollect = sc.parallelize(pathList);        //Parallelize

    //If we want to list the directory contents directly:
    String exportPath = "hdfs:///path/to/export/data";
    JavaRDD<String> pathsFromListing = SparkUtils.listPaths(sc, exportPath);    //List paths using org.deeplearning4j.spark.util.SparkUtils

Then we can execute training on these paths by using methods such as SparkDl4jMultiLayer.fitPaths(JavaRDD<String>).
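
For example (a minimal sketch, assuming one of the path RDDs loaded above, here pathsFromListing, and an already-configured SparkDl4jMultiLayer):

    SparkDl4jMultiLayer net = ...
    net.fitPaths(pathsFromListing);     //Fits the network using the saved DataSet objects at these paths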

How to prepare data on a single machine for use on a cluster: saving DataSets

Another possible workflow is to start with the data pipeline on a single machine, and export the DataSet or MultiDataSet objects for use on the cluster. This workflow clearly isn’t as scalable as preparing data on a cluster (you are using just one machine to prepare data) but it can be an easy option in some cases, especially when you have an existing data pipeline.

This section assumes you have an existing DataSetIterator or MultiDataSetIterator used for single-machine training. There are many different ways to create one, which is outside of the scope of this guide.

Step 1: Save the DataSets or MultiDataSets

Saving the contents of a DataSetIterator to a local directory can be done using the following code:

    DataSetIterator iter = ...
    File rootDir = new File("/saving/directory/");
    int count = 0;
    while (iter.hasNext()) {
        DataSet ds = iter.next();
        File outFile = new File(rootDir, "dataset_" + (count++) + ".bin");
        ds.save(outFile);
    }

Note that for the purposes of Spark, the exact file names don’t matter. The process for saving MultiDataSets is almost identical.
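
For example, a minimal sketch of the MultiDataSet equivalent (assuming an existing MultiDataSetIterator; note that MultiDataSet.save may declare IOException, which should be handled or propagated):

    MultiDataSetIterator iter = ...
    File rootDir = new File("/saving/directory/");
    int count = 0;
    while (iter.hasNext()) {
        MultiDataSet mds = iter.next();
        mds.save(new File(rootDir, "multidataset_" + (count++) + ".bin"));  //May throw IOException
    }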

As an aside: you can read these saved DataSet objects on a single machine (for non-Spark training) using FileDataSetIterator.

An alternative approach is to save directly to the cluster using output streams, to (for example) HDFS. This can only be done if the machine running the code is properly configured with the required libraries and access rights. For example, to save the DataSets directly to HDFS you could use:

    JavaSparkContext sc = new JavaSparkContext();
    FileSystem fileSystem = FileSystem.get(sc.hadoopConfiguration());
    String outputDir = "hdfs:///my/output/location/";

    DataSetIterator iter = ...
    int count = 0;
    while (iter.hasNext()) {
        DataSet ds = iter.next();
        String filePath = outputDir + "dataset_" + (count++) + ".bin";
        try (OutputStream os = new BufferedOutputStream(fileSystem.create(new Path(filePath)))) {
            ds.save(os);
        }
    }

Step 2: Load and Train on a Cluster

The saved DataSet objects can then be copied to the cluster or network file storage (for example, using Hadoop FS utilities on a Hadoop cluster), and used as follows:

    String dir = "hdfs:///data/copied/here";
    SparkDl4jMultiLayer net = ...
    net.fit(dir);   //Loads the serialized DataSet objects found in the 'dir' directory

or alternatively/equivalently, we can list the paths as an RDD using:

    String dir = "hdfs:///data/copied/here";
    JavaRDD<String> paths = SparkUtils.listPaths(sc, dir);  //List paths using org.deeplearning4j.spark.util.SparkUtils

How to prepare data on a single machine for use on a cluster: map/sequence files

An alternative approach is to use Hadoop MapFiles and SequenceFiles, which are efficient binary storage formats. This can be used to convert the output of any DataVec RecordReader or SequenceRecordReader (including a custom record reader) to a format usable with Spark. MapFileRecordWriter and MapFileSequenceRecordWriter require the following dependencies:

    <dependency>
        <groupId>org.datavec</groupId>
        <artifactId>datavec-hadoop</artifactId>
        <version>${datavec.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
        <!-- Optional exclusion for log4j in case you are using other logging frameworks -->
        <!--
        <exclusions>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
        </exclusions>
        -->
    </dependency>

Step 1: Create a MapFile Locally

In the following example, a CSVRecordReader will be used, but any other RecordReader could be used in its place:

    File csvFile = new File("/path/to/file.csv");
    RecordReader recordReader = new CSVRecordReader();
    recordReader.initialize(new FileSplit(csvFile));

    //Create map file writer
    String outPath = "/map/file/root/dir";
    MapFileRecordWriter writer = new MapFileRecordWriter(new File(outPath));

    //Convert to MapFile binary format:
    RecordReaderConverter.convert(recordReader, writer);

The process for using a SequenceRecordReader combined with a MapFileSequenceRecordWriter is virtually the same.
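
For example, a minimal sketch of the sequence variant (paths here are placeholders; this assumes CSV sequence files with one sequence per file, as in the RNN guide later on this page):

    //Read sequences (one sequence per CSV file) and write them to a sequence MapFile
    SequenceRecordReader seqReader = new CSVSequenceRecordReader(0, ",");
    seqReader.initialize(new FileSplit(new File("/path/to/sequence/csvs/")));

    String seqOutPath = "/map/file/sequence/root/dir";
    MapFileSequenceRecordWriter seqWriter = new MapFileSequenceRecordWriter(new File(seqOutPath));
    RecordReaderConverter.convert(seqReader, seqWriter);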

Note also that MapFileRecordWriter and MapFileSequenceRecordWriter both support splitting - i.e., creating multiple smaller map files instead of creating one single (potentially multi-GB) map file. Using splitting is recommended when saving data in this manner for use with Spark.

Step 2: Copy to HDFS or other network file storage

The exact process is beyond the scope of this guide. However, it should be sufficient to simply copy the directory (“/map/file/root/dir” in the example above) to a location on HDFS.

Step 3: Read and Convert to RDD<DataSet> for Training

We can load the data for training using the following:

    JavaSparkContext sc = new JavaSparkContext();
    String pathOnHDFS = "hdfs:///map/file/directory";
    JavaRDD<List<Writable>> rdd = SparkStorageUtils.restoreMapFile(pathOnHDFS, sc).values();    //import: org.datavec.spark.storage.SparkStorageUtils

    //Note at this point: it's the same as the latter part of the CSV how-to guide
    int labelIndex = 5;         //Labels: a single integer representing the class index in column number 5
    int numLabelClasses = 10;   //10 classes for the label
    JavaRDD<DataSet> rddDataSetClassification = rdd.map(new DataVecDataSetFunction(labelIndex, numLabelClasses, false));

How to load multiple CSVs (one sequence per file) for RNN data pipelines

This guide shows how to load CSV files for training an RNN. The assumption is that the dataset comprises multiple CSV files, where:

  • each CSV file represents one sequence
  • each row/line of the CSV contains the values for one time step (one or more columns/values, same number of values in all rows for all files)
  • each CSV may contain a different number of lines to other CSVs (i.e., variable length sequences are OK here)
  • header lines either aren’t present in any files, or are present in all files

A data pipeline can be created using the following process:

    String directoryWithCsvFiles = "hdfs:///path/to/directory";
    JavaPairRDD<String, PortableDataStream> origData = sc.binaryFiles(directoryWithCsvFiles);

    int numHeaderLinesEachFile = 0;     //No header lines
    String delimiter = ",";             //Comma delimited files
    SequenceRecordReader seqRR = new CSVSequenceRecordReader(numHeaderLinesEachFile, delimiter);
    JavaRDD<List<List<Writable>>> sequencesRdd = origData.map(new SequenceRecordReaderFunction(seqRR));

    //Similar to the non-sequence CSV guide using DataVecDataSetFunction. Assuming classification here:
    int labelIndex = 5;     //Index of the label column. Occurs at position/column 5
    int numClasses = 10;    //Number of classes for classification
    JavaRDD<DataSet> dataSetRdd = sequencesRdd.map(new DataVecSequenceDataSetFunction(labelIndex, numClasses, false));

How to create a Spark data pipeline for training on images

This guide shows how to create an RDD<DataSet> for image classification, starting from images stored either locally, or on a network file system such as HDFS.

The approach used here (added in 1.0.0-beta3) is to first preprocess the images into batches of files - FileBatch objects. The motivation for this approach is simple: the original image files typically use efficient compression (JPEG, for example), which is much more space (and network) efficient than a bitmap (int8 or 32-bit floating point) representation. However, on a cluster we want to minimize disk reads due to the latency of remote storage - one file read/transfer is going to be faster than minibatchSize remote file reads.

The TinyImageNet example also shows how this can be done.

Note that one limitation of the implementation is that the set of classes (i.e., the class/category labels when doing classification) needs to be known, provided or collected manually. This differs from using ImageRecordReader for classification on a single machine, which can automatically infer the set of class labels.

First, assume the images are in subdirectories based on their class labels. For example, suppose there are two classes, “cat” and “dog”; the directory structure would then look like:

    rootDir/cat/img0.jpg
    rootDir/cat/img1.jpg
    ...
    rootDir/dog/img0.jpg
    rootDir/dog/img1.jpg
    ...

(Note the file names don’t matter in this example - however, the parent directory names are the class labels)

Step 1 (option 1 of 2): Preprocess Locally

Local preprocessing can be done as follows:

    File sourceDirectory = new File("/home/user/my_images");           //Where your data is located
    File destinationDirectory = new File("/home/user/preprocessed");   //Where the preprocessed data should be written
    int batchSize = 32;     //Number of examples (images) in each FileBatch object
    SparkDataUtils.createFileBatchesLocal(sourceDirectory, NativeImageLoader.ALLOWED_FORMATS, true, destinationDirectory, batchSize);

The full import for SparkDataUtils is org.deeplearning4j.spark.util.SparkDataUtils.

After preprocessing has been completed, the directory can be copied to the cluster for use in training (Step 2).

Step 1 (option 2 of 2): Preprocess using Spark

Alternatively, if the original images are on remote file storage (such as HDFS), we can use the following:

    String sourceDirectory = "hdfs:///data/my_images";          //Where your data is located
    String destinationDirectory = "hdfs:///data/preprocessed";  //Where the preprocessed data should be written
    int batchSize = 32;     //Number of examples (images) in each FileBatch object
    SparkDataUtils.createFileBatchesSpark(sourceDirectory, destinationDirectory, batchSize, sparkContext);

Step 2: Training

The data pipeline for image classification can be constructed as follows. This code is taken from the TinyImageNet example:

    //Create data loader
    int imageHeightWidth = 64;      //64x64 pixel input to the network
    int imageChannels = 3;          //RGB
    int minibatch = 32;             //Minibatch size for the loaded DataSets
    PathLabelGenerator labelMaker = new ParentPathLabelGenerator();
    ImageRecordReader rr = new ImageRecordReader(imageHeightWidth, imageHeightWidth, imageChannels, labelMaker);
    rr.setLabels(Arrays.asList("cat", "dog"));
    int numClasses = 2;
    RecordReaderFileBatchLoader loader = new RecordReaderFileBatchLoader(rr, minibatch, 1, numClasses);
    loader.setPreProcessor(new ImagePreProcessingScaler());     //Scale 0-255 valued pixels to 0-1 range

    //Fit the network
    String trainDataPath = "hdfs:///data/preprocessed";         //Where the preprocessed data is located
    JavaRDD<String> pathsTrain = SparkUtils.listPaths(sc, trainDataPath);
    for (int i = 0; i < numEpochs; i++) {
        sparkNet.fitPaths(pathsTrain, loader);
    }

And that’s it.

Note: for other label generation cases (such as labels provided from the file name instead of the parent directory), or for tasks such as semantic segmentation, you can substitute a different PathLabelGenerator for the default. For example, if the label should come from the file name, you can use PatternPathLabelGenerator instead. Let’s say images are in the format “cat_img1234.jpg”, “dog_2309.png”, etc. We can use the following process:

    PathLabelGenerator labelGenerator = new PatternPathLabelGenerator("_", 0);  //Split on the "_" character, and take the first value
    ImageRecordReader imageRecordReader = new ImageRecordReader(imageHW, imageHW, imageChannels, labelGenerator);

Note that PathLabelGenerator returns a Writable object, so for tasks like image segmentation, you can return an INDArray using the NDArrayWritable class in a custom PathLabelGenerator.
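
As an illustration only (not an implementation provided by the library), a custom generator might look roughly like the following sketch. Here loadMaskForImage is a hypothetical helper you would implement yourself, and the interface methods shown (getLabelForPath, inferLabelClasses) assume a recent DataVec version:

    public class SegmentationLabelGenerator implements PathLabelGenerator {
        @Override
        public Writable getLabelForPath(String path) {
            //Hypothetical helper: load the segmentation mask (as an INDArray) for this image path
            INDArray mask = loadMaskForImage(path);
            return new NDArrayWritable(mask);
        }

        @Override
        public Writable getLabelForPath(URI uri) {
            return getLabelForPath(uri.getPath());
        }

        @Override
        public boolean inferLabelClasses() {
            return false;   //Labels are arrays, not class indices - no label classes to infer
        }
    }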

How to load prepared minibatches in custom format

DL4J Spark training supports the ability to load data serialized in a custom format. The assumption is that each file on the remote/network storage represents a single minibatch of data in some readable format.

Note that this approach is typically not required or recommended for most users, but is provided as an additional option for advanced users or those with pre-prepared data in a custom format or a format that is not natively supported by DL4J. When files represent a single record/example (instead of a minibatch) in a custom format, a custom RecordReader could be used instead.

The interfaces of note are:

  • DataSetLoader
  • MultiDataSetLoader

Suppose an HDFS directory contains a number of files, each being a minibatch in some custom format. These can be loaded using the following process:

    JavaSparkContext sc = new JavaSparkContext();
    String dataDirectory = "hdfs:///path/with/data";
    JavaRDD<String> loadedPaths = SparkUtils.listPaths(sc, dataDirectory);  //List paths using org.deeplearning4j.spark.util.SparkUtils

    SparkDl4jMultiLayer net = ...
    DataSetLoader myCustomLoader = new MyCustomLoader();
    net.fitPaths(loadedPaths, myCustomLoader);

Where the custom loader class looks something like:

    public class MyCustomLoader implements DataSetLoader {
        @Override
        public DataSet load(Source source) throws IOException {
            InputStream inputStream = source.getInputStream();
            //Load your custom data format here, and convert it to features and labels arrays
            INDArray features = ...;
            INDArray labels = ...;
            return new DataSet(features, labels);
        }
    }