9.6 Classification with BigML

In this fourth and last modeling section we’re going to classify wines as either red or white. For this we’ll be using a solution called BigML, which provides a prediction API. This means that the actual modeling and predicting takes place in the cloud, which is useful if you need a bit more power than your own computer can offer.

Although prediction APIs are relatively young, they are gaining traction, which is why we’ve included one in this chapter. Other providers of prediction APIs are Google (see https://developers.google.com/prediction) and PredictionIO (see http://prediction.io). One advantage of BigML is that they offer a convenient command-line tool called bigmler (BigML 2014) that interfaces with their API. We can use this command-line tool like any other presented in this book, but behind the scenes, our data set is sent to BigML’s servers, which perform the classification and send back the results.

9.6.1 Creating Balanced Train and Test Data Sets

First, we create a balanced data set to ensure that both classes are represented equally. For this, we use csvstack (Groskopf 2014h), shuf (Eggert 2012), head, and csvcut:

  $ csvstack -n type -g red,white wine-red-clean.csv \
  > <(< wine-white-clean.csv body shuf | head -n 1600) |
  > csvcut -c fixed_acidity,volatile_acidity,citric_acid,\
  > residual_sugar,chlorides,free_sulfur_dioxide,total_sulfur_dioxide,\
  > density,ph,sulphates,alcohol,type > wine-balanced.csv

This long command breaks down as follows:

  • csvstack is used to combine multiple data sets. It creates a new column type, which has the value red for all rows coming from the first file wine-red-clean.csv and white for all rows coming from the second file.
  • The second file is passed to csvstack using file redirection. This allows us to create a temporary file using shuf, which creates a random permutation of the rows of wine-white-clean.csv, and head, which selects the header and the first 1599 rows. The body tool ensures that shuf operates on everything except the header (see the sketch after this list).
  • Finally, we reorder the columns of this data set using csvcut because by default, bigmler assumes that the last column is the label.
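In case you’re wondering what body does: it applies a command to every line except the first, so the header stays in place. A minimal sketch of such a helper could look as follows:

  #!/usr/bin/env bash
  # body: apply a command to all lines except the first (the header).
  # Usage: < wine-white-clean.csv body shuf
  IFS= read -r header        # read the header line from standard input
  printf '%s\n' "$header"    # pass the header through untouched
  "$@"                       # run the given command on the remaining lines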
Let’s verify that wine-balanced.csv is actually balanced by counting the number of instances per class using parallel and grep:

  $ parallel --tag grep -c {} wine-balanced.csv ::: red white
  red 1599
  white 1599

As you can see, the data set wine-balanced.csv contains 1599 red wines and 1599 white wines.
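Note that grep -c counts every line on which the pattern appears anywhere, so this check could miscount if the words red or white happened to appear in another column. A slightly more robust cross-check extracts the type column explicitly and tallies its values with sort and uniq:

  $ csvcut -c type wine-balanced.csv | tail -n +2 | sort | uniq -c
     1599 red
     1599 white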

Next, we split the data into train and test data sets using split (Granlund and Stallman 2012b):

  $ < wine-balanced.csv header > wine-header.csv
  $ tail -n +2 wine-balanced.csv | shuf | split -d -n r/2
  $ parallel --xapply "cat wine-header.csv x0{1} > wine-{2}.csv" \
  > ::: 0 1 ::: train test

This is another long command that deserves to be broken down:

  • Get the header using header and save it to a temporary file named wine-header.csv.
  • Mix up the red and white wines using tail and shuf and split the result into two files named x00 and x01 using a round-robin distribution.
  • Use cat to combine the header saved in wine-header.csv and the rows stored in x00 and save the result as wine-train.csv; similarly for x01 and wine-test.csv. The --xapply command-line argument tells parallel to loop over the two input sources in tandem (see the demonstration after this list).
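If you’re curious what --xapply does, you can demonstrate it with echo. Without --xapply, parallel would generate all four combinations of the two input sources; with it, the sources are linked element by element. The -k flag keeps the output in input order, and in more recent versions of GNU parallel this option is also available as --link:

  $ parallel -k --xapply echo {1} {2} ::: 0 1 ::: train test
  0 train
  1 test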
Let’s again check the number of instances per class in both wine-train.csv and wine-test.csv:

  $ parallel --tag grep -c {2} wine-{1}.csv ::: train test ::: red white
  train red 821
  train white 778
  test white 821
  test red 778

It looks like our data sets are well balanced. We’re now ready to call the prediction API using bigmler.

9.6.2 Calling the API

You can obtain a BigML username and API key at https://bigml.com/developers. Be sure to set the variables BIGML_USERNAME and BIGML_API_KEY in your .bashrc with the appropriate values.
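For example, you could add the following two lines to your .bashrc (the values shown here are placeholders; substitute your own credentials):

  export BIGML_USERNAME=myusername
  export BIGML_API_KEY=0123456789abcdef0123456789abcdef01234567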

The API call is quite straightforward, and the meaning of each command-line argument is obvious from its name.

  $ bigmler --train data/wine-train.csv \
  > --test data/wine-test-blind.csv \
  > --prediction-info full \
  > --prediction-header \
  > --output-dir output \
  > --tag wine \
  > --remote

The file wine-test-blind.csv is simply wine-test.csv with the type column (that is, the label) removed.
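One way to create such a file is to exclude the type column using csvcut, whose -C option selects the complement of the given columns:

  $ csvcut -C type data/wine-test.csv > data/wine-test-blind.csv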

After this call is finished, the results can be found in the output directory:

  $ tree output
  output
  ├── batch_prediction
  ├── bigmler_sessions
  ├── dataset
  ├── dataset_test
  ├── models
  ├── predictions.csv
  ├── source
  └── source_test

  0 directories, 8 files

9.6.3 Inspecting the Results

The file of most interest is output/predictions.csv:

  $ csvcut -c type output/predictions.csv | head
  type
  white
  white
  red
  red
  white
  red
  red
  white
  red
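Before comparing the predictions with the true labels, it’s worth a quick sanity check that both files contain the same number of rows (each includes a header line):

  $ wc -l data/wine-test.csv output/predictions.csv
    1600 data/wine-test.csv
    1600 output/predictions.csv
    3200 total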

We can compare these predicted labels with the labels in our test data set. Let’s count the number of misclassifications:

  $ paste -d, <(csvcut -c type data/wine-test.csv) \
  > <(csvcut -c type output/predictions.csv) |
  > awk -F, '{ if ($1 != $2) { sum += 1 } } END { print sum }'
  766
  • First, we combine the type columns of both data/wine-test.csv and output/predictions.csv.
  • Then, we use awk to count the rows where the two columns differ in value.

As you can see, BigML’s API misclassified 766 wines out of 1599. This isn’t a good result, but please note that we just blindly applied an algorithm to a data set, which we normally wouldn’t do.
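If you’d rather see the accuracy than the raw number of misclassifications, a small variation on the previous command computes it; the NR > 1 condition skips the combined header row:

  $ paste -d, <(csvcut -c type data/wine-test.csv) \
  > <(csvcut -c type output/predictions.csv) |
  > awk -F, 'NR > 1 { total++; if ($1 == $2) correct++ }
  > END { printf "accuracy: %.3f\n", correct / total }'
  accuracy: 0.521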

9.6.4 Conclusion

BigML’s prediction API has proven to be easy to use. As with many of the command-line tools discussed in this book, we’ve barely scratched the surface with BigML. For completeness, we should mention that:

  • BigML’s command-line tool also allows for local computations, which is useful for debugging.
  • Results can also be inspected using BigML’s web interface.
  • BigML can also perform regression tasks.

Please see https://bigml.com/developers for a complete overview of BigML’s features.

Although we’ve only been able to experiment with one prediction API, we do believe that prediction APIs in general are worth considering for doing data science.