Quick Start in 3 Minutes

This article introduces how to quickly get started with OneFlow. You can complete a full neural network training process in just 3 minutes.

Example

With OneFlow installed, you can run the following commands to download the mlp_mnist.py Python script from the repository and run it:

    wget https://docs.oneflow.org/en/master/code/quick_start/mlp_mnist.py
    python3 mlp_mnist.py

The output will look something like this:

    Epoch [1/20], Loss: 2.3155
    Epoch [1/20], Loss: 0.7955
    Epoch [1/20], Loss: 0.4653
    Epoch [1/20], Loss: 0.2064
    Epoch [1/20], Loss: 0.2683
    Epoch [1/20], Loss: 0.3167
    ...

The output is a series of numbers representing the loss values during training. The goal of training is to make the loss as small as possible. At this point, you have completed a full neural network training run with OneFlow.

Code Explanation

The following is the full code.

    # mlp_mnist.py
    import oneflow as flow
    import oneflow.typing as tp
    import numpy as np

    BATCH_SIZE = 100


    @flow.global_function(type="train")
    def train_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
        labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
    ) -> tp.Numpy:
        with flow.scope.placement("cpu", "0:0"):
            # Flatten each 1x28x28 image into a 784-dimensional vector
            reshape = flow.reshape(images, [images.shape[0], -1])
            initializer1 = flow.random_uniform_initializer(-1 / 28.0, 1 / 28.0)
            # Hidden layer: 500 units with ReLU activation
            hidden = flow.layers.dense(
                reshape,
                500,
                activation=flow.nn.relu,
                kernel_initializer=initializer1,
                bias_initializer=initializer1,
                name="dense1",
            )
            initializer2 = flow.random_uniform_initializer(
                -np.sqrt(1 / 500.0), np.sqrt(1 / 500.0)
            )
            # Output layer: 10 units, one per digit class
            logits = flow.layers.dense(
                hidden,
                10,
                kernel_initializer=initializer2,
                bias_initializer=initializer2,
                name="dense2",
            )
            loss = flow.nn.sparse_softmax_cross_entropy_with_logits(labels, logits)
        # Train with Adam at a constant learning rate of 0.001
        lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.001])
        flow.optimizer.Adam(lr_scheduler).minimize(loss)
        return loss


    if __name__ == "__main__":
        (train_images, train_labels), (test_images, test_labels) = flow.data.load_mnist(
            BATCH_SIZE, BATCH_SIZE
        )
        for epoch in range(20):
            for i, (images, labels) in enumerate(zip(train_images, train_labels)):
                loss = train_job(images, labels)
                if i % 20 == 0:
                    print("Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, 20, loss.mean()))

What follows is a brief explanation of this code.

A special feature of OneFlow compared to other deep learning frameworks is:

    @flow.global_function(type="train")
    def train_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
        labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
    ) -> tp.Numpy:

The train_job function, decorated by @flow.global_function, is called a “job function”. Only functions decorated by @flow.global_function are recognized by OneFlow.
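The Placeholder annotations describe the shape and dtype of the NumPy arrays a job function expects, and the -> tp.Numpy annotation means a call returns a NumPy array. A minimal sketch, using hypothetical random data, of calling the job function directly:

    import numpy as np

    # Hypothetical random inputs matching the placeholder shapes above
    images = np.random.uniform(size=(BATCH_SIZE, 1, 28, 28)).astype(np.float32)
    labels = np.random.randint(0, 10, size=(BATCH_SIZE,)).astype(np.int32)

    loss = train_job(images, labels)  # returns a NumPy array, per -> tp.Numpy
    print(loss.shape)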

The parameter type is used to specify the type of the job: type="train" means it is a training job and type="predict" means an evaluation or prediction job.
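As a sketch, a prediction job for the same network might look like the code below. The name eval_job is hypothetical, and the sketch assumes weights are shared across jobs by layer name (it reuses "dense1" and "dense2"), as in OneFlow's fuller MNIST examples:

    # A hypothetical prediction job; type="predict" means no loss or
    # optimizer is defined, and the raw logits are returned instead.
    @flow.global_function(type="predict")
    def eval_job(
        images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
    ) -> tp.Numpy:
        with flow.scope.placement("cpu", "0:0"):
            reshape = flow.reshape(images, [images.shape[0], -1])
            hidden = flow.layers.dense(
                reshape, 500, activation=flow.nn.relu, name="dense1"
            )
            logits = flow.layers.dense(hidden, 10, name="dense2")
        return logits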

In OneFlow, a neural network training or prediction task needs two pieces of information:

  • One part is the structure of the neural network and its related parameters. These are defined in the job function mentioned above.

  • The other part is the training configuration of the network, such as the learning rate and the type of model optimizer. These are defined by the code below:

    lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.001])
    flow.optimizer.Adam(lr_scheduler).minimize(loss)
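Here the scheduler has no step boundaries, so the learning rate stays constant at 0.001. As a sketch, with hypothetical boundary steps and following the boundaries-then-values convention of the call above, a decaying schedule could look like:

    # Hypothetical schedule: lr is 0.001 until step 5000, 0.0005 until
    # step 8000, then 0.0001; `values` has one more entry than `boundaries`.
    lr_scheduler = flow.optimizer.PiecewiseConstantScheduler(
        [5000, 8000], [0.001, 0.0005, 0.0001]
    )
    flow.optimizer.Adam(lr_scheduler).minimize(loss)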

Besides the job function definition and configuration mentioned above, the code in this script covers all the essential steps of training a neural network:

  • flow.data.load_mnist(BATCH_SIZE, BATCH_SIZE): prepares and loads the training data (see the sketch after this list).

  • train_job(images, labels): returns the loss value for the current iteration.

  • print(..., loss.mean()): prints the average loss of the batch once every 20 iterations.
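A minimal sketch of inspecting the loaded data; the shapes shown are assumptions based on MNIST's 60,000 training images split into batches of BATCH_SIZE:

    (train_images, train_labels), (test_images, test_labels) = flow.data.load_mnist(
        BATCH_SIZE, BATCH_SIZE
    )
    # Assumed shapes: 60,000 training images in batches of 100
    print(train_images.shape)  # e.g. (600, 100, 1, 28, 28)
    print(train_labels.shape)  # e.g. (600, 100)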

This page is just a simple neural network example. A more comprehensive and detailed introduction to OneFlow can be found in Convolutional Neural Network for Handwriting Recognition.

In addition, you can refer to Basic topics to learn more about how to use OneFlow for deep learning.

Benchmarks and related scripts for some popular networks are also provided in the OneFlow-Benchmark repository.

FAQ

  • Getting stuck when running this script

It may be that an incorrect proxy is set in the environment. You can unset the proxy by first running the following commands:

    unset http_proxy
    unset https_proxy
    unset HTTP_PROXY
    unset HTTPS_PROXY

Then try again.

  • My computer can’t connect to the Internet and keeps getting stuck when I run the script.

This script automatically downloads the required data file from the network. If your computer is not connected to the Internet, you will need to download the file manually by clicking here, place it in the same path as the mlp_mnist.py script, and then try again.
