BACKPROPAGATION AND OPTIMIZER

So far, we have learned how to use OneFlow's Dataset and DataLoader, how to build models, and how to use Autograd, and we can combine them to train models with the backpropagation algorithm.

The oneflow.optim module provides various optimizers that simplify the code for backpropagation.

This article first introduces the basic concepts of backpropagation and then shows how to use the classes in oneflow.optim.

Backpropagation in NumPy Code

To make it easier to understand the relationship between backpropagation and autograd, here is the training process of a simple model implemented in NumPy:

    import numpy as np

    ITER_COUNT = 500
    LR = 0.01

    # Forward propagation
    def forward(x, w):
        return np.matmul(x, w)

    # Loss function
    def loss(y_pred, y):
        return ((y_pred - y) ** 2).sum()

    # Calculate gradient
    def gradient(x, y, y_pred):
        return np.matmul(x.T, 2 * (y_pred - y))

    if __name__ == "__main__":
        # Train: Y = 2*X1 + 3*X2
        x = np.array([[1, 2], [2, 3], [4, 6], [3, 1]], dtype=np.float32)
        y = np.array([[8], [13], [26], [9]], dtype=np.float32)
        w = np.array([[2], [1]], dtype=np.float32)

        # Training cycle
        for i in range(0, ITER_COUNT):
            y_pred = forward(x, w)
            l = loss(y_pred, y)
            if (i + 1) % 50 == 0:
                print(f"{i+1}/{500} loss:{l}")
            grad = gradient(x, y, y_pred)
            w -= LR * grad

        print(f"w:{w}")

output:

    50/500 loss:0.0034512376878410578
    100/500 loss:1.965487399502308e-06
    150/500 loss:1.05524122773204e-09
    200/500 loss:3.865352482534945e-12
    250/500 loss:3.865352482534945e-12
    300/500 loss:3.865352482534945e-12
    350/500 loss:3.865352482534945e-12
    400/500 loss:3.865352482534945e-12
    450/500 loss:3.865352482534945e-12
    500/500 loss:3.865352482534945e-12
    w:[[2.000001 ]
     [2.9999993]]

Note that the loss function we selected is the sum of squared errors, l = sum((y_pred - y)^2), so the code for the gradient of the loss with respect to the parameter w is:

    def gradient(x, y, y_pred):
        return np.matmul(x.T, 2 * (y_pred - y))
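
The derivation behind that line is short. Writing the loss and its gradient with respect to w in LaTeX (same symbols as the code):

    l(w) = \sum_i \left( y^{\mathrm{pred}}_i - y_i \right)^2 , \qquad y^{\mathrm{pred}} = x w

    \frac{\partial l}{\partial w}
        = \sum_i 2 \left( y^{\mathrm{pred}}_i - y_i \right) \frac{\partial y^{\mathrm{pred}}_i}{\partial w}
        = 2 \, x^{\top} \left( y^{\mathrm{pred}} - y \right)

which is exactly what np.matmul(x.T, 2 * (y_pred - y)) computes.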

Then stochastic gradient descent (SGD) is used to update the parameter:

    grad = gradient(x, y, y_pred)
    w -= LR * grad
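
If you want to double-check the analytic gradient that drives this update, a quick finite-difference comparison can be done in NumPy. This is a minimal sketch, not part of the original tutorial; the helper numeric_gradient below is purely illustrative:

    import numpy as np

    def forward(x, w):
        return np.matmul(x, w)

    def loss(y_pred, y):
        return ((y_pred - y) ** 2).sum()

    def gradient(x, y, y_pred):
        return np.matmul(x.T, 2 * (y_pred - y))

    def numeric_gradient(x, y, w, eps=1e-4):
        # Estimate d(loss)/d(w) entry by entry with central differences.
        grad = np.zeros_like(w)
        for idx in np.ndindex(*w.shape):
            w_plus, w_minus = w.copy(), w.copy()
            w_plus[idx] += eps
            w_minus[idx] -= eps
            grad[idx] = (loss(forward(x, w_plus), y) - loss(forward(x, w_minus), y)) / (2 * eps)
        return grad

    x = np.array([[1, 2], [2, 3], [4, 6], [3, 1]], dtype=np.float64)
    y = np.array([[8], [13], [26], [9]], dtype=np.float64)
    w = np.array([[2.0], [1.0]])

    print(gradient(x, y, forward(x, w)))  # analytic gradient
    print(numeric_gradient(x, y, w))      # should be very close to the analytic one

Autograd frameworks such as OneFlow compute this same gradient automatically, which is exactly what the optimizer section below relies on.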

In summary, a complete iteration in the training includes the following steps:

  1. Compute the predicted value (y_pred) from the input and the model parameters
  2. Compute the loss, i.e. the error between the predicted value and the label
  3. Compute the gradient of the loss with respect to the parameters
  4. Update the parameters

Steps 1 and 2 are the forward propagation process; steps 3 and 4 are the backpropagation process.
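
To connect these four steps back to autograd, the same loop can be written with OneFlow tensors, letting backward() compute the gradient instead of the hand-written gradient function. This is a minimal sketch that assumes the PyTorch-style autograd interface (requires_grad, backward(), .grad, no_grad()) covered in the earlier Autograd article:

    import oneflow as flow

    x = flow.tensor([[1, 2], [2, 3], [4, 6], [3, 1]], dtype=flow.float32)
    y = flow.tensor([[8], [13], [26], [9]], dtype=flow.float32)
    w = flow.tensor([[2], [1]], dtype=flow.float32, requires_grad=True)

    for i in range(500):
        y_pred = flow.matmul(x, w)     # step 1: forward propagation
        l = ((y_pred - y) ** 2).sum()  # step 2: loss
        l.backward()                   # step 3: autograd computes the gradient into w.grad
        with flow.no_grad():
            w -= 0.01 * w.grad         # step 4: SGD parameter update
            w.grad.zero_()             # clear the gradient before the next iteration

    print(w.numpy())

In the rest of this article, the oneflow.optim classes take over exactly these last two lines: optimizer.step() performs the update and optimizer.zero_grad() clears the gradients.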

Hyperparameters

Hyperparameters are parameters related to the training setup that can affect the efficiency and results of model training. In the code above, ITER_COUNT and LR are hyperparameters.
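
As a small illustration (a hypothetical experiment, not from the original text), you can wrap the NumPy loop above in a function and vary the hyperparameters; with the learning rate raised to 0.1 the updates overshoot and the loss diverges instead of shrinking:

    import numpy as np

    def train(lr, iter_count):
        # Same data and model as above: y = 2*x1 + 3*x2.
        x = np.array([[1, 2], [2, 3], [4, 6], [3, 1]], dtype=np.float32)
        y = np.array([[8], [13], [26], [9]], dtype=np.float32)
        w = np.array([[2], [1]], dtype=np.float32)
        for _ in range(iter_count):
            y_pred = np.matmul(x, w)
            w -= lr * np.matmul(x.T, 2 * (y_pred - y))
        return ((np.matmul(x, w) - y) ** 2).sum()

    print(train(lr=0.01, iter_count=500))  # converges: the final loss is nearly 0
    print(train(lr=0.1, iter_count=500))   # diverges: the loss overflows (overflow warnings are expected)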

Using the optimizer class in oneflow.optim

Using the optimizer classes in oneflow.optim makes the backpropagation code more concise.

First, prepare the data and the model. A convenience of using Module is that you can store the hyperparameters in the Module and manage them there.

    import oneflow as flow

    x = flow.tensor([[1, 2], [2, 3], [4, 6], [3, 1]], dtype=flow.float32)
    y = flow.tensor([[8], [13], [26], [9]], dtype=flow.float32)

    class MyLrModule(flow.nn.Module):
        def __init__(self, lr, iter_count):
            super().__init__()
            self.w = flow.nn.Parameter(flow.tensor([[2], [1]], dtype=flow.float32))
            self.lr = lr
            self.iter_count = iter_count

        def forward(self, x):
            return flow.matmul(x, self.w)

    model = MyLrModule(0.01, 500)

Loss function

Then, select the loss function. OneFlow comes with a variety of loss functions. We choose MSELoss here:

    loss = flow.nn.MSELoss(reduction="sum")
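
MSELoss with reduction="sum" computes the same value as the hand-written ((y_pred - y) ** 2).sum() used in the NumPy version. A quick self-contained check (not part of the original text):

    import oneflow as flow

    x = flow.tensor([[1, 2], [2, 3], [4, 6], [3, 1]], dtype=flow.float32)
    y = flow.tensor([[8], [13], [26], [9]], dtype=flow.float32)
    w = flow.tensor([[2], [1]], dtype=flow.float32)

    loss = flow.nn.MSELoss(reduction="sum")
    y_pred = flow.matmul(x, w)

    print(loss(y_pred, y).numpy())            # MSELoss with reduction="sum"
    print(((y_pred - y) ** 2).sum().numpy())  # hand-written version; the two values should match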

Construct Optimizer

The logic of backpropagation is wrapped in the optimizer. We choose SGD here; you can choose other optimization algorithms as needed, such as Adam and AdamW.

    optimizer = flow.optim.SGD(model.parameters(), model.lr)

When the optimizer is constructed, the model parameters and the learning rate are handed to SGD. During training, after l.backward() has computed the gradients of the model parameters, calling optimizer.step() updates the parameters according to the SGD algorithm.
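
The other optimizers mentioned above are constructed in the same way, and only this one line changes. For example (a sketch; the hyperparameter values here are illustrative, not taken from the original text):

    # Drop-in alternatives to the SGD optimizer above; the training loop stays unchanged.
    optimizer = flow.optim.Adam(model.parameters(), lr=model.lr)
    # optimizer = flow.optim.AdamW(model.parameters(), lr=model.lr, weight_decay=0.01)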

Train

When the above preparations are completed, we can start training:

    for i in range(0, model.iter_count):
        y_pred = model(x)
        l = loss(y_pred, y)
        if (i + 1) % 50 == 0:
            print(f"{i+1}/{model.iter_count} loss:{l.numpy()}")

        optimizer.zero_grad()
        l.backward()
        optimizer.step()

    print(f"\nw: {model.w}")

output:

    50/500 loss:0.003451163647696376
    100/500 loss:1.965773662959691e-06
    150/500 loss:1.103217073250562e-09
    200/500 loss:3.865352482534945e-12
    250/500 loss:3.865352482534945e-12
    300/500 loss:3.865352482534945e-12
    350/500 loss:3.865352482534945e-12
    400/500 loss:3.865352482534945e-12
    450/500 loss:3.865352482534945e-12
    500/500 loss:3.865352482534945e-12
    w: tensor([[2.],
            [3.]], dtype=oneflow.float32, grad_fn=<accumulate_grad>)
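
As a final check (not part of the original output), the trained model can be evaluated on inputs it has not seen; since it has learned y = 2*x1 + 3*x2, the predictions should match that formula:

    # Evaluate the trained model on new inputs; gradients are not needed here.
    test_x = flow.tensor([[1, 1], [5, 2]], dtype=flow.float32)
    with flow.no_grad():
        print(model(test_x).numpy())  # expected to be close to [[5.], [16.]]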
