Questionnaire

  1. If the dataset for your project is so big and complicated that working with it takes a significant amount of time, what should you do?
  2. Why do we concatenate the documents in our dataset before creating a language model?
  3. To use a standard fully connected network to predict the fourth word given the previous three words, what two tweaks do we need to make to our model?
  4. How can we share a weight matrix across multiple layers in PyTorch?
  5. Write a module that predicts the third word given the previous two words of a sentence, without peeking.
  6. What is a recurrent neural network?
  7. What is “hidden state”?
  8. What is the equivalent of hidden state in LMModel1?
  9. To maintain the state in an RNN, why is it important to pass the text to the model in order?
  10. What is an “unrolled” representation of an RNN?
  11. Why can maintaining the hidden state in an RNN lead to memory and performance problems? How do we fix this problem?
  12. What is “BPTT”?
  13. Write code to print out the first few batches of the validation set, including converting the token IDs back into English strings, as we showed for batches of IMDb data in <>.
  14. What does the ModelResetter callback do? Why do we need it?
  15. What are the downsides of predicting just one output word for each three input words?
  16. Why do we need a custom loss function for LMModel4?
  17. Why is the training of LMModel4 unstable?
  18. In the unrolled representation, we can see that a recurrent neural network actually has many layers. So why do we need to stack RNNs to get better results?
  19. Draw a representation of a stacked (multilayer) RNN.
  20. Why should we get better results in an RNN if we call detach less often? Why might this not happen in practice with a simple RNN?
  21. Why can a deep network result in very large or very small activations? Why does this matter?
  22. In a computer’s floating-point representation of numbers, which numbers are the most precise?
  23. Why do vanishing gradients prevent training?
  24. Why does it help to have two hidden states in the LSTM architecture? What is the purpose of each one?
  25. What are these two states called in an LSTM?
  26. What is tanh, and how is it related to sigmoid?
  27. What is the purpose of this code in LSTMCell: h = torch.cat([h, input], dim=1)
  28. What does chunk do in PyTorch?
  29. Study the refactored version of LSTMCell carefully to ensure you understand how and why it does the same thing as the non-refactored version.
  30. Why can we use a higher learning rate for LMModel6?
  31. What are the three regularization techniques used in an AWD-LSTM model?
  32. What is “dropout”?
  33. Why do we scale the activations with dropout? Is this applied during training, inference, or both?
  34. What is the purpose of this line from Dropout: if not self.training: return x
  35. Experiment with bernoulli_ to understand how it works (a starter snippet is sketched just after this list).
  36. How do you set your model in training mode in PyTorch? In evaluation mode?
  37. Write the equation for activation regularization (in math or code, as you prefer). How is it different from weight decay?
  38. Write the equation for temporal activation regularization (in math or code, as you prefer). Why wouldn’t we use this for computer vision problems?
  39. What is “weight tying” in a language model?
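
For question 35, here is one minimal starting point for the experiment; the tensor shapes and probabilities are illustrative choices rather than values from the chapter:

    import torch

    # bernoulli_ fills a tensor in place with 0s and 1s, each entry drawn
    # independently with the given probability of being 1.
    t = torch.zeros(3, 5)
    print(t.bernoulli_(0.5))   # roughly half the entries become 1
    print(t.bernoulli_(0.9))   # most entries become 1

    # torch.bernoulli (without the underscore) instead reads each element's
    # probability from the input tensor.
    probs = torch.tensor([0.1, 0.5, 0.9])
    print(torch.bernoulli(probs))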

Further Research

  1. In LMModel2, why can forward start with h=0? Why don’t we need to say h=torch.zeros(...)?
  2. Write the code for an LSTM from scratch (you may refer to <>).
  3. Search the internet for the GRU architecture, implement it from scratch, and try training a model. See if you can get results similar to those we saw in this chapter. Compare your results to the results of PyTorch’s built-in GRU module (a sketch of that built-in baseline follows this list).
  4. Take a look at the source code for AWD-LSTM in fastai, and try to map each of the lines of code to the concepts shown in this chapter.
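
For the third research item, a minimal sketch of a baseline built around PyTorch's built-in nn.GRU is shown below; the layer sizes and batch shape are illustrative assumptions, not the chapter's values, and your from-scratch GRU should slot in where nn.GRU appears here:

    import torch
    import torch.nn as nn

    # Hypothetical sizes, for illustration only; use the chapter's dataset sizes in practice.
    vocab_sz, n_hidden, n_layers = 1000, 64, 2

    class GRUBaseline(nn.Module):
        "Language model built around PyTorch's nn.GRU, used as the comparison baseline."
        def __init__(self, vocab_sz, n_hidden, n_layers):
            super().__init__()
            self.emb = nn.Embedding(vocab_sz, n_hidden)
            self.rnn = nn.GRU(n_hidden, n_hidden, n_layers, batch_first=True)
            self.out = nn.Linear(n_hidden, vocab_sz)

        def forward(self, x, h=None):
            res, h = self.rnn(self.emb(x), h)   # res: (batch, seq, n_hidden)
            return self.out(res), h             # logits over the vocabulary

    model = GRUBaseline(vocab_sz, n_hidden, n_layers)
    xb = torch.randint(0, vocab_sz, (4, 16))    # a batch of token IDs: (batch, seq)
    preds, h = model(xb)
    print(preds.shape)                          # torch.Size([4, 16, 1000])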
