Conclusion

You have now seen everything that is inside the AWD-LSTM architecture we used for text classification in <>. It uses dropout in a lot more places (two of these, embedding dropout and weight dropout, are sketched in code after this list):

  • Embedding dropout (inside the embedding layer, zeroes some random rows of the embedding matrix)
  • Input dropout (applied after the embedding layer)
  • Weight dropout (applied to the weights of the LSTM at each training step)
  • Hidden dropout (applied to the hidden state between two layers)
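
Here is a rough sketch, in plain PyTorch rather than fastai's actual code, of how the two less familiar varieties might be implemented (the names `embedding_dropout` and `WeightDroppedLSTM` are invented for this illustration): embedding dropout zeroes whole rows of the embedding weight matrix, so a dropped token disappears everywhere it occurs in the batch, while weight dropout replaces the LSTM's hidden-to-hidden weights with a freshly dropped-out copy on every forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def embedding_dropout(emb: nn.Embedding, ids: torch.Tensor, p: float = 0.02) -> torch.Tensor:
    "Zero whole rows of the embedding matrix, so a dropped token vanishes everywhere it occurs."
    if not emb.training or p == 0.0:
        return emb(ids)
    # One Bernoulli draw per vocabulary item, broadcast over the embedding dimension,
    # rescaled by 1/(1-p) so the expected activation is unchanged.
    mask = emb.weight.new_empty((emb.weight.size(0), 1)).bernoulli_(1 - p) / (1 - p)
    return F.embedding(ids, emb.weight * mask, emb.padding_idx)

class WeightDroppedLSTM(nn.Module):
    "Wrap an LSTM and apply dropout to its hidden-to-hidden weights before every forward pass."
    def __init__(self, input_sz: int, hidden_sz: int, weight_p: float = 0.2):
        super().__init__()
        self.lstm, self.weight_p = nn.LSTM(input_sz, hidden_sz, batch_first=True), weight_p
        # Keep the trainable weights under a "raw" name; the wrapped LSTM only ever
        # sees a dropped-out copy of them.
        w = self.lstm.weight_hh_l0
        delattr(self.lstm, 'weight_hh_l0')
        self.register_parameter('weight_hh_raw', nn.Parameter(w.detach().clone()))

    def forward(self, x, state=None):
        dropped = F.dropout(self.weight_hh_raw, p=self.weight_p, training=self.training)
        setattr(self.lstm, 'weight_hh_l0', dropped)  # plain tensor, recomputed each call
        return self.lstm(x, state)

# Usage: a batch of 8 sequences of length 20 with 64 features each.
out, _ = WeightDroppedLSTM(64, 128)(torch.randn(8, 20, 64))
```

The real architecture handles multiple stacked layers and a few extra details; this only shows the core idea behind each technique.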

This makes the model even more regularized. Since fine-tuning those five dropout probabilities (including the dropout before the output layer) is complicated, we have determined good defaults and allow the overall amount of dropout to be tuned with the drop_mult parameter you saw in that chapter, which multiplies each of them.
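
For example, with the fastai API from that chapter, a single number scales every dropout probability inside the architecture at once. The dataset and the value 0.3 below are purely illustrative:

```python
from fastai.text.all import *

# A small language-model DataLoaders, just so there is something to pass to the learner.
path = untar_data(URLs.IMDB_SAMPLE)
dls = TextDataLoaders.from_csv(path, 'texts.csv', text_col='text', is_lm=True)

# drop_mult multiplies each of the five dropout probabilities inside AWD_LSTM
# (embedding, input, weight, hidden, and output dropout) by the same factor.
learn = language_model_learner(dls, AWD_LSTM, drop_mult=0.3)
```

Values above 1.0 increase all of the dropouts at once (more regularization, which tends to help on small datasets), and values below 1.0 decrease them.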

Another architecture that is very powerful, especially in “sequence-to-sequence” problems (that is, problems where the dependent variable is itself a variable-length sequence, such as language translation), is the Transformer architecture. You can find it covered in a bonus chapter on the book’s website.