Questionnaire

  1. What is “self-supervised learning”?
  2. What is a “language model”?
  3. Why is a language model considered self-supervised?
  4. What are self-supervised models usually used for?
  5. Why do we fine-tune language models?
  6. What are the three steps to create a state-of-the-art text classifier?
  7. How do the 50,000 unlabeled movie reviews help us create a better text classifier for the IMDb dataset?
  8. What are the three steps to prepare your data for a language model? (A minimal code sketch of these steps follows this list.)
  9. What is “tokenization”? Why do we need it?
  10. Name three different approaches to tokenization.
  11. What is xxbos?
  12. List four rules that fastai applies to text during tokenization.
  13. Why are repeated characters replaced with a token showing the number of repetitions and the character that’s repeated?
  14. What is “numericalization”?
  15. Why might there be words that are replaced with the “unknown word” token?
  16. With a batch size of 64, the first row of the tensor representing the first batch contains the first 64 tokens for the dataset. What does the second row of that tensor contain? What does the first row of the second batch contain? (Careful—students often get this one wrong! Be sure to check your answer on the book’s website.)
  17. Why do we need padding for text classification? Why don’t we need it for language modeling?
  18. What does an embedding matrix for NLP contain? What is its shape?
  19. What is “perplexity”?
  20. Why do we have to pass the vocabulary of the language model to the classifier data block?
  21. What is “gradual unfreezing”?
  22. Why is text generation always likely to be ahead of automatic identification of machine-generated texts?
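
If you want to experiment while answering questions 8, 9, 11, 14, and 16, here is a minimal sketch of the tokenization, numericalization, and language-model batching pipeline from this chapter. It uses a made-up two-sentence corpus and deliberately tiny bs and seq_len values purely for illustration; the chapter itself runs these steps on the full set of IMDb reviews.

In [ ]:

from fastai.text.all import *

# A toy corpus standing in for the IMDb reviews (illustration only)
txts = L(["This movie was great!", "Worst film I have ever seen."])

# Step 1: tokenization. Split each text into tokens and apply fastai's rules,
# which add special tokens such as xxbos at the start of every text.
tok = Tokenizer(WordTokenizer())
toks = txts.map(tok)

# Step 2: numericalization. Build a vocab from the tokens, then map each
# token to its index in that vocab (min_freq=1 only because the corpus is tiny).
num = Numericalize(min_freq=1)
num.setup(toks)
nums = toks.map(num)

# Step 3: language-model batching. Stream the numericalized texts into
# (input, target) batches where the target is the input shifted by one token.
dl = LMDataLoader(nums, bs=2, seq_len=4)
x, y = first(dl)
print(x.shape, y.shape)  # both (bs, seq_len); y is x shifted by one token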

Further Research

  1. See what you can learn about language models and disinformation. What are the best language models today? Take a look at some of their outputs. Do you find them convincing? How could a bad actor best use such a model to create conflict and uncertainty?
  2. Given the limitation that models are unlikely to be able to consistently recognize machine-generated texts, what other approaches may be needed to handle large-scale disinformation campaigns that leverage deep learning?
