Questionnaire

  1. How did we get to a single vector of activations in the CNNs used for MNIST in previous chapters? Why isn’t that suitable for Imagenette?
  2. What do we do for Imagenette instead?
  3. What is “adaptive pooling”?
  4. What is “average pooling”?
  5. Why do we need Flatten after an adaptive average pooling layer? (A short sketch follows this list.)
  6. What is a “skip connection”?
  7. Why do skip connections allow us to train deeper models?
  8. What does <> show? How did that lead to the idea of skip connections?
  9. What is “identity mapping”?
  10. What is the basic equation for a ResNet block (ignoring batchnorm and ReLU layers)? (A code sketch follows this list.)
  11. What do ResNets have to do with residuals?
  12. How do we deal with the skip connection when there is a stride-2 convolution? How about when the number of filters changes?
  13. How can we express a 1×1 convolution in terms of a vector dot product?
  14. Create a 1×1 convolution with F.conv2d or nn.Conv2d and apply it to an image. What happens to the shape of the image? (The first sketch after this list demonstrates this.)
  15. What does the noop function return?
  16. Explain what is shown in <>.
  17. When is top-5 accuracy a better metric than top-1 accuracy?
  18. What is the “stem” of a CNN?
  19. Why do we use plain convolutions in the CNN stem, instead of ResNet blocks?
  20. How does a bottleneck block differ from a plain ResNet block?
  21. Why is a bottleneck block faster?
  22. How do fully convolutional nets (and nets with adaptive pooling in general) allow for progressive resizing?
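
The pooling and 1×1 convolution questions above lend themselves to a quick experiment. Here is a minimal sketch in PyTorch (the tensor sizes are arbitrary, chosen only for illustration):

```python
import torch
from torch import nn

# Adaptive average pooling squeezes any spatial grid down to a fixed
# size (here 1x1), so the head works at any input resolution.
x = torch.randn(8, 512, 7, 7)            # a batch of 8 feature maps
pooled = nn.AdaptiveAvgPool2d(1)(x)      # -> (8, 512, 1, 1)

# The trailing 1x1 axes are still there, so a Linear layer can't
# consume the result directly; Flatten removes them.
flat = nn.Flatten()(pooled)              # -> (8, 512)

# A 1x1 convolution is a dot product between each pixel's channel
# vector and each filter's weights: it changes the number of channels
# but leaves the spatial grid untouched.
conv1x1 = nn.Conv2d(512, 64, kernel_size=1)
print(pooled.shape, flat.shape, conv1x1(x).shape)
```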

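The skip-connection questions all revolve around one small piece of machinery. Below is a stripped-down sketch of a ResNet block written directly in PyTorch; the name MinimalResBlock is ours, not fastai's, and batchnorm is omitted for brevity:

```python
import torch
import torch.nn.functional as F
from torch import nn

def noop(x):
    # The identity path: returns its input unchanged.
    return x

class MinimalResBlock(nn.Module):
    # Implements out = relu(conv2(relu(conv1(x))) + id_path(x)).
    def __init__(self, ni, nf, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(ni, nf, 3, stride=stride, padding=1)
        self.conv2 = nn.Conv2d(nf, nf, 3, padding=1)
        # The skip connection must match the main path's output shape:
        # a 1x1 convolution fixes the channel count, and average
        # pooling matches a stride-2 convolution's downsampling.
        self.idconv = noop if ni == nf else nn.Conv2d(ni, nf, kernel_size=1)
        self.pool = noop if stride == 1 else nn.AvgPool2d(2, ceil_mode=True)

    def forward(self, x):
        out = self.conv2(F.relu(self.conv1(x)))
        return F.relu(out + self.idconv(self.pool(x)))

block = MinimalResBlock(64, 128, stride=2)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 16, 16])
```
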
Further Research

  1. Try creating a fully convolutional net with adaptive average pooling for MNIST (note that you’ll need fewer stride-2 layers). How does it compare to a network without such a pooling layer?
  2. In <> we introduce Einstein summation notation. Skip ahead to see how this works, and then write an implementation of the 1×1 convolution operation using torch.einsum. Compare it to the same operation using torch.conv2d.
  3. Write a “top-5 accuracy” function using plain PyTorch or plain Python. (One possible sketch follows this list.)
  4. Train a model on Imagenette for more epochs, with and without label smoothing. Take a look at the Imagenette leaderboards and see how close you can get to the best results shown. Read the linked pages describing the leading approaches.

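As a starting point for the third exercise, here is one possible top-5 accuracy function in plain PyTorch; try writing your own version before reading it:

```python
import torch

def top5_accuracy(preds, targets):
    # preds: (n, n_classes) raw scores; targets: (n,) class indices.
    # A sample counts as correct if its target appears anywhere among
    # the five highest-scoring classes.
    top5 = preds.topk(5, dim=1).indices           # (n, 5)
    return (top5 == targets[:, None]).any(dim=1).float().mean()

preds = torch.randn(16, 10)
targets = torch.randint(0, 10, (16,))
print(top5_accuracy(preds, targets))
```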