Visualization of MLP weights on MNIST

http://scikit-learn.org/stable/auto_examples/neural_networks/plot_mnist_filters.html#sphx-glr-auto-examples-neural-networks-plot-mnist-filters-py

This example trains an MLPClassifier on the MNIST training set. Every image in the dataset is 28×28 pixels, so each neuron in the first hidden layer receives 28×28 = 784 input weights, one per pixel. The example plots each neuron's weights as a 28×28 image, visualizing how strongly every pixel of the input influences that neuron.
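
To make the shapes concrete: after training, the first-layer weight matrix mlp.coefs_[0] holds one column of 28×28 = 784 values per hidden neuron, and each column can be reshaped into a 28×28 image. A minimal shape sketch using a placeholder array of the same size (the fitted mlp itself is only created in section (3) below):

    import numpy as np

    # same shape as mlp.coefs_[0] for a 50-neuron hidden layer:
    # one 784-value column per neuron
    weights = np.zeros((28 * 28, 50))
    first_neuron_image = weights[:, 0].reshape(28, 28)
    print(weights.shape, first_neuron_image.shape)  # (784, 50) (28, 28)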

(1) Importing the libraries and the data

1. matplotlib.pyplot: used to plot the weight images
2. sklearn.datasets: used to download the MNIST handwritten-digit dataset
3. sklearn.neural_network: provides the MLPClassifier neural-network model

    import matplotlib.pyplot as plt
    from sklearn.datasets import fetch_openml
    from sklearn.neural_network import MLPClassifier

    # fetch_mldata("MNIST original") was removed from scikit-learn;
    # fetch_openml downloads the same dataset from openml.org
    mnist = fetch_openml("mnist_784", version=1, as_frame=False)
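
A quick sanity check on the downloaded data; the shapes are fixed by MNIST (70000 images, 784 pixels each):

    print(mnist.data.shape)    # (70000, 784)
    print(mnist.target.shape)  # (70000,)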

(2) Splitting the data into training and test sets

    # rescale the grayscale pixel values from [0, 255] down to [0, 1]
    X, y = mnist.data / 255., mnist.target
    X_train, X_test = X[:60000], X[60000:]
    y_train, y_test = y[:60000], y[60000:]
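
The slicing above relies on mnist_784 keeping MNIST's original ordering, where the first 60000 images form the standard training set. As an order-independent alternative (a sketch, not part of the original example), sklearn.model_selection.train_test_split can produce a stratified split:

    from sklearn.model_selection import train_test_split

    # stratify=y keeps the ten digit classes balanced across both splits
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=10000, random_state=0, stratify=y)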

(3) Setting the classifier parameters, training the network, and plotting the weight matrices

    # hidden_layer_sizes=(50,) uses a single hidden layer with 50 neurons;
    # max_iter=10 trains for at most 10 iterations
    mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=10, alpha=1e-4,
                        solver='sgd', verbose=10, tol=1e-4, random_state=1,
                        learning_rate_init=.1)
    mlp.fit(X_train, y_train)

    # plot the weight images of 16 neurons: black pixels are negative weights
    # (the darker, the more negative), white pixels are positive weights
    # (the lighter, the more positive)
    fig, axes = plt.subplots(4, 4)
    # use global min / max to ensure all weights are shown on the same scale
    vmin, vmax = mlp.coefs_[0].min(), mlp.coefs_[0].max()
    for coef, ax in zip(mlp.coefs_[0].T, axes.ravel()):
        ax.matshow(coef.reshape(28, 28), cmap=plt.cm.gray, vmin=.5 * vmin,
                   vmax=.5 * vmax)
        ax.set_xticks(())
        ax.set_yticks(())
    plt.show()

Figure 1: Weight images of 16 of the hidden neurons.

Figure 2: The training loss decreasing over the iterations.
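
The per-iteration losses shown in Figure 2 are printed because verbose=10 is set; after fitting, the same values are also stored in the classifier's loss_curve_ attribute, so the curve can be plotted directly. A minimal sketch, reusing mlp and plt from above:

    # mlp.loss_curve_ holds one training-loss value per iteration
    plt.plot(mlp.loss_curve_)
    plt.xlabel("iteration")
    plt.ylabel("training loss")
    plt.show()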

(4) Complete code

    print(__doc__)

    import matplotlib.pyplot as plt
    from sklearn.datasets import fetch_openml
    from sklearn.neural_network import MLPClassifier

    # fetch_mldata("MNIST original") was removed from scikit-learn;
    # fetch_openml downloads the same dataset from openml.org
    mnist = fetch_openml("mnist_784", version=1, as_frame=False)
    X, y = mnist.data / 255., mnist.target
    X_train, X_test = X[:60000], X[60000:]
    y_train, y_test = y[:60000], y[60000:]

    mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=10, alpha=1e-4,
                        solver='sgd', verbose=10, tol=1e-4, random_state=1,
                        learning_rate_init=.1)
    mlp.fit(X_train, y_train)
    print("Training set score: %f" % mlp.score(X_train, y_train))
    print("Test set score: %f" % mlp.score(X_test, y_test))

    fig, axes = plt.subplots(4, 4)
    # use global min / max to ensure all weights are shown on the same scale
    vmin, vmax = mlp.coefs_[0].min(), mlp.coefs_[0].max()
    for coef, ax in zip(mlp.coefs_[0].T, axes.ravel()):
        ax.matshow(coef.reshape(28, 28), cmap=plt.cm.gray, vmin=.5 * vmin,
                   vmax=.5 * vmax)
        ax.set_xticks(())
        ax.set_yticks(())
    plt.show()
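
Beyond the overall accuracy printed by mlp.score, a per-digit breakdown can show which classes the small 50-neuron network struggles with. A sketch using sklearn.metrics.classification_report on the test split from the code above:

    from sklearn.metrics import classification_report

    # precision, recall, and F1 for each of the ten digit classes
    print(classification_report(y_test, mlp.predict(X_test)))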