Name Scopes

When you are dealing with more complex models such as neural networks, the graph can easily become cluttered with thousands of nodes. To avoid this, you can create name scopes to group related nodes. For example, let's modify the previous code to define the error and mse ops within a name scope called "loss":

    with tf.name_scope("loss") as scope:
        error = y_pred - y
        mse = tf.reduce_mean(tf.square(error), name="mse")

The name of each op defined within the scope is now prefixed with "loss/":

    >>> print(error.op.name)
    loss/sub
    >>> print(mse.op.name)
    loss/mse

In TensorBoard, the mse and error nodes now appear inside the loss namespace, which appears collapsed by default (Figure 9-5).

[Figure 9-5: the error and mse nodes collapsed inside the loss namespace in TensorBoard]
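As a side note, the string bound by `as scope` is the full scope prefix (trailing slash included), and TensorFlow makes name scopes unique within a graph: opening a second scope with the same name gets an index appended rather than reusing the first one. A minimal standalone sketch (the constant ops here are just stand-ins for real operations):

    import tensorflow as tf

    with tf.name_scope("loss") as scope:
        a = tf.constant(0.0, name="a")
    print(scope)      # "loss/" -- the full prefix, with trailing slash
    print(a.op.name)  # "loss/a"

    # Reopening a scope with the same name does NOT reuse it:
    with tf.name_scope("loss") as scope2:
        b = tf.constant(0.0, name="b")
    print(scope2)      # "loss_1/"
    print(b.op.name)   # "loss_1/b"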

Complete code

    import numpy as np
    import tensorflow as tf
    from sklearn.datasets import fetch_california_housing
    from sklearn.preprocessing import StandardScaler
    from datetime import datetime

    housing = fetch_california_housing()
    m, n = housing.data.shape
    print("Dataset: {} rows, {} columns".format(m, n))
    housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]

    scaler = StandardScaler()
    scaled_housing_data = scaler.fit_transform(housing.data)
    scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]

    # Use a fresh log directory per run so TensorBoard can tell runs apart
    now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
    root_logdir = r"D:/tf_logs"
    logdir = "{}/run-{}/".format(root_logdir, now)

    learning_rate = 0.01

    X = tf.placeholder(tf.float32, shape=(None, n + 1), name="X")
    y = tf.placeholder(tf.float32, shape=(None, 1), name="y")
    theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
    y_pred = tf.matmul(X, theta, name="predictions")

    def fetch_batch(epoch, batch_index, batch_size):
        # Draw a reproducible random mini-batch from the scaled data
        np.random.seed(epoch * n_batches + batch_index)
        indices = np.random.randint(m, size=batch_size)
        X_batch = scaled_housing_data_plus_bias[indices]
        y_batch = housing.target.reshape(-1, 1)[indices]
        return X_batch, y_batch

    # Group the error and mse ops under the "loss" name scope
    with tf.name_scope("loss") as scope:
        error = y_pred - y
        mse = tf.reduce_mean(tf.square(error), name="mse")

    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    training_op = optimizer.minimize(mse)
    init = tf.global_variables_initializer()

    mse_summary = tf.summary.scalar('MSE', mse)
    file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())

    n_epochs = 10
    batch_size = 100
    n_batches = int(np.ceil(m / batch_size))

    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(n_epochs):
            for batch_index in range(n_batches):
                X_batch, y_batch = fetch_batch(epoch, batch_index, batch_size)
                if batch_index % 10 == 0:  # log the MSE every 10 batches
                    summary_str = mse_summary.eval(feed_dict={X: X_batch, y: y_batch})
                    step = epoch * n_batches + batch_index
                    file_writer.add_summary(summary_str, step)
                sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        best_theta = theta.eval()

    file_writer.flush()
    file_writer.close()
    print("Best theta:")
    print(best_theta)

[Name scopes - Figure 2]
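Once the program has run, you can inspect the recorded MSE curve and the graph by launching TensorBoard (assuming it is installed) with `tensorboard --logdir D:/tf_logs` and opening http://localhost:6006 in a browser (6006 is TensorBoard's default port). Expanding the collapsed loss node in the Graphs tab reveals the sub and mse ops shown earlier.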