gelu

Computes the GELU activation function element-wise. For more details, please refer to Gaussian Error Linear Units.

gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2)))

  • Parameters:
    • x (Variable) - Input of the GELU op, a multi-dimensional Tensor or LoDTensor with data type float32 or float64.
  • Returns:
    • A multi-dimensional Tensor or LoDTensor with the same data type (float32 or float64) and the same shape as the input x.
  • Return type:
    • Variable
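The element-wise formula above can be sketched in plain NumPy. This is a minimal reference illustration, not Paddle's actual kernel; the name `gelu_ref` is hypothetical.

```python
import math

import numpy as np


def gelu_ref(x):
    # Hypothetical reference implementation of exact (non-approximate) GELU:
    # gelu(x) = x * Phi(x) = 0.5 * x * (1 + erf(x / sqrt(2))),
    # where Phi is the standard normal CDF.
    erf = np.vectorize(math.erf)
    return 0.5 * x * (1.0 + erf(x / math.sqrt(2.0)))
```

For example, `gelu_ref(np.array([1.0]))` is approximately `0.8413`, i.e. `1.0` scaled by the probability that a standard normal variable is below `1.0`.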

Code Examples

```python
# declarative mode
import numpy as np
from paddle import fluid

x = fluid.data(name="x", shape=(-1, 3), dtype="float32")
y = fluid.layers.gelu(x)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
start = fluid.default_startup_program()
main = fluid.default_main_program()

data = np.random.randn(2, 3).astype("float32")
exe.run(start)

y_np, = exe.run(main, feed={"x": data}, fetch_list=[y])

data
# array([[ 0.87165993, -1.0541513 , -0.37214822],
#        [ 0.15647964,  0.32496083,  0.33045998]], dtype=float32)
y_np
# array([[ 0.70456535, -0.15380788, -0.13207214],
#        [ 0.08796856,  0.20387867,  0.2080159 ]], dtype=float32)
```
```python
# imperative mode
import numpy as np
from paddle import fluid
import paddle.fluid.dygraph as dg

data = np.random.randn(2, 3).astype("float32")
place = fluid.CPUPlace()
with dg.guard(place) as g:
    x = dg.to_variable(data)
    y = fluid.layers.gelu(x)
    y_np = y.numpy()

data
# array([[ 0.87165993, -1.0541513 , -0.37214822],
#        [ 0.15647964,  0.32496083,  0.33045998]], dtype=float32)
y_np
# array([[ 0.70456535, -0.15380788, -0.13207214],
#        [ 0.08796856,  0.20387867,  0.2080159 ]], dtype=float32)
```