load

paddle.load(path, **configs) [source]

Load an object instance that can be used in paddle from the specified path.

Note

Currently the following can be loaded: the state_dict of a Layer or an Optimizer, Layer objects, Tensors as well as nested lists, tuples and dicts containing Tensors, and Programs (a minimal sketch is shown right after this note).

If you run into problems when using this API, please refer to:
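As a minimal sketch of the supported types (the file name and tensor shapes below are illustrative assumptions), a nested container holding Tensors can be saved with paddle.save and loaded back as one object:

    import paddle

    # A nested dict/list containing Tensors; shapes and the file name are
    # arbitrary choices for this sketch.
    nested = {'weights': [paddle.randn([2, 3]), paddle.randn([3])], 'step': 10}
    paddle.save(nested, "nested.pdtensor")
    loaded = paddle.load("nested.pdtensor")  # same nested structure with Tensors restored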

Parameters

  • path (str) – The path of the object instance to load. Typically this is the path of the target file. When loading a state_dict from the results saved by the inference-model saving APIs, the path may be a file prefix or a directory.

  • **configs (dict, optional) – Other load configuration options kept for compatibility. These options may be removed in the future; unless they are required, using them is not recommended. Default: None. The following options are currently supported: (1) model_filename (str) – the file name of the inference model stored in the paddle 1.x save_inference_model format; the original default file name is __model__; (2) params_filename (str) – the file name of the parameters stored in the paddle 1.x save_inference_model format; there is no default file name, and by default each parameter is stored as a separate file; (3) return_numpy (bool) – if set to True, the Tensors in the result of load are converted to numpy.ndarray; default: False. A usage sketch follows this list.
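The sketch below shows how these configuration options might be passed; the directory and file names are assumptions for illustration only, not defaults:

    import paddle

    # Load a state_dict from a paddle 1.x save_inference_model result whose model
    # and parameter files were given custom names at save time (names assumed here).
    state_dict = paddle.load("inference_model_dir",
                             model_filename="custom_model",
                             params_filename="custom_params")

    # Ask load to return numpy.ndarray instead of Tensor for the loaded values
    # ("emb.pdparams" is the file saved in example 1 below).
    numpy_state_dict = paddle.load("emb.pdparams", return_numpy=True)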

Returns

Object, an object instance that can be used in paddle.

Code Examples

    # example 1: dynamic graph
    import paddle

    emb = paddle.nn.Embedding(10, 10)
    layer_state_dict = emb.state_dict()

    # save state_dict of emb
    paddle.save(layer_state_dict, "emb.pdparams")

    scheduler = paddle.optimizer.lr.NoamDecay(
        d_model=0.01, warmup_steps=100, verbose=True)
    adam = paddle.optimizer.Adam(
        learning_rate=scheduler,
        parameters=emb.parameters())
    opt_state_dict = adam.state_dict()

    # save state_dict of optimizer
    paddle.save(opt_state_dict, "adam.pdopt")
    # save weight of emb
    paddle.save(emb.weight, "emb.weight.pdtensor")

    # load state_dict of emb
    load_layer_state_dict = paddle.load("emb.pdparams")
    # load state_dict of optimizer
    load_opt_state_dict = paddle.load("adam.pdopt")
    # load weight of emb
    load_weight = paddle.load("emb.weight.pdtensor")
    # example 2: Load multiple state_dict at the same time
    import paddle
    from paddle import nn
    from paddle.optimizer import Adam

    layer = paddle.nn.Linear(3, 4)
    adam = Adam(learning_rate=0.001, parameters=layer.parameters())
    obj = {'model': layer.state_dict(), 'opt': adam.state_dict(), 'epoch': 100}
    path = 'example/model.pdparams'
    paddle.save(obj, path)
    obj_load = paddle.load(path)
    # example 3: static graph
    import paddle
    import paddle.static as static

    paddle.enable_static()

    # create network
    x = paddle.static.data(name="x", shape=[None, 224], dtype='float32')
    z = paddle.static.nn.fc(x, 10)

    place = paddle.CPUPlace()
    exe = paddle.static.Executor(place)
    exe.run(paddle.static.default_startup_program())
    prog = paddle.static.default_main_program()
    for var in prog.list_vars():
        if list(var.shape) == [224, 10]:
            tensor = var.get_value()
            break

    # save/load tensor
    path_tensor = 'temp/tensor.pdtensor'
    paddle.save(tensor, path_tensor)
    load_tensor = paddle.load(path_tensor)

    # save/load state_dict
    path_state_dict = 'temp/model.pdparams'
    paddle.save(prog.state_dict("param"), path_state_dict)
    load_state_dict = paddle.load(path_state_dict)
    # example 4: load program
    import paddle

    paddle.enable_static()

    data = paddle.static.data(
        name='x_static_save', shape=(None, 224), dtype='float32')
    y_static = z = paddle.static.nn.fc(data, 10)
    main_program = paddle.static.default_main_program()
    path = "example/main_program.pdmodel"
    paddle.save(main_program, path)
    load_main = paddle.load(path)
    print(load_main)

Tutorials that use this API