DataLoader

  • from_generator(feed_list=None, capacity=None, use_double_buffer=True, iterable=True, return_list=False, use_multiprocess=False)

Creates a DataLoader object for loading data produced by a Python generator. The data is prefetched by a Python thread and pushed into a queue asynchronously.

The DataLoader object created by this method provides three methods for setting the data source: set_sample_generator , set_sample_list_generator and set_batch_generator . See the sample code below for how to use them.
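The three setters differ only in what the reader function yields per step. A minimal framework-free sketch of the contract each setter expects (plain Python with hypothetical shapes and helper names; real readers would yield numpy arrays):

```python
# Framework-free sketch of the three reader contracts.
# make_sample is a hypothetical helper producing one (image, label) pair.
BATCH_SIZE = 4
BATCH_NUM = 3

def make_sample(i):
    # one (image, label) pair, here with toy 2-element "images"
    return ([float(i), float(i)], [i])

# set_sample_generator: the reader yields ONE sample per step;
# the DataLoader groups samples into batches of batch_size itself.
def sample_reader():
    for i in range(BATCH_NUM * BATCH_SIZE):
        yield make_sample(i)

# set_sample_list_generator: the reader yields a LIST of samples
# (one batch) per step.
def sample_list_reader():
    for b in range(BATCH_NUM):
        yield [make_sample(b * BATCH_SIZE + i) for i in range(BATCH_SIZE)]

# set_batch_generator: the reader yields already-batched fields per step.
def batch_reader():
    for b in range(BATCH_NUM):
        samples = [make_sample(b * BATCH_SIZE + i) for i in range(BATCH_SIZE)]
        images = [s[0] for s in samples]
        labels = [s[1] for s in samples]
        yield images, labels
```

The full training example further below wires these same three formats into a DataLoader via `set_data_source`.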

If iterable = True, the DataLoader object created by this method is a Python generator and can be iterated with a for-range loop.

If iterable = False, the DataLoader object created by this method provides start() and reset() methods to control the data reading process. This mode is provided for compatibility with fluid.layers.py_reader ; users can set iterable = False to conveniently migrate code from fluid.layers.py_reader to fluid.io.DataLoader .

  • Parameters:
    • feed_list (list(Variable)|tuple(Variable)) - the list of feed variables, created by fluid.layers.data() .
    • capacity (int) - the capacity of the queue maintained inside the DataLoader object, in units of batches. If the reader is fast, a larger capacity is recommended.
    • use_double_buffer (bool) - whether to use double_buffer_reader . If use_double_buffer = True, the DataLoader prefetches the next batch asynchronously, which speeds up data reading at the cost of a small amount of CPU/GPU memory, i.e., the storage space of one batch of input data.
    • iterable (bool) - whether the created DataLoader object is iterable.
    • return_list (bool) - whether the data on each device is returned as a list. Only valid when iterable = True. If return_list = False, the data returned on each device is a str -> LoDTensor map, whose keys are the names of the input variables. If return_list = True, the data returned on each device is a list(LoDTensor). return_list = False is recommended in static graph mode, and return_list = True in dygraph mode.
    • use_multiprocess (bool) - whether to use multiprocessing to speed up data loading in dygraph mode. Note: this parameter only takes effect in dygraph mode; in static graph mode it has no effect. Default: False.

Returns: the created DataLoader object

Return type: loader (DataLoader)

Code example

import paddle.fluid as fluid
import numpy as np

BATCH_NUM = 10
BATCH_SIZE = 16
EPOCH_NUM = 4

CLASS_NUM = 10

ITERABLE = True # whether the created DataLoader object is iterable
USE_GPU = False # whether to use GPU

DATA_FORMAT = 'batch_generator' # data format of data source user provides

def simple_net(image, label):
    fc_tmp = fluid.layers.fc(image, size=CLASS_NUM)
    cross_entropy = fluid.layers.softmax_with_cross_entropy(fc_tmp, label)
    loss = fluid.layers.reduce_mean(cross_entropy)
    sgd = fluid.optimizer.SGD(learning_rate=1e-3)
    sgd.minimize(loss)
    return loss

def get_random_images_and_labels(image_shape, label_shape):
    image = np.random.random(size=image_shape).astype('float32')
    label = np.random.random(size=label_shape).astype('int64')
    return image, label

# If the data generator yields one sample each time,
# use DataLoader.set_sample_generator to set the data source.
def sample_generator_creator():
    def __reader__():
        for _ in range(BATCH_NUM * BATCH_SIZE):
            image, label = get_random_images_and_labels([784], [1])
            yield image, label

    return __reader__

# If the data generator yields a list of samples each time,
# use DataLoader.set_sample_list_generator to set the data source.
def sample_list_generator_creator():
    def __reader__():
        for _ in range(BATCH_NUM):
            sample_list = []
            for _ in range(BATCH_SIZE):
                image, label = get_random_images_and_labels([784], [1])
                sample_list.append([image, label])

            yield sample_list

    return __reader__

# If the data generator yields a batch each time,
# use DataLoader.set_batch_generator to set the data source.
def batch_generator_creator():
    def __reader__():
        for _ in range(BATCH_NUM):
            batch_image, batch_label = get_random_images_and_labels([BATCH_SIZE, 784], [BATCH_SIZE, 1])
            yield batch_image, batch_label

    return __reader__

# If DataLoader is iterable, use for loop to train the network
def train_iterable(exe, prog, loss, loader):
    for _ in range(EPOCH_NUM):
        for data in loader():
            exe.run(prog, feed=data, fetch_list=[loss])

# If DataLoader is not iterable, use start() and reset() method to control the process
def train_non_iterable(exe, prog, loss, loader):
    for _ in range(EPOCH_NUM):
        loader.start() # call DataLoader.start() before each epoch starts
        try:
            while True:
                exe.run(prog, fetch_list=[loss])
        except fluid.core.EOFException:
            loader.reset() # call DataLoader.reset() after catching EOFException

def set_data_source(loader, places):
    if DATA_FORMAT == 'sample_generator':
        loader.set_sample_generator(sample_generator_creator(), batch_size=BATCH_SIZE, drop_last=True, places=places)
    elif DATA_FORMAT == 'sample_list_generator':
        loader.set_sample_list_generator(sample_list_generator_creator(), places=places)
    elif DATA_FORMAT == 'batch_generator':
        loader.set_batch_generator(batch_generator_creator(), places=places)
    else:
        raise ValueError('Unsupported data format')

image = fluid.layers.data(name='image', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')

# Define DataLoader
loader = fluid.io.DataLoader.from_generator(feed_list=[image, label], capacity=16, iterable=ITERABLE)

# Define network
loss = simple_net(image, label)

# Set data source of DataLoader
#
# If DataLoader is iterable, places must be given and the number of places must be the same with device number.
# - If you are using GPU, call `fluid.cuda_places()` to get all GPU places.
# - If you are using CPU, call `fluid.cpu_places()` to get all CPU places.
#
# If DataLoader is not iterable, places can be None.
places = fluid.cuda_places() if USE_GPU else fluid.cpu_places()
set_data_source(loader, places)

exe = fluid.Executor(places[0])
exe.run(fluid.default_startup_program())

prog = fluid.CompiledProgram(fluid.default_main_program()).with_data_parallel(loss_name=loss.name)

if loader.iterable:
    train_iterable(exe, prog, loss, loader)
else:
    train_non_iterable(exe, prog, loss, loader)

'''
Users can use return_list = True in dygraph mode.
'''
with fluid.dygraph.guard(places[0]):
    loader = fluid.io.DataLoader.from_generator(capacity=2, return_list=True)
    set_data_source(loader, places[0])
    for image, label in loader():
        relu = fluid.layers.relu(image)
        assert image.shape == [BATCH_SIZE, 784]
        assert label.shape == [BATCH_SIZE, 1]
        assert relu.shape == [BATCH_SIZE, 784]
  • from_dataset(dataset, places, drop_last=True)

Creates a DataLoader object for loading data produced by a Dataset. Currently, Dataset is only supported on Linux.

  • Parameters:
    • dataset (InMemoryDataset|QueueDataset) - the Dataset object.
    • places (list(CUDAPlace)|list(CPUPlace)) - the places where the data returned by the DataLoader object resides.
    • drop_last (bool) - whether to drop the last batch whose sample count is less than the batch size. If drop_last = True, it is dropped; if drop_last = False, it is kept.
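The effect of drop_last reduces to simple arithmetic over the sample count. A framework-free sketch (hypothetical counts, not part of the DataLoader API):

```python
# Framework-free sketch: how drop_last affects the number of batches per pass.
def num_batches(num_samples, batch_size, drop_last):
    full, remainder = divmod(num_samples, batch_size)
    # drop_last=True discards the final batch with fewer than batch_size samples
    return full if (drop_last or remainder == 0) else full + 1

print(num_batches(100, 32, True))   # -> 3
print(num_batches(100, 32, False))  # -> 4
```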

Returns: the created DataLoader object, which can be iterated with a for-range loop

Return type: loader (DataLoader)

Code example

import paddle.fluid as fluid

image = fluid.layers.data(name='image', shape=[784], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='int64')

dataset = fluid.DatasetFactory().create_dataset("QueueDataset")
dataset.set_batch_size(32)
dataset.set_filelist(['a.txt', 'b.txt', 'c.txt'])
dataset.set_use_var([image, label])
dataset.set_pipe_command('cat')

loader = fluid.io.DataLoader.from_dataset(dataset, fluid.cpu_places())