mindspore.dataset

This module provides APIs to load and process various datasets: MNIST, CIFAR-10, CIFAR-100, VOC, ImageNet, CelebA, etc. It also supports datasets in special formats, including MindRecord, TFRecord, and Manifest. Users can also create samplers with this module to sample data.

  • class mindspore.dataset.ImageFolderDatasetV2(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, extensions=None, class_indexing=None, decode=False, num_shards=None, shard_id=None)[source]
  • A source dataset that reads images from a tree of directories.

All images within one folder have the same label. The generated dataset has two columns [‘image’, ‘label’]. The shape of the image column is [image_size] if the decode flag is False, or [H, W, C] otherwise. The type of the image tensor is uint8. The label is a scalar uint64 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

  Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
  None                  None                  random order
  None                  True                  random order
  None                  False                 sequential order
  Sampler object        None                  order defined by sampler
  Sampler object        True                  not allowed
  Sampler object        False                 not allowed

  • Parameters
    • dataset_dir (str) – Path to the root directory that contains the dataset.

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, set in the config).

    • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

    • extensions (list[str], optional) – List of file extensions to be included in the dataset (default=None).

    • class_indexing (dict, optional) – A str-to-int mapping from folder name to index (default=None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0).

    • decode (bool, optional) – Decode the images after reading (default=False).

    • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

    • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

  • Raises

    • RuntimeError – If sampler and shuffle are specified at the same time.

    • RuntimeError – If sampler and sharding are specified at the same time.

    • RuntimeError – If num_shards is specified but shard_id is None.

    • RuntimeError – If shard_id is specified but num_shards is None.

    • RuntimeError – If class_indexing is not a dictionary.

    • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> import mindspore.dataset as ds
>>> # path to imagefolder directory. This directory needs to contain sub-directories which contain the images
>>> dataset_dir = "/path/to/imagefolder_directory"
>>> # 1) read all samples (image files) in dataset_dir with 8 threads
>>> imagefolder_dataset = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8)
>>> # 2) read all samples (image files) from folder cat and folder dog with label 0 and 1
>>> imagefolder_dataset = ds.ImageFolderDatasetV2(dataset_dir, class_indexing={"cat": 0, "dog": 1})
>>> # 3) read all samples (image files) in dataset_dir with extensions .JPEG and .png (case sensitive)
>>> imagefolder_dataset = ds.ImageFolderDatasetV2(dataset_dir, extensions={".JPEG", ".png"})
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of using repeat and batch affects the number of batches. It is recommended that the repeat operation be used after the batch operation.

  • Parameters
    • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

    • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

    • num_parallel_workers (int, optional) – Number of workers to process the Dataset in parallel (default=None).

    • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

    • input_columns (list[str], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • Returns

    • BatchDataset, dataset batched.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> # creates a dataset where every 100 rows is combined into a batch
>>> # and drops the last incomplete batch if there is one.
>>> data = data.batch(100, True)
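
As a rough illustration of the Note above, the relative order of batch and repeat changes how many batches are produced. The following sketch assumes a hypothetical data with 10 rows; the variable names are illustrative only:

>>> # batch first, then repeat: 10 rows -> 4 batches per epoch -> 8 batches in total
>>> batched_then_repeated = data.batch(3).repeat(2)
>>>
>>> # repeat first, then batch: 20 rows -> 7 batches in total, and one batch
>>> # mixes rows from the two epochs
>>> repeated_then_batched = data.repeat(2).batch(3)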
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

  • Returns

    • Iterator, dictionary of column_name-ndarray pair.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # creates an iterator. The columns in the data obtained by the
>>> # iterator might be changed.
>>> iterator = data.create_dict_iterator()
>>> for item in iterator:
>>>     # print the data in column1
>>>     print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

  • Parameters
    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

  • Returns

    • Iterator, list of ndarray.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # creates an iterator. The columns in the data obtained by the
>>> # iterator will not be changed.
>>> iterator = data.create_tuple_iterator()
>>> for item in iterator:
>>>     # convert the returned tuple to a list and print
>>>     print(list(item))
  • device_que(prefetch_size=None)
  • Returns a TransferDataset that transfers data through tdt.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the repeat count of a RepeatDataset; otherwise return 1.

    • Returns
    • Number, the count of repeat.
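
As a brief, hypothetical sketch, the size-related getters above can be combined to inspect a pipeline (assuming data has already been batched; the printed values depend on the actual dataset):

>>> batch_size = data.get_batch_size()      # rows per batch
>>> num_batches = data.get_dataset_size()   # batches in one epoch
>>> repeat_count = data.get_repeat_count()  # 1 unless repeat() was applied
>>> print(batch_size, num_batches, repeat_count)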
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

  • Parameters
    • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator (default=None, the first operation will be passed however many columns are required, starting from the first column).

    • operations (list[TensorOp] or list of Python functions) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

    • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

    • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns) (default=None, all columns will be propagated to the child node, the order of the columns will remain the same).

    • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

  • Returns

    • MapDataset, dataset after mapping operation.

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
>>>
>>> # data is an instance of Dataset which has 2 columns, "image" and "label".
>>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
>>> # a 2d array of integers.
>>>
>>> # This config is a global setting, meaning that all future operations which
>>> # use this config value will use 2 worker threads, unless specified
>>> # otherwise in their constructor. set_num_parallel_workers can be called
>>> # again later if a different number of worker threads are needed.
>>> ds.config.set_num_parallel_workers(2)
>>>
>>> # Two operations, each of which takes 1 column as input and outputs 1 column.
>>> decode_op = c_transforms.Decode(rgb_format=True)
>>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
>>>
>>> # 1) Simple map example
>>>
>>> operations = [decode_op]
>>> input_columns = ["image"]
>>>
>>> # Applies decode_op on column "image". This column will be replaced by the outputted
>>> # column of decode_op. Since columns_order is not provided, both columns "image"
>>> # and "label" will be propagated to the child node in their original order.
>>> ds_decoded = data.map(input_columns, operations)
>>>
>>> # Rename column "image" to "decoded_image"
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns)
>>>
>>> # Specify the order of the columns.
>>> columns_order = ["label", "image"]
>>> ds_decoded = data.map(input_columns, operations, None, columns_order)
>>>
>>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
>>> columns_order = ["label", "decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Rename column "image" to "decoded_image" and keep only this column.
>>> columns_order = ["decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Simple example using a pyfunc. Renaming columns and specifying column order
>>> # work in the same way as the previous examples.
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + 1)]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations)
>>>
>>> # 2) Map example with more than one operation
>>>
>>> # If this list of operations is used with map, decode_op will be applied
>>> # first, then random_jitter_op will be applied.
>>> operations = [decode_op, random_jitter_op]
>>>
>>> input_columns = ["image"]
>>>
>>> # Creates a dataset where the images are decoded, then randomly color jittered.
>>> # decode_op takes column "image" as input and outputs one column. The column
>>> # outputted by decode_op is passed as input to random_jitter_op.
>>> # random_jitter_op will output one column. Column "image" will be replaced by
>>> # the column outputted by random_jitter_op (the very last operation). All other
>>> # columns are unchanged. Since columns_order is not specified, the order of the
>>> # columns will remain the same.
>>> ds_mapped = data.map(input_columns, operations)
>>>
>>> # Creates a dataset that is identical to ds_mapped, except the column "image"
>>> # that is outputted by random_jitter_op is renamed to "image_transformed".
>>> # Specifying column order works in the same way as the examples in 1).
>>> output_columns = ["image_transformed"]
>>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
>>>
>>> # Multiple operations using pyfunc. Renaming columns and specifying column order
>>> # work in the same way as the examples in 1).
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + x), (lambda x: x - 1)]
>>> output_columns = ["col0_mapped"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
>>>
>>> # 3) Example where the number of input columns is not equal to the number of output columns
>>>
>>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
>>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
>>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
>>> #
>>> # Note: the number of output columns of operations[i] must equal the number of
>>> # input columns of operations[i+1]. Otherwise, this map call will also result
>>> # in an error.
>>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
>>>               (lambda x, y, z: x * y * z),
>>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
>>>
>>> # Note: because the number of input columns is not the same as the number of
>>> # output columns, the output_columns and columns_order parameters must be
>>> # specified. Otherwise, this map call will also result in an error.
>>> input_columns = ["col2", "col0"]
>>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
>>>
>>> # Propagate all columns to the child node in this order:
>>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Propagate some columns to the child node in this order:
>>> columns_order = ["mod7", "mod3", "col1"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()[source]
  • Get the number of classes in dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
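
For example, output_shapes and output_types can be used to inspect a pipeline before iterating over it. This is a minimal sketch; the returned values depend entirely on the actual dataset:

>>> # one entry per column, in column order
>>> print(data.output_shapes())
>>> print(data.output_types())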
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

  • Parameters
    • columns (list[str]) – List of names of the columns to project.

  • Returns

    • ProjectDataset, dataset projected.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> columns_to_project = ["column3", "column1", "column2"]
>>>
>>> # creates a dataset that consist of column3, column1, column2
>>> # in that order, regardless of the original order of columns.
>>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> input_columns = ["input_col1", "input_col2", "input_col3"]
>>> output_columns = ["output_col1", "output_col2", "output_col3"]
>>>
>>> # creates a dataset where input_col1 is renamed to output_col1, and
>>> # input_col2 is renamed to output_col2, and input_col3 is renamed
>>> # to output_col3.
>>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of using repeat and batch affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

  • Parameters
    • count (int) – Number of times the dataset should be repeated (default=None).

  • Returns

    • RepeatDataset, dataset repeated.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> # creates a dataset where the dataset is repeated for 50 epochs
>>> repeated = data.repeat(50)
>>>
>>> # creates a dataset where each epoch is shuffled individually
>>> shuffled_and_repeated = data.shuffle(10)
>>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
>>>
>>> # creates a dataset where the dataset is first repeated for
>>> # 50 epochs before shuffling. the shuffle operator will treat
>>> # the entire 50 epochs as one big dataset.
>>> repeat_and_shuffle = data.repeat(50)
>>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for the next epoch.

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    1. Make a shuffle buffer that contains the first buffer_size rows.

    2. Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    3. Get the next row (if any) from the parent node and put it in the shuffle buffer.

    4. Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

  • Parameters
    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

  • Returns

    • ShuffleDataset, dataset shuffled.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # optionally set the seed for the first epoch
>>> ds.config.set_seed(58)
>>>
>>> # creates a shuffled dataset using a shuffle buffer of size 4
>>> data = data.shuffle(4)
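
The buffer-based shuffle described above can be sketched in plain Python as follows. This is only an illustration of the expected behavior, not the actual implementation:

>>> import random
>>> def buffered_shuffle(rows, buffer_size, seed=None):
>>>     rng = random.Random(seed)
>>>     buffer = []
>>>     for row in rows:
>>>         # fill the shuffle buffer from the parent node
>>>         buffer.append(row)
>>>         if len(buffer) >= buffer_size:
>>>             # emit a randomly selected element from the buffer
>>>             yield buffer.pop(rng.randrange(len(buffer)))
>>>     # drain the buffer once the parent node is exhausted
>>>     while buffer:
>>>         yield buffer.pop(rng.randrange(len(buffer)))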
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

>>> import mindspore.dataset as ds
>>> # ds1 and ds2 are instances of Dataset object
>>> # creates a dataset which is the combination of ds1 and ds2
>>> data = ds1.zip(ds2)
  • class mindspore.dataset.MnistDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None)[source]
  • A source dataset for reading and parsing the Mnist dataset.

The generated dataset has two columns [‘image’, ‘label’]. The type of the image tensor is uint8. The label is a scalar uint32 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

  Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
  None                  None                  random order
  None                  True                  random order
  None                  False                 sequential order
  Sampler object        None                  order defined by sampler
  Sampler object        True                  not allowed
  Sampler object        False                 not allowed

  • Parameters
    • dataset_dir (str) – Path to the root directory that contains the dataset.

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, set in the config).

    • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

    • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

    • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

  • Raises

    • RuntimeError – If sampler and shuffle are specified at the same time.

    • RuntimeError – If sampler and sharding are specified at the same time.

    • RuntimeError – If num_shards is specified but shard_id is None.

    • RuntimeError – If shard_id is specified but num_shards is None.

    • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

>>> import mindspore.dataset as ds
>>> dataset_dir = "/path/to/mnist_folder"
>>> # 1) read 3 samples from mnist_dataset
>>> mnist_dataset = ds.MnistDataset(dataset_dir=dataset_dir, num_samples=3)
>>> # in mnist_dataset dataset, each dictionary has keys "image" and "label"
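
A typical pipeline chains MnistDataset with the operators documented below. This is a hypothetical sketch; the path, buffer size and batch size are illustrative only:

>>> mnist_ds = ds.MnistDataset(dataset_dir="/path/to/mnist_folder")
>>> mnist_ds = mnist_ds.shuffle(buffer_size=10000)
>>> mnist_ds = mnist_ds.batch(32, drop_remainder=True)
>>> for item in mnist_ds.create_dict_iterator():
>>>     print(item["image"].shape, item["label"])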
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of using repeat and batch affects the number of batches. It is recommended that the repeat operation be used after the batch operation.

  • Parameters
    • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

    • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

    • num_parallel_workers (int, optional) – Number of workers to process the Dataset in parallel (default=None).

    • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

    • input_columns (list[str], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • Returns

    • BatchDataset, dataset batched.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> # creates a dataset where every 100 rows is combined into a batch
>>> # and drops the last incomplete batch if there is one.
>>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

  • Returns

    • Iterator, dictionary of column_name-ndarray pair.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # creates an iterator. The columns in the data obtained by the
>>> # iterator might be changed.
>>> iterator = data.create_dict_iterator()
>>> for item in iterator:
>>>     # print the data in column1
>>>     print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

  • Parameters
    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

  • Returns

    • Iterator, list of ndarray.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # creates an iterator. The columns in the data obtained by the
>>> # iterator will not be changed.
>>> iterator = data.create_tuple_iterator()
>>> for item in iterator:
>>>     # convert the returned tuple to a list and print
>>>     print(list(item))
  • device_que(prefetch_size=None)
  • Returns a TransferDataset that transfers data through tdt.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the repeat count of a RepeatDataset; otherwise return 1.

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

  • Parameters
    • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator (default=None, the first operation will be passed however many columns are required, starting from the first column).

    • operations (list[TensorOp] or list of Python functions) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

    • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

    • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns) (default=None, all columns will be propagated to the child node, the order of the columns will remain the same).

    • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

  • Returns

    • MapDataset, dataset after mapping operation.

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
>>>
>>> # data is an instance of Dataset which has 2 columns, "image" and "label".
>>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
>>> # a 2d array of integers.
>>>
>>> # This config is a global setting, meaning that all future operations which
>>> # use this config value will use 2 worker threads, unless specified
>>> # otherwise in their constructor. set_num_parallel_workers can be called
>>> # again later if a different number of worker threads are needed.
>>> ds.config.set_num_parallel_workers(2)
>>>
>>> # Two operations, each of which takes 1 column as input and outputs 1 column.
>>> decode_op = c_transforms.Decode(rgb_format=True)
>>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
>>>
>>> # 1) Simple map example
>>>
>>> operations = [decode_op]
>>> input_columns = ["image"]
>>>
>>> # Applies decode_op on column "image". This column will be replaced by the outputted
>>> # column of decode_op. Since columns_order is not provided, both columns "image"
>>> # and "label" will be propagated to the child node in their original order.
>>> ds_decoded = data.map(input_columns, operations)
>>>
>>> # Rename column "image" to "decoded_image"
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns)
>>>
>>> # Specify the order of the columns.
>>> columns_order = ["label", "image"]
>>> ds_decoded = data.map(input_columns, operations, None, columns_order)
>>>
>>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
>>> columns_order = ["label", "decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Rename column "image" to "decoded_image" and keep only this column.
>>> columns_order = ["decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Simple example using a pyfunc. Renaming columns and specifying column order
>>> # work in the same way as the previous examples.
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + 1)]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations)
>>>
>>> # 2) Map example with more than one operation
>>>
>>> # If this list of operations is used with map, decode_op will be applied
>>> # first, then random_jitter_op will be applied.
>>> operations = [decode_op, random_jitter_op]
>>>
>>> input_columns = ["image"]
>>>
>>> # Creates a dataset where the images are decoded, then randomly color jittered.
>>> # decode_op takes column "image" as input and outputs one column. The column
>>> # outputted by decode_op is passed as input to random_jitter_op.
>>> # random_jitter_op will output one column. Column "image" will be replaced by
>>> # the column outputted by random_jitter_op (the very last operation). All other
>>> # columns are unchanged. Since columns_order is not specified, the order of the
>>> # columns will remain the same.
>>> ds_mapped = data.map(input_columns, operations)
>>>
>>> # Creates a dataset that is identical to ds_mapped, except the column "image"
>>> # that is outputted by random_jitter_op is renamed to "image_transformed".
>>> # Specifying column order works in the same way as the examples in 1).
>>> output_columns = ["image_transformed"]
>>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
>>>
>>> # Multiple operations using pyfunc. Renaming columns and specifying column order
>>> # work in the same way as the examples in 1).
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + x), (lambda x: x - 1)]
>>> output_columns = ["col0_mapped"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
>>>
>>> # 3) Example where the number of input columns is not equal to the number of output columns
>>>
>>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
>>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
>>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
>>> #
>>> # Note: the number of output columns of operations[i] must equal the number of
>>> # input columns of operations[i+1]. Otherwise, this map call will also result
>>> # in an error.
>>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
>>>               (lambda x, y, z: x * y * z),
>>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
>>>
>>> # Note: because the number of input columns is not the same as the number of
>>> # output columns, the output_columns and columns_order parameters must be
>>> # specified. Otherwise, this map call will also result in an error.
>>> input_columns = ["col2", "col0"]
>>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
>>>
>>> # Propagate all columns to the child node in this order:
>>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Propagate some columns to the child node in this order:
>>> columns_order = ["mod7", "mod3", "col1"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

  • Parameters
    • columns (list[str]) – List of names of the columns to project.

  • Returns

    • ProjectDataset, dataset projected.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> columns_to_project = ["column3", "column1", "column2"]
>>>
>>> # creates a dataset that consist of column3, column1, column2
>>> # in that order, regardless of the original order of columns.
>>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> input_columns = ["input_col1", "input_col2", "input_col3"]
>>> output_columns = ["output_col1", "output_col2", "output_col3"]
>>>
>>> # creates a dataset where input_col1 is renamed to output_col1, and
>>> # input_col2 is renamed to output_col2, and input_col3 is renamed
>>> # to output_col3.
>>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of using repeat and batch affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

  • Parameters
    • count (int) – Number of times the dataset should be repeated (default=None).

  • Returns

    • RepeatDataset, dataset repeated.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> # creates a dataset where the dataset is repeated for 50 epochs
>>> repeated = data.repeat(50)
>>>
>>> # creates a dataset where each epoch is shuffled individually
>>> shuffled_and_repeated = data.shuffle(10)
>>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
>>>
>>> # creates a dataset where the dataset is first repeated for
>>> # 50 epochs before shuffling. the shuffle operator will treat
>>> # the entire 50 epochs as one big dataset.
>>> repeat_and_shuffle = data.repeat(50)
>>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for the next epoch.

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    1. Make a shuffle buffer that contains the first buffer_size rows.

    2. Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    3. Get the next row (if any) from the parent node and put it in the shuffle buffer.

    4. Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

  • Parameters
    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

  • Returns

    • ShuffleDataset, dataset shuffled.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # optionally set the seed for the first epoch
>>> ds.config.set_seed(58)
>>>
>>> # creates a shuffled dataset using a shuffle buffer of size 4
>>> data = data.shuffle(4)
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

>>> import mindspore.dataset as ds
>>> # ds1 and ds2 are instances of Dataset object
>>> # creates a dataset which is the combination of ds1 and ds2
>>> data = ds1.zip(ds2)
  • class mindspore.dataset.StorageDataset(dataset_files, schema, distribution='', columns_list=None, num_parallel_workers=None, deterministic_output=None, prefetch_size=None)[source]
  • A source dataset that reads and parses datasets stored on disk in various formats, including TFData format.

    • Parameters
      • dataset_files (list[str]) – List of files to be read.

      • schema (str) – Path to the json schema file.

      • distribution (str, optional) – Path of distribution config file (default="").

      • columns_list (list[str], optional) – List of columns to be read (default=None, read all columns).

      • num_parallel_workers (int, optional) – Number of parallel working threads (default=None).

      • deterministic_output (bool, optional) – Whether the result of this dataset can be reproduced or not (default=True). If True, performance might be affected.

      • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Raises

      • RuntimeError – If schema file failed to read.

      • RuntimeError – If distribution file path is given but failed to read.
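
Examples

A minimal, hypothetical example of constructing a StorageDataset from TFData files; the file names and schema path are placeholders:

>>> import mindspore.dataset as ds
>>> dataset_files = ["/path/to/data_0.tfrecord", "/path/to/data_1.tfrecord"]
>>> schema_file = "/path/to/schema.json"
>>> storage_dataset = ds.StorageDataset(dataset_files, schema_file, columns_list=["image", "label"])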

    • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)

    • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of using repeat and batch affects the number of batches. It is recommended that the repeat operation be used after the batch operation.

  • Parameters
    • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

    • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

    • num_parallel_workers (int, optional) – Number of workers to process the Dataset in parallel (default=None).

    • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

    • input_columns (list[str], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • Returns

    • BatchDataset, dataset batched.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object.
>>> # creates a dataset where every 100 rows is combined into a batch
>>> # and drops the last incomplete batch if there is one.
>>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

  • Returns

    • Iterator, dictionary of column_name-ndarray pair.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # creates an iterator. The columns in the data obtained by the
>>> # iterator might be changed.
>>> iterator = data.create_dict_iterator()
>>> for item in iterator:
>>>     # print the data in column1
>>>     print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

  • Parameters
    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

  • Returns

    • Iterator, list of ndarray.

Examples

>>> import mindspore.dataset as ds
>>> # data is an instance of Dataset object
>>> # creates an iterator. The columns in the data obtained by the
>>> # iterator will not be changed.
>>> iterator = data.create_tuple_iterator()
>>> for item in iterator:
>>>     # convert the returned tuple to a list and print
>>>     print(list(item))
  • device_que(prefetch_size=None)
  • Returns a TransferDataset that transfers data through tdt.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the repeat count of a RepeatDataset; otherwise return 1.

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

  • Parameters
    • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator (default=None, the first operation will be passed however many columns are required, starting from the first column).

    • operations (list[TensorOp] or list of Python functions) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

    • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

    • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns) (default=None, all columns will be propagated to the child node, the order of the columns will remain the same).

    • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

  • Returns

    • MapDataset, dataset after mapping operation.

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
>>>
>>> # data is an instance of Dataset which has 2 columns, "image" and "label".
>>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
>>> # a 2d array of integers.
>>>
>>> # This config is a global setting, meaning that all future operations which
>>> # use this config value will use 2 worker threads, unless specified
>>> # otherwise in their constructor. set_num_parallel_workers can be called
>>> # again later if a different number of worker threads are needed.
>>> ds.config.set_num_parallel_workers(2)
>>>
>>> # Two operations, each of which takes 1 column as input and outputs 1 column.
>>> decode_op = c_transforms.Decode(rgb_format=True)
>>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
>>>
>>> # 1) Simple map example
>>>
>>> operations = [decode_op]
>>> input_columns = ["image"]
>>>
>>> # Applies decode_op on column "image". This column will be replaced by the outputted
>>> # column of decode_op. Since columns_order is not provided, both columns "image"
>>> # and "label" will be propagated to the child node in their original order.
>>> ds_decoded = data.map(input_columns, operations)
>>>
>>> # Rename column "image" to "decoded_image"
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns)
>>>
>>> # Specify the order of the columns.
>>> columns_order = ["label", "image"]
>>> ds_decoded = data.map(input_columns, operations, None, columns_order)
>>>
>>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
>>> columns_order = ["label", "decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Rename column "image" to "decoded_image" and keep only this column.
>>> columns_order = ["decoded_image"]
>>> output_columns = ["decoded_image"]
>>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Simple example using a pyfunc. Renaming columns and specifying column order
>>> # work in the same way as the previous examples.
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + 1)]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations)
>>>
>>> # 2) Map example with more than one operation
>>>
>>> # If this list of operations is used with map, decode_op will be applied
>>> # first, then random_jitter_op will be applied.
>>> operations = [decode_op, random_jitter_op]
>>>
>>> input_columns = ["image"]
>>>
>>> # Creates a dataset where the images are decoded, then randomly color jittered.
>>> # decode_op takes column "image" as input and outputs one column. The column
>>> # outputted by decode_op is passed as input to random_jitter_op.
>>> # random_jitter_op will output one column. Column "image" will be replaced by
>>> # the column outputted by random_jitter_op (the very last operation). All other
>>> # columns are unchanged. Since columns_order is not specified, the order of the
>>> # columns will remain the same.
>>> ds_mapped = data.map(input_columns, operations)
>>>
>>> # Creates a dataset that is identical to ds_mapped, except the column "image"
>>> # that is outputted by random_jitter_op is renamed to "image_transformed".
>>> # Specifying column order works in the same way as the examples in 1).
>>> output_columns = ["image_transformed"]
>>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
>>>
>>> # Multiple operations using pyfunc. Renaming columns and specifying column order
>>> # work in the same way as the examples in 1).
>>> input_columns = ["col0"]
>>> operations = [(lambda x: x + x), (lambda x: x - 1)]
>>> output_columns = ["col0_mapped"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
>>>
>>> # 3) Example where the number of input columns is not equal to the number of output columns
>>>
>>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
>>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
>>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
>>> #
>>> # Note: the number of output columns of operations[i] must equal the number of
>>> # input columns of operations[i+1]. Otherwise, this map call will also result
>>> # in an error.
>>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
>>>               (lambda x, y, z: x * y * z),
>>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
>>>
>>> # Note: because the number of input columns is not the same as the number of
>>> # output columns, the output_columns and columns_order parameters must be
>>> # specified. Otherwise, this map call will also result in an error.
>>> input_columns = ["col2", "col0"]
>>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
>>>
>>> # Propagate all columns to the child node in this order:
>>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
>>>
>>> # Propagate some columns to the child node in this order:
>>> columns_order = ["mod7", "mod3", "col1"]
>>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()[source]
  • Get the number of classes in the dataset.

    • Returns
    • Number, number of classes.

    • Raises

      • ValueError – If dataset type is invalid.

      • ValueError – If dataset is not an ImageNet or Manifest dataset.

      • RuntimeError – If schema file is given but failed to load.

  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

    • Parameters
    • columns (list[str]) – list of names of the columns to project.

    • Returns

    • ProjectDataset, dataset projected.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consists of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of repeat and batch determines the number of batches; it is recommended that the repeat operation be applied after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid. A short sketch contrasting the two orderings follows the examples below.

    • Parameters
    • count (int) – Number of times the dataset should be repeated (default=None).

    • Returns

    • RepeatDataset, dataset repeated.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
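
The difference between the two orderings of batch and repeat can be illustrated with a minimal sketch (the 100-row dataset is an assumption for illustration only):

>>> # Assume data has 100 rows.
>>> # batch then repeat: each epoch yields ceil(100 / 32) = 4 batches,
>>> # so the pipeline produces 4 * 10 = 40 batches in total.
>>> batched_then_repeated = data.batch(32).repeat(10)
>>>
>>> # repeat then batch: the 10 copies are concatenated into 1000 rows first,
>>> # so the pipeline produces ceil(1000 / 32) = 32 batches, and rows from
>>> # different epochs may land in the same batch.
>>> repeated_then_batched = data.repeat(10).batch(32)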
  • reset()
  • Reset the dataset for the next epoch.

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value. A minimal pure-Python sketch of the buffer algorithm follows the example below.

    • Parameters
    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

    • Returns

    • ShuffleDataset, dataset shuffled.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
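
For intuition, the buffer-based algorithm described above can be sketched in pure Python (an illustration only, not the actual implementation):

>>> import random
>>> def buffer_shuffle(rows, buffer_size):
>>>     it = iter(rows)
>>>     # Step 1: fill the shuffle buffer with the first buffer_size rows.
>>>     buffer = [row for _, row in zip(range(buffer_size), it)]
>>>     while buffer:
>>>         # Step 2: randomly pick a row from the buffer and emit it.
>>>         yield buffer.pop(random.randrange(len(buffer)))
>>>         # Step 3: refill the buffer from the parent node, if rows remain.
>>>         nxt = next(it, None)
>>>         if nxt is not None:
>>>             buffer.append(nxt)
>>> list(buffer_shuffle(range(10), buffer_size=4))

A larger buffer_size gives a more thorough shuffle at the cost of memory; a buffer_size equal to the number of rows reproduces a global shuffle.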
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
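
Because zipped datasets must not share column names, a conflicting column can be renamed before zipping. A minimal sketch, assuming (for illustration only) that both ds1 and ds2 contain a column named "label":

>>> # rename the duplicate column in ds2, then zip
>>> ds2 = ds2.rename(input_columns=["label"], output_columns=["label2"])
>>> data = ds1.zip(ds2)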
  • class mindspore.dataset.MindDataset(dataset_file, columns_list=None, num_parallel_workers=None, shuffle=None, num_shards=None, shard_id=None, block_reader=False)[source]
  • A source dataset that reads from MindRecord shard files and their database files.

    • Parameters
      • dataset_file (str) – One of the file names in the dataset.

      • columns_list (list[str], optional) – List of columns to be read (default=None).

      • num_parallel_workers (int, optional) – The number of readers (default=None).

      • shuffle (bool, optional) – Whether or not to perform shuffle on the dataset (default=None, performs shuffle).

      • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

      • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

      • block_reader (bool, optional) – Whether to read data in block mode (default=False).

    • Raises

      • ValueError – If num_shards is specified but shard_id is None.

      • ValueError – If shard_id is specified but num_shards is None.

      • ValueError – If block_reader is True but a partition is specified.

    • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)

    • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of repeat and batch determines the number of batches; it is recommended that the repeat operation be applied after the batch operation.

    • Parameters
      • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

      • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

      • num_parallel_workers (int, optional) – Number of workers to process the Dataset in parallel (default=None).

      • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

      • input_columns (list of string, optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

    • Returns

    • BatchDataset, dataset batched.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
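
The per_batch_map parameter can be illustrated with a rough sketch; the column name "data" and the padding logic are assumptions for illustration, not part of this API:

>>> import numpy as np
>>> # Pad each 1-D row in the batch of column "data" to the longest row in that batch.
>>> # The callable receives one list of arrays per input column plus a BatchInfo object,
>>> # and returns a tuple with one list per input column.
>>> def pad_to_longest(col_rows, batch_info):
>>>     longest = max(len(row) for row in col_rows)
>>>     return ([np.pad(row, (0, longest - len(row))) for row in col_rows],)
>>> data = data.batch(batch_size=16, per_batch_map=pad_to_longest, input_columns=["data"])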
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

    • Returns

    • Iterator, dictionary of column_name-ndarray pair.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

    • Parameters
    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

    • Returns

    • Iterator, list of ndarray.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)
  • Return a TransferDataset that transfers data through the TDT channel.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the replication times set by RepeatDataset; returns 1 if the dataset has not been repeated.

    • Returns
    • Number, the count of repeat.
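
Taken together, these getters can be illustrated with a small sketch; the 400-row figure is an assumption for illustration only:

>>> # Assume data has 400 rows.
>>> data = data.batch(32)
>>> data = data.repeat(10)
>>> print(data.get_batch_size())    # 32
>>> print(data.get_repeat_count())  # 10
>>> print(data.get_dataset_size())  # batches in one epoch, ceil(400 / 32) = 13 here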
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

    • Parameters
      • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator. (default=None, the first operation will be passed however many columns are required, starting from the first column).

      • operations (list[TensorOp] or Python list[functions]) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

      • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation. (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

      • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns). (default=None, all columns will be propagated to the child node, and the order of the columns will remain the same).

      • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

    • Returns

    • MapDataset, dataset after mapping operation.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # uses this config value will use 2 worker threads, unless if specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputted
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>> (lambda x, y, z: x * y * z),
  93. >>> (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

    • Parameters
    • columns (list[str]) – list of names of the columns to project.

    • Returns

    • ProjectDataset, dataset projected.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consists of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of repeat and batch determines the number of batches; it is recommended that the repeat operation be applied after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

    • Parameters
    • count (int) – Number of times the dataset should be repeated (default=None).

    • Returns

    • RepeatDataset, dataset repeated.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for the next epoch.

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

    • Parameters
    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

    • Returns

    • ShuffleDataset, dataset shuffled.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.GeneratorDataset(generator_function, column_names, column_types=None, prefetch_size=None, sampler=None)[source]
  • A source dataset that generates data by calling the generator function each epoch.

    • Parameters
      • generator_function (callable) – A callable object that returns a Generator object supporting the iter() protocol. The Generator object is required to return a tuple of numpy arrays as a row of the dataset on next().

      • column_names (list[str]) – List of column names of the dataset.

      • column_types (list[mindspore.dtype], optional) – List of column data types of the dataset (default=None). If provided, a sanity check will be performed on the generator output.

      • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

      • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # 1) generator function that generates multi-dimensional data
  3. >>> def generator_md():
  4. >>> for i in range(64):
  5. >>> yield (np.array([[i, i + 1], [i + 2, i + 3]]),)
  6. >>> # create multi_dimension_generator_dataset with generator_md() and column name "multi_dimensional_data"
  7. >>> multi_dimension_generator_dataset = ds.GeneratorDataset(generator_md, ["multi_dimensional_data"])
  8. >>> # 2) generator function that generates multi-columns data
  9. >>> def generator_mc(maxid = 64):
  10. >>> for i in range(maxid):
  11. >>> yield (np.array([i]), np.array([[i, i + 1], [i + 2, i + 3]]))
  12. >>> # create multi_column_generator_dataset with generator_mc() and column names "col1" and "col2"
  13. >>> multi_column_generator_dataset = ds.GeneratorDataset(generator_mc, ["col1", "col2"])
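
When column_types is provided, the generator output is sanity-checked against the declared types. A minimal sketch (the column name "data" and the one-dimensional generator are assumptions for illustration):

>>> import numpy as np
>>> import mindspore.common.dtype as mstype
>>> def generator_1d():
>>>     for i in range(64):
>>>         yield (np.array([i], dtype=np.int64),)
>>> checked_dataset = ds.GeneratorDataset(generator_1d, ["data"], column_types=[mstype.int64])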
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of repeat and batch determines the number of batches; it is recommended that the repeat operation be applied after the batch operation.

    • Parameters
      • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

      • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

      • num_parallel_workers (int, optional) – Number of workers to process the Dataset in parallel (default=None).

      • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

      • input_columns (list of string, optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

    • Returns

    • BatchDataset, dataset batched.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

    • Returns

    • Iterator, dictionary of column_name-ndarray pair.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

    • Parameters
    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

    • Returns

    • Iterator, list of ndarray.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)
  • Return a TransferDataset that transfers data through the TDT channel.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the replication times set by RepeatDataset; returns 1 if the dataset has not been repeated.

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

    • Parameters
      • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator. (default=None, the first operation will be passed however many columns are required, starting from the first column).

      • operations (list[TensorOp] or Python list[functions]) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

      • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation. (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

      • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns). (default=None, all columns will be propagated to the child node, and the order of the columns will remain the same).

      • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

    • Returns

    • MapDataset, dataset after mapping operation.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # uses this config value will use 2 worker threads, unless if specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputted
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>> (lambda x, y, z: x * y * z),
  93. >>> (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

    • Parameters
    • columns (list[str]) – list of names of the columns to project.

    • Returns

    • ProjectDataset, dataset projected.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consists of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of repeat and batch determines the number of batches; it is recommended that the repeat operation be applied after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

    • Parameters
    • count (int) – Number of times the dataset should be repeated (default=None).

    • Returns

    • RepeatDataset, dataset repeated.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for the next epoch.

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

    • Parameters
    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

    • Returns

    • ShuffleDataset, dataset shuffled.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.TFRecordDataset(dataset_files, schema=None, columns_list=None, num_samples=None, num_parallel_workers=None, shuffle=Shuffle.GLOBAL, num_shards=None, shard_id=None, shard_equal_rows=False)[source]
  • A source dataset that reads and parses datasets stored on disk in TFData format.

    • Parameters
      • dataset_files (str or list[str]) – String or list of files to be read or glob strings to search for a pattern offiles. The list will be sorted in a lexicographical order.

      • schema (str or Schema, optional) – Path to the json schema file or schema object (default=None). If the schema is not provided, the meta data from the TFData file is considered the schema.

      • columns_list (list[str], optional) – List of columns to be read (default=None, read all columns).

      • num_samples (int, optional) – Number of samples (rows) to read (default=None, reads the full dataset).

      • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

      • shuffle (bool, Shuffle level, optional) – Perform reshuffling of the data every epoch (default=Shuffle.GLOBAL). If shuffle is False, no shuffling will be performed. If shuffle is True, the behavior is the same as setting shuffle to Shuffle.GLOBAL. Otherwise, there are two levels of shuffling:

        • Shuffle.GLOBAL: Shuffle both the files and samples.

        • Shuffle.FILES: Shuffle files only.

      • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

      • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

      • shard_equal_rows (bool) – Get equal rows for all shards (default=False). If shard_equal_rows is False, the number of rows of each shard may not be equal.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.common.dtype as mstype
  3. >>> dataset_files = ["/path/to/1", "/path/to/2"] # contains 1 or multiple tf data files
  4. >>> # 1) get all rows from dataset_files with no explicit schema:
  5. >>> # The meta-data in the first row will be used as a schema.
  6. >>> tfdataset = ds.TFRecordDataset(dataset_files=dataset_files)
  7. >>> # 2) get all rows from dataset_files with user-defined schema:
  8. >>> schema = ds.Schema()
  9. >>> schema.add_column('col_1d', de_type=mstype.int64, shape=[2])
  10. >>> tfdataset = ds.TFRecordDataset(dataset_files=dataset_files, schema=schema)
  11. >>> # 3) get all rows from dataset_files with schema file "./schema.json":
  12. >>> tfdataset = ds.TFRecordDataset(dataset_files=dataset_files, schema="./schema.json")
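
Sharded reading and the shuffle levels can be combined; the following sketch (the two-shard split is an assumption for illustration, and ds.Shuffle refers to the enum used by the shuffle parameter) reads one of two shards while shuffling at the file level only:

>>> # 4) read shard 0 of 2, shuffling the file order but not the samples
>>> shard0 = ds.TFRecordDataset(dataset_files=dataset_files, num_shards=2, shard_id=0,
>>>                             shuffle=ds.Shuffle.FILES)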
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of repeat and batch determines the number of batches; it is recommended that the repeat operation be applied after the batch operation.

    • Parameters
      • batch_size (int or function) – The number of rows each batch is created with. An int or callable which takes exactly 1 parameter, BatchInfo.

      • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, then those rows will be dropped and not propagated to the child node.

      • num_parallel_workers (int, optional) – Number of workers to process the Dataset in parallel (default=None).

      • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

      • input_columns (list of string, optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

    • Returns

    • BatchDataset, dataset batched.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

    • Returns

    • Iterator, dictionary of column_name-ndarray pair.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

    • Parameters
    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

    • Returns

    • Iterator, list of ndarray.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)
  • Return a TransferDataset that transfers data through the TDT channel.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size(estimate=False)[source]

  • Get the number of batches in an epoch.

    • Parameters
    • estimate (bool, optional) – Fast estimation of the dataset size instead of a full scan.

    • Returns

    • Number, number of batches.
  • get_repeat_count()

  • Get the replication times set by RepeatDataset; returns 1 if the dataset has not been repeated.

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

    • Parameters
      • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator. (default=None, the first operation will be passed however many columns are required, starting from the first column).

      • operations (list[TensorOp] or Python list[functions]) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

      • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation. (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

      • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns). (default=None, all columns will be propagated to the child node, and the order of the columns will remain the same).

      • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

    • Returns

    • MapDataset, dataset after mapping operation.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # uses this config value will use 2 worker threads, unless if specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputted
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>>               (lambda x, y, z: x * y * z),
  93. >>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
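
The three introspection methods above can be combined to inspect a pipeline before iterating it. A minimal sketch (not part of the original reference), assuming data is an already constructed Dataset instance:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # query pipeline metadata without iterating over the rows themselves
  4. >>> print(data.num_classes())    # number of classes, where applicable
  5. >>> print(data.output_shapes())  # one shape per column
  6. >>> print(data.output_types())   # one type per column
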
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

  • Parameters

    • columns (list[str]) – List of names of the columns to project.

  • Returns

  • ProjectDataset, dataset projected.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consist of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order in which repeat and batch are used affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

  • Parameters

    • count (int) – Number of times the dataset should be repeated (default=None).

  • Returns

  • RepeatDataset, dataset repeated.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for next epoch

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

  • Parameters

    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

  • Returns

  • ShuffleDataset, dataset shuffled.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
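
For intuition only, the buffer-based algorithm described above can be sketched in plain Python. This sketch is illustrative and independent of MindSpore; buffer_shuffle and rows are hypothetical names:

  1. >>> import random
  2. >>> def buffer_shuffle(rows, buffer_size):
  3. >>>     buffer = list(rows[:buffer_size])        # step 1: fill the shuffle buffer
  4. >>>     remaining = iter(rows[buffer_size:])
  5. >>>     shuffled = []
  6. >>>     while buffer:
  7. >>>         idx = random.randrange(len(buffer))  # step 2: pick a random buffered row
  8. >>>         shuffled.append(buffer.pop(idx))
  9. >>>         nxt = next(remaining, None)          # step 3: refill from the parent node
  10. >>>         if nxt is not None:
  11. >>>             buffer.append(nxt)
  12. >>>     return shuffled
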
  • to_device(num_batch=None)

  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.
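
As a hedged illustration of the signature above (assuming the target device has already been configured elsewhere), num_batch caps how many batches are sent:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object that has already been batched
  3. >>> # send at most 32 batches to the device queue
  4. >>> transfer = data.to_device(num_batch=32)
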

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
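
Because zipped datasets must not share column names, a common pattern is to rename a clashing column first. A short sketch, assuming both datasets happen to have a column named "label":

  1. >>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object that both contain a column "label"
  3. >>> ds2 = ds2.rename(input_columns=["label"], output_columns=["label2"])
  4. >>> data = ds1.zip(ds2)
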

  • class mindspore.dataset.ManifestDataset(dataset_file, usage='train', num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, class_indexing=None, decode=False, num_shards=None, shard_id=None)[source]
  • A source dataset that reads images from a manifest file.

The generated dataset has two columns [‘image’, ‘label’]. The shape of the image column is [image_size] if the decode flag is False, or [H, W, C] otherwise. The type of the image tensor is uint8. The label is a scalar uint64 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

  • Parameters
    • dataset_file (str) – File to be read.

    • usage (str, optional) – Whether to load the train, eval or inference data (default="train").

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

    • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

    • class_indexing (dict, optional) – A str-to-int mapping from label name to index (default=None, the folder names will be sorted alphabetically and each class will be given a unique index starting from 0).

    • decode (bool, optional) – Decode the images after reading (default=False).

    • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

    • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

  • Raises

    • RuntimeError – If sampler and shuffle are specified at the same time.

    • RuntimeError – If sampler and sharding are specified at the same time.

    • RuntimeError – If num_shards is specified but shard_id is None.

    • RuntimeError – If shard_id is specified but num_shards is None.

    • RuntimeError – If class_indexing is not a dictionary.

    • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_file = "/path/to/manifest_file.manifest"
  3. >>> # 1) read all samples specified in manifest_file dataset with 8 threads for training:
  4. >>> manifest_dataset = ds.ManifestDataset(dataset_file, usage="train", num_parallel_workers=8)
  5. >>> # 2) reads samples (specified in manifest_file.manifest) for shard 0 in a 2-way distributed training setup:
  6. >>> manifest_dataset = ds.ManifestDataset(dataset_file, num_shards=2, shard_id=0)
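
A further hedged example showing class_indexing and decode, assuming the manifest uses the (hypothetical) label names "cat" and "dog":

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_file = "/path/to/manifest_file.manifest"
  3. >>> # 3) pin an explicit label-to-index mapping and decode images while reading
  4. >>> manifest_dataset = ds.ManifestDataset(dataset_file, class_indexing={"cat": 0, "dog": 1}, decode=True)
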
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order in which repeat and batch are used affects the number of batches. It is recommended that the repeat operation be used after the batch operation.

  • Parameters

    • batch_size (int or function) – The number of rows each batch is created with. An int or a callable which takes exactly one parameter, BatchInfo.

    • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, those rows will be dropped and not propagated to the child node.

    • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel (default=None).

    • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

    • input_columns (list[str], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • Returns

  • BatchDataset, dataset batched.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
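
A hedged sketch of the per_batch_map hook described above, assuming the callable should return the transformed batch for each column it receives (here a single "image" column; to_float32 is a hypothetical helper defined inline):

  1. >>> import numpy as np
  2. >>> import mindspore.dataset as ds
  3. >>> # data is an instance of Dataset object with an "image" column
  4. >>> # the callable receives one list of arrays per entry in input_columns, plus BatchInfo
  5. >>> def to_float32(images, batch_info):
  6. >>>     # cast every image in the batch and return the column batch
  7. >>>     return ([img.astype(np.float32) for img in images],)
  8. >>> data = data.batch(batch_size=32, per_batch_map=to_float32, input_columns=["image"])
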
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

  • Returns

  • Iterator, dictionary of column_name-ndarray pairs.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)

  • Create an Iterator over the dataset. The data retrieved will be a list of ndarrays.

To specify which columns to list and their order, use the columns parameter. If columns is not provided, the order of the columns will not be changed.

  • Parameters

    • columns (list[str], optional) – List of columns used to specify the order of the output columns (default=None, meaning all columns).

  • Returns

  • Iterator, list of ndarray.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)

  • Returns a TransferDataset that transfers data through tdt.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user's request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()[source]

  • Get the class index

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the replication times in RepeatDataset else 1

    • Returns
    • Number, the count of repeat.
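
These size and index getters can be combined to sanity-check a pipeline; a minimal sketch, assuming manifest_dataset was built as in the class example above:

  1. >>> # manifest_dataset is an instance of ManifestDataset
  2. >>> print(manifest_dataset.get_dataset_size())    # number of batches in an epoch
  3. >>> print(manifest_dataset.get_batch_size())      # rows per batch
  4. >>> print(manifest_dataset.get_repeat_count())    # repeat count, 1 if no repeat applied
  5. >>> print(manifest_dataset.get_class_indexing())  # label name -> index mapping
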
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations.operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero ormore columns will be outputted. The first operation will be passed the columns specifiedin input_columns as input. If there is more than one operator in operations, the outputtedcolumns of the previous operation are used as the input columns for the next operation.The columns outputted by the very last operation will be assigned names specified byoutput_columns.

Only the columns specified in columns_order will be propagated to the child node. Thesecolumns will be in the same order as specified in columns_order.

  • Parameters

    • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operation (default=None, the first operation will be passed as many columns as it requires, starting from the first column).

    • operations (list[TensorOp] or list of Python functions) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

    • output_columns (list[str], optional) – List of names assigned to the columns output by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation (default=None, output columns keep the same names as the input columns, i.e., the input columns are replaced).

    • columns_order (list[str], optional) – List of all the columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The columns in each row propagated to the child node follow the order they appear in this list. This parameter is mandatory if len(input_columns) != len(output_columns) (default=None, all columns are propagated to the child node in their original order).

    • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

  • Returns

  • MapDataset, dataset after mapping operation.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # use this config value will use 2 worker threads, unless specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputted
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>>               (lambda x, y, z: x * y * z),
  93. >>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()[source]
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

  • Parameters

    • columns (list[str]) – List of names of the columns to project.

  • Returns

  • ProjectDataset, dataset projected.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consist of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order in which repeat and batch are used affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

  • Parameters

    • count (int) – Number of times the dataset should be repeated (default=None).

  • Returns

  • RepeatDataset, dataset repeated.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for next epoch

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

  • Parameters

    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

  • Returns

  • ShuffleDataset, dataset shuffled.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)

  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.Cifar10Dataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None)[source]
  • A source dataset that reads cifar10 data.

The generated dataset has two columns [‘image’, ‘label’]. The type of the image tensor is uint8. The label is a scalar uint32 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

  • Parameters
    • dataset_dir (str) – Path to the root directory that contains the dataset.

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

    • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

    • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

    • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

  • Raises

    • RuntimeError – If sampler and shuffle are specified at the same time.

    • RuntimeError – If sampler and sharding are specified at the same time.

    • RuntimeError – If num_shards is specified but shard_id is None.

    • RuntimeError – If shard_id is specified but num_shards is None.

    • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_dir = "/path/to/cifar10_dataset_directory"
  3. >>> # 1) get all samples from CIFAR10 dataset in sequence:
  4. >>> dataset = ds.Cifar10Dataset(dataset_dir=dataset_dir,shuffle=False)
  5. >>> # 2) randomly select 350 samples from CIFAR10 dataset:
  6. >>> dataset = ds.Cifar10Dataset(dataset_dir=dataset_dir,num_samples=350, shuffle=True)
  7. >>> # 3) get samples from CIFAR10 dataset for shard 0 in a 2 way distributed training:
  8. >>> dataset = ds.Cifar10Dataset(dataset_dir=dataset_dir,num_shards=2,shard_id=0)
  9. >>> # in CIFAR10 dataset, each dictionary has keys "image" and "label"
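
A short, hedged sketch of consuming the CIFAR-10 pipeline with the iterator API documented below (column names as listed above):

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_dir = "/path/to/cifar10_dataset_directory"
  3. >>> dataset = ds.Cifar10Dataset(dataset_dir=dataset_dir, num_samples=4)
  4. >>> for item in dataset.create_dict_iterator():
  5. >>>     print(item["image"].shape, item["label"])
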
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order in which repeat and batch are used affects the number of batches. It is recommended that the repeat operation be used after the batch operation.

  • Parameters

    • batch_size (int or function) – The number of rows each batch is created with. An int or a callable which takes exactly one parameter, BatchInfo.

    • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, those rows will be dropped and not propagated to the child node.

    • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel (default=None).

    • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

    • input_columns (list[str], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • Returns

  • BatchDataset, dataset batched.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

  • Returns

  • Iterator, dictionary of column_name-ndarray pairs.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)

  • Create an Iterator over the dataset. The data retrieved will be a list of ndarrays.

To specify which columns to list and their order, use the columns parameter. If columns is not provided, the order of the columns will not be changed.

  • Parameters

    • columns (list[str], optional) – List of columns used to specify the order of the output columns (default=None, meaning all columns).

  • Returns

  • Iterator, list of ndarray.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)

  • Returns a TransferDataset that transfers data through tdt.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user's request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the replication times in RepeatDataset else 1

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations.operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero ormore columns will be outputted. The first operation will be passed the columns specifiedin input_columns as input. If there is more than one operator in operations, the outputtedcolumns of the previous operation are used as the input columns for the next operation.The columns outputted by the very last operation will be assigned names specified byoutput_columns.

Only the columns specified in columns_order will be propagated to the child node. Thesecolumns will be in the same order as specified in columns_order.

  • Parameters

    • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operation (default=None, the first operation will be passed as many columns as it requires, starting from the first column).

    • operations (list[TensorOp] or list of Python functions) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

    • output_columns (list[str], optional) – List of names assigned to the columns output by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation (default=None, output columns keep the same names as the input columns, i.e., the input columns are replaced).

    • columns_order (list[str], optional) – List of all the columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The columns in each row propagated to the child node follow the order they appear in this list. This parameter is mandatory if len(input_columns) != len(output_columns) (default=None, all columns are propagated to the child node in their original order).

    • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

  • Returns

  • MapDataset, dataset after mapping operation.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # use this config value will use 2 worker threads, unless specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputted
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>>               (lambda x, y, z: x * y * z),
  93. >>>               (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

  • Parameters

    • columns (list[str]) – List of names of the columns to project.

  • Returns

  • ProjectDataset, dataset projected.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consist of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order in which repeat and batch are used affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation here is invalid.

  • Parameters

    • count (int) – Number of times the dataset should be repeated (default=None).

  • Returns

  • RepeatDataset, dataset repeated.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for next epoch

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

  • Parameters

    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

  • Returns

  • ShuffleDataset, dataset shuffled.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)

  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.Cifar100Dataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, sampler=None, num_shards=None, shard_id=None)[source]
  • A source dataset that reads cifar100 data.

The generated dataset has three columns [‘image’, ‘coarse_label’, ‘fine_label’]. The type of the image tensor is uint8. The coarse_label and fine_label are each a scalar uint32 tensor. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

  • Parameters
    • dataset_dir (str) – Path to the root directory that contains the dataset.

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

    • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

    • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

    • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.

  • Raises

    • RuntimeError – If sampler and shuffle are specified at the same time.

    • RuntimeError – If sampler and sharding are specified at the same time.

    • RuntimeError – If num_shards is specified but shard_id is None.

    • RuntimeError – If shard_id is specified but num_shards is None.

    • ValueError – If shard_id is invalid (< 0 or >= num_shards).

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_dir = "/path/to/cifar100_dataset_directory"
  3. >>> # 1) get all samples from CIFAR100 dataset in sequence:
  4. >>> cifar100_dataset = ds.Cifar100Dataset(dataset_dir=dataset_dir,shuffle=False)
  5. >>> # 2) randomly select 350 samples from CIFAR100 dataset:
  6. >>> cifar100_dataset = ds.Cifar100Dataset(dataset_dir=dataset_dir,num_samples=350, shuffle=True)
  7. >>> # in CIFAR100 dataset, each dictionary has 3 keys: "image", "fine_label" and "coarse_label"
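
A hedged end-to-end sketch tying the CIFAR-100 loader to the operations documented below, following the recommended batch-before-repeat order (buffer and batch sizes are illustrative):

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_dir = "/path/to/cifar100_dataset_directory"
  3. >>> data = ds.Cifar100Dataset(dataset_dir=dataset_dir)
  4. >>> data = data.shuffle(buffer_size=1000)
  5. >>> data = data.batch(32, drop_remainder=True)
  6. >>> data = data.repeat(10)
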
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row. For any column, all the elements within that column must have the same shape. If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order in which repeat and batch are used affects the number of batches. It is recommended that the repeat operation be used after the batch operation.

  • Parameters

    • batch_size (int or function) – The number of rows each batch is created with. An int or a callable which takes exactly one parameter, BatchInfo.

    • drop_remainder (bool, optional) – Determines whether or not to drop the last possibly incomplete batch (default=False). If True, and if there are fewer than batch_size rows available to make the last batch, those rows will be dropped and not propagated to the child node.

    • num_parallel_workers (int, optional) – Number of workers to process the dataset in parallel (default=None).

    • per_batch_map (callable, optional) – Per batch map callable. A callable which takes (list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represents a batch of Tensors on a given column. The number of lists should match the number of entries in input_columns. The last parameter of the callable should always be a BatchInfo object.

    • input_columns (list[str], optional) – List of names of the input columns. The size of the list should match the signature of the per_batch_map callable.

  • Returns

  • BatchDataset, dataset batched.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The order of the columns in the dictionary may not be the same as the original order.

  • Returns

  • Iterator, dictionary of column_name-ndarray pairs.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarrays.

To specify which columns to list and the order needed, use columns_list. If columns_list is not provided, the order of the columns will not be changed.

    • Parameters

    • columns (list[str], optional) – List of columns to be used to specify the order of columns (default=None, means all columns).

    • Returns

    • Iterator, list of ndarray.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)
  • Return a TransferDataset that transfers data through tdt.

    • Parameters
    • prefetch_size (int, optional) – Prefetch number of records ahead of the user’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
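
Examples

A minimal sketch, assuming data is an existing Dataset instance and the pipeline runs on a device with a tdt channel:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # create a TransferDataset that prefetches 32 records ahead of the consumer
  4. >>> transfer_data = data.device_que(prefetch_size=32)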
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
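
Examples

A short sketch, assuming data is an existing Dataset instance:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> data = data.batch(32)
  4. >>> # prints 32, the batch size set by the batch operation above
  5. >>> print(data.get_batch_size())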
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
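
Examples

A short sketch, assuming data was created from a source that exposes class names (for example, an image-folder style dataset):

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # returns a mapping such as {"cat": 0, "dog": 1}
  4. >>> class_indexing = data.get_class_indexing()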
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
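
Examples

A short sketch, assuming data is an existing Dataset instance:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> data = data.batch(100, drop_remainder=True)
  4. >>> # number of batches per epoch after the batch operation
  5. >>> num_batches = data.get_dataset_size()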
  • get_repeat_count()

  • Get the repeat count of the RepeatDataset, otherwise 1.

    • Returns
    • Number, the count of repeat.
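
Examples

A short sketch, assuming data is an existing Dataset instance:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> data = data.repeat(50)
  4. >>> # prints 50, the repeat count set above
  5. >>> print(data.get_repeat_count())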
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations. operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero or more columns will be outputted. The first operation will be passed the columns specified in input_columns as input. If there is more than one operator in operations, the outputted columns of the previous operation are used as the input columns for the next operation. The columns outputted by the very last operation will be assigned the names specified by output_columns.

Only the columns specified in columns_order will be propagated to the child node. These columns will be in the same order as specified in columns_order.

    • Parameters

      • input_columns (list[str]) – List of the names of the columns that will be passed to the first operation as input. The size of this list must match the number of input columns expected by the first operator (default=None, the first operation will be passed however many columns are required, starting from the first column).

      • operations (list[TensorOp] or list of Python functions) – List of operations to be applied on the dataset. Operations are applied in the order they appear in this list.

      • output_columns (list[str], optional) – List of names assigned to the columns outputted by the last operation. This parameter is mandatory if len(input_columns) != len(output_columns). The size of this list must match the number of output columns of the last operation (default=None, output columns will have the same name as the input columns, i.e., the columns will be replaced).

      • columns_order (list[str], optional) – List of all the desired columns to propagate to the child node. This list must be a subset of all the columns in the dataset after all operations are applied. The order of the columns in each row propagated to the child node follows the order they appear in this list. The parameter is mandatory if len(input_columns) != len(output_columns) (default=None, all columns will be propagated to the child node, and the order of the columns will remain the same).

      • num_parallel_workers (int, optional) – Number of threads used to process the dataset in parallel (default=None, the value from the config will be used).

    • Returns

    • MapDataset, dataset after mapping operation.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # uses this config value will use 2 worker threads, unless if specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputted
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>> (lambda x, y, z: x * y * z),
  93. >>> (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
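
Examples

A short sketch for inspecting a pipeline, assuming data is an existing Dataset instance:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # print the shape and type of every output column
  4. >>> print(data.output_shapes())
  5. >>> print(data.output_types())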
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed down the pipeline in the order specified. The other columns are discarded.

    • Parameters

    • columns (list[str]) – List of names of the columns to project.

    • Returns

    • ProjectDataset, dataset projected.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consist of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of using repeat and batch affects the number of batches. It is recommended that the repeat operation be used after the batch operation. If dataset_sink_mode is False (feed mode), the repeat operation is invalid here.

    • Parameters

    • count (int) – Number of times the dataset should be repeated (default=None).

    • Returns

    • RepeatDataset, dataset repeated.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for the next epoch.

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next row propagated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequent epoch, the seed is changed to a new, randomly generated value.

    • Parameters

    • buffer_size (int) – The size of the buffer (must be larger than 1) for shuffling. Setting buffer_size equal to the number of rows in the entire dataset will result in a global shuffle.

    • Returns

    • ShuffleDataset, dataset shuffled.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, optional) – Limit the number of batches to be sent to the device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.
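
Examples

A minimal sketch, assuming data is an existing Dataset instance and the script runs on an Ascend or GPU device; the batch count below is illustrative:

  1. >>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # transfer at most 100 batches to the device
  4. >>> transfer_data = data.to_device(num_batch=100)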

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or class Dataset) – A tuple of datasets or a single class Dataset to be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.CelebADataset(dataset_dir, num_parallel_workers=None, shuffle=None, dataset_type='all', sampler=None, decode=False, extensions=None, num_samples=None, num_shards=None, shard_id=None)[source]
  • A source dataset for reading and parsing the CelebA dataset. Currently only list_attr_celeba.txt is supported.

Note

The generated dataset has two columns [‘image’, ‘attr’]. The type of the image tensor is uint8. The attr tensor is of type uint32 and is one-hot encoded.

  • Parameters
    • dataset_dir (str) – Path to the root directory that contains the dataset.

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=value set in the config).

    • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None).

    • dataset_type (string) – One of ‘all’, ‘train’, ‘valid’ or ‘test’.

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None).

    • decode (bool, optional) – Decode the images after reading (default=False).

    • extensions (list[str], optional) – List of file extensions to be included in the dataset (default=None).

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_shards (int, optional) – Number of shards that the dataset should be divided into (default=None).

    • shard_id (int, optional) – The shard ID within num_shards (default=None). This argument should be specified only when num_shards is also specified.
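
Examples

A minimal sketch; the directory path is a placeholder and should contain the CelebA images together with list_attr_celeba.txt:

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_dir = "/path/to/celeba_directory"
  3. >>> # read the 'train' split and decode the images while reading
  4. >>> celeba_dataset = ds.CelebADataset(dataset_dir, dataset_type='train', decode=True)
  5. >>> # each row has two columns: "image" and "attr"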

  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)

  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row.For any column, all the elements within that column must have the same shape.If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of using repeat and batch reflects the number of batches. Recommend thatrepeat operation should be used after batch operation.

  1. - Parameters
  2. -
  3. -

batch_size (int or __function) – The number of rows each batch is created with. Anint or callable which takes exactly 1 parameter, BatchInfo.

  1. -

drop_remainder (bool, __optional) – Determines whether or not to drop the lastpossibly incomplete batch (default=False). If True, and if there are lessthan batch_size rows available to make the last batch, then those rows willbe dropped and not propogated to the child node.

  1. -

num_parallel_workers (int, __optional) – Number of workers to process the Dataset in parallel (default=None).

  1. -

per_batch_map (callable, optional) – Per batch map callable. A callable which takes(list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represent a batch ofTensors on a given column. The number of lists should match with number of entries in input_columns. Thelast parameter of the callable should always be a BatchInfo object.

  1. -

input_columns (list of string, optional) – List of names of the input columns. The size of the list shouldmatch with signature of per_batch_map callable.

  1. - Returns
  2. -

BatchDataset, dataset batched.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The orderof the columns in the dictionary may not be the same as the original order.

  1. - Returns
  2. -

Iterator, dictionary of column_name-ndarray pair.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_listis not provided, the order of the columns will not be changed.

  1. - Parameters
  2. -

columns (list[str], optional) – List of columns to be used to specify the order of columns(defaults=None, means all columns).

  1. - Returns
  2. -

Iterator, list of ndarray.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)
  • Returns a transferredDataset that transfer data through tdt.

    • Parameters
    • prefetch_size (int, __optional) – prefetch number of records ahead of theuser’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the repeat count of the RepeatDataset, otherwise 1.

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations.operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero ormore columns will be outputted. The first operation will be passed the columns specifiedin input_columns as input. If there is more than one operator in operations, the outputtedcolumns of the previous operation are used as the input columns for the next operation.The columns outputted by the very last operation will be assigned names specified byoutput_columns.

Only the columns specified in columns_order will be propagated to the child node. Thesecolumns will be in the same order as specified in columns_order.

  1. - Parameters
  2. -
  3. -

input_columns (list[str]) – List of the names of the columns that will be passed tothe first operation as input. The size of this list must match the number ofinput columns expected by the first operator. (default=None, the firstoperation will be passed however many columns that is required, starting fromthe first column).

  1. -

operations (list[TensorOp] or Python list[functions]) – List of operations to beapplied on the dataset. Operations are applied in the order they appear in this list.

  1. -

output_columns (list[str], optional) – List of names assigned to the columns outputted bythe last operation. This parameter is mandatory if len(input_columns) !=len(output_columns). The size of this list must match the number of outputcolumns of the last operation. (default=None, output columns will have the samename as the input columns, i.e., the columns will be replaced).

  1. -

columns_order (list[str], optional) – list of all the desired columns to propagate to thechild node. This list must be a subset of all the columns in the dataset afterall operations are applied. The order of the columns in each row propagated to thechild node follow the order they appear in this list. The parameter is mandatoryif the len(input_columns) != len(output_columns). (default=None, all columnswill be propagated to the child node, the order of the columns will remain thesame).

  1. -

num_parallel_workers (int, __optional) – Number of threads used to process the dataset inparallel (default=None, the value from the config will be used).

  1. - Returns
  2. -

MapDataset, dataset after mapping operation.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # uses this config value will use 2 worker threads, unless if specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputed
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>> (lambda x, y, z: x * y * z),
  93. >>> (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed downthe pipeline in the order specified. The other columns are discarded.

  1. - Parameters
  2. -

columns (list[str]) – list of names of the columns to project.

  1. - Returns
  2. -

ProjectDataset, dataset projected.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consist of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of using repeat and batch reflects the number of batches. Recommend thatrepeat operation should be used after batch operation.If dataset_sink_mode is False (feed mode), here repeat operation is invalid.

  1. - Parameters
  2. -

count (int) – Number of times the dataset should be repeated (default=None).

  1. - Returns
  2. -

RepeatDataset, dataset repeated.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for next epoch

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next rowpropogated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequentepoch, the seed is changed to a new one, randomly generated value.

  1. - Parameters
  2. -

buffer_size (int) – The size of the buffer (must be larger than 1) forshuffling. Setting buffer_size equal to the number of rows in the entiredataset will result in a global shuffle.

  1. - Returns
  2. -

ShuffleDataset, dataset shuffled.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, __optional) – limit the number of batch to be sent to device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or __class Dataset) – A tuple of datasets or a single class Datasetto be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.VOCDataset(dataset_dir, num_samples=None, num_parallel_workers=None, shuffle=None, decode=False, sampler=None, distribution=None)[source]
  • A source dataset for reading and parsing VOC dataset.

The generated dataset has two columns [‘image’, ‘target’]. The shape of both columns is [image_size] if the decode flag is False, or [H, W, C] otherwise. The type of both tensors is uint8. This dataset can take in a sampler. sampler and shuffle are mutually exclusive. The table below shows what input arguments are allowed and their expected behavior.

Expected Order Behavior of Using ‘sampler’ and ‘shuffle’

Parameter ‘sampler’   Parameter ‘shuffle’   Expected Order Behavior
None                  None                  random order
None                  True                  random order
None                  False                 sequential order
Sampler object        None                  order defined by sampler
Sampler object        True                  not allowed
Sampler object        False                 not allowed

  • Parameters
    • dataset_dir (str) – Path to the root directory that contains the dataset.

    • num_samples (int, optional) – The number of images to be included in the dataset (default=None, all images).

    • num_parallel_workers (int, optional) – Number of workers to read the data (default=None, number set in the config).

    • shuffle (bool, optional) – Whether to perform shuffle on the dataset (default=None, expected order behavior shown in the table).

    • decode (bool, optional) – Decode the images after reading (default=False).

    • sampler (Sampler, optional) – Object used to choose samples from the dataset (default=None, expected order behavior shown in the table).

    • distribution (str, optional) – Path to the JSON distribution file to configure dataset sharding (default=None). This argument should be specified only when no ‘sampler’ is used.

  • Raises

    • RuntimeError – If distribution and sampler are specified at the same time.

    • RuntimeError – If the distribution file fails to be read.

    • RuntimeError – If shuffle and sampler are specified at the same time.

Examples

  1. >>> import mindspore.dataset as ds
  2. >>> dataset_dir = "/path/to/voc_dataset_directory"
  3. >>> # 1) read all VOC dataset samples in dataset_dir with 8 threads in random order:
  4. >>> voc_dataset = ds.VOCDataset(dataset_dir, num_parallel_workers=8)
  5. >>> # 2) read then decode all VOC dataset samples in dataset_dir in sequence:
  6. >>> voc_dataset = ds.VOCDataset(dataset_dir, decode=True, shuffle=False)
  7. >>> # in VOC dataset, each dictionary has keys "image" and "target"
  • batch(batch_size, drop_remainder=False, num_parallel_workers=None, per_batch_map=None, input_columns=None)
  • Combines batch_size number of consecutive rows into batches.

For any child node, a batch is treated as a single row.For any column, all the elements within that column must have the same shape.If a per_batch_map callable is provided, it will be applied to the batches of tensors.

Note

The order of using repeat and batch reflects the number of batches. Recommend thatrepeat operation should be used after batch operation.

  1. - Parameters
  2. -
  3. -

batch_size (int or __function) – The number of rows each batch is created with. Anint or callable which takes exactly 1 parameter, BatchInfo.

  1. -

drop_remainder (bool, __optional) – Determines whether or not to drop the lastpossibly incomplete batch (default=False). If True, and if there are lessthan batch_size rows available to make the last batch, then those rows willbe dropped and not propogated to the child node.

  1. -

num_parallel_workers (int, __optional) – Number of workers to process the Dataset in parallel (default=None).

  1. -

per_batch_map (callable, optional) – Per batch map callable. A callable which takes(list[Tensor], list[Tensor], …, BatchInfo) as input parameters. Each list[Tensor] represent a batch ofTensors on a given column. The number of lists should match with number of entries in input_columns. Thelast parameter of the callable should always be a BatchInfo object.

  1. -

input_columns (list of string, optional) – List of names of the input columns. The size of the list shouldmatch with signature of per_batch_map callable.

  1. - Returns
  2. -

BatchDataset, dataset batched.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where every 100 rows is combined into a batch
  4. >>> # and drops the last incomplete batch if there is one.
  5. >>> data = data.batch(100, True)
  • create_dict_iterator()
  • Create an Iterator over the dataset.

The data retrieved will be a dictionary. The orderof the columns in the dictionary may not be the same as the original order.

  1. - Returns
  2. -

Iterator, dictionary of column_name-ndarray pair.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator might be changed.
  5. >>> iterator = data.create_dict_iterator()
  6. >>> for item in iterator:
  7. >>> # print the data in column1
  8. >>> print(item["column1"])
  • create_tuple_iterator(columns=None)
  • Create an Iterator over the dataset. The data retrieved will be a list of ndarray of data.

To specify which columns to list and the order needed, use columns_list. If columns_listis not provided, the order of the columns will not be changed.

  1. - Parameters
  2. -

columns (list[str], optional) – List of columns to be used to specify the order of columns(defaults=None, means all columns).

  1. - Returns
  2. -

Iterator, list of ndarray.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # creates an iterator. The columns in the data obtained by the
  4. >>> # iterator will not be changed.
  5. >>> iterator = data.create_tuple_iterator()
  6. >>> for item in iterator:
  7. >>> # convert the returned tuple to a list and print
  8. >>> print(list(item))
  • device_que(prefetch_size=None)
  • Returns a transferredDataset that transfer data through tdt.

    • Parameters
    • prefetch_size (int, __optional) – prefetch number of records ahead of theuser’s request (default=None).

    • Returns

    • TransferDataset, dataset for transferring.
  • get_batch_size()

  • Get the size of a batch.

    • Returns
    • Number, the number of data in a batch.
  • get_class_indexing()

  • Get the class index.

    • Returns
    • Dict, A str-to-int mapping from label name to index.
  • get_dataset_size()[source]

  • Get the number of batches in an epoch.

    • Returns
    • Number, number of batches.
  • get_repeat_count()

  • Get the repeat count of the RepeatDataset, otherwise 1.

    • Returns
    • Number, the count of repeat.
  • map(input_columns=None, operations=None, output_columns=None, columns_order=None, num_parallel_workers=None)

  • Applies each operation in operations to this dataset.

The order of operations is determined by the position of each operation in operations.operations[0] will be applied first, then operations[1], then operations[2], etc.

Each operation will be passed one or more columns from the dataset as input, and zero ormore columns will be outputted. The first operation will be passed the columns specifiedin input_columns as input. If there is more than one operator in operations, the outputtedcolumns of the previous operation are used as the input columns for the next operation.The columns outputted by the very last operation will be assigned names specified byoutput_columns.

Only the columns specified in columns_order will be propagated to the child node. Thesecolumns will be in the same order as specified in columns_order.

  1. - Parameters
  2. -
  3. -

input_columns (list[str]) – List of the names of the columns that will be passed tothe first operation as input. The size of this list must match the number ofinput columns expected by the first operator. (default=None, the firstoperation will be passed however many columns that is required, starting fromthe first column).

  1. -

operations (list[TensorOp] or Python list[functions]) – List of operations to beapplied on the dataset. Operations are applied in the order they appear in this list.

  1. -

output_columns (list[str], optional) – List of names assigned to the columns outputted bythe last operation. This parameter is mandatory if len(input_columns) !=len(output_columns). The size of this list must match the number of outputcolumns of the last operation. (default=None, output columns will have the samename as the input columns, i.e., the columns will be replaced).

  1. -

columns_order (list[str], optional) – list of all the desired columns to propagate to thechild node. This list must be a subset of all the columns in the dataset afterall operations are applied. The order of the columns in each row propagated to thechild node follow the order they appear in this list. The parameter is mandatoryif the len(input_columns) != len(output_columns). (default=None, all columnswill be propagated to the child node, the order of the columns will remain thesame).

  1. -

num_parallel_workers (int, __optional) – Number of threads used to process the dataset inparallel (default=None, the value from the config will be used).

  1. - Returns
  2. -

MapDataset, dataset after mapping operation.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.dataset.transforms.vision.c_transforms as c_transforms
  3. >>>
  4. >>> # data is an instance of Dataset which has 2 columns, "image" and "label".
  5. >>> # ds_pyfunc is an instance of Dataset which has 3 columns, "col0", "col1", and "col2". Each column is
  6. >>> # a 2d array of integers.
  7. >>>
  8. >>> # This config is a global setting, meaning that all future operations which
  9. >>> # uses this config value will use 2 worker threads, unless if specified
  10. >>> # otherwise in their constructor. set_num_parallel_workers can be called
  11. >>> # again later if a different number of worker threads are needed.
  12. >>> ds.config.set_num_parallel_workers(2)
  13. >>>
  14. >>> # Two operations, which takes 1 column for input and outputs 1 column.
  15. >>> decode_op = c_transforms.Decode(rgb_format=True)
  16. >>> random_jitter_op = c_transforms.RandomColorAdjust((0.8, 0.8), (1, 1), (1, 1), (0, 0))
  17. >>>
  18. >>> # 1) Simple map example
  19. >>>
  20. >>> operations = [decode_op]
  21. >>> input_columns = ["image"]
  22. >>>
  23. >>> # Applies decode_op on column "image". This column will be replaced by the outputed
  24. >>> # column of decode_op. Since columns_order is not provided, both columns "image"
  25. >>> # and "label" will be propagated to the child node in their original order.
  26. >>> ds_decoded = data.map(input_columns, operations)
  27. >>>
  28. >>> # Rename column "image" to "decoded_image"
  29. >>> output_columns = ["decoded_image"]
  30. >>> ds_decoded = data.map(input_columns, operations, output_columns)
  31. >>>
  32. >>> # Specify the order of the columns.
  33. >>> columns_order = ["label", "image"]
  34. >>> ds_decoded = data.map(input_columns, operations, None, columns_order)
  35. >>>
  36. >>> # Rename column "image" to "decoded_image" and also specify the order of the columns.
  37. >>> columns_order = ["label", "decoded_image"]
  38. >>> output_columns = ["decoded_image"]
  39. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  40. >>>
  41. >>> # Rename column "image" to "decoded_image" and keep only this column.
  42. >>> columns_order = ["decoded_image"]
  43. >>> output_columns = ["decoded_image"]
  44. >>> ds_decoded = data.map(input_columns, operations, output_columns, columns_order)
  45. >>>
  46. >>> # Simple example using pyfunc. Renaming columns and specifying column order
  47. >>> # work in the same way as the previous examples.
  48. >>> input_columns = ["col0"]
  49. >>> operations = [(lambda x: x + 1)]
  50. >>> ds_mapped = ds_pyfunc.map(input_columns, operations)
  51. >>>
  52. >>> # 2) Map example with more than one operation
  53. >>>
  54. >>> # If this list of operations is used with map, decode_op will be applied
  55. >>> # first, then random_jitter_op will be applied.
  56. >>> operations = [decode_op, random_jitter_op]
  57. >>>
  58. >>> input_columns = ["image"]
  59. >>>
  60. >>> # Creates a dataset where the images are decoded, then randomly color jittered.
  61. >>> # decode_op takes column "image" as input and outputs one column. The column
  62. >>> # outputted by decode_op is passed as input to random_jitter_op.
  63. >>> # random_jitter_op will output one column. Column "image" will be replaced by
  64. >>> # the column outputted by random_jitter_op (the very last operation). All other
  65. >>> # columns are unchanged. Since columns_order is not specified, the order of the
  66. >>> # columns will remain the same.
  67. >>> ds_mapped = data.map(input_columns, operations)
  68. >>>
  69. >>> # Creates a dataset that is identical to ds_mapped, except the column "image"
  70. >>> # that is outputted by random_jitter_op is renamed to "image_transformed".
  71. >>> # Specifying column order works in the same way as examples in 1).
  72. >>> output_columns = ["image_transformed"]
  73. >>> ds_mapped_and_renamed = data.map(input_columns, operations, output_columns)
  74. >>>
  75. >>> # Multiple operations using pyfunc. Renaming columns and specifying column order
  76. >>> # work in the same way as examples in 1).
  77. >>> input_columns = ["col0"]
  78. >>> operations = [(lambda x: x + x), (lambda x: x - 1)]
  79. >>> output_columns = ["col0_mapped"]
  80. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns)
  81. >>>
  82. >>> # 3) Example where number of input columns is not equal to number of output columns
  83. >>>
  84. >>> # operations[0] is a lambda that takes 2 columns as input and outputs 3 columns.
  85. >>> # operations[1] is a lambda that takes 3 columns as input and outputs 1 column.
  86. >>> # operations[2] is a lambda that takes 1 column as input and outputs 4 columns.
  87. >>> #
  88. >>> # Note: the number of output columns of operation[i] must equal the number of
  89. >>> # input columns of operation[i+1]. Otherwise, this map call will also result
  90. >>> # in an error.
  91. >>> operations = [(lambda x, y: (x, x + y, x + y + 1)),
  92. >>> (lambda x, y, z: x * y * z),
  93. >>> (lambda x: (x % 2, x % 3, x % 5, x % 7))]
  94. >>>
  95. >>> # Note: because the number of input columns is not the same as the number of
  96. >>> # output columns, the output_columns and columns_order parameter must be
  97. >>> # specified. Otherwise, this map call will also result in an error.
  98. >>> input_columns = ["col2", "col0"]
  99. >>> output_columns = ["mod2", "mod3", "mod5", "mod7"]
  100. >>>
  101. >>> # Propagate all columns to the child node in this order:
  102. >>> columns_order = ["col0", "col2", "mod2", "mod3", "mod5", "mod7", "col1"]
  103. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  104. >>>
  105. >>> # Propagate some columns to the child node in this order:
  106. >>> columns_order = ["mod7", "mod3", "col1"]
  107. >>> ds_mapped = ds_pyfunc.map(input_columns, operations, output_columns, columns_order)
  • num_classes()
  • Get the number of classes in a dataset.

    • Returns
    • Number, number of classes.
  • output_shapes()

  • Get the shapes of output data.

    • Returns
    • List, list of shape of each column.
  • output_types()

  • Get the types of output data.

    • Returns
    • List of data type.
  • project(columns)

  • Projects certain columns in input datasets.

The specified columns will be selected from the dataset and passed downthe pipeline in the order specified. The other columns are discarded.

  1. - Parameters
  2. -

columns (list[str]) – list of names of the columns to project.

  1. - Returns
  2. -

ProjectDataset, dataset projected.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> columns_to_project = ["column3", "column1", "column2"]
  4. >>>
  5. >>> # creates a dataset that consist of column3, column1, column2
  6. >>> # in that order, regardless of the original order of columns.
  7. >>> data = data.project(columns=columns_to_project)
  • rename(input_columns, output_columns)
  • Renames the columns in input datasets.

    • Parameters
      • input_columns (list[str]) – list of names of the input columns.

      • output_columns (list[str]) – list of names of the output columns.

    • Returns

    • RenameDataset, dataset renamed.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> input_columns = ["input_col1", "input_col2", "input_col3"]
  4. >>> output_columns = ["output_col1", "output_col2", "output_col3"]
  5. >>>
  6. >>> # creates a dataset where input_col1 is renamed to output_col1, and
  7. >>> # input_col2 is renamed to output_col2, and input_col3 is renamed
  8. >>> # to output_col3.
  9. >>> data = data.rename(input_columns=input_columns, output_columns=output_columns)
  • repeat(count=None)
  • Repeats this dataset count times. Repeat indefinitely if the count is None or -1.

Note

The order of using repeat and batch reflects the number of batches. Recommend thatrepeat operation should be used after batch operation.If dataset_sink_mode is False (feed mode), here repeat operation is invalid.

  1. - Parameters
  2. -

count (int) – Number of times the dataset should be repeated (default=None).

  1. - Returns
  2. -

RepeatDataset, dataset repeated.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object.
  3. >>> # creates a dataset where the dataset is repeated for 50 epochs
  4. >>> repeated = data.repeat(50)
  5. >>>
  6. >>> # creates a dataset where each epoch is shuffled individually
  7. >>> shuffled_and_repeated = data.shuffle(10)
  8. >>> shuffled_and_repeated = shuffled_and_repeated.repeat(50)
  9. >>>
  10. >>> # creates a dataset where the dataset is first repeated for
  11. >>> # 50 epochs before shuffling. the shuffle operator will treat
  12. >>> # the entire 50 epochs as one big dataset.
  13. >>> repeat_and_shuffle = data.repeat(50)
  14. >>> repeat_and_shuffle = repeat_and_shuffle.shuffle(10)
  • reset()
  • Reset the dataset for next epoch

  • shuffle(buffer_size)

  • Randomly shuffles the rows of this dataset using the following algorithm:

    • Make a shuffle buffer that contains the first buffer_size rows.

    • Randomly select an element from the shuffle buffer to be the next rowpropogated to the child node.

    • Get the next row (if any) from the parent node and put it in the shuffle buffer.

    • Repeat steps 2 and 3 until there are no more rows left in the shuffle buffer.

A seed can be provided to be used on the first epoch. In every subsequentepoch, the seed is changed to a new one, randomly generated value.

  1. - Parameters
  2. -

buffer_size (int) – The size of the buffer (must be larger than 1) forshuffling. Setting buffer_size equal to the number of rows in the entiredataset will result in a global shuffle.

  1. - Returns
  2. -

ShuffleDataset, dataset shuffled.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # data is an instance of Dataset object
  3. >>> # optionally set the seed for the first epoch
  4. >>> ds.config.set_seed(58)
  5. >>>
  6. >>> # creates a shuffled dataset using a shuffle buffer of size 4
  7. >>> data = data.shuffle(4)
  • to_device(num_batch=None)
  • Transfers data through CPU, GPU or Ascend devices.

    • Parameters
    • num_batch (int, __optional) – limit the number of batch to be sent to device (default=None).

    • Returns

    • TransferDataset, dataset for transferring.

    • Raises

      • TypeError – If device_type is empty.

      • ValueError – If device_type is not ‘Ascend’, ‘GPU’ or ‘CPU’.

      • ValueError – If num_batch is None or 0 or larger than int_max.

      • RuntimeError – If dataset is unknown.

      • RuntimeError – If distribution file path is given but failed to read.

  • zip(datasets)

  • Zips the datasets in the input tuple of datasets. Columns in the input datasets must not have the same name.

    • Parameters
    • datasets (tuple or __class Dataset) – A tuple of datasets or a single class Datasetto be zipped together with this dataset.

    • Returns

    • ZipDataset, dataset zipped.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> # ds1 and ds2 are instances of Dataset object
  3. >>> # creates a dataset which is the combination of ds1 and ds2
  4. >>> data = ds1.zip(ds2)
  • class mindspore.dataset.Schema(schema_file=None)[source]
  • Class to represent a schema of dataset.

    • Parameters
    • schema_file (str) – Path of schema file (default=None).

    • Returns

    • Schema object, schema info about dataset.

    • Raises

    • RuntimeError – If schema file failed to load.

Example

  1. Copy>>> import mindspore.dataset as ds
  2. >>> import mindspore.common.dtype as mstype
  3. >>> # create schema, specify column name, mindspore.dtype and shape of the column
  4. >>> schema = ds.Schema()
  5. >>> schema.add_column('col1', de_type=mstype.int64, shape=[2])
  • add_column(name, de_type, shape=None)[source]
  • Add a new column to the schema.

    • Parameters
      • name (str) – name of the column.

      • de_type (str) – data type of the column.

      • shape (list[int], optional) – shape of the column (default=None, [-1] which is an unknown shape of rank 1).

    • Raises

    • ValueError – If column type is unknown.
  • from_json(json_obj)[source]

  • Set the schema from a parsed JSON object (a round-trip sketch follows to_json() below).

    • Parameters
    • json_obj (dictionary) – Parsed JSON object.

    • Raises

  • parse_columns(columns)[source]

  • Parse the columns and add them to the schema.

  • to_json()[source]

  • Get a JSON string of the schema.

    • Returns
    • Str, JSON string of the schema.
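
A brief round-trip sketch covering to_json() and from_json(); the column definitions are illustrative only and the file path is a placeholder:

>>> import json
>>> import mindspore.dataset as ds
>>> import mindspore.common.dtype as mstype
>>> schema = ds.Schema()
>>> schema.add_column('image', de_type=mstype.uint8, shape=[-1])
>>> schema.add_column('label', de_type=mstype.int64)
>>> json_str = schema.to_json()          # JSON string describing both columns
>>>
>>> # parse the string back and load it into a fresh schema
>>> restored = ds.Schema()
>>> restored.from_json(json.loads(json_str))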
  • class mindspore.dataset.DistributedSampler(num_shards, shard_id, shuffle=True)[source]
  • Sampler that accesses a shard of the dataset.

    • Parameters
      • num_shards (int) – Number of shards to divide the dataset into.

      • shard_id (int) – Shard ID of the current shard within num_shards.

      • shuffle (bool, optional) – If true, the indices are shuffled (default=True).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir = "path/to/imagefolder_directory"
  4. >>>
  5. >>> # creates a distributed sampler with 10 shards total. This shard is shard 5
  6. >>> sampler = ds.DistributedSampler(10, 5)
  7. >>> data = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8, sampler=sampler)
  • Raises
    • ValueError – If num_shards is not positive.

    • ValueError – If shard_id is smaller than 0, or greater than or equal to num_shards.

    • ValueError – If shuffle is not a boolean value.

  • class mindspore.dataset.PKSampler(num_val, num_class=None, shuffle=False)[source]
  • Samples K elements from each of P classes in the dataset.

    • Parameters
      • num_val (int) – Number of elements to sample for each class.

      • num_class (int, optional) – Number of classes to sample (default=None, all classes).

      • shuffle (bool, optional) – If true, the class IDs are shuffled (default=False).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir = "path/to/imagefolder_directory"
  4. >>>
  5. >>> # creates a PKSampler that will get 3 samples from every class.
  6. >>> sampler = ds.PKSampler(3)
  7. >>> data = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8, sampler=sampler)
  • class mindspore.dataset.RandomSampler(replacement=False, num_samples=None)[source]
  • Samples the elements randomly.

    • Parameters
      • replacement (bool, optional) – If True, put the sample ID back for the next draw (default=False).

      • num_samples (int, optional) – Number of elements to sample (default=None, all elements). This argument should be specified only when replacement is True.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir = "path/to/imagefolder_directory"
  4. >>>
  5. >>> # creates a RandomSampler
  6. >>> sampler = ds.RandomSampler()
  7. >>> data = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8, sampler=sampler)
  • Raises
    • ValueError – If replacement is not a boolean value.

    • ValueError – If num_samples is not None and replacement is False.

    • ValueError – If num_samples is not positive.

  • class mindspore.dataset.SequentialSampler[source]
  • Samples the dataset elements sequentially, same as not having a sampler.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir = "path/to/imagefolder_directory"
  4. >>>
  5. >>> # creates a SequentialSampler
  6. >>> sampler = ds.SequentialSampler()
  7. >>> data = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8, sampler=sampler)
  • class mindspore.dataset.SubsetRandomSampler(indices)[source]
  • Samples the elements randomly from a sequence of indices.

    • Parameters
    • indices (list[int]) – A sequence of indices.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir = "path/to/imagefolder_directory"
  4. >>>
  5. >>> indices = [0, 1, 2, 3, 7, 88, 119]
  6. >>>
  7. >>> # creates a SubsetRandomSampler, will sample from the provided indices
  8. >>> sampler = ds.SubsetRandomSampler(indices)
  9. >>> data = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8, sampler=sampler)
  • class mindspore.dataset.WeightedRandomSampler(weights, num_samples, replacement=True)[source]
  • Samples the elements from [0, len(weights) - 1] randomly with the given weights (probabilities).

    • Parameters
      • weights (list[float]) – A sequence of weights, not necessarily summing up to 1.

      • num_samples (int) – Number of elements to sample.

      • replacement (bool, optional) – If True, put the sample ID back for the next draw (default=True).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir = "path/to/imagefolder_directory"
  4. >>>
  5. >>> weights = [0.9, 0.01, 0.4, 0.8, 0.1, 0.1, 0.3]
  6. >>>
  7. >>> # creates a WeightedRandomSampler that will sample 4 elements without replacement
  8. >>> sampler = ds.WeightedRandomSampler(weights, 4, replacement=False)
  9. >>> data = ds.ImageFolderDatasetV2(dataset_dir, num_parallel_workers=8, sampler=sampler)
  • Raises
  • mindspore.dataset.zip(datasets)[source]
  • Zips the datasets in the input tuple of datasets.

    • Parameters
    • datasets (tuple of class Dataset) – A tuple of datasets to be zipped together. The number of datasets should be more than 1.

    • Returns

    • DatasetOp, ZipDataset.

    • Raises

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>>
  3. >>> dataset_dir1 = "path/to/imagefolder_directory1"
  4. >>> dataset_dir2 = "path/to/imagefolder_directory2"
  5. >>> ds1 = ds.ImageFolderDatasetV2(dataset_dir1, num_parallel_workers=8)
  6. >>> ds2 = ds.ImageFolderDatasetV2(dataset_dir2, num_parallel_workers=8)
  7. >>>
  8. >>> # creates a dataset which is the combination of ds1 and ds2
  9. >>> data = ds.zip((ds1, ds2))
  • mindspore.dataset.config
  • The configuration manager
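
The module-level config object can be used directly, as in the shuffle example earlier; a brief sketch (the values are illustrative only):

>>> import mindspore.dataset as ds
>>> # ds.config is the shared configuration manager instance
>>> ds.config.set_seed(58)
>>> ds.config.set_num_parallel_workers(8)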

  • class mindspore.dataset.core.configuration.ConfigurationManager[source]
  • The configuration manager

    • get_num_parallel_workers()[source]
    • Get the default number of parallel workers.

      • Returns
      • Int, number of parallel workers to be used as a default for each operation
    • get_prefetch_size()[source]

    • Get the prefetch size in number of rows.

      • Returns
      • Size, total number of rows to be prefetched.
    • get_seed()[source]

    • Get the seed

      • Returns
      • Int, seed.
    • load(file)[source]

    • Load configuration from a file.

      • Parameters
      • file – Path of the config file to be loaded.

      • Raises

      • RuntimeError – If file is invalid and parsing fails.

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> con = ds.engine.ConfigurationManager()
  3. >>> # sets the default value according to values in configuration file.
  4. >>> con.load("path/to/config/file")
  5. >>> # example config file:
  6. >>> # {
  7. >>> # "logFilePath": "/tmp",
  8. >>> # "rowsPerBuffer": 32,
  9. >>> # "numParallelWorkers": 4,
  10. >>> # "workerConnectorSize": 16,
  11. >>> # "opConnectorSize": 16,
  12. >>> # "seed": 5489
  13. >>> # }
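
A sketch of creating such a configuration file programmatically before loading it; the contents simply mirror the commented example above, and the path is a placeholder:

>>> import json
>>> cfg = {"logFilePath": "/tmp", "rowsPerBuffer": 32, "numParallelWorkers": 4,
...        "workerConnectorSize": 16, "opConnectorSize": 16, "seed": 5489}
>>> with open("path/to/config/file", "w") as f:
...     json.dump(cfg, f)
>>> con.load("path/to/config/file")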
  • set_num_parallel_workers(num)[source]
  • Set the default number of parallel workers.

    • Parameters
    • num – Number of parallel workers to be used as a default for each operation.

    • Raises

    • ValueError – If num_parallel_workers is invalid (<= 0 or > MAX_INT_32).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> con = ds.engine.ConfigurationManager()
  3. >>> # sets the new parallel_workers value, now parallel dataset operators will run with 8 workers.
  4. >>> con.set_num_parallel_workers(8)
  • set_prefetch_size(size)[source]
  • Set the number of rows to be prefetched.

    • Parameters
    • size – Total number of rows to be prefetched.

    • Raises

    • ValueError – If prefetch_size is invalid (<= 0 or > MAX_INT_32).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> con = ds.engine.ConfigurationManager()
  3. >>> # sets the new prefetch value.
  4. >>> con.set_prefetch_size(1000)
  • set_seed(seed)[source]
  • Set the seed to be used in any random generator. This is used to produce deterministic results.

    • Parameters
    • seed (int) – Seed to be set.

    • Raises

    • ValueError – If seed is invalid (< 0 or > MAX_UINT_32).

Examples

  1. Copy>>> import mindspore.dataset as ds
  2. >>> con = ds.engine.ConfigurationManager()
  3. >>> # sets the new seed value, now operators with a random seed will use new seed value.
  4. >>> con.set_seed(1000)