mindspore

MindSpore package.

  • class mindspore.Tensor(input_data, dtype=None)[source]
  • Tensor for data storage.

Tensor inherits the tensor object on the C++ side; some functions are implemented on the C++ side and some are implemented in the Python layer.

  • Parameters
    • input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray) – Input data of the tensor.

    • dtype (mindspore.dtype) – Should be None, bool or a numeric type defined in mindspore.dtype. The argument is used to define the data type of the output tensor. If it is None, the data type of the output tensor will be the same as that of input_data. Default: None.

  • Outputs:

  • Tensor, with the same shape as input_data.

Examples

>>> # init a tensor with input data
>>> t1 = mindspore.Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> assert isinstance(t1, mindspore.Tensor)
>>> assert t1.shape() == (1, 2, 3)
>>> assert t1.dtype() == mindspore.float32
>>>
>>> # init a tensor with a float scalar
>>> t2 = mindspore.Tensor(0.1)
>>> assert isinstance(t2, mindspore.Tensor)
>>> assert t2.dtype() == mindspore.float64
  • property virtual_flag
  • Mark whether the tensor is virtual.
  • mindspore.ms_function(fn=None, obj=None, input_signature=None)[source]
  • Creates a callable MindSpore graph from a Python function.

This allows the MindSpore runtime to apply graph-based optimizations.

  • Parameters
    • fn (Function) – The Python function that will be run as a graph. Default: None.

    • obj (Object) – The Python object that provides information used to identify the compiled function. Default: None.

    • input_signature (MetaTensor) – The MetaTensor used to describe the input arguments. The MetaTensor specifies the shape and dtype of the Tensor, and they will be supplied to this function. If input_signature is specified, every input to fn must be a Tensor, and the input parameters of fn cannot accept **kwargs. The shape and dtype of the actual inputs must match input_signature, or a TypeError will be raised. Default: None.

  • Returns

  • Function, if fn is not None, returns a callable that will execute the compiled function; if fn is None, returns a decorator, and when this decorator is invoked with a single fn argument, the callable is equal to the case when fn is not None.

Examples

>>> def tensor_add(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> @ms_function
>>> def tensor_add_with_dec(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> @ms_function(input_signature=(MetaTensor(mstype.float32, (1, 1, 3, 3)),
>>>                               MetaTensor(mstype.float32, (1, 1, 3, 3))))
>>> def tensor_add_with_sig(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>>
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
>>> out = tensor_add_with_dec(x, y)
>>> out = tensor_add_with_sig(x, y)
  • class mindspore.Parameter(default_input, name, requires_grad=True, layerwise_parallel=False)[source]
  • Parameter types of cell models.

Note

Each parameter of a Cell is represented by the Parameter class.

  • Parameters
    • default_input (Tensor) – A parameter tensor.

    • name (str) – Name of the child parameter.

    • requires_grad (bool) – True if the parameter requires gradient. Default: True.

    • layerwise_parallel (bool) – A kind of model parallel mode. When layerwise_parallel is true in parallel mode, broadcast and gradients communication are not applied to parameters. Default: False.

  • clone(prefix, init='same')[source]

  • Clone the parameter.

    • Parameters
      • prefix (str) – Namespace of parameter.

      • init (str) – Initialization for the data of the cloned parameter; ‘same’ keeps the original values. Default: ‘same’.

    • Returns

    • Parameter, a new parameter.
  • property is_init

  • Get init status of the parameter.

  • property name

  • Get the name of the parameter.

  • property requires_grad

  • Return whether the parameter requires gradient.
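
Examples

A minimal usage sketch for Parameter (not part of the original reference; how clone applies the given prefix to the new parameter's name is an assumption):

>>> import numpy as np
>>> from mindspore import Parameter, Tensor
>>> # wrap a tensor as a trainable parameter
>>> weight = Parameter(Tensor(np.ones([2, 3]).astype(np.float32)), name='weight')
>>> assert weight.requires_grad
>>> # clone it into the 'net' namespace, keeping the same values
>>> weight_clone = weight.clone(prefix='net', init='same')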
  • class mindspore.ParameterTuple[source]
  • Class for storing a tuple of parameters.

Note

Used to store the parameters of the network into the parameter tuple collection.

  • clone(prefix, init='same')[source]
  • Clone the parameter.

    • Parameters
      • prefix (str) – Namespace of parameter.

      • init (str) – Initialization for the data of the cloned parameters; ‘same’ keeps the original values. Default: ‘same’.

    • Returns

    • Tuple, the new Parameter tuple.
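
Examples

A hedged sketch (not part of the original reference) of collecting and cloning a network's parameters; net is assumed to be a mindspore.nn.Cell whose get_parameters() yields its Parameter objects:

>>> from mindspore import ParameterTuple
>>> # gather the network's parameters into a tuple collection
>>> params = ParameterTuple(net.get_parameters())
>>> # clone every parameter under a new namespace
>>> cloned_params = params.clone(prefix='clone')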
  • mindspore.dtype_to_nptype(type_)[source]
  • Get numpy data type corresponding to MindSpore dtype.

    • Parameters
    • type_ (mindspore.dtype) – MindSpore’s dtype.

    • Returns

    • The corresponding numpy data type.
  • mindspore.issubclass_(type_, dtype)[source]
  • Determine whether type_ is a subclass of dtype.

  • mindspore.dtype_to_pytype(type_)[source]
  • Get the Python type corresponding to a MindSpore dtype.

    • Parameters
    • type_ (mindspore.dtype) – MindSpore’s dtype.

    • Returns

    • The Python type.
  • mindspore.pytype_to_dtype(obj)[source]
  • Convert a Python type to a MindSpore type.

    • Parameters
    • obj (type) – A Python type object.

    • Returns

    • The MindSpore data type.
  • mindspore.get_py_obj_dtype(obj)[source]
  • Get the corresponding MindSpore data type by Python type or variable.

    • Parameters
    • obj – An object of a Python type, or a variable of a Python type.

    • Returns

    • The MindSpore data type.
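
Examples

A short, hedged sketch exercising the conversion helpers above (the values noted in the comments are assumptions):

>>> import mindspore
>>> np_type = mindspore.dtype_to_nptype(mindspore.float32)   # expected: numpy.float32
>>> py_type = mindspore.dtype_to_pytype(mindspore.int32)     # expected: int
>>> ms_type = mindspore.pytype_to_dtype(bool)                # expected: mindspore.bool_
>>> obj_type = mindspore.get_py_obj_dtype(1.0)               # a MindSpore float type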
  • class mindspore.Model(network, loss_fn=None, optimizer=None, metrics=None, eval_network=None, eval_indexes=None, amp_level='O0', **kwargs)[source]
  • High-Level API for Training or Testing.

Model groups layers into an object with training and inference features.

  • Parameters
    • network (Cell) – The training or testing network.

    • loss_fn (Cell) – Objective function. If loss_fn is None, the network should contain the logic of loss and gradient calculation, and the logic of parallel if needed. Default: None.

    • optimizer (Cell) – Optimizer for updating the weights. Default: None.

    • metrics (Union[dict, set]) – Dict or set of metrics to be evaluated by the model during training and testing, e.g. {‘accuracy’, ‘recall’}. Default: None.

    • eval_network (Cell) – Network for evaluation. If not defined, network and loss_fn would be wrapped as eval_network. Default: None.

    • eval_indexes (list) – In case of defining the eval_network, if eval_indexes is None, all outputs of eval_network would be passed to metrics; otherwise eval_indexes must contain three elements, representing the positions of the loss value, predict value and label. The loss value would be passed to the Loss metric; the predict value and label would be passed to other metrics. Default: None.

    • amp_level (str) – Option for argument level in mindspore.amp.build_train_network, level for mixed precision training. Supports [“O0”, “O2”]. Default: “O0”. A short usage sketch follows this parameter list.

      • O0: Do not change.

      • O2: Cast network to float16, keep batchnorm running in float32, using dynamic loss scale.

    • loss_scale_manager (Union[None, LossScaleManager]) – If None, do not scale the loss; otherwise, scale the loss by LossScaleManager. If set, it overwrites the level setting. It is a keyword argument, e.g. use loss_scale_manager=None to set the value.
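
A hedged sketch of selecting a mixed-precision level (reusing the net, loss and optim built as in the example below):

>>> # cast the network to float16 with dynamic loss scaling ("O2")
>>> model = Model(net, loss_fn=loss, optimizer=optim, amp_level="O2")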

Examples

>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64*222*222, 3)  # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
  • eval(valid_dataset, callbacks=None, dataset_sink_mode=True)[source]
  • Evaluation API where the iteration is controlled by the Python front-end.

If configured to pynative mode, the evaluation will be performed in dataset non-sink mode.

Note

CPU is not supported when dataset_sink_mode is true.

  • Parameters
    • valid_dataset (Dataset) – Dataset to evaluate the model.

    • callbacks (list) – List of callback objects which should be executed while evaluating. Default: None.

    • dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. Default: True.

  • Returns

  • Dict, the loss value and metrics values for the model in test mode.

Examples

>>> dataset = get_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=None, metrics={'acc'})
>>> model.eval(dataset)
  • predict(*predict_data)[source]
  • Generates output predictions for the input samples.

Data could be a single tensor, a list of tensors, or a tuple of tensors.

Note

Batch data should be put together in one tensor.

  • Parameters
    • predict_data (Tensor) – Tensor of predict data. Can be a single tensor, or a list or tuple of tensors.

  • Returns

  • Tensor, array(s) of predictions.

Examples

>>> input_data = Tensor(np.random.randint(0, 255, [1, 3, 224, 224]), mstype.float32)
>>> model = Model(Net())
>>> model.predict(input_data)
  • train(epoch, train_dataset, callbacks=None, dataset_sink_mode=True)[source]
  • Training API where the iteration is controlled by the Python front-end.

If configured to pynative mode, the training will be performed in dataset non-sink mode.

Note

CPU is not supported when dataset_sink_mode is true.

  • Parameters
    • epoch (int) – Total number of iterations on the data.

    • train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data items (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned; the data and label are passed to the network and the loss function respectively.

    • callbacks (list) – List of callback objects which should be executed while training. Default: None.

    • dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. Default: True.

Examples

>>> dataset = get_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.train(2, dataset)
  • class mindspore.ParallelMode[source]
  • Parallel mode options.

There are five kinds of parallel modes: “STAND_ALONE”, “DATA_PARALLEL”, “HYBRID_PARALLEL”, “SEMI_AUTO_PARALLEL” and “AUTO_PARALLEL”. Default: “STAND_ALONE”.

  • STAND_ALONE: Only one processor is working.

  • DATA_PARALLEL: Distributes the data across different processors.

  • HYBRID_PARALLEL: Achieves data parallelism and model parallelism manually.

  • SEMI_AUTO_PARALLEL: Achieves data parallelism and model parallelism by setting parallel strategies.

  • AUTO_PARALLEL: Achieves parallelism automatically.

MODE_LIST: The list of all supported parallel modes.
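
A hedged sketch of selecting a parallel mode through the context API (the set_auto_parallel_context call is an assumption for this release):

>>> from mindspore import context
>>> from mindspore import ParallelMode
>>> # distribute the data across different processors
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)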

  • class mindspore.DatasetHelper(dataset, dataset_sink_mode=True)[source]
  • Helper class to use the MindData dataset.

According to different contexts, it changes the iteration of the dataset so that the same for loop can be used in different contexts.

Note

Iterating over DatasetHelper yields one epoch of data.

  • Parameters
    • dataset (DataSet) – The dataset.

    • dataset_sink_mode (bool) – If true, use GetNext to fetch the data; otherwise, feed the data from the host. Default: True.

Examples

>>> dataset_helper = DatasetHelper(dataset)
>>> for inputs in dataset_helper:
>>>     outputs = network(*inputs)
  • loop_size()[source]
  • Get loop_size for every iteration.

  • types_shapes()[source]

  • Get the types and shapes from dataset on current config.
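
A brief, hedged sketch of the two inspection methods above (the exact return structure of types_shapes is an assumption):

>>> dataset_helper = DatasetHelper(dataset, dataset_sink_mode=False)
>>> steps = dataset_helper.loop_size()             # steps executed per iteration
>>> types, shapes = dataset_helper.types_shapes()  # dtypes and shapes under the current config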
  • mindspore.get_level()[source]
  • Get the logger level.

    • Returns
    • str, the log level: 3 (ERROR), 2 (WARNING), 1 (INFO), 0 (DEBUG).

Examples

>>> import os
>>> os.environ['GLOG_v'] = '0'
>>> from mindspore import log as logger
>>> logger.get_level()
  • mindspore.get_log_config()[source]
  • Get logger configurations.

    • Returns
    • Dict, the dictionary of logger configurations.

Examples

>>> import os
>>> os.environ['GLOG_v'] = '1'
>>> os.environ['GLOG_logtostderr'] = '0'
>>> os.environ['GLOG_log_dir'] = '/var/log/mindspore'
>>> os.environ['logger_maxBytes'] = '5242880'
>>> os.environ['logger_backupCount'] = '10'
>>> from mindspore import log as logger
>>> logger.get_log_config()