mindarmour.defenses

This module includes classical defense algorithms for defending against adversarial examples and enhancing model security and trustworthiness.

  • class mindarmour.defenses.AdversarialDefense(network, loss_fn=None, optimizer=None)[source]
  • Adversarial training using given adversarial examples.

    • Parameters
      • network (Cell) – A MindSpore network to be defended.

      • loss_fn (Function) – Loss function. Default: None.

      • optimizer (Cell) – Optimizer used to train the network. Default: None.

Examples

>>> import numpy as np
>>> from mindspore.nn import Cell, Dense, ReLU, SoftmaxCrossEntropyWithLogits, Momentum
>>> from mindspore.ops import operations as P
>>> from mindarmour.defenses import AdversarialDefense
>>>
>>> class Net(Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self._reshape = P.Reshape()
>>>         self._full_con_1 = Dense(28*28, 120)
>>>         self._full_con_2 = Dense(120, 84)
>>>         self._full_con_3 = Dense(84, 10)
>>>         self._relu = ReLU()
>>>
>>>     def construct(self, x):
>>>         out = self._reshape(x, (-1, 28*28))
>>>         out = self._full_con_1(out)
>>>         out = self._relu(out)
>>>         out = self._full_con_2(out)
>>>         out = self._relu(out)
>>>         out = self._full_con_3(out)
>>>         return out
>>>
>>> net = Net()
>>> lr = 0.0001
>>> momentum = 0.9
>>> loss_fn = SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optimizer = Momentum(net.trainable_params(), lr, momentum)
>>> adv_defense = AdversarialDefense(net, loss_fn, optimizer)
>>> inputs = np.random.rand(32, 1, 28, 28).astype(np.float32)
>>> labels = np.random.randint(0, 10, size=32).astype(np.int32)
>>> adv_defense.defense(inputs, labels)
  • defense(inputs, labels)[source]
  • Enhance model via training with input samples.

    • Parameters
      • inputs (numpy.ndarray) – Input samples.

      • labels (numpy.ndarray) – Labels of input samples.

    • Returns

    • numpy.ndarray, loss of defense operation (see the loop sketch below).
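Since defense returns the batch loss as a numpy.ndarray, it can be called once per batch inside an ordinary training loop. A minimal sketch, reusing the inputs and labels from the example above (the epoch count is illustrative only, not part of the API):

>>> for _epoch in range(10):
>>>     loss = adv_defense.defense(inputs, labels)
>>>     print(loss)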
  • class mindarmour.defenses.AdversarialDefenseWithAttacks(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]
  • Adversarial defense with attacks.

    • Parameters
      • network (Cell) – A MindSpore network to be defended.

      • attacks (list[Attack]) – List of attack methods.

      • loss_fn (Function) – Loss function. Default: None.

      • optimizer (Cell) – Optimizer used to train the network. Default: None.

      • bounds (tuple) – Upper and lower bounds of data. In form of (clip_min, clip_max). Default: (0.0, 1.0).

      • replace_ratio (float) – Ratio of replacing original samples with adversarial ones, which must be between 0 and 1 (illustrated in the sketch after this list). Default: 0.5.

    • Raises

    • ValueError – If replace_ratio is not between 0 and 1.
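To make replace_ratio concrete: with a batch of n samples, roughly replace_ratio * n of them are swapped for adversarial versions before the training step. A minimal numpy sketch of that bookkeeping, illustrating the documented behavior rather than the library's internal code:

>>> import numpy as np
>>> batch_size, replace_ratio = 32, 0.5
>>> n_adv = int(batch_size * replace_ratio)  # 16 of the 32 samples get replaced
>>> replace_idx = np.random.choice(batch_size, n_adv, replace=False)
>>> # inputs[replace_idx] = attack.generate(inputs[replace_idx], labels[replace_idx])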

Examples

>>> from mindarmour.attacks import FastGradientSignMethod, ProjectedGradientDescent
>>> net = Net()
>>> # loss_fn, optimizer, inputs and labels as in the AdversarialDefense example above.
>>> fgsm = FastGradientSignMethod(net)
>>> pgd = ProjectedGradientDescent(net)
>>> ead = AdversarialDefenseWithAttacks(net, [fgsm, pgd], loss_fn, optimizer)
>>> ead.defense(inputs, labels)
  • defense(inputs, labels)[source]
  • Enhance model via training with adversarial examples generated from input samples.

    • Parameters
      • inputs (numpy.ndarray) – Input samples.

      • labels (numpy.ndarray) – Labels of input samples.

    • Returns

    • numpy.ndarray, loss of adversarial defense operation.
  • class mindarmour.defenses.NaturalAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.1)[source]
  • Adversarial training based on FGSM.

Reference: A. Kurakin, et al., “Adversarial machine learning at scale,” in ICLR, 2017.

  • Parameters
    • network (Cell) – A MindSpore network to be defended.

    • loss_fn (Function) – Loss function. Default: None.

    • optimizer (Cell) – Optimizer used to train the network. Default: None.

    • bounds (tuple) – Upper and lower bounds of data. In form of (clip_min, clip_max). Default: (0.0, 1.0).

    • replace_ratio (float) – Ratio of replacing original samples with adversarial samples. Default: 0.5.

    • eps (float) – Step size of the attack method (FGSM); see the update rule after this list. Default: 0.1.
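For reference, eps scales the standard single-step FGSM perturbation from the reference above, with the result clipped back into bounds:

    x_adv = clip(x + eps * sign(∇x L(x, y)), clip_min, clip_max)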

Examples

>>> # inputs and labels as in the AdversarialDefense example above.
>>> net = Net()
>>> adv_defense = NaturalAdversarialDefense(net)
>>> adv_defense.defense(inputs, labels)
  • class mindarmour.defenses.ProjectedAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.3, eps_iter=0.1, nb_iter=5, norm_level='inf')[source]
  • Adversarial training based on PGD.

Reference: A. Madry, et al., “Towards deep learning models resistant to adversarial attacks,” in ICLR, 2018.

  • Parameters
    • network (Cell) – A MindSpore network to be defended.

    • loss_fn (Function) – Loss function. Default: None.

    • optimizer (Cell) – Optimizer used to train the network. Default: None.

    • bounds (tuple) – Upper and lower bounds of input data. In form of (clip_min, clip_max). Default: (0.0, 1.0).

    • replace_ratio (float) – Ratio of replacing original samples with adversarial samples. Default: 0.5.

    • eps (float) – PGD attack parameter, epsilon, the radius of the perturbation ball; see the update rule after this list. Default: 0.3.

    • eps_iter (float) – PGD attack parameter, the step size of each inner-loop iteration. Default: 0.1.

    • nb_iter (int) – PGD attack parameter, the number of iterations. Default: 5.

    • norm_level (str) – Norm type, ‘inf’ or ‘l2’. Default: ‘inf’.
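For reference, these parameters map onto the standard PGD iteration from the Madry et al. reference: eps_iter is the per-step size, nb_iter the number of steps, and eps the radius of the ball each iterate is projected back into, measured in the norm chosen by norm_level (the sign step below is the ‘inf’ form; the ‘l2’ variant normalizes the gradient instead):

    x(t+1) = Proj_{||x' - x|| <= eps}( x(t) + eps_iter * sign(∇x L(x(t), y)) ),  t = 0, ..., nb_iter - 1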

Examples

>>> # inputs and labels as in the AdversarialDefense example above.
>>> net = Net()
>>> adv_defense = ProjectedAdversarialDefense(net)
>>> adv_defense.defense(inputs, labels)
  • mindarmour.defenses.EnsembleAdversarialDefense
  • alias of mindarmour.defenses.adversarial_defense.AdversarialDefenseWithAttacks
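Because it is a plain alias, EnsembleAdversarialDefense is constructed and used exactly like AdversarialDefenseWithAttacks; the ensemble example above can be written equivalently as:

>>> from mindarmour.defenses import EnsembleAdversarialDefense
>>> ead = EnsembleAdversarialDefense(net, [fgsm, pgd], loss_fn, optimizer)
>>> ead.defense(inputs, labels)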