Language Translation with TorchText

Translator: PengboLiu

Reviewer: PengboLiu

This tutorial shows how to use several torchtext classes to preprocess an English-German dataset, which can then be used to train a sequence-to-sequence (seq2seq) model capable of automatically translating German sentences into English.

It is based on a tutorial by PyTorch community member Ben Trevett and is used here with his permission.

By the end of this tutorial, you will be able to preprocess sentences into a commonly used format for NLP modeling using the torchtext classes covered below (Field, TranslationDataset, and BucketIterator).

Field and TranslationDataset

torchtext provides utilities for creating datasets that can easily be iterated over to build a machine translation model. One key class is Field, which specifies how each sentence should be preprocessed; another is TranslationDataset. torchtext ships with several built-in translation datasets; in this tutorial we will use the Multi30k dataset, which contains about 30,000 English-German sentence pairs (with an average length of about 13 words per sentence).

Note: the tokenization in this tutorial requires Spacy. Spacy helps us tokenize languages other than English. torchtext provides a basic_english tokenizer, but for other languages Spacy is our best choice.

To run this tutorial, first install Spacy using pip or conda. Then download the raw data for the English and German Spacy tokenizers:

    python -m spacy download en
    python -m spacy download de
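
As a quick sanity check that the German model is installed correctly, the Spacy tokenizer can be tried directly. This is only an illustrative sketch, not part of the tutorial's dataset code; the example sentence is arbitrary:

    import spacy

    # Load the German model downloaded above and tokenize a sample sentence.
    spacy_de = spacy.load('de')
    print([tok.text for tok in spacy_de.tokenizer('Zwei Männer stehen am Herd.')])
    # expected: ['Zwei', 'Männer', 'stehen', 'am', 'Herd', '.']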

With Spacy installed, the following code will tokenize each sentence in the TranslationDataset according to the tokenizer defined in the Field.

    from torchtext.datasets import Multi30k
    from torchtext.data import Field, BucketIterator

    SRC = Field(tokenize = "spacy",
                tokenizer_language="de",
                init_token = '<sos>',
                eos_token = '<eos>',
                lower = True)

    TRG = Field(tokenize = "spacy",
                tokenizer_language="en",
                init_token = '<sos>',
                eos_token = '<eos>',
                lower = True)

    train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
                                                        fields = (SRC, TRG))

Out:
downloading training.tar.gz
downloading validation.tar.gz
downloading mmt_task1_test2016.tar.gz

Now that we have defined train_data, we can see an extremely useful feature of torchtext's Field: the build_vocab method lets us create the vocabulary associated with each language.

    SRC.build_vocab(train_data, min_freq = 2)
    TRG.build_vocab(train_data, min_freq = 2)

Once these lines have been run, SRC.vocab.stoi is a dictionary whose keys are the tokens in the vocabulary and whose values are their indices; SRC.vocab.itos is the same mapping with keys and values swapped. We won't make extensive use of this in this tutorial, but it is likely to be useful in other NLP tasks you encounter.
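
For a concrete feel of these lookups, here is a minimal sketch. The '<pad>' token is one of the specials that Field adds by default; the exact index depends on the vocabulary that was actually built:

    # Map a token to its index and back again.
    pad_index = SRC.vocab.stoi['<pad>']          # typically 1 for the default specials
    print(pad_index, SRC.vocab.itos[pad_index])  # e.g. "1 <pad>"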

BucketIterator

The last torchtext feature we will use is the BucketIterator, which is easy to use since it takes a TranslationDataset as its first argument. As the documentation puts it: it defines an iterator that batches examples of similar lengths together, minimizing the amount of padding needed when producing each new batch.

    import torch

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    BATCH_SIZE = 128

    train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
        (train_data, valid_data, test_data),
        batch_size = BATCH_SIZE,
        device = device)

These iterators can be called just like DataLoaders. In the train and evaluate functions below, they are simply called with:

    for i, batch in enumerate(iterator):

Each batch then has src and trg attributes:

    src = batch.src
    trg = batch.trg
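
With the default Field settings the batches are not batch-first, so both tensors have shape [sequence length, batch size]. A quick way to confirm this (a sketch that prints only the first batch) is:

    for _, batch in enumerate(train_iterator):
        # Sequence length varies from batch to batch; the batch size here is 128.
        print(batch.src.shape, batch.trg.shape)
        break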

Defining our nn.Module and Optimizer

With the dataset taken care of and the iterators defined, all that remains is to define our model and optimizer and then train the model.

Specifically, our model follows the architecture described here.

Note: we chose this model not because it is the best one available, but because it is a standard architecture for machine translation. As you may know, the state-of-the-art models for machine translation are currently Transformers.

    import random
    from typing import Tuple

    import torch.nn as nn
    import torch.optim as optim
    import torch.nn.functional as F
    from torch import Tensor


    class Encoder(nn.Module):
        def __init__(self,
                     input_dim: int,
                     emb_dim: int,
                     enc_hid_dim: int,
                     dec_hid_dim: int,
                     dropout: float):
            super().__init__()

            self.input_dim = input_dim
            self.emb_dim = emb_dim
            self.enc_hid_dim = enc_hid_dim
            self.dec_hid_dim = dec_hid_dim
            self.dropout = dropout

            self.embedding = nn.Embedding(input_dim, emb_dim)

            self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)

            self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)

            self.dropout = nn.Dropout(dropout)

        def forward(self,
                    src: Tensor) -> Tuple[Tensor]:

            embedded = self.dropout(self.embedding(src))

            outputs, hidden = self.rnn(embedded)

            hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))

            return outputs, hidden


    class Attention(nn.Module):
        def __init__(self,
                     enc_hid_dim: int,
                     dec_hid_dim: int,
                     attn_dim: int):
            super().__init__()

            self.enc_hid_dim = enc_hid_dim
            self.dec_hid_dim = dec_hid_dim

            self.attn_in = (enc_hid_dim * 2) + dec_hid_dim

            self.attn = nn.Linear(self.attn_in, attn_dim)

        def forward(self,
                    decoder_hidden: Tensor,
                    encoder_outputs: Tensor) -> Tensor:

            src_len = encoder_outputs.shape[0]

            repeated_decoder_hidden = decoder_hidden.unsqueeze(1).repeat(1, src_len, 1)

            encoder_outputs = encoder_outputs.permute(1, 0, 2)

            energy = torch.tanh(self.attn(torch.cat((
                repeated_decoder_hidden,
                encoder_outputs),
                dim = 2)))

            attention = torch.sum(energy, dim=2)

            return F.softmax(attention, dim=1)


    class Decoder(nn.Module):
        def __init__(self,
                     output_dim: int,
                     emb_dim: int,
                     enc_hid_dim: int,
                     dec_hid_dim: int,
                     dropout: int,
                     attention: nn.Module):
            super().__init__()

            self.emb_dim = emb_dim
            self.enc_hid_dim = enc_hid_dim
            self.dec_hid_dim = dec_hid_dim
            self.output_dim = output_dim
            self.dropout = dropout
            self.attention = attention

            self.embedding = nn.Embedding(output_dim, emb_dim)

            self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)

            self.out = nn.Linear(self.attention.attn_in + emb_dim, output_dim)

            self.dropout = nn.Dropout(dropout)

        def _weighted_encoder_rep(self,
                                  decoder_hidden: Tensor,
                                  encoder_outputs: Tensor) -> Tensor:

            a = self.attention(decoder_hidden, encoder_outputs)

            a = a.unsqueeze(1)

            encoder_outputs = encoder_outputs.permute(1, 0, 2)

            weighted_encoder_rep = torch.bmm(a, encoder_outputs)

            weighted_encoder_rep = weighted_encoder_rep.permute(1, 0, 2)

            return weighted_encoder_rep

        def forward(self,
                    input: Tensor,
                    decoder_hidden: Tensor,
                    encoder_outputs: Tensor) -> Tuple[Tensor]:

            input = input.unsqueeze(0)

            embedded = self.dropout(self.embedding(input))

            weighted_encoder_rep = self._weighted_encoder_rep(decoder_hidden,
                                                              encoder_outputs)

            rnn_input = torch.cat((embedded, weighted_encoder_rep), dim = 2)

            output, decoder_hidden = self.rnn(rnn_input, decoder_hidden.unsqueeze(0))

            embedded = embedded.squeeze(0)
            output = output.squeeze(0)
            weighted_encoder_rep = weighted_encoder_rep.squeeze(0)

            output = self.out(torch.cat((output,
                                         weighted_encoder_rep,
                                         embedded), dim = 1))

            return output, decoder_hidden.squeeze(0)


    class Seq2Seq(nn.Module):
        def __init__(self,
                     encoder: nn.Module,
                     decoder: nn.Module,
                     device: torch.device):
            super().__init__()

            self.encoder = encoder
            self.decoder = decoder
            self.device = device

        def forward(self,
                    src: Tensor,
                    trg: Tensor,
                    teacher_forcing_ratio: float = 0.5) -> Tensor:

            batch_size = src.shape[1]
            max_len = trg.shape[0]
            trg_vocab_size = self.decoder.output_dim

            outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)

            encoder_outputs, hidden = self.encoder(src)

            # first input to the decoder is the <sos> token
            output = trg[0,:]

            for t in range(1, max_len):
                output, hidden = self.decoder(output, hidden, encoder_outputs)
                outputs[t] = output
                teacher_force = random.random() < teacher_forcing_ratio
                top1 = output.max(1)[1]
                output = (trg[t] if teacher_force else top1)

            return outputs


    INPUT_DIM = len(SRC.vocab)
    OUTPUT_DIM = len(TRG.vocab)
    # ENC_EMB_DIM = 256
    # DEC_EMB_DIM = 256
    # ENC_HID_DIM = 512
    # DEC_HID_DIM = 512
    # ATTN_DIM = 64
    # ENC_DROPOUT = 0.5
    # DEC_DROPOUT = 0.5
    ENC_EMB_DIM = 32
    DEC_EMB_DIM = 32
    ENC_HID_DIM = 64
    DEC_HID_DIM = 64
    ATTN_DIM = 8
    ENC_DROPOUT = 0.5
    DEC_DROPOUT = 0.5

    enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)

    attn = Attention(ENC_HID_DIM, DEC_HID_DIM, ATTN_DIM)

    dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)

    model = Seq2Seq(enc, dec, device).to(device)


    def init_weights(m: nn.Module):
        for name, param in m.named_parameters():
            if 'weight' in name:
                nn.init.normal_(param.data, mean=0, std=0.01)
            else:
                nn.init.constant_(param.data, 0)


    model.apply(init_weights)

    optimizer = optim.Adam(model.parameters())


    def count_parameters(model: nn.Module):
        return sum(p.numel() for p in model.parameters() if p.requires_grad)


    print(f'The model has {count_parameters(model):,} trainable parameters')

Out: The model has 1,856,685 trainable parameters

Note: when scoring the performance of a language translation model in particular, we have to tell nn.CrossEntropyLoss to ignore the indices that are simply padding.

    PAD_IDX = TRG.vocab.stoi['<pad>']

    criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
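
To see what ignore_index does, here is a small stand-alone sketch with made-up logits and targets: positions whose target equals PAD_IDX are excluded from the loss average, so the two values below generally differ.

    # Hypothetical example: 4 target positions, the last two are padding.
    logits = torch.randn(4, OUTPUT_DIM)
    targets = torch.tensor([5, 7, PAD_IDX, PAD_IDX])

    with_mask = nn.CrossEntropyLoss(ignore_index=PAD_IDX)(logits, targets)
    without_mask = nn.CrossEntropyLoss()(logits, targets)
    print(with_mask.item(), without_mask.item())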

Finally, we can train and evaluate the model:

    import math
    import time


    def train(model: nn.Module,
              iterator: BucketIterator,
              optimizer: optim.Optimizer,
              criterion: nn.Module,
              clip: float):

        model.train()

        epoch_loss = 0

        for _, batch in enumerate(iterator):

            src = batch.src
            trg = batch.trg

            optimizer.zero_grad()

            output = model(src, trg)

            output = output[1:].view(-1, output.shape[-1])
            trg = trg[1:].view(-1)

            loss = criterion(output, trg)

            loss.backward()

            torch.nn.utils.clip_grad_norm_(model.parameters(), clip)

            optimizer.step()

            epoch_loss += loss.item()

        return epoch_loss / len(iterator)


    def evaluate(model: nn.Module,
                 iterator: BucketIterator,
                 criterion: nn.Module):

        model.eval()

        epoch_loss = 0

        with torch.no_grad():

            for _, batch in enumerate(iterator):

                src = batch.src
                trg = batch.trg

                output = model(src, trg, 0) # turn off teacher forcing

                output = output[1:].view(-1, output.shape[-1])
                trg = trg[1:].view(-1)

                loss = criterion(output, trg)

                epoch_loss += loss.item()

        return epoch_loss / len(iterator)


    def epoch_time(start_time: int,
                   end_time: int):
        elapsed_time = end_time - start_time
        elapsed_mins = int(elapsed_time / 60)
        elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
        return elapsed_mins, elapsed_secs


    N_EPOCHS = 10
    CLIP = 1

    best_valid_loss = float('inf')

    for epoch in range(N_EPOCHS):

        start_time = time.time()

        train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
        valid_loss = evaluate(model, valid_iterator, criterion)

        end_time = time.time()

        epoch_mins, epoch_secs = epoch_time(start_time, end_time)

        print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
        print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
        print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')

    test_loss = evaluate(model, test_iterator, criterion)

    print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')

Out:

Epoch: 01 | Time: 0m 36s
    Train Loss: 5.686 | Train PPL: 294.579
     Val. Loss: 5.250 | Val. PPL: 190.638
Epoch: 02 | Time: 0m 37s
    Train Loss: 5.019 | Train PPL: 151.260
     Val. Loss: 5.155 | Val. PPL: 173.274
Epoch: 03 | Time: 0m 37s
    Train Loss: 4.757 | Train PPL: 116.453
     Val. Loss: 4.976 | Val. PPL: 144.824
Epoch: 04 | Time: 0m 35s
    Train Loss: 4.574 | Train PPL: 96.914
     Val. Loss: 4.835 | Val. PPL: 125.834
Epoch: 05 | Time: 0m 35s
    Train Loss: 4.421 | Train PPL: 83.185
     Val. Loss: 4.783 | Val. PPL: 119.414
Epoch: 06 | Time: 0m 38s
    Train Loss: 4.321 | Train PPL: 75.233
     Val. Loss: 4.802 | Val. PPL: 121.734
Epoch: 07 | Time: 0m 38s
    Train Loss: 4.233 | Train PPL: 68.957
     Val. Loss: 4.675 | Val. PPL: 107.180
Epoch: 08 | Time: 0m 35s
    Train Loss: 4.108 | Train PPL: 60.838
     Val. Loss: 4.622 | Val. PPL: 101.693
Epoch: 09 | Time: 0m 34s
    Train Loss: 4.020 | Train PPL: 55.680
     Val. Loss: 4.530 | Val. PPL: 92.785
Epoch: 10 | Time: 0m 34s
    Train Loss: 3.919 | Train PPL: 50.367
     Val. Loss: 4.448 | Val. PPL: 85.441
| Test Loss: 4.464 | Test PPL: 86.801 |
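
At this point the trained model can already be tried on an individual German sentence. The following greedy-decoding helper is not part of the original tutorial; it is a minimal sketch, and the function name translate_sentence, the max_len cap, and the example sentence are our own illustrative choices.

    def translate_sentence(sentence: str, max_len: int = 50):
        # Tokenize with the SRC Field, wrap with <sos>/<eos>, and numericalize.
        model.eval()
        tokens = ['<sos>'] + SRC.preprocess(sentence) + ['<eos>']
        src_indexes = [SRC.vocab.stoi[token] for token in tokens]
        src_tensor = torch.LongTensor(src_indexes).unsqueeze(1).to(device)

        with torch.no_grad():
            encoder_outputs, hidden = model.encoder(src_tensor)

        # Greedy decoding: feed the decoder its own previous prediction.
        trg_indexes = [TRG.vocab.stoi['<sos>']]
        for _ in range(max_len):
            trg_tensor = torch.LongTensor([trg_indexes[-1]]).to(device)
            with torch.no_grad():
                output, hidden = model.decoder(trg_tensor, hidden, encoder_outputs)
            pred_token = output.argmax(1).item()
            trg_indexes.append(pred_token)
            if pred_token == TRG.vocab.stoi['<eos>']:
                break

        # Drop the leading <sos>; a trailing <eos>, if produced, is left visible.
        return [TRG.vocab.itos[i] for i in trg_indexes[1:]]

    # Example sentence chosen for illustration; output quality will be modest
    # with the small model trained above.
    print(translate_sentence('ein mann steht in der küche.'))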

Next steps

  • Check out the rest of Ben Trevett's tutorials using torchtext
  • Stay tuned for a tutorial that uses other torchtext features together with nn.Transformer for language modeling via next-word prediction!

Total running time of the script: (6 minutes 27.732 seconds)