Text core

Basic functions to preprocess text before assembling it in a DataLoaders.

Preprocessing rules

The following are the rules applied to texts before or after they are tokenized.

spec_add_spaces[source]

spec_add_spaces(t)

Add spaces around / and #

    test_eq(spec_add_spaces('#fastai'), ' # fastai')
    test_eq(spec_add_spaces('/fastai'), ' / fastai')
    test_eq(spec_add_spaces('fastai'), 'fastai')

rm_useless_spaces[source]

rm_useless_spaces(t)

Remove multiple spaces

    test_eq(rm_useless_spaces('a  b   c'), 'a b c')

replace_rep[source]

replace_rep(t)

Replace repetitions at the character level: cccc — TK_REP 4 c

It starts replacing at 3 repetitions of the same character or more.

    test_eq(replace_rep('aa'), 'aa')
    test_eq(replace_rep('aaaa'), f' {TK_REP} 4 a ')

replace_wrep[source]

replace_wrep(t)

Replace word repetitions: word word word word — TK_WREP 4 word

It starts replacing at 3 repetitions of the same word or more.

    test_eq(replace_wrep('ah ah'), 'ah ah')
    test_eq(replace_wrep('ah ah ah'), f' {TK_WREP} 3 ah ')
    test_eq(replace_wrep('ah ah ah ah'), f' {TK_WREP} 4 ah ')
    test_eq(replace_wrep('ah ah ah ah '), f' {TK_WREP} 4 ah ')
    test_eq(replace_wrep('ah ah ah ah.'), f' {TK_WREP} 4 ah .')
    test_eq(replace_wrep('ah ah ahi'), f'ah ah ahi')

fix_html[source]

fix_html(x)

Various messy things we’ve seen in documents

    test_eq(fix_html('#39;bli#146;'), "'bli'")
    test_eq(fix_html('Sarah amp; Duck...'), 'Sarah & Duck …')
    test_eq(fix_html('a nbsp; #36;'), 'a $')
    test_eq(fix_html('" <unk>'), f'" {UNK}')
    test_eq(fix_html('quot;  @.@  @-@ '), "' .-")
    test_eq(fix_html('<br />text\\n'), '\ntext\n')

replace_all_caps[source]

replace_all_caps(t)

Replace tokens in ALL CAPS by their lower version and add TK_UP before.

    test_eq(replace_all_caps("I'M SHOUTING"), f"{TK_UP} i'm {TK_UP} shouting")
    test_eq(replace_all_caps("I'm speaking normally"), "I'm speaking normally")
    test_eq(replace_all_caps("I am speaking normally"), "i am speaking normally")

replace_maj[source]

replace_maj(t)

Replace tokens in Sentence Case by their lower version and add TK_MAJ before.

    test_eq(replace_maj("Jeremy Howard"), f'{TK_MAJ} jeremy {TK_MAJ} howard')
    test_eq(replace_maj("I don't think there is any maj here"), "i don't think there is any maj here")

lowercase[source]

lowercase(t, add_bos=True, add_eos=False)

Converts t to lowercase
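
As a quick illustration (a sketch, not taken from the original notebook, assuming the default behavior of prepending BOS and optionally appending EOS):

    # Sketch of expected behavior: BOS is prepended by default, EOS only when asked for.
    test_eq(lowercase('Hello World'), f'{BOS} hello world')
    test_eq(lowercase('Hello World', add_eos=True), f'{BOS} hello world {EOS}')
    test_eq(lowercase('Hello World', add_bos=False), 'hello world')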

replace_space[source]

replace_space(t)

Replace embedded spaces in a token with unicode line char to allow for split/join
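
A minimal sketch of what this means in practice (the replacement character is the '▁' that also shows up in the TokenizeWithRules example below):

    # Sketch: embedded spaces become '▁' so the token survives a later split/join on spaces.
    test_eq(replace_space('hello world'), 'hello▁world')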

Tokenizing

A tokenizer is a class that must implement __call__. This method receives an iterator of texts and must return a generator with their tokenized versions. Here is the most basic example:

class BaseTokenizer[source]

BaseTokenizer(split_char=' ', **kwargs)

Basic tokenizer that just splits on spaces

    tok = BaseTokenizer()
    test_eq(tok(["This is a text"]), [["This", "is", "a", "text"]])
    tok = BaseTokenizer('x')
    test_eq(tok(["This is a text"]), [["This is a te", "t"]])
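
Any object honoring that contract can be used in place of the built-in tokenizers. As a purely hypothetical illustration (not a fastai class), a character-level tokenizer only needs __call__:

    # Hypothetical example: a tokenizer that yields the characters of each text.
    class CharTokenizer:
        def __call__(self, items):
            return (list(t) for t in items)

    test_eq(CharTokenizer()(["abc"]), [['a', 'b', 'c']])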

class SpacyTokenizer[source]

SpacyTokenizer(lang='en', special_toks=None, buf_sz=5000)

Spacy tokenizer for lang

    tok = SpacyTokenizer()
    inp,exp = "This isn't the easiest text.",["This", "is", "n't", "the", "easiest", "text", "."]
    test_eq(L(tok([inp,inp])), [exp,exp])

class TokenizeWithRules[source]

TokenizeWithRules(tok, rules=None, post_rules=None)

A wrapper around tok which applies rules, then tokenizes, then applies post_rules

    f = TokenizeWithRules(BaseTokenizer(), rules=[replace_all_caps])
    test_eq(f(["THIS isn't a problem"]), [[TK_UP, 'this', "isn't", 'a', 'problem']])
    f = TokenizeWithRules(SpacyTokenizer())
    test_eq(f(["This isn't a problem"]), [[BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem']])
    f = TokenizeWithRules(BaseTokenizer(split_char="'"), rules=[])
    test_eq(f(["This isn't a problem"]), [['This▁isn', 't▁a▁problem']])

The main function that will be called during one of the processes handling tokenization. It will iterate through the batch of texts, apply the rules to them, and tokenize them.

    texts = ["this is a text", "this is another text"]
    tok = TokenizeWithRules(BaseTokenizer(), texts.__getitem__)
    test_eq(tok([0,1]), [['this', 'is', 'a', 'text'],['this', 'is', 'another', 'text']])

tokenize1[source]

tokenize1(text, tok, rules=None, post_rules=None)

Call TokenizeWithRules with a single text

    test_eq(tokenize1("This isn't a problem", SpacyTokenizer()),
            [BOS, TK_MAJ, 'this', 'is', "n't", 'a', 'problem'])
    test_eq(tokenize1("This isn't a problem", tok=BaseTokenizer(), rules=[]),
            ['This',"isn't",'a','problem'])

parallel_tokenize[source]

parallel_tokenize(items, tok=None, rules=None, n_workers=2, **kwargs)

Calls optional setup on tok before launching TokenizeWithRules using parallel_gen

Note that since this uses parallel_gen behind the scenes, the generator returned contains tuples of indices and results. There is no guarantee that the results are returned in order, so you should sort by the first item of the tuples (the indices) if you need them ordered.

    res = parallel_tokenize(['0 1', '1 2'], rules=[], n_workers=2)
    idxs,toks = zip(*L(res).sorted(itemgetter(0)))
    test_eq(toks, [['0','1'],['1','2']])

Tokenize texts in files

Preprocessing function for texts in filenames. Tokenized texts will be saved in a similar fashion in a directory suffixed with _tok in the parent folder of path (override with output_dir). This directory is the return value.

tokenize_folder[source]

tokenize_folder(path, extensions=None, folders=None, output_dir=None, skip_if_exists=True, output_names=None, n_workers=2, rules=None, tok=None, encoding='utf8')

Tokenize text files in path in parallel using n_workers

The result will be in output_dir (defaults to a folder in the same parent directory as path, with _tok added to path.name) with the same structure as in path. Tokenized texts for a given file will be in the file having the same name in output_dir. Additionally, a file with a .len suffix contains the number of tokens and the count of all words is stored in output_dir/counter.pkl.

extensions will default to ['.txt'] and all text files in path are processed unless you restrict them to a list of subfolders with folders. rules (which default to defaults.text_proc_rules) are applied to each text before it goes into the tokenizer.
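
A hedged sketch of a typical call (the folder name is made up; it just needs to contain .txt files):

    # Sketch: tokenize every .txt file under a hypothetical folder, with one worker.
    path = Path('my_texts')
    out_dir = tokenize_folder(path, n_workers=1)
    # out_dir mirrors the structure of path and also holds counter.pkl.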

tokenize_files[source]

tokenize_files(files, path, output_dir, output_names=None, n_workers=2, rules=None, tok=None, encoding='utf8', skip_if_exists=False)

Tokenize text files in parallel using n_workers
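
This is the function tokenize_folder builds on once it has gathered the files, so you can also call it directly with an explicit file list. A hedged sketch (the paths are made up):

    # Sketch: explicit file list plus an explicit output directory (both hypothetical).
    files = get_text_files(Path('my_texts'))
    tokenize_files(files, path=Path('my_texts'), output_dir=Path('my_texts_tok'), n_workers=1)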

Tokenize texts in a dataframe

tokenize_texts[source]

tokenize_texts(texts, n_workers=2, rules=None, tok=None)

Tokenize texts in parallel using n_workers
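
For texts already in memory, a hedged sketch (the strings are made up; no output is asserted since the exact tokens depend on the rules and tokenizer used):

    # Sketch: tokenize a small list of raw strings with a single worker.
    toks = tokenize_texts(["Hello world", "another text"], n_workers=1)
    # One list of tokens per input text, with the default rules applied.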

tokenize_df[source]

tokenize_df(df, text_cols, n_workers=2, rules=None, mark_fields=None, tok=None, tok_text_col='text')

Tokenize texts in df[text_cols] in parallel using n_workers and stores them in df[tok_text_col]

This function returns a new dataframe with the same non-text columns, a column named text that contains the tokenized texts and a column named text_lengths that contains their respective lengths. It also returns a counter of all seen words to quickly build a vocabulary afterward.

rules (that defaults to defaults.text_proc_rules) are applied to each text before going in the tokenizer. If mark_fields isn’t specified, it defaults to False when there is a single text column, True when there are several. In that case, the texts in each of those columns are joined with FLD markers followed by the number of the field.
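
A hedged sketch with two text columns (the dataframe is made up), so that mark_fields kicks in and field markers are inserted:

    # Sketch: a hypothetical dataframe with two text columns and one label column.
    df = pd.DataFrame({'title': ['A title'], 'body': ['Some body text'], 'label': [0]})
    out,cnt = tokenize_df(df, text_cols=['title', 'body'], n_workers=1)
    # out keeps label, adds the tokenized text column (with FLD markers since there
    # are several text columns) and the length column; cnt is a Counter over all
    # seen tokens.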

tokenize_csv[source]

tokenize_csv(fname, text_cols, outname=None, n_workers=4, rules=None, mark_fields=None, tok=None, header='infer', chunksize=50000)

Tokenize texts in the text_cols of the csv fname in parallel using n_workers

load_tokenized_csv[source]

load_tokenized_csv(fname)

Utility function to quickly load a tokenized csv and the corresponding counter

The result will be written in a new csv file in outname (defaults to the same as fname with the suffix _tok.csv) and will have the same header as the original file, the same non-text columns, a text and a text_lengths column as described in tokenize_df.

rules (that defaults to defaults.text_proc_rules) are applied to each text before going in the tokenizer. If mark_fields isn’t specified, it defaults to False when there is a single text column, True when there are several. In that case, the texts in each of those columns are joined with FLD markers followed by the number of the field.

The csv file is read with the given header and, optionally, in blocks of chunksize rows at a time. If chunksize is passed, each chunk is processed independently and appended to the output file, which keeps memory usage low.
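
A hedged sketch tying tokenize_csv and load_tokenized_csv together (the filenames are made up):

    # Sketch: tokenize a hypothetical csv on disk, then reload the dataframe and counter.
    tokenize_csv('input.csv', text_cols='text', outname='input_tok.csv', n_workers=1)
    df_tok,cnt = load_tokenized_csv('input_tok.csv')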

    def _prepare_texts(tmp_d):
        "Prepare texts in a folder struct in tmp_d, a csv file and returns a dataframe"
        path = Path(tmp_d)/'tmp'
        path.mkdir()
        for d in ['a', 'b', 'c']:
            (path/d).mkdir()
            for i in range(5):
                with open(path/d/f'text{i}.txt', 'w') as f: f.write(f"This is an example of text {d} {i}")
        texts = [f"This is an example of text {d} {i}" for i in range(5) for d in ['a', 'b', 'c']]
        df = pd.DataFrame({'text': texts, 'label': list(range(15))}, columns=['text', 'label'])
        csv_fname = tmp_d/'input.csv'
        df.to_csv(csv_fname, index=False)
        return path,df,csv_fname

class Tokenizer[source]

Tokenizer(tok, rules=None, counter=None, lengths=None, mode=None, sep=' ') :: Transform

Provides a consistent Transform interface to tokenizers operating on DataFrames and folders

    with tempfile.TemporaryDirectory() as tmp_d:
        path,df,csv_fname = _prepare_texts(Path(tmp_d))
        items = get_text_files(path)
        splits = RandomSplitter()(items)
        dsets = Datasets(items, [Tokenizer.from_folder(path)], splits=splits)
        print(dsets.train[0])
        dsets = Datasets(df, [Tokenizer.from_df('text')], splits=splits)
        print(dsets.train[0][0].text)

    (['xxbos', 'xxmaj', 'this', 'is', 'an', 'example', 'of', 'text', 'b', '0'],)
    ('xxbos', 'xxmaj', 'this', 'is', 'an', 'example', 'of', 'text', 'b', '1')
    tst = test_set(dsets, ['This is a test', 'this is another test'])
    test_eq(tst, [(['xxbos', 'xxmaj', 'this','is','a','test'],),
                  (['xxbos','this','is','another','test'],)])

Sentencepiece

class SentencePieceTokenizer[source]

SentencePieceTokenizer(lang='en', special_toks=None, sp_model=None, vocab_sz=None, max_vocab_sz=30000, model_type='unigram', char_coverage=None, cache_dir='tmp')

SentencePiece tokenizer for lang

    texts = [f"This is an example of text {i}" for i in range(10)]
    df = pd.DataFrame({'text': texts, 'label': list(range(10))}, columns=['text', 'label'])
    out,cnt = tokenize_df(df, text_cols='text', tok=SentencePieceTokenizer(vocab_sz=34), n_workers=1)
    with tempfile.TemporaryDirectory() as tmp_d:
        path,df,csv_fname = _prepare_texts(Path(tmp_d))
        items = get_text_files(path)
        splits = RandomSplitter()(items)
        tok = SentencePieceTokenizer(special_toks=[])
        dsets = Datasets(items, [Tokenizer.from_folder(path, tok=tok)], splits=splits)
        print(dsets.train[0][0])
        with warnings.catch_warnings():
            dsets = Datasets(df, [Tokenizer.from_df('text', tok=tok)], splits=splits)
            print(dsets.train[0][0].text)

    ['▁xx', 'b', 'o', 's', '▁xx', 'm', 'a', 'j', '▁t', 'h', 'i', 's', '▁', 'i', 's', '▁a', 'n', '▁', 'ex', 'a', 'm', 'p', 'l', 'e', '▁', 'o', 'f', '▁t', 'ex', 't', '▁a', '▁', '1']
    ['▁xx', 'b', 'o', 's', '▁xx', 'm', 'a', 'j', '▁t', 'h', 'i', 's', '▁', 'i', 's', '▁a', 'n', '▁', 'ex', 'a', 'm', 'p', 'l', 'e', '▁', 'o', 'f', '▁t', 'ex', 't', '▁', 'b', '▁', '2']
