Description

If gaps is true, the document is split using the given pattern as a delimiter. If gaps is false, the tokens matching the pattern are extracted. This operator processes streaming data.

Parameters

| Name | Description | Type | Required? | Default Value |
| --- | --- | --- | --- | --- |
| pattern | If gaps is true, it is used as the delimiter; if gaps is false, it is used as the token pattern | String | | "\s+" |
| gaps | If true, the document is split with the given pattern; if false, the tokens matching the pattern are extracted | Boolean | | true |
| minTokenLength | Minimum token length | Integer | | 1 |
| toLowerCase | If true, all tokens are transformed to lower case | Boolean | | true |
| selectedCol | Name of the selected column used for processing | String | | |
| outputCol | Name of the output column | String | | null |
| reservedCols | Names of the columns to be retained in the output table | String[] | | null |
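To make the effect of gaps concrete, here is a minimal sketch (not part of the original example) that runs the same operator in both modes on a single sentence. It only uses the setters listed above and in the script example below; the expected behavior noted in the comments follows the parameter descriptions rather than verified output.

```python
from pyalink.alink import *
import pandas as pd

df = pd.DataFrame({"id": [0], "text": ["That is an English Book!"]})
source = dataframeToOperator(df, schemaStr='id long, text string', op_type='batch')

# gaps=True: the pattern acts as a delimiter, here splitting on whitespace
# (the default pattern "\s+"), so punctuation stays attached to the words.
split_op = RegexTokenizerBatchOp() \
    .setSelectedCol("text") \
    .setGaps(True) \
    .setPattern("\\s+") \
    .setOutputCol("token")

# gaps=False: the pattern describes the tokens themselves; "\w+" keeps only
# word characters, so trailing punctuation such as "!" is dropped.
match_op = RegexTokenizerBatchOp() \
    .setSelectedCol("text") \
    .setGaps(False) \
    .setPattern("\\w+") \
    .setOutputCol("token")

split_op.linkFrom(source).print()
match_op.linkFrom(source).print()
```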

Script Example

Code

```python
import numpy as np
import pandas as pd
from pyalink.alink import *

data = np.array([
    [0, 'That is an English Book!'],
    [1, 'Do you like math?'],
    [2, 'Have a good day!']
])
df = pd.DataFrame({"id": data[:, 0], "text": data[:, 1]})

inOp1 = dataframeToOperator(df, schemaStr='id long, text string', op_type='batch')
op = RegexTokenizerBatchOp().setSelectedCol("text").setGaps(False).setToLowerCase(True).setOutputCol("token").setPattern("\\w+")
op.linkFrom(inOp1).print()

inOp2 = dataframeToOperator(df, schemaStr='id long, text string', op_type='stream')
op = RegexTokenizerStreamOp().setSelectedCol("text").setGaps(False).setToLowerCase(True).setOutputCol("token").setPattern("\\w+")
op.linkFrom(inOp2).print()
StreamOperator.execute()
```

Results

| id | text | token |
| --- | --- | --- |
| 0 | That is an English Book! | that is an english book |
| 2 | Have a good day! | have a good day |
| 1 | Do you like math? | do you like math |
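The reservedCols and minTokenLength parameters from the table above can be combined with the same operator. The sketch below is a hedged illustration, not part of the original example: it assumes the setters follow Alink's usual naming convention (setReservedCols, setMinTokenLength) for the parameters listed in the table.

```python
from pyalink.alink import *
import pandas as pd

df = pd.DataFrame({"id": [0, 1], "text": ["Do you like math?", "Have a good day!"]})
source = dataframeToOperator(df, schemaStr='id long, text string', op_type='batch')

# Keep only the id column alongside the tokens, and drop tokens shorter than
# two characters. Setter names are assumed from the parameter names above.
op = RegexTokenizerBatchOp() \
    .setSelectedCol("text") \
    .setGaps(False) \
    .setPattern("\\w+") \
    .setMinTokenLength(2) \
    .setReservedCols(["id"]) \
    .setOutputCol("token")

op.linkFrom(source).print()
```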