Working with text data

Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the str attribute and generally have names matching the equivalent (scalar) built-in string methods:

    In [1]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

    In [2]: s.str.lower()
    Out[2]:
    0       a
    1       b
    2       c
    3    aaba
    4    baca
    5     NaN
    6    caba
    7     dog
    8     cat
    dtype: object

    In [3]: s.str.upper()
    Out[3]:
    0       A
    1       B
    2       C
    3    AABA
    4    BACA
    5     NaN
    6    CABA
    7     DOG
    8     CAT
    dtype: object

    In [4]: s.str.len()
    Out[4]:
    0    1.0
    1    1.0
    2    1.0
    3    4.0
    4    4.0
    5    NaN
    6    4.0
    7    3.0
    8    3.0
    dtype: float64

    In [5]: idx = pd.Index([' jack', 'jill ', ' jesse ', 'frank'])

    In [6]: idx.str.strip()
    Out[6]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='object')

    In [7]: idx.str.lstrip()
    Out[7]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='object')

    In [8]: idx.str.rstrip()
    Out[8]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='object')

The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance, you may have columns with leading or trailing whitespace:

    In [9]: df = pd.DataFrame(np.random.randn(3, 2),
       ...:                   columns=[' Column A ', ' Column B '], index=range(3))
       ...:

    In [10]: df
    Out[10]:
        Column A   Column B
    0   0.469112  -0.282863
    1  -1.509059  -1.135632
    2   1.212112  -0.173215

Since df.columns is an Index object, we can use the .str accessor

    In [11]: df.columns.str.strip()
    Out[11]: Index(['Column A', 'Column B'], dtype='object')

    In [12]: df.columns.str.lower()
    Out[12]: Index([' column a ', ' column b '], dtype='object')

These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing whitespace, lowercasing all names, and replacing any remaining whitespace with underscores:

    In [13]: df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_')

    In [14]: df
    Out[14]:
       column_a  column_b
    0  0.469112 -0.282863
    1 -1.509059 -1.135632
    2  1.212112 -0.173215

Note

If you have a Series where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series), it can be faster to convert the original Series to one of type category and then use .str.<method> or .dt.<property> on that. The performance difference comes from the fact that, for Series of type category, the string operations are done on the .categories and not on each element of the Series.

Please note that a Series of type category with string .categories has some limitations in comparison to Series of type string (e.g. you can't add strings to each other: s + " " + s won't work if s is a Series of type category). Also, .str methods which operate on elements of type list are not available on such a Series.
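
A minimal sketch of this tip (the data here are illustrative, not taken from the examples above):

    import pandas as pd

    # Many repeated elements, few unique values: a good candidate for category.
    s = pd.Series(['low', 'medium', 'high'] * 100000)
    s_cat = s.astype('category')

    # Both calls produce the same uppercased values, but the categorical
    # version applies the operation to the 3 categories rather than to
    # all 300,000 elements.
    upper_from_object = s.str.upper()
    upper_from_category = s_cat.str.upper()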

Warning

Before v0.25.0, the .str accessor performed only the most rudimentary type checks. Starting with v0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.

Generally speaking, the .str accessor is intended to work only on strings. With very few exceptions, other uses are not supported, and may be disabled at a later point.
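
An illustrative sketch of what the stricter checks look like in practice (the exact error message may vary between versions):

    import pandas as pd

    # Accessing .str on a non-string Series now fails fast.
    try:
        pd.Series([1, 2, 3]).str.upper()
    except AttributeError as exc:
        print(exc)  # e.g. "Can only use .str accessor with string values!"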

Splitting and replacing strings

Methods like split return a Series of lists:

    In [15]: s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'])

    In [16]: s2.str.split('_')
    Out[16]:
    0    [a, b, c]
    1    [c, d, e]
    2          NaN
    3    [f, g, h]
    dtype: object

Elements in the split lists can be accessed using get or [] notation:

    In [17]: s2.str.split('_').str.get(1)
    Out[17]:
    0      b
    1      d
    2    NaN
    3      g
    dtype: object

    In [18]: s2.str.split('_').str[1]
    Out[18]:
    0      b
    1      d
    2    NaN
    3      g
    dtype: object

It is easy to expand this to return a DataFrame using expand.

    In [19]: s2.str.split('_', expand=True)
    Out[19]:
         0    1    2
    0    a    b    c
    1    c    d    e
    2  NaN  NaN  NaN
    3    f    g    h

It is also possible to limit the number of splits:

    In [20]: s2.str.split('_', expand=True, n=1)
    Out[20]:
         0    1
    0    a  b_c
    1    c  d_e
    2  NaN  NaN
    3    f  g_h

rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning of the string:

    In [21]: s2.str.rsplit('_', expand=True, n=1)
    Out[21]:
         0    1
    0  a_b    c
    1  c_d    e
    2  NaN  NaN
    3  f_g    h

The replace method by default interprets its pattern as a regular expression:

    In [22]: s3 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca',
       ....:                 '', np.nan, 'CABA', 'dog', 'cat'])
       ....:

    In [23]: s3
    Out[23]:
    0       A
    1       B
    2       C
    3    Aaba
    4    Baca
    5
    6     NaN
    7    CABA
    8     dog
    9     cat
    dtype: object

    In [24]: s3.str.replace('^.a|dog', 'XX-XX ', case=False)
    Out[24]:
    0           A
    1           B
    2           C
    3    XX-XX ba
    4    XX-XX ca
    5
    6         NaN
    7    XX-XX BA
    8      XX-XX
    9     XX-XX t
    dtype: object

Some caution must be taken to keep regular expressions in mind! For example, the following code will cause trouble because of the regular expression meaning of $:

    # Consider the following badly formatted financial data
    In [25]: dollars = pd.Series(['12', '-$10', '$10,000'])

    # This does what you'd naively expect:
    In [26]: dollars.str.replace('$', '')
    Out[26]:
    0        12
    1       -10
    2    10,000
    dtype: object

    # But this doesn't:
    In [27]: dollars.str.replace('-$', '-')
    Out[27]:
    0         12
    1       -$10
    2    $10,000
    dtype: object

    # We need to escape the special character (for >1 len patterns)
    In [28]: dollars.str.replace(r'-\$', '-')
    Out[28]:
    0         12
    1        -10
    2    $10,000
    dtype: object

New in version 0.23.0.

If you do want literal replacement of a string (equivalent to str.replace()), you can set the optional regex parameter to False, rather than escaping each character. In this case both pat and repl must be strings:

    # These lines are equivalent
    In [29]: dollars.str.replace(r'-\$', '-')
    Out[29]:
    0         12
    1        -10
    2    $10,000
    dtype: object

    In [30]: dollars.str.replace('-$', '-', regex=False)
    Out[30]:
    0         12
    1        -10
    2    $10,000
    dtype: object

New in version 0.20.0.

The replace method can also take a callable as replacement. It is called on every match of pat using re.sub(). The callable should expect one positional argument (a regex match object) and return a string.

    # Reverse every lowercase alphabetic word
    In [31]: pat = r'[a-z]+'

    In [32]: def repl(m):
       ....:     return m.group(0)[::-1]
       ....:

    In [33]: pd.Series(['foo 123', 'bar baz', np.nan]).str.replace(pat, repl)
    Out[33]:
    0    oof 123
    1    rab zab
    2        NaN
    dtype: object

    # Using regex groups
    In [34]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"

    In [35]: def repl(m):
       ....:     return m.group('two').swapcase()
       ....:

    In [36]: pd.Series(['Foo Bar Baz', np.nan]).str.replace(pat, repl)
    Out[36]:
    0    bAR
    1    NaN
    dtype: object

New in version 0.20.0.

The replace method also accepts a compiled regular expression object from re.compile() as a pattern. All flags should be included in the compiled regular expression object.

    In [37]: import re

    In [38]: regex_pat = re.compile(r'^.a|dog', flags=re.IGNORECASE)

    In [39]: s3.str.replace(regex_pat, 'XX-XX ')
    Out[39]:
    0           A
    1           B
    2           C
    3    XX-XX ba
    4    XX-XX ca
    5
    6         NaN
    7    XX-XX BA
    8      XX-XX
    9     XX-XX t
    dtype: object

Including a flags argument when calling replace with a compiled regular expression object will raise a ValueError.

    In [40]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
    ---------------------------------------------------------------------------
    ValueError: case and flags cannot be set when pat is a compiled regex

Concatenation

There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(), resp. Index.str.cat.

Concatenating a single Series into a string

The content of a Series (or Index) can be concatenated:

    In [41]: s = pd.Series(['a', 'b', 'c', 'd'])

    In [42]: s.str.cat(sep=',')
    Out[42]: 'a,b,c,d'

If not specified, the keyword sep for the separator defaults to the empty string, sep='':

    In [43]: s.str.cat()
    Out[43]: 'abcd'

By default, missing values are ignored. Using na_rep, they can be given a representation:

    In [44]: t = pd.Series(['a', 'b', np.nan, 'd'])

    In [45]: t.str.cat(sep=',')
    Out[45]: 'a,b,d'

    In [46]: t.str.cat(sep=',', na_rep='-')
    Out[46]: 'a,b,-,d'

Concatenating a Series and something list-like into a Series

The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).

    In [47]: s.str.cat(['A', 'B', 'C', 'D'])
    Out[47]:
    0    aA
    1    bB
    2    cC
    3    dD
    dtype: object

Missing values on either side will result in missing values in the result as well, unless na_rep is specified:

    In [48]: s.str.cat(t)
    Out[48]:
    0     aa
    1     bb
    2    NaN
    3     dd
    dtype: object

    In [49]: s.str.cat(t, na_rep='-')
    Out[49]:
    0    aa
    1    bb
    2    c-
    3    dd
    dtype: object

Concatenating a Series and something array-like into a Series

New in version 0.23.0.

The parameter others can also be two-dimensional. In this case, the number of rows must match the length of the calling Series (or Index).

    In [50]: d = pd.concat([t, s], axis=1)

    In [51]: s
    Out[51]:
    0    a
    1    b
    2    c
    3    d
    dtype: object

    In [52]: d
    Out[52]:
         0  1
    0    a  a
    1    b  b
    2  NaN  c
    3    d  d

    In [53]: s.str.cat(d, na_rep='-')
    Out[53]:
    0    aaa
    1    bbb
    2    c-c
    3    ddd
    dtype: object

Concatenating a Series and an indexed object into a Series, with alignment

New in version 0.23.0.

For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting the join keyword.

    In [54]: u = pd.Series(['b', 'd', 'a', 'c'], index=[1, 3, 0, 2])

    In [55]: s
    Out[55]:
    0    a
    1    b
    2    c
    3    d
    dtype: object

    In [56]: u
    Out[56]:
    1    b
    3    d
    0    a
    2    c
    dtype: object

    In [57]: s.str.cat(u)
    Out[57]:
    0    ab
    1    bd
    2    ca
    3    dc
    dtype: object

    In [58]: s.str.cat(u, join='left')
    Out[58]:
    0    aa
    1    bb
    2    cc
    3    dd
    dtype: object

Warning

If the join keyword is not passed, the method cat() will currently fall back to the behavior before version 0.23.0 (i.e. no alignment), but a FutureWarning will be raised if any of the involved indexes differ, since this default will change to join='left' in a future version.
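
A small sketch of how to opt in to the future behavior now, reusing s and u from above:

    import warnings

    # Without join, differing indexes emit a FutureWarning (legacy, unaligned behavior).
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', FutureWarning)
        legacy = s.str.cat(u)                 # positional concatenation, no alignment

    future_proof = s.str.cat(u, join='left')  # aligned on the index of s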

The usual options are available for join (one of 'left', 'outer', 'inner', 'right'). In particular, alignment also means that the different lengths do not need to coincide anymore.

    In [59]: v = pd.Series(['z', 'a', 'b', 'd', 'e'], index=[-1, 0, 1, 3, 4])

    In [60]: s
    Out[60]:
    0    a
    1    b
    2    c
    3    d
    dtype: object

    In [61]: v
    Out[61]:
    -1    z
     0    a
     1    b
     3    d
     4    e
    dtype: object

    In [62]: s.str.cat(v, join='left', na_rep='-')
    Out[62]:
    0    aa
    1    bb
    2    c-
    3    dd
    dtype: object

    In [63]: s.str.cat(v, join='outer', na_rep='-')
    Out[63]:
    -1    -z
     0    aa
     1    bb
     2    c-
     3    dd
     4    -e
    dtype: object

The same alignment can be used when others is a DataFrame:

    In [64]: f = d.loc[[3, 2, 1, 0], :]

    In [65]: s
    Out[65]:
    0    a
    1    b
    2    c
    3    d
    dtype: object

    In [66]: f
    Out[66]:
         0  1
    3    d  d
    2  NaN  c
    1    b  b
    0    a  a

    In [67]: s.str.cat(f, join='left', na_rep='-')
    Out[67]:
    0    aaa
    1    bbb
    2    c-c
    3    ddd
    dtype: object

Concatenating a Series and many objects into a Series

Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray) can be combined in a list-like container (including iterators, dict-views, etc.).

    In [68]: s
    Out[68]:
    0    a
    1    b
    2    c
    3    d
    dtype: object

    In [69]: u
    Out[69]:
    1    b
    3    d
    0    a
    2    c
    dtype: object

    In [70]: s.str.cat([u, u.to_numpy()], join='left')
    Out[70]:
    0    aab
    1    bbd
    2    cca
    3    ddc
    dtype: object

All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index), but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):

    In [71]: v
    Out[71]:
    -1    z
     0    a
     1    b
     3    d
     4    e
    dtype: object

    In [72]: s.str.cat([v, u, u.to_numpy()], join='outer', na_rep='-')
    Out[72]:
    -1    -z--
     0    aaab
     1    bbbd
     2    c-ca
     3    dddc
     4    -e--
    dtype: object

If using join='right' on a list-like of others that contains different indexes, the union of these indexes will be used as the basis for the final concatenation:

    In [73]: u.loc[[3]]
    Out[73]:
    3    d
    dtype: object

    In [74]: v.loc[[-1, 0]]
    Out[74]:
    -1    z
     0    a
    dtype: object

    In [75]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join='right', na_rep='-')
    Out[75]:
    -1    --z
     0    a-a
     3    dd-
    dtype: object

Indexing with .str

You can use [] notation to directly index by position locations. If you index past the end of the string, the result will be a NaN.

    In [76]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan,
       ....:                'CABA', 'dog', 'cat'])
       ....:

    In [77]: s.str[0]
    Out[77]:
    0      A
    1      B
    2      C
    3      A
    4      B
    5    NaN
    6      C
    7      d
    8      c
    dtype: object

    In [78]: s.str[1]
    Out[78]:
    0    NaN
    1    NaN
    2    NaN
    3      a
    4      a
    5    NaN
    6      A
    7      o
    8      a
    dtype: object

Extracting substrings

Extract first match in each subject (extract)

Warning

In version 0.18.0, extract gained the expand argument. When expand=False it returns a Series, Index, or DataFrame, depending on the subject and regular expression pattern (same behavior as pre-0.18.0). When expand=True it always returns a DataFrame, which is more consistent and less confusing from the perspective of a user. expand=True is the default since version 0.23.0.

The extract method accepts a regular expression with at least one capture group.

Extracting a regular expression with more than one group returns a DataFrame with one column per group.

    In [79]: pd.Series(['a1', 'b2', 'c3']).str.extract(r'([ab])(\d)', expand=False)
    Out[79]:
         0    1
    0    a    1
    1    b    2
    2  NaN  NaN

Elements that do not match return a row filled with NaN. Thus, a Series of messy strings can be "converted" into a like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples or re.match objects. The dtype of the result is always object, even if no match is found and the result only contains NaN.
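
The dtype claim can be checked directly; a quick sketch with illustrative data:

    import pandas as pd

    # No element matches, yet the resulting column dtype is still object.
    no_match = pd.Series(['x', 'y']).str.extract(r'(\d)', expand=True)
    print(no_match.dtypes)  # 0    object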

Named groups like

    In [80]: pd.Series(['a1', 'b2', 'c3']).str.extract(r'(?P<letter>[ab])(?P<digit>\d)',
       ....:                                           expand=False)
       ....:
    Out[80]:
      letter digit
    0      a     1
    1      b     2
    2    NaN   NaN

and optional groups like

    In [81]: pd.Series(['a1', 'b2', '3']).str.extract(r'([ab])?(\d)', expand=False)
    Out[81]:
         0  1
    0    a  1
    1    b  2
    2  NaN  3

can also be used. Note that any capture group names in the regular expression will be used for column names; otherwise capture group numbers will be used.

Extracting a regular expression with one group returns a DataFrame with one column if expand=True.

    In [82]: pd.Series(['a1', 'b2', 'c3']).str.extract(r'[ab](\d)', expand=True)
    Out[82]:
         0
    0    1
    1    2
    2  NaN

It returns a Series if expand=False.

    In [83]: pd.Series(['a1', 'b2', 'c3']).str.extract(r'[ab](\d)', expand=False)
    Out[83]:
    0      1
    1      2
    2    NaN
    dtype: object

Calling on an Index with a regex with exactly one capture group returns a DataFrame with one column if expand=True.

    In [84]: s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"])

    In [85]: s
    Out[85]:
    A11    a1
    B22    b2
    C33    c3
    dtype: object

    In [86]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
    Out[86]:
      letter
    0      A
    1      B
    2      C

It returns an Index if expand=False.

    In [87]: s.index.str.extract("(?P<letter>[a-zA-Z])", expand=False)
    Out[87]: Index(['A', 'B', 'C'], dtype='object', name='letter')

Calling on an Index with a regex with more than one capture group returns a DataFrame if expand=True.

    In [88]: s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=True)
    Out[88]:
      letter   1
    0      A  11
    1      B  22
    2      C  33

It raises ValueError if expand=False.

    >>> s.index.str.extract("(?P<letter>[a-zA-Z])([0-9]+)", expand=False)
    ValueError: only one regex group is supported with Index

The table below summarizes the behavior of extract(expand=False) (input subject in first column, number of groups in regex in first row):

              1 group    >1 group
    Index     Index      ValueError
    Series    Series     DataFrame

Extract all matches in each subject (extractall)

New in version 0.18.0.

Unlike extract (which returns only the first match),

    In [89]: s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])

    In [90]: s
    Out[90]:
    A    a1a2
    B      b1
    C      c1
    dtype: object

    In [91]: two_groups = '(?P<letter>[a-z])(?P<digit>[0-9])'

    In [92]: s.str.extract(two_groups, expand=True)
    Out[92]:
      letter digit
    A      a     1
    B      b     1
    C      c     1

the extractall method returns every match. The result of extractall is always a DataFrame with a MultiIndex on its rows. The last level of the MultiIndex is named match and indicates the order in the subject.

    In [93]: s.str.extractall(two_groups)
    Out[93]:
            letter digit
      match
    A 0          a     1
      1          a     2
    B 0          b     1
    C 0          c     1

When each subject string in the Series has exactly one match,

    In [94]: s = pd.Series(['a3', 'b3', 'c2'])

    In [95]: s
    Out[95]:
    0    a3
    1    b3
    2    c2
    dtype: object

then extractall(pat).xs(0, level='match') gives the same result as extract(pat).

    In [96]: extract_result = s.str.extract(two_groups, expand=True)

    In [97]: extract_result
    Out[97]:
      letter digit
    0      a     3
    1      b     3
    2      c     2

    In [98]: extractall_result = s.str.extractall(two_groups)

    In [99]: extractall_result
    Out[99]:
            letter digit
      match
    0 0          a     3
    1 0          b     3
    2 0          c     2

    In [100]: extractall_result.xs(0, level="match")
    Out[100]:
      letter digit
    0      a     3
    1      b     3
    2      c     2

Index also supports .str.extractall. It returns a DataFrame which has the same result as a Series.str.extractall with a default index (starts from 0).

New in version 0.19.0.

    In [101]: pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
    Out[101]:
            letter digit
      match
    0 0          a     1
      1          a     2
    1 0          b     1
    2 0          c     1

    In [102]: pd.Series(["a1a2", "b1", "c1"]).str.extractall(two_groups)
    Out[102]:
            letter digit
      match
    0 0          a     1
      1          a     2
    1 0          b     1
    2 0          c     1

Testing for strings that match or contain a pattern

You can check whether elements contain a pattern:

    In [103]: pattern = r'[0-9][a-z]'

    In [104]: pd.Series(['1', '2', '3a', '3b', '03c']).str.contains(pattern)
    Out[104]:
    0    False
    1    False
    2     True
    3     True
    4     True
    dtype: bool

Or whether elements match a pattern:

    In [105]: pd.Series(['1', '2', '3a', '3b', '03c']).str.match(pattern)
    Out[105]:
    0    False
    1    False
    2     True
    3     True
    4    False
    dtype: bool

The distinction between match and contains is strictness: match relies on strict re.match, while contains relies on re.search.
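
The same distinction can be reproduced with the scalar re module, which these methods wrap; a quick sketch using the pattern from above:

    import re

    pattern = r'[0-9][a-z]'

    # re.match anchors at the start of the string; re.search scans the whole string.
    print(bool(re.match(pattern, '03c')))   # False: '0' is followed by '3', not [a-z]
    print(bool(re.search(pattern, '03c')))  # True: '3c' matches at position 1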

Methods like match, contains, startswith, and endswith take an extra na argument so missing values can be considered True or False:

    In [106]: s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

    In [107]: s4.str.contains('A', na=False)
    Out[107]:
    0     True
    1    False
    2    False
    3     True
    4    False
    5    False
    6     True
    7    False
    8    False
    dtype: bool

Creating indicator variables

You can extract dummy variables from string columns. For example, if they are separated by a '|':

    In [108]: s = pd.Series(['a', 'a|b', np.nan, 'a|c'])

    In [109]: s.str.get_dummies(sep='|')
    Out[109]:
       a  b  c
    0  1  0  0
    1  1  1  0
    2  0  0  0
    3  1  0  1

String Index also supports get_dummies which returns a MultiIndex.

New in version 0.18.1.

    In [110]: idx = pd.Index(['a', 'a|b', np.nan, 'a|c'])

    In [111]: idx.str.get_dummies(sep='|')
    Out[111]:
    MultiIndex([(1, 0, 0),
                (1, 1, 0),
                (0, 0, 0),
                (1, 0, 1)],
               names=['a', 'b', 'c'])

See also get_dummies().

Method summary

Method            Description
cat()             Concatenate strings
split()           Split strings on delimiter
rsplit()          Split strings on delimiter working from the end of the string
get()             Index into each element (retrieve i-th element)
join()            Join strings in each element of the Series with passed separator
get_dummies()     Split strings on the delimiter returning DataFrame of dummy variables
contains()        Return boolean array if each string contains pattern/regex
replace()         Replace occurrences of pattern/regex/string with some other string or the return value of a callable given the occurrence
repeat()          Duplicate values (s.str.repeat(3) equivalent to x * 3)
pad()             Add whitespace to left, right, or both sides of strings
center()          Equivalent to str.center
ljust()           Equivalent to str.ljust
rjust()           Equivalent to str.rjust
zfill()           Equivalent to str.zfill
wrap()            Split long strings into lines with length less than a given width
slice()           Slice each string in the Series
slice_replace()   Replace slice in each string with passed value
count()           Count occurrences of pattern
startswith()      Equivalent to str.startswith(pat) for each element
endswith()        Equivalent to str.endswith(pat) for each element
findall()         Compute list of all occurrences of pattern/regex for each string
match()           Call re.match on each element, returning matched groups as list
extract()         Call re.search on each element, returning DataFrame with one row for each element and one column for each regex capture group
extractall()      Call re.findall on each element, returning DataFrame with one row for each match and one column for each regex capture group
len()             Compute string lengths
strip()           Equivalent to str.strip
rstrip()          Equivalent to str.rstrip
lstrip()          Equivalent to str.lstrip
partition()       Equivalent to str.partition
rpartition()      Equivalent to str.rpartition
lower()           Equivalent to str.lower
casefold()        Equivalent to str.casefold
upper()           Equivalent to str.upper
find()            Equivalent to str.find
rfind()           Equivalent to str.rfind
index()           Equivalent to str.index
rindex()          Equivalent to str.rindex
capitalize()      Equivalent to str.capitalize
swapcase()        Equivalent to str.swapcase
normalize()       Return Unicode normal form. Equivalent to unicodedata.normalize
translate()       Equivalent to str.translate
isalnum()         Equivalent to str.isalnum
isalpha()         Equivalent to str.isalpha
isdigit()         Equivalent to str.isdigit
isspace()         Equivalent to str.isspace
islower()         Equivalent to str.islower
isupper()         Equivalent to str.isupper
istitle()         Equivalent to str.istitle
isnumeric()       Equivalent to str.isnumeric
isdecimal()       Equivalent to str.isdecimal
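
A few of the summarized methods in action; a short sketch with illustrative data:

    import pandas as pd

    s = pd.Series(['ab', 'cd'])

    print(s.str.pad(4, side='both', fillchar='*'))  # '*ab*', '*cd*'
    print(s.str.repeat(3))                          # 'ababab', 'cdcdcd'
    print(s.str.slice_replace(0, 1, 'X'))           # 'Xb', 'Xd'
    print(s.str.zfill(4))                           # '00ab', '00cd'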