Cookbook

This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation.

Adding interesting links and/or inline examples to this section is a great First Pull Request.

Simplified, condensed, new-user friendly, in-line examples have been inserted where possible to augment the Stack-Overflow and GitHub links. Many of the links contain expanded information, above what the in-line examples offer.

pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for newer users.

These examples are written for Python 3. Minor tweaks might be necessary for earlier Python versions.

Idioms

These are some neat pandas idioms

if-then/if-then-else on one column, and assignment to one or more other columns:

In [1]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ...:                    'BBB': [10, 20, 30, 40],
   ...:                    'CCC': [100, 50, -30, -50]})
   ...:

In [2]: df
Out[2]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

if-then…

An if-then on one column

In [3]: df.loc[df.AAA >= 5, 'BBB'] = -1

In [4]: df
Out[4]:
   AAA  BBB  CCC
0    4   10  100
1    5   -1   50
2    6   -1  -30
3    7   -1  -50

An if-then with assignment to 2 columns:

In [5]: df.loc[df.AAA >= 5, ['BBB', 'CCC']] = 555

In [6]: df
Out[6]:
   AAA  BBB  CCC
0    4   10  100
1    5  555  555
2    6  555  555
3    7  555  555

Add another line with different logic, to do the -else

In [7]: df.loc[df.AAA < 5, ['BBB', 'CCC']] = 2000

In [8]: df
Out[8]:
   AAA   BBB   CCC
0    4  2000  2000
1    5   555   555
2    6   555   555
3    7   555   555

Or use pandas where after you’ve set up a mask

In [9]: df_mask = pd.DataFrame({'AAA': [True] * 4,
   ...:                         'BBB': [False] * 4,
   ...:                         'CCC': [True, False] * 2})
   ...:

In [10]: df.where(df_mask, -1000)
Out[10]:
   AAA   BBB   CCC
0    4 -1000  2000
1    5 -1000 -1000
2    6 -1000   555
3    7 -1000 -1000

if-then-else using NumPy's where()

In [11]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [12]: df
Out[12]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [13]: df['logic'] = np.where(df['AAA'] > 5, 'high', 'low')

In [14]: df
Out[14]:
   AAA  BBB  CCC logic
0    4   10  100   low
1    5   20   50   low
2    6   30  -30  high
3    7   40  -50  high

Splitting

Split a frame with a boolean criterion

In [15]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [16]: df
Out[16]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [17]: df[df.AAA <= 5]
Out[17]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50

In [18]: df[df.AAA > 5]
Out[18]:
   AAA  BBB  CCC
2    6   30  -30
3    7   40  -50

Building criteria

Select with multi-column criteria

In [19]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [20]: df
Out[20]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

…and (without assignment returns a Series)

In [21]: df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']
Out[21]:
0    4
1    5
Name: AAA, dtype: int64

…or (without assignment returns a Series)

In [22]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']
Out[22]:
0    4
1    5
2    6
3    7
Name: AAA, dtype: int64

…or (with assignment modifies the DataFrame).

In [23]: df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1

In [24]: df
Out[24]:
   AAA  BBB  CCC
0  0.1   10  100
1  5.0   20   50
2  0.1   30  -30
3  0.1   40  -50

Select rows with data closest to a certain value using argsort

In [25]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [26]: df
Out[26]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [27]: aValue = 43.0

In [28]: df.loc[(df.CCC - aValue).abs().argsort()]
Out[28]:
   AAA  BBB  CCC
1    5   20   50
0    4   10  100
2    6   30  -30
3    7   40  -50

Dynamically reduce a list of criteria using binary operators

In [29]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [30]: df
Out[30]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [31]: Crit1 = df.AAA <= 5.5

In [32]: Crit2 = df.BBB == 10.0

In [33]: Crit3 = df.CCC > -40.0

One could hard code:

In [34]: AllCrit = Crit1 & Crit2 & Crit3

…Or it can be done with a list of dynamically built criteria

In [35]: import functools

In [36]: CritList = [Crit1, Crit2, Crit3]

In [37]: AllCrit = functools.reduce(lambda x, y: x & y, CritList)

In [38]: df[AllCrit]
Out[38]:
   AAA  BBB  CCC
0    4   10  100

Selection

DataFrames

The indexing docs.

Using both row labels and value conditionals

In [39]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [40]: df
Out[40]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [41]: df[(df.AAA <= 6) & (df.index.isin([0, 2, 4]))]
Out[41]:
   AAA  BBB  CCC
0    4   10  100
2    6   30  -30

Use loc for label-oriented slicing and iloc for positional slicing

In [42]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]},
   ....:                   index=['foo', 'bar', 'boo', 'kar'])
   ....:

There are 2 explicit slicing methods, with a third general case

  • Positional-oriented (Python slicing style : exclusive of end)
  • Label-oriented (Non-Python slicing style : inclusive of end)
  • General (Either slicing style : depends on if the slice contains labels or positions)
In [43]: df.loc['bar':'kar']  # Label
Out[43]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50

# Generic
In [44]: df[0:3]
Out[44]:
     AAA  BBB  CCC
foo    4   10  100
bar    5   20   50
boo    6   30  -30

In [45]: df['bar':'kar']
Out[45]:
     AAA  BBB  CCC
bar    5   20   50
boo    6   30  -30
kar    7   40  -50

Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.

In [46]: data = {'AAA': [4, 5, 6, 7],
   ....:         'BBB': [10, 20, 30, 40],
   ....:         'CCC': [100, 50, -30, -50]}
   ....:

In [47]: df2 = pd.DataFrame(data=data, index=[1, 2, 3, 4])  # Note index starts at 1.

In [48]: df2.iloc[1:3]  # Position-oriented
Out[48]:
   AAA  BBB  CCC
2    5   20   50
3    6   30  -30

In [49]: df2.loc[1:3]  # Label-oriented
Out[49]:
   AAA  BBB  CCC
1    4   10  100
2    5   20   50
3    6   30  -30

Using inverse operator (~) to take the complement of a mask

In [50]: df = pd.DataFrame({'AAA': [4, 5, 6, 7],
   ....:                    'BBB': [10, 20, 30, 40],
   ....:                    'CCC': [100, 50, -30, -50]})
   ....:

In [51]: df
Out[51]:
   AAA  BBB  CCC
0    4   10  100
1    5   20   50
2    6   30  -30
3    7   40  -50

In [52]: df[~((df.AAA <= 6) & (df.index.isin([0, 2, 4])))]
Out[52]:
   AAA  BBB  CCC
1    5   20   50
3    7   40  -50

New columns

Efficiently and dynamically creating new columns using applymap

In [53]: df = pd.DataFrame({'AAA': [1, 2, 1, 3],
   ....:                    'BBB': [1, 1, 2, 2],
   ....:                    'CCC': [2, 1, 3, 1]})
   ....:

In [54]: df
Out[54]:
   AAA  BBB  CCC
0    1    1    2
1    2    1    1
2    1    2    3
3    3    2    1

In [55]: source_cols = df.columns  # Or some subset would work too

In [56]: new_cols = [str(x) + "_cat" for x in source_cols]

In [57]: categories = {1: 'Alpha', 2: 'Beta', 3: 'Charlie'}

In [58]: df[new_cols] = df[source_cols].applymap(categories.get)

In [59]: df
Out[59]:
   AAA  BBB  CCC  AAA_cat BBB_cat  CCC_cat
0    1    1    2    Alpha   Alpha     Beta
1    2    1    1     Beta   Alpha    Alpha
2    1    2    3    Alpha    Beta  Charlie
3    3    2    1  Charlie    Beta    Alpha

Keep other columns when using min() with groupby

In [60]: df = pd.DataFrame({'AAA': [1, 1, 1, 2, 2, 2, 3, 3],
   ....:                    'BBB': [2, 1, 3, 4, 5, 1, 2, 3]})
   ....:

In [61]: df
Out[61]:
   AAA  BBB
0    1    2
1    1    1
2    1    3
3    2    4
4    2    5
5    2    1
6    3    2
7    3    3

Method 1: idxmin() to get the index of the minimums

In [62]: df.loc[df.groupby("AAA")["BBB"].idxmin()]
Out[62]:
   AAA  BBB
1    1    1
5    2    1
6    3    2

Method 2: sort then take first of each

In [63]: df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Out[63]:
   AAA  BBB
0    1    1
1    2    1
2    3    2

Notice the same results, with the exception of the index.

MultiIndexing

The multiindexing docs.

Creating a MultiIndex from a labeled frame

In [64]: df = pd.DataFrame({'row': [0, 1, 2],
   ....:                    'One_X': [1.1, 1.1, 1.1],
   ....:                    'One_Y': [1.2, 1.2, 1.2],
   ....:                    'Two_X': [1.11, 1.11, 1.11],
   ....:                    'Two_Y': [1.22, 1.22, 1.22]})
   ....:

In [65]: df
Out[65]:
   row  One_X  One_Y  Two_X  Two_Y
0    0    1.1    1.2   1.11   1.22
1    1    1.1    1.2   1.11   1.22
2    2    1.1    1.2   1.11   1.22

# As Labelled Index
In [66]: df = df.set_index('row')

In [67]: df
Out[67]:
     One_X  One_Y  Two_X  Two_Y
row
0      1.1    1.2   1.11   1.22
1      1.1    1.2   1.11   1.22
2      1.1    1.2   1.11   1.22

# With Hierarchical Columns
In [68]: df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_'))
   ....:                                         for c in df.columns])
   ....:

In [69]: df
Out[69]:
     One        Two
       X    Y     X     Y
row
0    1.1  1.2  1.11  1.22
1    1.1  1.2  1.11  1.22
2    1.1  1.2  1.11  1.22

# Now stack & Reset
In [70]: df = df.stack(0).reset_index(1)

In [71]: df
Out[71]:
    level_1     X     Y
row
0       One  1.10  1.20
0       Two  1.11  1.22
1       One  1.10  1.20
1       Two  1.11  1.22
2       One  1.10  1.20
2       Two  1.11  1.22

# And fix the labels (Notice the label 'level_1' got added automatically)
In [72]: df.columns = ['Sample', 'All_X', 'All_Y']

In [73]: df
Out[73]:
    Sample  All_X  All_Y
row
0      One   1.10   1.20
0      Two   1.11   1.22
1      One   1.10   1.20
1      Two   1.11   1.22
2      One   1.10   1.20
2      Two   1.11   1.22

Arithmetic

Performing arithmetic with a MultiIndex that needs broadcasting

In [74]: cols = pd.MultiIndex.from_tuples([(x, y) for x in ['A', 'B', 'C']
   ....:                                   for y in ['O', 'I']])
   ....:

In [75]: df = pd.DataFrame(np.random.randn(2, 6), index=['n', 'm'], columns=cols)

In [76]: df
Out[76]:
          A                   B                   C
          O         I         O         I         O         I
n  0.469112 -0.282863 -1.509059 -1.135632  1.212112 -0.173215
m  0.119209 -1.044236 -0.861849 -2.104569 -0.494929  1.071804

In [77]: df = df.div(df['C'], level=1)

In [78]: df
Out[78]:
          A                   B              C
          O         I         O         I    O    I
n  0.387021  1.633022 -1.244983  6.556214  1.0  1.0
m -0.240860 -0.974279  1.741358 -1.963577  1.0  1.0

Slicing

Slicing a MultiIndex with xs

In [79]: coords = [('AA', 'one'), ('AA', 'six'), ('BB', 'one'), ('BB', 'two'),
   ....:           ('BB', 'six')]
   ....:

In [80]: index = pd.MultiIndex.from_tuples(coords)

In [81]: df = pd.DataFrame([11, 22, 33, 44, 55], index, ['MyData'])

In [82]: df
Out[82]:
        MyData
AA one      11
   six      22
BB one      33
   two      44
   six      55

To take the cross section of the 1st level and 1st axis of the index:

# Note : level and axis are optional, and default to zero
In [83]: df.xs('BB', level=0, axis=0)
Out[83]:
     MyData
one      33
two      44
six      55

…and now the 2nd level of the 1st axis.

In [84]: df.xs('six', level=1, axis=0)
Out[84]:
    MyData
AA      22
BB      55

Slicing a MultiIndex with xs, method #2

In [85]: import itertools

In [86]: index = list(itertools.product(['Ada', 'Quinn', 'Violet'],
   ....:                                ['Comp', 'Math', 'Sci']))
   ....:

In [87]: headr = list(itertools.product(['Exams', 'Labs'], ['I', 'II']))

In [88]: indx = pd.MultiIndex.from_tuples(index, names=['Student', 'Course'])

In [89]: cols = pd.MultiIndex.from_tuples(headr)  # Notice these are un-named

In [90]: data = [[70 + x + y + (x * y) % 3 for x in range(4)] for y in range(9)]

In [91]: df = pd.DataFrame(data, indx, cols)

In [92]: df
Out[92]:
               Exams     Labs
                   I  II    I  II
Student Course
Ada     Comp      70  71   72  73
        Math      71  73   75  74
        Sci       72  75   75  75
Quinn   Comp      73  74   75  76
        Math      74  76   78  77
        Sci       75  78   78  78
Violet  Comp      76  77   78  79
        Math      77  79   81  80
        Sci       78  81   81  81

In [93]: All = slice(None)

In [94]: df.loc['Violet']
Out[94]:
       Exams     Labs
           I  II    I  II
Course
Comp      76  77   78  79
Math      77  79   81  80
Sci       78  81   81  81

In [95]: df.loc[(All, 'Math'), All]
Out[95]:
               Exams     Labs
                   I  II    I  II
Student Course
Ada     Math      71  73   75  74
Quinn   Math      74  76   78  77
Violet  Math      77  79   81  80

In [96]: df.loc[(slice('Ada', 'Quinn'), 'Math'), All]
Out[96]:
               Exams     Labs
                   I  II    I  II
Student Course
Ada     Math      71  73   75  74
Quinn   Math      74  76   78  77

In [97]: df.loc[(All, 'Math'), ('Exams')]
Out[97]:
                I  II
Student Course
Ada     Math   71  73
Quinn   Math   74  76
Violet  Math   77  79

In [98]: df.loc[(All, 'Math'), (All, 'II')]
Out[98]:
               Exams Labs
                  II   II
Student Course
Ada     Math      73   74
Quinn   Math      76   77
Violet  Math      79   80

Setting portions of a MultiIndex with xs

Sorting

Sort by specific column or an ordered list of columns, with a MultiIndex

In [99]: df.sort_values(by=('Labs', 'II'), ascending=False)
Out[99]:
               Exams     Labs
                   I  II    I  II
Student Course
Violet  Sci       78  81   81  81
        Math      77  79   81  80
        Comp      76  77   78  79
Quinn   Sci       75  78   78  78
        Math      74  76   78  77
        Comp      73  74   75  76
Ada     Sci       72  75   75  75
        Math      71  73   75  74
        Comp      70  71   72  73

Partial selection, the need for sortedness; see the sketch below.
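
A minimal sketch of why sortedness matters (my illustration, not the linked post): partial label slicing on a MultiIndex requires a lexsorted index, otherwise pandas raises an UnsortedIndexError:

midx = pd.MultiIndex.from_tuples([('b', 2), ('b', 1), ('a', 2), ('a', 1)])
s = pd.Series(range(4), index=midx)

# s.loc['a':'b'] on the unsorted index raises UnsortedIndexError

s = s.sort_index()  # lexsort the index first
s.loc['a':'b']      # now partial slicing works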

Levels

Prepending a level to a multiindex
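
One way to do this (a sketch, not necessarily the linked recipe) is pd.concat with the keys argument, which prepends a new outermost index level:

df = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])

# 'top' becomes the new outermost index level: ('top', 'x'), ('top', 'y')
pd.concat([df], keys=['top'])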

Flatten Hierarchical columns
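
A common approach (a sketch): join each tuple of column labels into a single flat string:

df = pd.DataFrame(np.random.randn(2, 4),
                  columns=pd.MultiIndex.from_product([['One', 'Two'], ['X', 'Y']]))

# produces flat labels: One_X, One_Y, Two_X, Two_Y
df.columns = ['_'.join(col) for col in df.columns]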

Missing data

The missing data docs.

Fill forward a reversed timeseries

In [100]: df = pd.DataFrame(np.random.randn(6, 1),
   .....:                   index=pd.date_range('2013-08-01', periods=6, freq='B'),
   .....:                   columns=list('A'))
   .....:

In [101]: df.loc[df.index[3], 'A'] = np.nan

In [102]: df
Out[102]:
                   A
2013-08-01  0.721555
2013-08-02 -0.706771
2013-08-05 -1.039575
2013-08-06       NaN
2013-08-07 -0.424972
2013-08-08  0.567020

In [103]: df.reindex(df.index[::-1]).ffill()
Out[103]:
                   A
2013-08-08  0.567020
2013-08-07 -0.424972
2013-08-06 -0.424972
2013-08-05 -1.039575
2013-08-02 -0.706771
2013-08-01  0.721555

cumsum reset at NaN values
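
The gist of the recipe (a sketch; the linked answer has variants): forward-fill the cumsum, then subtract the running total captured at each NaN so the sum restarts there:

s = pd.Series([1, 2, np.nan, 3, np.nan, 4])

cumsum = s.cumsum().ffill()
# running total as of each NaN position, carried forward
offset = cumsum.where(s.isna()).ffill().fillna(0)
cumsum - offset  # 1, 3, 0, 3, 0, 4 -- restarts after each NaN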

Replace

Using replace with backrefs
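
For example (a sketch, assuming string data): Series.replace with regex=True supports backreferences such as \1 in the replacement string:

s = pd.Series(['A-1', 'B-2', 'C-3'])

# swap the two capture groups: '1-A', '2-B', '3-C'
s.replace(r'(\w)-(\d)', r'\2-\1', regex=True)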

Grouping

The grouping docs.

Basic grouping with apply

Unlike agg, apply's callable is passed a sub-DataFrame, which gives you access to all the columns

In [104]: df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
   .....:                    'size': list('SSMMMLL'),
   .....:                    'weight': [8, 10, 11, 1, 20, 12, 12],
   .....:                    'adult': [False] * 5 + [True] * 2})
   .....:

In [105]: df
Out[105]:
  animal size  weight  adult
0    cat    S       8  False
1    dog    S      10  False
2    cat    M      11  False
3   fish    M       1  False
4    dog    M      20  False
5    cat    L      12   True
6    cat    L      12   True

# List the size of the animals with the highest weight.
In [106]: df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
Out[106]:
animal
cat     L
dog     M
fish    M
dtype: object

Using get_group

In [107]: gb = df.groupby(['animal'])

In [108]: gb.get_group('cat')
Out[108]:
  animal size  weight  adult
0    cat    S       8  False
2    cat    M      11  False
5    cat    L      12   True
6    cat    L      12   True

Apply to different items in a group

In [109]: def GrowUp(x):
   .....:     avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
   .....:     avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
   .....:     avg_weight += sum(x[x['size'] == 'L'].weight)
   .....:     avg_weight /= len(x)
   .....:     return pd.Series(['L', avg_weight, True],
   .....:                      index=['size', 'weight', 'adult'])
   .....:

In [110]: expected_df = gb.apply(GrowUp)

In [111]: expected_df
Out[111]:
       size   weight  adult
animal
cat       L  12.4375   True
dog       L  20.0000   True
fish      L   1.2500   True

Expanding apply

In [112]: S = pd.Series([i / 100.0 for i in range(1, 11)])

In [113]: def cum_ret(x, y):
   .....:     return x * (1 + y)
   .....:

In [114]: def red(x):
   .....:     return functools.reduce(cum_ret, x, 1.0)
   .....:

In [115]: S.expanding().apply(red, raw=True)
Out[115]:
0    1.010000
1    1.030200
2    1.061106
3    1.103550
4    1.158728
5    1.228251
6    1.314229
7    1.419367
8    1.547110
9    1.701821
dtype: float64

Replacing some values with mean of the rest of a group

In [116]: df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, -1, 1, 2]})

In [117]: gb = df.groupby('A')

In [118]: def replace(g):
   .....:     mask = g < 0
   .....:     return g.where(mask, g[~mask].mean())
   .....:

In [119]: gb.transform(replace)
Out[119]:
     B
0  1.0
1 -1.0
2  1.5
3  1.5

Sort groups by aggregated data

In [120]: df = pd.DataFrame({'code': ['foo', 'bar', 'baz'] * 2,
   .....:                    'data': [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
   .....:                    'flag': [False, True] * 3})
   .....:

In [121]: code_groups = df.groupby('code')

In [122]: agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data')

In [123]: sorted_df = df.loc[agg_n_sort_order.index]

In [124]: sorted_df
Out[124]:
  code  data   flag
1  bar -0.21   True
4  bar -0.59  False
0  foo  0.16  False
3  foo  0.45   True
2  baz  0.33  False
5  baz  0.62   True

Create multiple aggregated columns

In [125]: rng = pd.date_range(start="2014-10-07", periods=10, freq='2min')

In [126]: ts = pd.Series(data=list(range(10)), index=rng)

In [127]: def MyCust(x):
   .....:     if len(x) > 2:
   .....:         return x[1] * 1.234
   .....:     return pd.NaT
   .....:

In [128]: mhc = {'Mean': np.mean, 'Max': np.max, 'Custom': MyCust}

In [129]: ts.resample("5min").apply(mhc)
Out[129]:
Mean    2014-10-07 00:00:00        1
        2014-10-07 00:05:00      3.5
        2014-10-07 00:10:00        6
        2014-10-07 00:15:00      8.5
Max     2014-10-07 00:00:00        2
        2014-10-07 00:05:00        4
        2014-10-07 00:10:00        7
        2014-10-07 00:15:00        9
Custom  2014-10-07 00:00:00    1.234
        2014-10-07 00:05:00      NaT
        2014-10-07 00:10:00    7.404
        2014-10-07 00:15:00      NaT
dtype: object

In [130]: ts
Out[130]:
2014-10-07 00:00:00    0
2014-10-07 00:02:00    1
2014-10-07 00:04:00    2
2014-10-07 00:06:00    3
2014-10-07 00:08:00    4
2014-10-07 00:10:00    5
2014-10-07 00:12:00    6
2014-10-07 00:14:00    7
2014-10-07 00:16:00    8
2014-10-07 00:18:00    9
Freq: 2T, dtype: int64

Create a value counts column and reassign back to the DataFrame

In [131]: df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(),
   .....:                    'Value': [100, 150, 50, 50]})
   .....:

In [132]: df
Out[132]:
  Color  Value
0   Red    100
1   Red    150
2   Red     50
3  Blue     50

In [133]: df['Counts'] = df.groupby(['Color']).transform(len)

In [134]: df
Out[134]:
  Color  Value  Counts
0   Red    100       3
1   Red    150       3
2   Red     50       3
3  Blue     50       1

Shift groups of the values in a column based on the index

In [135]: df = pd.DataFrame({'line_race': [10, 10, 8, 10, 10, 8],
   .....:                    'beyer': [99, 102, 103, 103, 88, 100]},
   .....:                   index=['Last Gunfighter', 'Last Gunfighter',
   .....:                          'Last Gunfighter', 'Paynter', 'Paynter',
   .....:                          'Paynter'])
   .....:

In [136]: df
Out[136]:
                 line_race  beyer
Last Gunfighter         10     99
Last Gunfighter         10    102
Last Gunfighter          8    103
Paynter                 10    103
Paynter                 10     88
Paynter                  8    100

In [137]: df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)

In [138]: df
Out[138]:
                 line_race  beyer  beyer_shifted
Last Gunfighter         10     99            NaN
Last Gunfighter         10    102           99.0
Last Gunfighter          8    103          102.0
Paynter                 10    103            NaN
Paynter                 10     88          103.0
Paynter                  8    100           88.0

Select row with maximum value from each group

In [139]: df = pd.DataFrame({'host': ['other', 'other', 'that', 'this', 'this'],
   .....:                    'service': ['mail', 'web', 'mail', 'mail', 'web'],
   .....:                    'no': [1, 2, 1, 2, 1]}).set_index(['host', 'service'])
   .....:

In [140]: mask = df.groupby(level=0).agg('idxmax')

In [141]: df_count = df.loc[mask['no']].reset_index()

In [142]: df_count
Out[142]:
    host service  no
0  other     web   2
1   that    mail   1
2   this    mail   2

Grouping like Python’s itertools.groupby

In [143]: df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=['A'])

In [144]: df.A.groupby((df.A != df.A.shift()).cumsum()).groups
Out[144]:
{1: Int64Index([0], dtype='int64'),
 2: Int64Index([1], dtype='int64'),
 3: Int64Index([2], dtype='int64'),
 4: Int64Index([3, 4, 5], dtype='int64'),
 5: Int64Index([6], dtype='int64'),
 6: Int64Index([7, 8], dtype='int64')}

In [145]: df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
Out[145]:
0    0
1    1
2    0
3    1
4    2
5    3
6    0
7    1
8    2
Name: A, dtype: int64

Expanding data

Alignment and to-date

Rolling Computation window based on values instead of counts

Rolling Mean by Time Interval
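
As a quick illustration of the time-based case (my sketch, not the linked post): passing an offset string to rolling sizes the window by the timestamps instead of a fixed row count:

df = pd.DataFrame({'x': [1, 2, 3, 4]},
                  index=pd.to_datetime(['2020-01-01', '2020-01-02',
                                        '2020-01-04', '2020-01-05']))

# each window covers the preceding 2 calendar days, however many rows that is
df.rolling('2D').mean()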

Splitting

Splitting a frame

Create a list of DataFrames, split using a delineation based on logic included in the rows.

In [146]: df = pd.DataFrame(data={'Case': ['A', 'A', 'A', 'B', 'A', 'A', 'B', 'A',
   .....:                                  'A'],
   .....:                         'Data': np.random.randn(9)})
   .....:

In [147]: dfs = list(zip(*df.groupby((1 * (df['Case'] == 'B')).cumsum()
   .....:                            .rolling(window=3, min_periods=1).median())))[-1]
   .....:

In [148]: dfs[0]
Out[148]:
  Case      Data
0    A  0.276232
1    A -1.087401
2    A -0.673690
3    B  0.113648

In [149]: dfs[1]
Out[149]:
  Case      Data
4    A -1.478427
5    A  0.524988
6    B  0.404705

In [150]: dfs[2]
Out[150]:
  Case      Data
7    A  0.577046
8    A -1.715002

Pivot

The Pivot docs.

Partial sums and subtotals

In [151]: df = pd.DataFrame(data={'Province': ['ON', 'QC', 'BC', 'AL', 'AL', 'MN', 'ON'],
   .....:                         'City': ['Toronto', 'Montreal', 'Vancouver',
   .....:                                  'Calgary', 'Edmonton', 'Winnipeg',
   .....:                                  'Windsor'],
   .....:                         'Sales': [13, 6, 16, 8, 4, 3, 1]})
   .....:

In [152]: table = pd.pivot_table(df, values=['Sales'], index=['Province'],
   .....:                        columns=['City'], aggfunc=np.sum, margins=True)
   .....:

In [153]: table.stack('City')
Out[153]:
                    Sales
Province City
AL       All         12.0
         Calgary      8.0
         Edmonton     4.0
BC       All         16.0
         Vancouver   16.0
...                   ...
All      Montreal     6.0
         Toronto     13.0
         Vancouver   16.0
         Windsor      1.0
         Winnipeg     3.0

[20 rows x 1 columns]

Frequency table like plyr in R

In [154]: grades = [48, 99, 75, 80, 42, 80, 72, 68, 36, 78]

In [155]: df = pd.DataFrame({'ID': ["x%d" % r for r in range(10)],
   .....:                    'Gender': ['F', 'M', 'F', 'M', 'F',
   .....:                               'M', 'F', 'M', 'M', 'M'],
   .....:                    'ExamYear': ['2007', '2007', '2007', '2008', '2008',
   .....:                                 '2008', '2008', '2009', '2009', '2009'],
   .....:                    'Class': ['algebra', 'stats', 'bio', 'algebra',
   .....:                              'algebra', 'stats', 'stats', 'algebra',
   .....:                              'bio', 'bio'],
   .....:                    'Participated': ['yes', 'yes', 'yes', 'yes', 'no',
   .....:                                     'yes', 'yes', 'yes', 'yes', 'yes'],
   .....:                    'Passed': ['yes' if x > 50 else 'no' for x in grades],
   .....:                    'Employed': [True, True, True, False,
   .....:                                 False, False, False, True, True, False],
   .....:                    'Grade': grades})
   .....:

In [156]: df.groupby('ExamYear').agg({'Participated': lambda x: x.value_counts()['yes'],
   .....:                             'Passed': lambda x: sum(x == 'yes'),
   .....:                             'Employed': lambda x: sum(x),
   .....:                             'Grade': lambda x: sum(x) / len(x)})
   .....:
Out[156]:
          Participated  Passed  Employed      Grade
ExamYear
2007                 3       2         3  74.000000
2008                 3       3         0  68.500000
2009                 3       2         2  60.666667

Plot pandas DataFrame with year over year data

To create year and month cross tabulation:

In [157]: df = pd.DataFrame({'value': np.random.randn(36)},
   .....:                   index=pd.date_range('2011-01-01', freq='M', periods=36))
   .....:

In [158]: pd.pivot_table(df, index=df.index.month, columns=df.index.year,
   .....:                values='value', aggfunc='sum')
   .....:
Out[158]:
        2011      2012      2013
1  -1.039268 -0.968914  2.565646
2  -0.370647 -1.294524  1.431256
3  -1.157892  0.413738  1.340309
4  -1.344312  0.276662 -1.170299
5   0.844885 -0.472035 -0.226169
6   1.075770 -0.013960  0.410835
7  -0.109050 -0.362543  0.813850
8   1.643563 -0.006154  0.132003
9  -1.469388 -0.923061 -0.827317
10  0.357021  0.895717 -0.076467
11 -0.674600  0.805244 -1.187678
12 -1.776904 -1.206412  1.130127

Apply

Rolling apply to organize - Turning embedded lists into a MultiIndex frame

In [159]: df = pd.DataFrame(data={'A': [[2, 4, 8, 16], [100, 200], [10, 20, 30]],
   .....:                         'B': [['a', 'b', 'c'], ['jj', 'kk'], ['ccc']]},
   .....:                   index=['I', 'II', 'III'])
   .....:

In [160]: def SeriesFromSubList(aList):
   .....:     return pd.Series(aList)
   .....:

In [161]: df_orgz = pd.concat({ind: row.apply(SeriesFromSubList)
   .....:                      for ind, row in df.iterrows()})
   .....:

In [162]: df_orgz
Out[162]:
         0    1    2     3
I   A    2    4    8  16.0
    B    a    b    c   NaN
II  A  100  200  NaN   NaN
    B   jj   kk  NaN   NaN
III A   10   20   30   NaN
    B  ccc  NaN  NaN   NaN

Rolling apply with a DataFrame returning a Series

Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned

In [163]: df = pd.DataFrame(data=np.random.randn(2000, 2) / 10000,
   .....:                   index=pd.date_range('2001-01-01', periods=2000),
   .....:                   columns=['A', 'B'])
   .....:

In [164]: df
Out[164]:
                   A         B
2001-01-01 -0.000144 -0.000141
2001-01-02  0.000161  0.000102
2001-01-03  0.000057  0.000088
2001-01-04 -0.000221  0.000097
2001-01-05 -0.000201 -0.000041
...              ...       ...
2006-06-19  0.000040 -0.000235
2006-06-20 -0.000123 -0.000021
2006-06-21 -0.000113  0.000114
2006-06-22  0.000136  0.000109
2006-06-23  0.000027  0.000030

[2000 rows x 2 columns]

In [165]: def gm(df, const):
   .....:     v = ((((df.A + df.B) + 1).cumprod()) - 1) * const
   .....:     return v.iloc[-1]
   .....:

In [166]: s = pd.Series({df.index[i]: gm(df.iloc[i:min(i + 51, len(df) - 1)], 5)
   .....:                for i in range(len(df) - 50)})
   .....:

In [167]: s
Out[167]:
2001-01-01    0.000930
2001-01-02    0.002615
2001-01-03    0.001281
2001-01-04    0.001117
2001-01-05    0.002772
                ...
2006-04-30    0.003296
2006-05-01    0.002629
2006-05-02    0.002081
2006-05-03    0.004247
2006-05-04    0.003928
Length: 1950, dtype: float64

Rolling apply with a DataFrame returning a Scalar

Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)

In [168]: rng = pd.date_range(start='2014-01-01', periods=100)

In [169]: df = pd.DataFrame({'Open': np.random.randn(len(rng)),
   .....:                    'Close': np.random.randn(len(rng)),
   .....:                    'Volume': np.random.randint(100, 2000, len(rng))},
   .....:                   index=rng)
   .....:

In [170]: df
Out[170]:
                Open     Close  Volume
2014-01-01 -1.611353 -0.492885    1219
2014-01-02 -3.000951  0.445794    1054
2014-01-03 -0.138359 -0.076081    1381
2014-01-04  0.301568  1.198259    1253
2014-01-05  0.276381 -0.669831    1728
...              ...       ...     ...
2014-04-06 -0.040338  0.937843    1188
2014-04-07  0.359661 -0.285908    1864
2014-04-08  0.060978  1.714814     941
2014-04-09  1.759055 -0.455942    1065
2014-04-10  0.138185 -1.147008    1453

[100 rows x 3 columns]

In [171]: def vwap(bars):
   .....:     return ((bars.Close * bars.Volume).sum() / bars.Volume.sum())
   .....:

In [172]: window = 5

In [173]: s = pd.concat([(pd.Series(vwap(df.iloc[i:i + window]),
   .....:                           index=[df.index[i + window]]))
   .....:                for i in range(len(df) - window)])
   .....:

In [174]: s.round(2)
Out[174]:
2014-01-06    0.02
2014-01-07    0.11
2014-01-08    0.10
2014-01-09    0.07
2014-01-10   -0.29
              ...
2014-04-06   -0.63
2014-04-07   -0.02
2014-04-08   -0.03
2014-04-09    0.34
2014-04-10    0.29
Length: 95, dtype: float64

Timeseries

Between times

Using indexer between time
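
For example (a sketch, assuming a DatetimeIndex): between_time keeps the rows whose time of day falls within the given range:

rng = pd.date_range('2020-01-01', periods=24, freq='H')
ts = pd.Series(range(24), index=rng)

# rows whose time of day is between 09:00 and 11:00, inclusive
ts.between_time('09:00', '11:00')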

Constructing a datetime range that excludes weekends and includes only certain times

Vectorized Lookup

Aggregation and plotting time series

Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series. How to rearrange a Python pandas DataFrame?

Dealing with duplicates when reindexing a timeseries to a specified frequency

Calculate the first day of the month for each entry in a DatetimeIndex

In [175]: dates = pd.date_range('2000-01-01', periods=5)

In [176]: dates.to_period(freq='M').to_timestamp()
Out[176]:
DatetimeIndex(['2000-01-01', '2000-01-01', '2000-01-01', '2000-01-01',
               '2000-01-01'],
              dtype='datetime64[ns]', freq=None)

Resampling

The Resample docs.

Using Grouper instead of TimeGrouper for time grouping of values
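
A quick illustration (a sketch, not the linked post): pd.Grouper groups by a time frequency, and can be combined with ordinary column keys:

df = pd.DataFrame({'key': ['a', 'b', 'a', 'b'],
                   'value': [1, 2, 3, 4]},
                  index=pd.date_range('2020-01-01', periods=4, freq='16D'))

# group by month (taken from the index) and by 'key' at the same time
df.groupby([pd.Grouper(freq='M'), 'key']).sum()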

Time grouping with some missing values

Valid frequency arguments to Grouper

Grouping using a MultiIndex

Using TimeGrouper and another grouping to create subgroups, then apply a custom function

Resampling with custom periods

Resample intraday frame without adding new days

Resample minute data

Resample with groupby

Merge

The Concat docs. The Join docs.

Append two dataframes with overlapping index (emulate R rbind)

In [177]: rng = pd.date_range('2000-01-01', periods=6)

In [178]: df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])

In [179]: df2 = df1.copy()

Depending on df construction, ignore_index may be needed

In [180]: df = df1.append(df2, ignore_index=True)

In [181]: df
Out[181]:
           A         B         C
0  -0.870117 -0.479265 -0.790855
1   0.144817  1.726395 -0.464535
2  -0.821906  1.597605  0.187307
3  -0.128342 -1.511638 -0.289858
4   0.399194 -1.430030 -0.639760
5   1.115116 -2.012600  1.810662
6  -0.870117 -0.479265 -0.790855
7   0.144817  1.726395 -0.464535
8  -0.821906  1.597605  0.187307
9  -0.128342 -1.511638 -0.289858
10  0.399194 -1.430030 -0.639760
11  1.115116 -2.012600  1.810662

Self Join of a DataFrame

In [182]: df = pd.DataFrame(data={'Area': ['A'] * 5 + ['C'] * 2,
   .....:                         'Bins': [110] * 2 + [160] * 3 + [40] * 2,
   .....:                         'Test_0': [0, 1, 0, 1, 2, 0, 1],
   .....:                         'Data': np.random.randn(7)})
   .....:

In [183]: df
Out[183]:
  Area  Bins  Test_0      Data
0    A   110       0 -0.433937
1    A   110       1 -0.160552
2    A   160       0  0.744434
3    A   160       1  1.754213
4    A   160       2  0.000850
5    C    40       0  0.342243
6    C    40       1  1.070599

In [184]: df['Test_1'] = df['Test_0'] - 1

In [185]: pd.merge(df, df, left_on=['Bins', 'Area', 'Test_0'],
   .....:          right_on=['Bins', 'Area', 'Test_1'],
   .....:          suffixes=('_L', '_R'))
   .....:
Out[185]:
  Area  Bins  Test_0_L    Data_L  Test_1_L  Test_0_R    Data_R  Test_1_R
0    A   110         0 -0.433937        -1         1 -0.160552         0
1    A   160         0  0.744434        -1         1  1.754213         0
2    A   160         1  1.754213         0         2  0.000850         1
3    C    40         0  0.342243        -1         1  1.070599         0

How to set the index and join

KDB like asof join
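
pandas also has a built-in as-of join (a sketch, not necessarily the linked recipe): pd.merge_asof matches each left row with the most recent right row on a sorted key:

quotes = pd.DataFrame({'time': pd.to_datetime(['10:00:01', '10:00:03']),
                       'bid': [100.0, 101.0]})
trades = pd.DataFrame({'time': pd.to_datetime(['10:00:02', '10:00:04']),
                       'qty': [75, 155]})

# each trade picks up the last quote at or before its timestamp
pd.merge_asof(trades, quotes, on='time')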

Join with a criteria based on the values

Using searchsorted to merge based on values inside a range

Plotting

The Plotting docs.

Make Matplotlib look like R

Setting x-axis major and minor labels

Plotting multiple charts in an ipython notebook

Creating a multi-line plot

Plotting a heatmap

Annotate a time-series plot

Annotate a time-series plot #2

Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter

Boxplot for each quartile of a stratifying variable

In [186]: df = pd.DataFrame(
   .....:     {'stratifying_var': np.random.uniform(0, 100, 20),
   .....:      'price': np.random.normal(100, 5, 20)})
   .....:

In [187]: df['quartiles'] = pd.qcut(
   .....:     df['stratifying_var'],
   .....:     4,
   .....:     labels=['0-25%', '25-50%', '50-75%', '75-100%'])
   .....:

In [188]: df.boxplot(column='price', by='quartiles')
Out[188]: <matplotlib.axes._subplots.AxesSubplot at 0x7f4529608e90>

[Figure: boxplot of price for each quartile of stratifying_var]

Data In/Out

Performance comparison of SQL vs HDF5

CSV

The CSV docs

read_csv in action

appending to a csv
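
For instance (a sketch): to_csv with mode='a' appends to an existing file; pass header=False so the column names are not written a second time:

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})

df.to_csv('data.csv', index=False)                          # write
df.to_csv('data.csv', mode='a', header=False, index=False)  # append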

Reading a csv chunk-by-chunk
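
The basic pattern (a sketch, assuming a file with a numeric column 'A'): pass chunksize to read_csv to get an iterator of smaller frames:

pieces = []
for chunk in pd.read_csv('data.csv', chunksize=1000):
    # filter each chunk before keeping it, so only the reduced
    # result is held in memory
    pieces.append(chunk[chunk['A'] > 0])

result = pd.concat(pieces, ignore_index=True)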

Reading only certain rows of a csv chunk-by-chunk

Reading the first few lines of a frame

Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here

Inferring dtypes from a file

Dealing with bad lines

Dealing with bad lines II

Reading CSV with Unix timestamps and converting to local timezone

Write a multi-row index CSV without writing duplicates

Reading multiple files to create a single DataFrame

The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():

In [189]: for i in range(3):
   .....:     data = pd.DataFrame(np.random.randn(10, 4))
   .....:     data.to_csv('file_{}.csv'.format(i))
   .....:

In [190]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']

In [191]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

You can use the same approach to read all files matching a pattern. Here is an example using glob:

In [192]: import glob

In [193]: import os

In [194]: files = glob.glob('file_*.csv')

In [195]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

Finally, this strategy will work with the other pd.read_*(…) functions described in the io docs.

Parsing date components in multi-columns

Parsing date components in multi-columns is faster with a format

In [196]: i = pd.date_range('20000101', periods=10000)

In [197]: df = pd.DataFrame({'year': i.year, 'month': i.month, 'day': i.day})

In [198]: df.head()
Out[198]:
   year  month  day
0  2000      1    1
1  2000      1    2
2  2000      1    3
3  2000      1    4
4  2000      1    5

In [199]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
   .....: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'],
   .....:                                           x['month'], x['day']), axis=1)
   .....: ds.head()
   .....: %timeit pd.to_datetime(ds)
   .....:
9.41 ms +- 596 us per loop (mean +- std. dev. of 7 runs, 100 loops each)
2.76 ms +- 60.8 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

Skip row between header and data

In [200]: data = """;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: ;;;;
   .....: date;Param1;Param2;Param4;Param5
   .....: ;m²;°C;m²;m
   .....: ;;;;
   .....: 01.01.1990 00:00;1;1;2;3
   .....: 01.01.1990 01:00;5;3;4;5
   .....: 01.01.1990 02:00;9;5;6;7
   .....: 01.01.1990 03:00;13;7;8;9
   .....: 01.01.1990 04:00;17;9;10;11
   .....: 01.01.1990 05:00;21;11;12;13
   .....: """
   .....:
Option 1: pass rows explicitly to skip rows
In [201]: from io import StringIO

In [202]: pd.read_csv(StringIO(data), sep=';', skiprows=[11, 12],
   .....:             index_col=0, parse_dates=True, header=10)
   .....:
Out[202]:
                     Param1  Param2  Param4  Param5
date
1990-01-01 00:00:00       1       1       2       3
1990-01-01 01:00:00       5       3       4       5
1990-01-01 02:00:00       9       5       6       7
1990-01-01 03:00:00      13       7       8       9
1990-01-01 04:00:00      17       9      10      11
1990-01-01 05:00:00      21      11      12      13
Option 2: read column names and then data
In [203]: pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
Out[203]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')

In [204]: columns = pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns

In [205]: pd.read_csv(StringIO(data), sep=';', index_col=0,
   .....:             header=12, parse_dates=True, names=columns)
   .....:
Out[205]:
                     Param1  Param2  Param4  Param5
date
1990-01-01 00:00:00       1       1       2       3
1990-01-01 01:00:00       5       3       4       5
1990-01-01 02:00:00       9       5       6       7
1990-01-01 03:00:00      13       7       8       9
1990-01-01 04:00:00      17       9      10      11
1990-01-01 05:00:00      21      11      12      13

SQL

The SQL docs

Reading from databases with SQL
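
A minimal round trip using the standard library's sqlite3 (my sketch, not the linked post):

import sqlite3

con = sqlite3.connect(':memory:')

# write a frame to a table, then query it back
pd.DataFrame({'A': [1, 2], 'B': ['x', 'y']}).to_sql('tbl', con, index=False)
pd.read_sql('SELECT * FROM tbl WHERE A > 1', con)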

Excel

The Excel docs

Reading from a filelike handle

Modifying formatting in XlsxWriter output

HTML

Reading HTML tables from a server that cannot handle the default request header

HDFStore

The HDFStores docs

Simple queries with a Timestamp Index

Managing heterogeneous data using a linked multiple table hierarchy

Merging on-disk tables with millions of rows

Avoiding inconsistencies when writing to a store from multiple processes/threads

De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from csv file and creating a store by chunks, with date parsing as well. See here

Creating a store chunk-by-chunk from a csv file

Appending to a store, while creating a unique index

Large Data work flows

Reading in a sequence of files, then providing a global unique index to a store while appending

Groupby on a HDFStore with low group density

Groupby on a HDFStore with high group density

Hierarchical queries on a HDFStore

Counting with a HDFStore

Troubleshoot HDFStore exceptions

Setting min_itemsize with strings
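
For example (a sketch; requires PyTables): min_itemsize reserves more room for string columns than the first appended chunk would imply:

store = pd.HDFStore('strings.h5')

df = pd.DataFrame({'A': ['short']})
# reserve 30 bytes for string columns so longer values can be appended later
store.append('df', df, min_itemsize={'values': 30})
store.append('df', pd.DataFrame({'A': ['a considerably longer string']}))
store.close()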

Using ptrepack to create a completely-sorted-index on a store

Storing Attributes to a group node

In [206]: df = pd.DataFrame(np.random.randn(8, 3))

In [207]: store = pd.HDFStore('test.h5')

In [208]: store.put('df', df)

# you can store an arbitrary Python object via pickle
In [209]: store.get_storer('df').attrs.my_attribute = {'A': 10}

In [210]: store.get_storer('df').attrs.my_attribute
Out[210]: {'A': 10}

Binary files

pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit machine,

#include <stdio.h>
#include <stdint.h>

typedef struct _Data
{
    int32_t count;
    double avg;
    float scale;
} Data;

int main(int argc, const char *argv[])
{
    size_t n = 10;
    Data d[n];

    for (int i = 0; i < n; ++i)
    {
        d[i].count = i;
        d[i].avg = i + 1.0;
        d[i].scale = (float) i + 2.0f;
    }

    FILE *file = fopen("binary.dat", "wb");
    fwrite(&d, sizeof(Data), n, file);
    fclose(file);

    return 0;
}

the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:

names = 'count', 'avg', 'scale'

# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
              align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))

Note

The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross platform. We recommend either HDF5 or msgpack, both of which are supported by pandas' IO facilities.

Computation

Numerical integration (sample-based) of a time series

Correlation

Often it’s useful to obtain the lower (or upper) triangular form of a correlation matrix calculated from DataFrame.corr(). This can be achieved by passing a boolean mask to where as follows:

In [211]: df = pd.DataFrame(np.random.random(size=(100, 5)))

In [212]: corr_mat = df.corr()

In [213]: mask = np.tril(np.ones_like(corr_mat, dtype=bool), k=-1)

In [214]: corr_mat.where(mask)
Out[214]:
          0         1         2         3   4
0       NaN       NaN       NaN       NaN NaN
1 -0.018923       NaN       NaN       NaN NaN
2 -0.076296 -0.012464       NaN       NaN NaN
3 -0.169941 -0.289416  0.076462       NaN NaN
4  0.064326  0.018759 -0.084140 -0.079859 NaN

The method argument within DataFrame.corr can accept a callable in addition to the named correlation types. Here we compute the distance correlation matrix for a DataFrame object.

In [215]: def distcorr(x, y):
   .....:     n = len(x)
   .....:     a = np.zeros(shape=(n, n))
   .....:     b = np.zeros(shape=(n, n))
   .....:     for i in range(n):
   .....:         for j in range(i + 1, n):
   .....:             a[i, j] = abs(x[i] - x[j])
   .....:             b[i, j] = abs(y[i] - y[j])
   .....:     a += a.T
   .....:     b += b.T
   .....:     a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
   .....:     b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
   .....:     A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
   .....:     B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
   .....:     cov_ab = np.sqrt(np.nansum(A * B)) / n
   .....:     std_a = np.sqrt(np.sqrt(np.nansum(A**2)) / n)
   .....:     std_b = np.sqrt(np.sqrt(np.nansum(B**2)) / n)
   .....:     return cov_ab / std_a / std_b
   .....:

In [216]: df = pd.DataFrame(np.random.normal(size=(100, 3)))

In [217]: df.corr(method=distcorr)
Out[217]:
          0         1         2
0  1.000000  0.199653  0.214871
1  0.199653  1.000000  0.195116
2  0.214871  0.195116  1.000000

Timedeltas

The Timedeltas docs.

Using timedeltas

In [218]: import datetime

In [219]: s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))

In [220]: s - s.max()
Out[220]:
0   -2 days
1   -1 days
2    0 days
dtype: timedelta64[ns]

In [221]: s.max() - s
Out[221]:
0   2 days
1   1 days
2   0 days
dtype: timedelta64[ns]

In [222]: s - datetime.datetime(2011, 1, 1, 3, 5)
Out[222]:
0   364 days 20:55:00
1   365 days 20:55:00
2   366 days 20:55:00
dtype: timedelta64[ns]

In [223]: s + datetime.timedelta(minutes=5)
Out[223]:
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]

In [224]: datetime.datetime(2011, 1, 1, 3, 5) - s
Out[224]:
0   -365 days +03:05:00
1   -366 days +03:05:00
2   -367 days +03:05:00
dtype: timedelta64[ns]

In [225]: datetime.timedelta(minutes=5) + s
Out[225]:
0   2012-01-01 00:05:00
1   2012-01-02 00:05:00
2   2012-01-03 00:05:00
dtype: datetime64[ns]

Adding and subtracting deltas and dates

In [226]: deltas = pd.Series([datetime.timedelta(days=i) for i in range(3)])

In [227]: df = pd.DataFrame({'A': s, 'B': deltas})

In [228]: df
Out[228]:
           A      B
0 2012-01-01 0 days
1 2012-01-02 1 days
2 2012-01-03 2 days

In [229]: df['New Dates'] = df['A'] + df['B']

In [230]: df['Delta'] = df['A'] - df['New Dates']

In [231]: df
Out[231]:
           A      B  New Dates   Delta
0 2012-01-01 0 days 2012-01-01  0 days
1 2012-01-02 1 days 2012-01-03 -1 days
2 2012-01-03 2 days 2012-01-05 -2 days

In [232]: df.dtypes
Out[232]:
A             datetime64[ns]
B            timedelta64[ns]
New Dates     datetime64[ns]
Delta        timedelta64[ns]
dtype: object

Another example

Values can be set to NaT using np.nan, similar to datetime

In [233]: y = s - s.shift()

In [234]: y
Out[234]:
0      NaT
1   1 days
2   1 days
dtype: timedelta64[ns]

In [235]: y[1] = np.nan

In [236]: y
Out[236]:
0      NaT
1      NaT
2   1 days
dtype: timedelta64[ns]

Aliasing axis names

To globally provide aliases for axis names, one can define these 2 functions:

In [237]: def set_axis_alias(cls, axis, alias):
   .....:     if axis not in cls._AXIS_NUMBERS:
   .....:         raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
   .....:     cls._AXIS_ALIASES[alias] = axis
   .....:

In [238]: def clear_axis_alias(cls, axis, alias):
   .....:     if axis not in cls._AXIS_NUMBERS:
   .....:         raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
   .....:     cls._AXIS_ALIASES.pop(alias, None)
   .....:

In [239]: set_axis_alias(pd.DataFrame, 'columns', 'myaxis2')

In [240]: df2 = pd.DataFrame(np.random.randn(3, 2), columns=['c1', 'c2'],
   .....:                    index=['i1', 'i2', 'i3'])
   .....:

In [241]: df2.sum(axis='myaxis2')
Out[241]:
i1   -0.461013
i2    2.040016
i3    0.904681
dtype: float64

In [242]: clear_axis_alias(pd.DataFrame, 'columns', 'myaxis2')

Creating example data

To create a dataframe from every combination of some given values, like R's expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values:

In [243]: def expand_grid(data_dict):
   .....:     rows = itertools.product(*data_dict.values())
   .....:     return pd.DataFrame.from_records(rows, columns=data_dict.keys())
   .....:

In [244]: df = expand_grid({'height': [60, 70],
   .....:                   'weight': [100, 140, 180],
   .....:                   'sex': ['Male', 'Female']})
   .....:

In [245]: df
Out[245]:
    height  weight     sex
0       60     100    Male
1       60     100  Female
2       60     140    Male
3       60     140  Female
4       60     180    Male
5       60     180  Female
6       70     100    Male
7       70     100  Female
8       70     140    Male
9       70     140  Female
10      70     180    Male
11      70     180  Female