10 minutes to pandas

This is a short introduction to pandas, geared mainly for new users. You can see more complex recipes in the Cookbook.

Customarily, we import as follows:

  In [1]: import numpy as np

  In [2]: import pandas as pd

Object creation

See the Data Structure Intro section.

Creating a Series by passing a list of values, letting pandas create a default integer index:

  In [3]: s = pd.Series([1, 3, 5, np.nan, 6, 8])

  In [4]: s
  Out[4]:
  0    1.0
  1    3.0
  2    5.0
  3    NaN
  4    6.0
  5    8.0
  dtype: float64

Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:

  In [5]: dates = pd.date_range('20130101', periods=6)

  In [6]: dates
  Out[6]:
  DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
                 '2013-01-05', '2013-01-06'],
                dtype='datetime64[ns]', freq='D')

  In [7]: df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

  In [8]: df
  Out[8]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988

Creating a DataFrame by passing a dict of objects that can be converted to a series-like structure:

  In [9]: df2 = pd.DataFrame({'A': 1.,
     ...:                     'B': pd.Timestamp('20130102'),
     ...:                     'C': pd.Series(1, index=list(range(4)), dtype='float32'),
     ...:                     'D': np.array([3] * 4, dtype='int32'),
     ...:                     'E': pd.Categorical(["test", "train", "test", "train"]),
     ...:                     'F': 'foo'})
     ...:

  In [10]: df2
  Out[10]:
       A          B    C  D      E    F
  0  1.0 2013-01-02  1.0  3   test  foo
  1  1.0 2013-01-02  1.0  3  train  foo
  2  1.0 2013-01-02  1.0  3   test  foo
  3  1.0 2013-01-02  1.0  3  train  foo

The columns of the resulting DataFrame have different dtypes.

  In [11]: df2.dtypes
  Out[11]:
  A           float64
  B    datetime64[ns]
  C           float32
  D             int32
  E          category
  F            object
  dtype: object

If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled. Here’s a subset of the attributes that will be completed:

  In [12]: df2.<TAB>  # noqa: E225, E999
  df2.A                  df2.bool
  df2.abs                df2.boxplot
  df2.add                df2.C
  df2.add_prefix         df2.clip
  df2.add_suffix         df2.clip_lower
  df2.align              df2.clip_upper
  df2.all                df2.columns
  df2.any                df2.combine
  df2.append             df2.combine_first
  df2.apply              df2.compound
  df2.applymap           df2.consolidate
  df2.D

As you can see, the columns A, B, C, and D are automatically tab completed. E is there as well; the rest of the attributes have been truncated for brevity.

Viewing data

See the Basics section.

Here is how to view the top and bottom rows of the frame:

  In [13]: df.head()
  Out[13]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401

  In [14]: df.tail(3)
  Out[14]:
                     A         B         C         D
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988

Display the index, columns:

  In [15]: df.index
  Out[15]:
  DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
                 '2013-01-05', '2013-01-06'],
                dtype='datetime64[ns]', freq='D')

  In [16]: df.columns
  Out[16]: Index(['A', 'B', 'C', 'D'], dtype='object')

DataFrame.to_numpy() gives a NumPy representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between pandas and NumPy: NumPy arrays have one dtype for the entire array, while pandas DataFrames have one dtype per column. When you call DataFrame.to_numpy(), pandas will find the NumPy dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.
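
As a small illustration of that promotion rule, mixing integer and float columns promotes to float64, while adding a string column forces object (the column names below are only for this sketch):

  pd.DataFrame({'ints': [1, 2], 'floats': [0.5, 1.5]}).to_numpy().dtype   # dtype('float64')
  pd.DataFrame({'ints': [1, 2], 'strings': ['a', 'b']}).to_numpy().dtype  # dtype('O'), i.e. object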

For df, our DataFrame of all floating-point values, DataFrame.to_numpy() is fast and doesn’t require copying data.

  In [17]: df.to_numpy()
  Out[17]:
  array([[ 0.4691, -0.2829, -1.5091, -1.1356],
         [ 1.2121, -0.1732,  0.1192, -1.0442],
         [-0.8618, -2.1046, -0.4949,  1.0718],
         [ 0.7216, -0.7068, -1.0396,  0.2719],
         [-0.425 ,  0.567 ,  0.2762, -1.0874],
         [-0.6737,  0.1136, -1.4784,  0.525 ]])

For df2, the DataFrame with multiple dtypes, DataFrame.to_numpy() is relatively expensive.

  In [18]: df2.to_numpy()
  Out[18]:
  array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
         [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo'],
         [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'test', 'foo'],
         [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'train', 'foo']],
        dtype=object)

Note

DataFrame.to_numpy() does not include the index or column labels in the output.

describe() shows a quick statistic summary of your data:

  In [19]: df.describe()
  Out[19]:
                A         B         C         D
  count  6.000000  6.000000  6.000000  6.000000
  mean   0.073711 -0.431125 -0.687758 -0.233103
  std    0.843157  0.922818  0.779887  0.973118
  min   -0.861849 -2.104569 -1.509059 -1.135632
  25%   -0.611510 -0.600794 -1.368714 -1.076610
  50%    0.022070 -0.228039 -0.767252 -0.386188
  75%    0.658444  0.041933 -0.034326  0.461706
  max    1.212112  0.567020  0.276232  1.071804

Transposing your data:

  In [20]: df.T
  Out[20]:
     2013-01-01  2013-01-02  2013-01-03  2013-01-04  2013-01-05  2013-01-06
  A    0.469112    1.212112   -0.861849    0.721555   -0.424972   -0.673690
  B   -0.282863   -0.173215   -2.104569   -0.706771    0.567020    0.113648
  C   -1.509059    0.119209   -0.494929   -1.039575    0.276232   -1.478427
  D   -1.135632   -1.044236    1.071804    0.271860   -1.087401    0.524988

Sorting by an axis:

  In [21]: df.sort_index(axis=1, ascending=False)
  Out[21]:
                     D         C         B         A
  2013-01-01 -1.135632 -1.509059 -0.282863  0.469112
  2013-01-02 -1.044236  0.119209 -0.173215  1.212112
  2013-01-03  1.071804 -0.494929 -2.104569 -0.861849
  2013-01-04  0.271860 -1.039575 -0.706771  0.721555
  2013-01-05 -1.087401  0.276232  0.567020 -0.424972
  2013-01-06  0.524988 -1.478427  0.113648 -0.673690

Sorting by values:

  In [22]: df.sort_values(by='B')
  Out[22]:
                     A         B         C         D
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401

Selection

Note

While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we recommend the optimized pandas data access methods, .at, .iat, .loc and .iloc.

See the indexing documentation Indexing and Selecting Data and MultiIndex / Advanced Indexing.
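
As a small illustration of that recommendation (reusing the df and dates created earlier), the optimized accessors take the row and column in a single call instead of chaining two indexing operations:

  df['A'][dates[0]]      # chained indexing: fine interactively, discouraged in production code
  df.loc[dates[0], 'A']  # label-based access in one call
  df.at[dates[0], 'A']   # fast scalar access by label
  df.iloc[0, 0]          # position-based access in one call
  df.iat[0, 0]           # fast scalar access by position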

Getting

Selecting a single column, which yields a Series, equivalent to df.A:

  In [23]: df['A']
  Out[23]:
  2013-01-01    0.469112
  2013-01-02    1.212112
  2013-01-03   -0.861849
  2013-01-04    0.721555
  2013-01-05   -0.424972
  2013-01-06   -0.673690
  Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

  In [24]: df[0:3]
  Out[24]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

  In [25]: df['20130102':'20130104']
  Out[25]:
                     A         B         C         D
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860

Selection by label

See more in Selection by Label.

For getting a cross section using a label:

  In [26]: df.loc[dates[0]]
  Out[26]:
  A    0.469112
  B   -0.282863
  C   -1.509059
  D   -1.135632
  Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:

  In [27]: df.loc[:, ['A', 'B']]
  Out[27]:
                     A         B
  2013-01-01  0.469112 -0.282863
  2013-01-02  1.212112 -0.173215
  2013-01-03 -0.861849 -2.104569
  2013-01-04  0.721555 -0.706771
  2013-01-05 -0.424972  0.567020
  2013-01-06 -0.673690  0.113648

Showing label slicing, both endpoints are included:

  In [28]: df.loc['20130102':'20130104', ['A', 'B']]
  Out[28]:
                     A         B
  2013-01-02  1.212112 -0.173215
  2013-01-03 -0.861849 -2.104569
  2013-01-04  0.721555 -0.706771

Reduction in the dimensions of the returned object:

  In [29]: df.loc['20130102', ['A', 'B']]
  Out[29]:
  A    1.212112
  B   -0.173215
  Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:

  In [30]: df.loc[dates[0], 'A']
  Out[30]: 0.4691122999071863

For getting fast access to a scalar (equivalent to the prior method):

  In [31]: df.at[dates[0], 'A']
  Out[31]: 0.4691122999071863

Selection by position

See more in Selection by Position.

Select via the position of the passed integers:

  In [32]: df.iloc[3]
  Out[32]:
  A    0.721555
  B   -0.706771
  C   -1.039575
  D    0.271860
  Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similarly to NumPy/Python:

  In [33]: df.iloc[3:5, 0:2]
  Out[33]:
                     A         B
  2013-01-04  0.721555 -0.706771
  2013-01-05 -0.424972  0.567020

By lists of integer position locations, similar to the NumPy/Python style:

  In [34]: df.iloc[[1, 2, 4], [0, 2]]
  Out[34]:
                     A         C
  2013-01-02  1.212112  0.119209
  2013-01-03 -0.861849 -0.494929
  2013-01-05 -0.424972  0.276232

For slicing rows explicitly:

  In [35]: df.iloc[1:3, :]
  Out[35]:
                     A         B         C         D
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804

For slicing columns explicitly:

  In [36]: df.iloc[:, 1:3]
  Out[36]:
                     B         C
  2013-01-01 -0.282863 -1.509059
  2013-01-02 -0.173215  0.119209
  2013-01-03 -2.104569 -0.494929
  2013-01-04 -0.706771 -1.039575
  2013-01-05  0.567020  0.276232
  2013-01-06  0.113648 -1.478427

For getting a value explicitly:

  In [37]: df.iloc[1, 1]
  Out[37]: -0.17321464905330858

For getting fast access to a scalar (equivalent to the prior method):

  In [38]: df.iat[1, 1]
  Out[38]: -0.17321464905330858

Boolean indexing

Using a single column’s values to select data.

  In [39]: df[df.A > 0]
  Out[39]:
                     A         B         C         D
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860

Selecting values from a DataFrame where a boolean condition is met.

  In [40]: df[df > 0]
  Out[40]:
                     A         B         C         D
  2013-01-01  0.469112       NaN       NaN       NaN
  2013-01-02  1.212112       NaN  0.119209       NaN
  2013-01-03       NaN       NaN       NaN  1.071804
  2013-01-04  0.721555       NaN       NaN  0.271860
  2013-01-05       NaN  0.567020  0.276232       NaN
  2013-01-06       NaN  0.113648       NaN  0.524988

Using the isin() method for filtering:

  In [41]: df2 = df.copy()

  In [42]: df2['E'] = ['one', 'one', 'two', 'three', 'four', 'three']

  In [43]: df2
  Out[43]:
                     A         B         C         D      E
  2013-01-01  0.469112 -0.282863 -1.509059 -1.135632    one
  2013-01-02  1.212112 -0.173215  0.119209 -1.044236    one
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804    two
  2013-01-04  0.721555 -0.706771 -1.039575  0.271860  three
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401   four
  2013-01-06 -0.673690  0.113648 -1.478427  0.524988  three

  In [44]: df2[df2['E'].isin(['two', 'four'])]
  Out[44]:
                     A         B         C         D     E
  2013-01-03 -0.861849 -2.104569 -0.494929  1.071804   two
  2013-01-05 -0.424972  0.567020  0.276232 -1.087401  four

Setting

Setting a new column automatically aligns the data by the indexes.

  In [45]: s1 = pd.Series([1, 2, 3, 4, 5, 6], index=pd.date_range('20130102', periods=6))

  In [46]: s1
  Out[46]:
  2013-01-02    1
  2013-01-03    2
  2013-01-04    3
  2013-01-05    4
  2013-01-06    5
  2013-01-07    6
  Freq: D, dtype: int64

  In [47]: df['F'] = s1

Setting values by label:

  In [48]: df.at[dates[0], 'A'] = 0

Setting values by position:

  In [49]: df.iat[0, 1] = 0

Setting by assigning with a NumPy array:

  In [50]: df.loc[:, 'D'] = np.array([5] * len(df))

The result of the prior setting operations.

  In [51]: df
  Out[51]:
                     A         B         C  D    F
  2013-01-01  0.000000  0.000000 -1.509059  5  NaN
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0
  2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0
  2013-01-04  0.721555 -0.706771 -1.039575  5  3.0
  2013-01-05 -0.424972  0.567020  0.276232  5  4.0
  2013-01-06 -0.673690  0.113648 -1.478427  5  5.0

A where operation with setting.

  In [52]: df2 = df.copy()

  In [53]: df2[df2 > 0] = -df2

  In [54]: df2
  Out[54]:
                     A         B         C  D    F
  2013-01-01  0.000000  0.000000 -1.509059 -5  NaN
  2013-01-02 -1.212112 -0.173215 -0.119209 -5 -1.0
  2013-01-03 -0.861849 -2.104569 -0.494929 -5 -2.0
  2013-01-04 -0.721555 -0.706771 -1.039575 -5 -3.0
  2013-01-05 -0.424972 -0.567020 -0.276232 -5 -4.0
  2013-01-06 -0.673690 -0.113648 -1.478427 -5 -5.0

Missing data

pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
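
For example, reductions such as mean() skip missing values unless told otherwise; a minimal sketch (the variable name is only for this example):

  s_missing = pd.Series([1.0, np.nan, 3.0])

  s_missing.mean()              # 2.0 -- the NaN is skipped
  s_missing.mean(skipna=False)  # nan -- force missing values to propagate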

Reindexing allows you to change/add/delete the index on a specified axis. This returns a copy of the data.

  In [55]: df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])

  In [56]: df1.loc[dates[0]:dates[1], 'E'] = 1

  In [57]: df1
  Out[57]:
                     A         B         C  D    F    E
  2013-01-01  0.000000  0.000000 -1.509059  5  NaN  1.0
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
  2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  NaN
  2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  NaN

To drop any rows that have missing data.

  In [58]: df1.dropna(how='any')
  Out[58]:
                     A         B         C  D    F    E
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0

Filling missing data.

  In [59]: df1.fillna(value=5)
  Out[59]:
                     A         B         C  D    F    E
  2013-01-01  0.000000  0.000000 -1.509059  5  5.0  1.0
  2013-01-02  1.212112 -0.173215  0.119209  5  1.0  1.0
  2013-01-03 -0.861849 -2.104569 -0.494929  5  2.0  5.0
  2013-01-04  0.721555 -0.706771 -1.039575  5  3.0  5.0

To get the boolean mask where values are nan.

  In [60]: pd.isna(df1)
  Out[60]:
                  A      B      C      D      F      E
  2013-01-01  False  False  False  False   True  False
  2013-01-02  False  False  False  False  False  False
  2013-01-03  False  False  False  False  False   True
  2013-01-04  False  False  False  False  False   True

Operations

See the Basic section on Binary Ops.

Stats

Operations in general exclude missing data.

Performing a descriptive statistic:

  In [61]: df.mean()
  Out[61]:
  A   -0.004474
  B   -0.383981
  C   -0.687758
  D    5.000000
  F    3.000000
  dtype: float64

Same operation on the other axis:

  In [62]: df.mean(1)
  Out[62]:
  2013-01-01    0.872735
  2013-01-02    1.431621
  2013-01-03    0.707731
  2013-01-04    1.395042
  2013-01-05    1.883656
  2013-01-06    1.592306
  Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.

  In [63]: s = pd.Series([1, 3, 5, np.nan, 6, 8], index=dates).shift(2)

  In [64]: s
  Out[64]:
  2013-01-01    NaN
  2013-01-02    NaN
  2013-01-03    1.0
  2013-01-04    3.0
  2013-01-05    5.0
  2013-01-06    NaN
  Freq: D, dtype: float64

  In [65]: df.sub(s, axis='index')
  Out[65]:
                     A         B         C    D    F
  2013-01-01       NaN       NaN       NaN  NaN  NaN
  2013-01-02       NaN       NaN       NaN  NaN  NaN
  2013-01-03 -1.861849 -3.104569 -1.494929  4.0  1.0
  2013-01-04 -2.278445 -3.706771 -4.039575  2.0  0.0
  2013-01-05 -5.424972 -4.432980 -4.723768  0.0 -1.0
  2013-01-06       NaN       NaN       NaN  NaN  NaN

Apply

Applying functions to the data:

  In [66]: df.apply(np.cumsum)
  Out[66]:
                     A         B         C   D     F
  2013-01-01  0.000000  0.000000 -1.509059   5   NaN
  2013-01-02  1.212112 -0.173215 -1.389850  10   1.0
  2013-01-03  0.350263 -2.277784 -1.884779  15   3.0
  2013-01-04  1.071818 -2.984555 -2.924354  20   6.0
  2013-01-05  0.646846 -2.417535 -2.648122  25  10.0
  2013-01-06 -0.026844 -2.303886 -4.126549  30  15.0

  In [67]: df.apply(lambda x: x.max() - x.min())
  Out[67]:
  A    2.073961
  B    2.671590
  C    1.785291
  D    0.000000
  F    4.000000
  dtype: float64

Histogramming

See more at Histogramming and Discretization.

  In [68]: s = pd.Series(np.random.randint(0, 7, size=10))

  In [69]: s
  Out[69]:
  0    4
  1    2
  2    1
  3    2
  4    6
  5    4
  6    4
  7    6
  8    4
  9    4
  dtype: int64

  In [70]: s.value_counts()
  Out[70]:
  4    5
  6    2
  2    2
  1    1
  dtype: int64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

  In [71]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])

  In [72]: s.str.lower()
  Out[72]:
  0       a
  1       b
  2       c
  3    aaba
  4    baca
  5     NaN
  6    caba
  7     dog
  8     cat
  dtype: object
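
Because these methods generally accept regular expressions, pattern-based filtering and substitution also work element-wise; a small sketch reusing the s defined above (the patterns are only illustrative):

  # keep only entries containing a lowercase 'a', treating missing values as False
  s[s.str.contains('a', na=False)]

  # regex substitution: replace any run of vowels with '*'
  s.str.replace('[aeiouAEIOU]+', '*', regex=True)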

Merge

Concat

pandas provides various facilities for easily combining together Series and DataFrame objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

See the Merging section.

Concatenating pandas objects together with concat():

  In [73]: df = pd.DataFrame(np.random.randn(10, 4))

  In [74]: df
  Out[74]:
            0         1         2         3
  0 -0.548702  1.467327 -1.015962 -0.483075
  1  1.637550 -1.217659 -0.291519 -1.745505
  2 -0.263952  0.991460 -0.919069  0.266046
  3 -0.709661  1.669052  1.037882 -1.705775
  4 -0.919854 -0.042379  1.247642 -0.009920
  5  0.290213  0.495767  0.362949  1.548106
  6 -1.131345 -0.089329  0.337863 -0.945867
  7 -0.932132  1.956030  0.017587 -0.016692
  8 -0.575247  0.254161 -1.143704  0.215897
  9  1.193555 -0.077118 -0.408530 -0.862495

  # break it into pieces
  In [75]: pieces = [df[:3], df[3:7], df[7:]]

  In [76]: pd.concat(pieces)
  Out[76]:
            0         1         2         3
  0 -0.548702  1.467327 -1.015962 -0.483075
  1  1.637550 -1.217659 -0.291519 -1.745505
  2 -0.263952  0.991460 -0.919069  0.266046
  3 -0.709661  1.669052  1.037882 -1.705775
  4 -0.919854 -0.042379  1.247642 -0.009920
  5  0.290213  0.495767  0.362949  1.548106
  6 -1.131345 -0.089329  0.337863 -0.945867
  7 -0.932132  1.956030  0.017587 -0.016692
  8 -0.575247  0.254161 -1.143704  0.215897
  9  1.193555 -0.077118 -0.408530 -0.862495

Join

SQL style merges. See the Database style joining section.

  In [77]: left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

  In [78]: right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

  In [79]: left
  Out[79]:
     key  lval
  0  foo     1
  1  foo     2

  In [80]: right
  Out[80]:
     key  rval
  0  foo     4
  1  foo     5

  In [81]: pd.merge(left, right, on='key')
  Out[81]:
     key  lval  rval
  0  foo     1     4
  1  foo     1     5
  2  foo     2     4
  3  foo     2     5

Another example that can be given is:

  In [82]: left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

  In [83]: right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

  In [84]: left
  Out[84]:
     key  lval
  0  foo     1
  1  bar     2

  In [85]: right
  Out[85]:
     key  rval
  0  foo     4
  1  bar     5

  In [86]: pd.merge(left, right, on='key')
  Out[86]:
     key  lval  rval
  0  foo     1     4
  1  bar     2     5

Append

Append rows to a DataFrame. See the Appending section.

  In [87]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])

  In [88]: df
  Out[88]:
            A         B         C         D
  0  1.346061  1.511763  1.627081 -0.990582
  1 -0.441652  1.211526  0.268520  0.024580
  2 -1.577585  0.396823 -0.105381 -0.532532
  3  1.453749  1.208843 -0.080952 -0.264610
  4 -0.727965 -0.589346  0.339969 -0.693205
  5 -0.339355  0.593616  0.884345  1.591431
  6  0.141809  0.220390  0.435589  0.192451
  7 -0.096701  0.803351  1.715071 -0.708758

  In [89]: s = df.iloc[3]

  In [90]: df.append(s, ignore_index=True)
  Out[90]:
            A         B         C         D
  0  1.346061  1.511763  1.627081 -0.990582
  1 -0.441652  1.211526  0.268520  0.024580
  2 -1.577585  0.396823 -0.105381 -0.532532
  3  1.453749  1.208843 -0.080952 -0.264610
  4 -0.727965 -0.589346  0.339969 -0.693205
  5 -0.339355  0.593616  0.884345  1.591431
  6  0.141809  0.220390  0.435589  0.192451
  7 -0.096701  0.803351  1.715071 -0.708758
  8  1.453749  1.208843 -0.080952 -0.264610

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure

See the Grouping section.

  In [91]: df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
     ....:                          'foo', 'bar', 'foo', 'foo'],
     ....:                    'B': ['one', 'one', 'two', 'three',
     ....:                          'two', 'two', 'one', 'three'],
     ....:                    'C': np.random.randn(8),
     ....:                    'D': np.random.randn(8)})
     ....:

  In [92]: df
  Out[92]:
       A      B         C         D
  0  foo    one -1.202872 -0.055224
  1  bar    one -1.814470  2.395985
  2  foo    two  1.018601  1.552825
  3  bar  three -0.595447  0.166599
  4  foo    two  1.395433  0.047609
  5  bar    two -0.392670 -0.136473
  6  foo    one  0.007207 -0.561757
  7  foo  three  1.928123 -1.623033

Grouping and then applying the sum() function to the resulting groups.

  In [93]: df.groupby('A').sum()
  Out[93]:
              C        D
  A
  bar -2.802588  2.42611
  foo  3.146492 -0.63958

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

  In [94]: df.groupby(['A', 'B']).sum()
  Out[94]:
                    C         D
  A   B
  bar one   -1.814470  2.395985
      three -0.595447  0.166599
      two   -0.392670 -0.136473
  foo one   -1.195665 -0.616981
      three  1.928123 -1.623033
      two    2.414034  1.600434

Reshaping

See the sections on Hierarchical Indexing andReshaping.

Stack

  In [95]: tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
     ....:                      'foo', 'foo', 'qux', 'qux'],
     ....:                     ['one', 'two', 'one', 'two',
     ....:                      'one', 'two', 'one', 'two']]))
     ....:

  In [96]: index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])

  In [97]: df = pd.DataFrame(np.random.randn(8, 2), index=index, columns=['A', 'B'])

  In [98]: df2 = df[:4]

  In [99]: df2
  Out[99]:
                       A         B
  first second
  bar   one     0.029399 -0.542108
        two     0.282696 -0.087302
  baz   one    -1.575170  1.771208
        two     0.816482  1.100230

The stack() method “compresses” a level in the DataFrame’s columns.

  In [100]: stacked = df2.stack()

  In [101]: stacked
  Out[101]:
  first  second
  bar    one     A    0.029399
                 B   -0.542108
         two     A    0.282696
                 B   -0.087302
  baz    one     A   -1.575170
                 B    1.771208
         two     A    0.816482
                 B    1.100230
  dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

  In [102]: stacked.unstack()
  Out[102]:
                       A         B
  first second
  bar   one     0.029399 -0.542108
        two     0.282696 -0.087302
  baz   one    -1.575170  1.771208
        two     0.816482  1.100230

  In [103]: stacked.unstack(1)
  Out[103]:
  second        one       two
  first
  bar   A  0.029399  0.282696
        B -0.542108 -0.087302
  baz   A -1.575170  0.816482
        B  1.771208  1.100230

  In [104]: stacked.unstack(0)
  Out[104]:
  first          bar       baz
  second
  one    A  0.029399 -1.575170
         B -0.542108  1.771208
  two    A  0.282696  0.816482
         B -0.087302  1.100230

Pivot tables

See the section on Pivot Tables.

  In [105]: df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
     .....:                    'B': ['A', 'B', 'C'] * 4,
     .....:                    'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
     .....:                    'D': np.random.randn(12),
     .....:                    'E': np.random.randn(12)})
     .....:

  In [106]: df
  Out[106]:
          A  B    C         D         E
  0     one  A  foo  1.418757 -0.179666
  1     one  B  foo -1.879024  1.291836
  2     two  C  foo  0.536826 -0.009614
  3   three  A  bar  1.006160  0.392149
  4     one  B  bar -0.029716  0.264599
  5     one  C  bar -1.146178 -0.057409
  6     two  A  foo  0.100900 -1.425638
  7   three  B  foo -1.035018  1.024098
  8     one  C  foo  0.314665 -0.106062
  9     one  A  bar -0.773723  1.824375
  10    two  B  bar -1.170653  0.595974
  11  three  C  bar  0.648740  1.167115

We can produce pivot tables from this data very easily:

  In [107]: pd.pivot_table(df, values='D', index=['A', 'B'], columns=['C'])
  Out[107]:
  C             bar       foo
  A     B
  one   A -0.773723  1.418757
        B -0.029716 -1.879024
        C -1.146178  0.314665
  three A  1.006160       NaN
        B       NaN -1.035018
        C  0.648740       NaN
  two   A       NaN  0.100900
        B -1.170653       NaN
        C       NaN  0.536826

Time series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications. See the Time Series section.

  In [108]: rng = pd.date_range('1/1/2012', periods=100, freq='S')

  In [109]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)

  In [110]: ts.resample('5Min').sum()
  Out[110]:
  2012-01-01    25083
  Freq: 5T, dtype: int64

Time zone representation:

  In [111]: rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')

  In [112]: ts = pd.Series(np.random.randn(len(rng)), rng)

  In [113]: ts
  Out[113]:
  2012-03-06    0.464000
  2012-03-07    0.227371
  2012-03-08   -0.496922
  2012-03-09    0.306389
  2012-03-10   -2.290613
  Freq: D, dtype: float64

  In [114]: ts_utc = ts.tz_localize('UTC')

  In [115]: ts_utc
  Out[115]:
  2012-03-06 00:00:00+00:00    0.464000
  2012-03-07 00:00:00+00:00    0.227371
  2012-03-08 00:00:00+00:00   -0.496922
  2012-03-09 00:00:00+00:00    0.306389
  2012-03-10 00:00:00+00:00   -2.290613
  Freq: D, dtype: float64

Converting to another time zone:

  In [116]: ts_utc.tz_convert('US/Eastern')
  Out[116]:
  2012-03-05 19:00:00-05:00    0.464000
  2012-03-06 19:00:00-05:00    0.227371
  2012-03-07 19:00:00-05:00   -0.496922
  2012-03-08 19:00:00-05:00    0.306389
  2012-03-09 19:00:00-05:00   -2.290613
  Freq: D, dtype: float64

Converting between time span representations:

  In [117]: rng = pd.date_range('1/1/2012', periods=5, freq='M')

  In [118]: ts = pd.Series(np.random.randn(len(rng)), index=rng)

  In [119]: ts
  Out[119]:
  2012-01-31   -1.134623
  2012-02-29   -1.561819
  2012-03-31   -0.260838
  2012-04-30    0.281957
  2012-05-31    1.523962
  Freq: M, dtype: float64

  In [120]: ps = ts.to_period()

  In [121]: ps
  Out[121]:
  2012-01   -1.134623
  2012-02   -1.561819
  2012-03   -0.260838
  2012-04    0.281957
  2012-05    1.523962
  Freq: M, dtype: float64

  In [122]: ps.to_timestamp()
  Out[122]:
  2012-01-01   -1.134623
  2012-02-01   -1.561819
  2012-03-01   -0.260838
  2012-04-01    0.281957
  2012-05-01    1.523962
  Freq: MS, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

  In [123]: prng = pd.period_range('1990Q1', '2000Q4', freq='Q-NOV')

  In [124]: ts = pd.Series(np.random.randn(len(prng)), prng)

  In [125]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9

  In [126]: ts.head()
  Out[126]:
  1990-03-01 09:00   -0.902937
  1990-06-01 09:00    0.068159
  1990-09-01 09:00   -0.057873
  1990-12-01 09:00   -0.368204
  1991-03-01 09:00   -1.144073
  Freq: H, dtype: float64
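
The index conversion in In [125] packs several steps into one expression. A hedged, step-by-step sketch of the same idea, reusing the prng defined above (the variable names are only illustrative):

  quarter_ends = prng.asfreq('M', 'e')         # monthly period at each quarter end, e.g. 1990-02
  next_months = quarter_ends + 1               # the month following each quarter end, e.g. 1990-03
  month_starts = next_months.asfreq('H', 's')  # hourly period at the start of that month, 00:00
  ts.index = month_starts + 9                  # shift nine hours later, i.e. 09:00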

Categoricals

pandas can include categorical data in a DataFrame. For full docs, see the categorical introduction and the API documentation.

  In [127]: df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6],
     .....:                    "raw_grade": ['a', 'b', 'b', 'a', 'a', 'e']})
     .....:

Convert the raw grades to a categorical data type.

  1. In [128]: df["grade"] = df["raw_grade"].astype("category")
  2.  
  3. In [129]: df["grade"]
  4. Out[129]:
  5. 0 a
  6. 1 b
  7. 2 b
  8. 3 a
  9. 4 a
  10. 5 e
  11. Name: grade, dtype: category
  12. Categories (3, object): [a, b, e]

Rename the categories to more meaningful names (assigning to Series.cat.categories is in place!).

  1. In [130]: df["grade"].cat.categories = ["very good", "good", "very bad"]

Reorder the categories and simultaneously add the missing categories (methods under Series.cat return a new Series by default).

  1. In [131]: df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium",
  2. .....: "good", "very good"])
  3. .....:
  4.  
  5. In [132]: df["grade"]
  6. Out[132]:
  7. 0 very good
  8. 1 good
  9. 2 good
  10. 3 very good
  11. 4 very good
  12. 5 very bad
  13. Name: grade, dtype: category
  14. Categories (5, object): [very bad, bad, medium, good, very good]

Sorting is per order in the categories, not lexical order.

  1. In [133]: df.sort_values(by="grade")
  2. Out[133]:
  3. id raw_grade grade
  4. 5 6 e very bad
  5. 1 2 b good
  6. 2 3 b good
  7. 0 1 a very good
  8. 3 4 a very good
  9. 4 5 a very good

Grouping by a categorical column also shows empty categories.

  1. In [134]: df.groupby("grade").size()
  2. Out[134]:
  3. grade
  4. very bad 1
  5. bad 0
  6. medium 0
  7. good 2
  8. very good 3
  9. dtype: int64

Plotting

See the Plotting docs.
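
The examples below call matplotlib through plt, so they assume the usual import convention has already been run in the session:

  import matplotlib.pyplot as plt

  plt.close('all')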

  In [135]: ts = pd.Series(np.random.randn(1000),
     .....:                index=pd.date_range('1/1/2000', periods=1000))
     .....:

  In [136]: ts = ts.cumsum()

  In [137]: ts.plot()
  Out[137]: <matplotlib.axes._subplots.AxesSubplot at 0x7f45409e1690>

[Figure: series_plot_basic.png]

On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

  In [138]: df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index,
     .....:                   columns=['A', 'B', 'C', 'D'])
     .....:

  In [139]: df = df.cumsum()

  In [140]: plt.figure()
  Out[140]: <Figure size 640x480 with 0 Axes>

  In [141]: df.plot()
  Out[141]: <matplotlib.axes._subplots.AxesSubplot at 0x7f453cb4dc50>

  In [142]: plt.legend(loc='best')
  Out[142]: <matplotlib.legend.Legend at 0x7f453cacfc90>

[Figure: frame_plot_basic.png]

Getting data in/out

CSV

Writing to a csv file.

  In [143]: df.to_csv('foo.csv')

Reading from a csv file.

  In [144]: pd.read_csv('foo.csv')
  Out[144]:
       Unnamed: 0          A          B         C          D
  0    2000-01-01   0.266457  -0.399641 -0.219582   1.186860
  1    2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
  2    2000-01-03  -1.734933   0.530468  2.060811  -0.515536
  3    2000-01-04  -1.555121   1.452620  0.239859  -1.156896
  4    2000-01-05   0.578117   0.511371  0.103552  -2.428202
  ..          ...        ...        ...       ...        ...
  995  2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
  996  2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
  997  2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
  998  2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
  999  2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

  [1000 rows x 5 columns]

HDF5

Reading and writing to HDFStores.

Writing to an HDF5 Store.

  In [145]: df.to_hdf('foo.h5', 'df')

Reading from an HDF5 Store.

  In [146]: pd.read_hdf('foo.h5', 'df')
  Out[146]:
                      A          B         C          D
  2000-01-01   0.266457  -0.399641 -0.219582   1.186860
  2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
  2000-01-03  -1.734933   0.530468  2.060811  -0.515536
  2000-01-04  -1.555121   1.452620  0.239859  -1.156896
  2000-01-05   0.578117   0.511371  0.103552  -2.428202
  ...               ...        ...       ...        ...
  2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
  2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
  2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
  2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
  2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

  [1000 rows x 4 columns]

Excel

Reading and writing to MS Excel.

Writing to an excel file.

  In [147]: df.to_excel('foo.xlsx', sheet_name='Sheet1')

Reading from an excel file.

  In [148]: pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
  Out[148]:
      Unnamed: 0          A          B         C          D
  0   2000-01-01   0.266457  -0.399641 -0.219582   1.186860
  1   2000-01-02  -1.170732  -0.345873  1.653061  -0.282953
  2   2000-01-03  -1.734933   0.530468  2.060811  -0.515536
  3   2000-01-04  -1.555121   1.452620  0.239859  -1.156896
  4   2000-01-05   0.578117   0.511371  0.103552  -2.428202
  ..         ...        ...        ...       ...        ...
  995 2002-09-22  -8.985362  -8.485624 -4.669462  31.367740
  996 2002-09-23  -9.558560  -8.781216 -4.499815  30.518439
  997 2002-09-24  -9.902058  -9.340490 -4.386639  30.105593
  998 2002-09-25 -10.216020  -9.480682 -3.933802  29.758560
  999 2002-09-26 -11.856774 -10.671012 -3.216025  29.369368

  [1000 rows x 5 columns]

Gotchas

If you are attempting to perform an operation you might see an exception like:

  >>> if pd.Series([False, True, False]):
  ...     print("I was true")
  Traceback
      ...
  ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().

See Comparisons for an explanation and what to do.

See Gotchas as well.
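
A hedged sketch of one way to resolve the ambiguity, following the hint in the error message: reduce the Series to a single boolean explicitly.

  >>> s = pd.Series([False, True, False])
  >>> if s.any():          # True if at least one element is True
  ...     print("at least one was true")
  at least one was true
  >>> if not s.empty:      # True if the Series has any elements at all
  ...     print("the Series is not empty")
  the Series is not empty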