Working with missing data

In this section, we will discuss missing (also referred to as NA) values in pandas.

Note

The choice of using NaN internally to denote missing data was largely for simplicity and performance reasons. It differs from the MaskedArray approach of, for example, scikits.timeseries. We are hopeful that NumPy will soon be able to provide a native NA type solution (similar to R) performant enough to be used in pandas.

See the cookbook for some advanced strategies.

Values considered “missing”

As data comes in many shapes and forms, pandas aims to be flexible with regard to handling missing data. While NaN is the default missing value marker for reasons of computational speed and convenience, we need to be able to easily detect this value with data of different types: floating point, integer, boolean, and general object. In many cases, however, the Python None will arise and we wish to also consider that “missing” or “not available” or “NA”.

Note

If you want to consider inf and -inf to be “NA” in computations, you can set pandas.options.mode.use_inf_as_na = True.
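By default, infinite values are not treated as missing; only NaN (and None) register as NA. A minimal sketch of the default behavior:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.inf, np.nan])

# Only the true NaN registers as missing; inf does not.
missing = s.isna()
```
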

    In [1]: df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
       ...:                   columns=['one', 'two', 'three'])
       ...:

    In [2]: df['four'] = 'bar'

    In [3]: df['five'] = df['one'] > 0

    In [4]: df
    Out[4]:
            one       two     three four   five
    a  0.469112 -0.282863 -1.509059  bar   True
    c -1.135632  1.212112 -0.173215  bar  False
    e  0.119209 -1.044236 -0.861849  bar   True
    f -2.104569 -0.494929  1.071804  bar  False
    h  0.721555 -0.706771 -1.039575  bar   True

    In [5]: df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])

    In [6]: df2
    Out[6]:
            one       two     three four   five
    a  0.469112 -0.282863 -1.509059  bar   True
    b       NaN       NaN       NaN  NaN    NaN
    c -1.135632  1.212112 -0.173215  bar  False
    d       NaN       NaN       NaN  NaN    NaN
    e  0.119209 -1.044236 -0.861849  bar   True
    f -2.104569 -0.494929  1.071804  bar  False
    g       NaN       NaN       NaN  NaN    NaN
    h  0.721555 -0.706771 -1.039575  bar   True

To make detecting missing values easier (and across different array dtypes), pandas provides the isna() and notna() functions, which are also methods on Series and DataFrame objects:

    In [7]: df2['one']
    Out[7]:
    a    0.469112
    b         NaN
    c   -1.135632
    d         NaN
    e    0.119209
    f   -2.104569
    g         NaN
    h    0.721555
    Name: one, dtype: float64

    In [8]: pd.isna(df2['one'])
    Out[8]:
    a    False
    b     True
    c    False
    d     True
    e    False
    f    False
    g     True
    h    False
    Name: one, dtype: bool

    In [9]: df2['four'].notna()
    Out[9]:
    a     True
    b    False
    c     True
    d    False
    e     True
    f     True
    g    False
    h     True
    Name: four, dtype: bool

    In [10]: df2.isna()
    Out[10]:
         one    two  three   four   five
    a  False  False  False  False  False
    b   True   True   True   True   True
    c  False  False  False  False  False
    d   True   True   True   True   True
    e  False  False  False  False  False
    f  False  False  False  False  False
    g   True   True   True   True   True
    h  False  False  False  False  False

Warning

One has to be mindful that in Python (and NumPy), NaNs don’t compare equal, but Nones do. Note that pandas/NumPy uses the fact that np.nan != np.nan, and treats None like np.nan.

    In [11]: None == None  # noqa: E711
    Out[11]: True

    In [12]: np.nan == np.nan
    Out[12]: False

So as compared to above, a scalar equality comparison against None/np.nan doesn’t provide useful information.

    In [13]: df2['one'] == np.nan
    Out[13]:
    a    False
    b    False
    c    False
    d    False
    e    False
    f    False
    g    False
    h    False
    Name: one, dtype: bool
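Because equality comparison against np.nan is always False, the reliable way to locate missing values is isna(). A minimal sketch contrasting the two:

```python
import numpy as np
import pandas as pd

s = pd.Series([0.5, np.nan, 1.5])

# Equality comparison never finds the NaN...
eq_mask = (s == np.nan)

# ...while isna() detects it correctly.
na_mask = s.isna()
```
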

Integer dtypes and missing data

Because NaN is a float, a column of integers with even one missing value is cast to floating-point dtype (see Support for integer NA for more). pandas provides a nullable integer array, which can be used by explicitly requesting the dtype:

    In [14]: pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
    Out[14]:
    0      1
    1      2
    2    NaN
    3      4
    dtype: Int64

Alternatively, the string alias dtype='Int64' (note the capital "I") can be used.
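The string alias is equivalent to passing pd.Int64Dtype() explicitly; a minimal sketch:

```python
import numpy as np
import pandas as pd

# The string alias 'Int64' (capital "I") selects the nullable integer dtype;
# lowercase 'int64' is the plain NumPy dtype and cannot hold missing values.
s = pd.Series([1, 2, np.nan, 4], dtype='Int64')
```
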

See Nullable integer data type for more.

Datetimes

For datetime64[ns] types, NaT represents missing values. This is a pseudo-native sentinel value that can be represented by NumPy in a singular dtype (datetime64[ns]). pandas objects provide compatibility between NaT and NaN.

    In [15]: df2 = df.copy()

    In [16]: df2['timestamp'] = pd.Timestamp('20120101')

    In [17]: df2
    Out[17]:
            one       two     three four   five  timestamp
    a  0.469112 -0.282863 -1.509059  bar   True 2012-01-01
    c -1.135632  1.212112 -0.173215  bar  False 2012-01-01
    e  0.119209 -1.044236 -0.861849  bar   True 2012-01-01
    f -2.104569 -0.494929  1.071804  bar  False 2012-01-01
    h  0.721555 -0.706771 -1.039575  bar   True 2012-01-01

    In [18]: df2.loc[['a', 'c', 'h'], ['one', 'timestamp']] = np.nan

    In [19]: df2
    Out[19]:
            one       two     three four   five  timestamp
    a       NaN -0.282863 -1.509059  bar   True        NaT
    c       NaN  1.212112 -0.173215  bar  False        NaT
    e  0.119209 -1.044236 -0.861849  bar   True 2012-01-01
    f -2.104569 -0.494929  1.071804  bar  False 2012-01-01
    h       NaN -0.706771 -1.039575  bar   True        NaT

    In [20]: df2.dtypes.value_counts()
    Out[20]:
    float64           3
    datetime64[ns]    1
    object            1
    bool              1
    dtype: int64
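NaT behaves like NaN in the respects that matter for missing-data handling: it does not compare equal to itself, and pd.isna() recognizes it. A minimal sketch:

```python
import pandas as pd

# NaT is the datetime analogue of NaN: it does not compare equal to itself,
# and pd.isna() recognizes it as missing.
not_self_equal = (pd.NaT != pd.NaT)
is_missing = pd.isna(pd.NaT)
```
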

Inserting missing data

You can insert missing values by simply assigning to containers. The actual missing value used will be chosen based on the dtype.

For example, numeric containers will always use NaN regardless of the missing value type chosen:

    In [21]: s = pd.Series([1, 2, 3])

    In [22]: s.loc[0] = None

    In [23]: s
    Out[23]:
    0    NaN
    1    2.0
    2    3.0
    dtype: float64

Likewise, datetime containers will always use NaT.
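For instance, assigning None into a datetime container stores NaT (a minimal sketch):

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(['2012-01-01', '2012-01-02']))

# None is converted to NaT in a datetime container.
dates[0] = None
```
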

For object containers, pandas will use the value given:

    In [24]: s = pd.Series(["a", "b", "c"])

    In [25]: s.loc[0] = None

    In [26]: s.loc[1] = np.nan

    In [27]: s
    Out[27]:
    0    None
    1     NaN
    2       c
    dtype: object

Calculations with missing data

Missing values propagate naturally through arithmetic operations between pandas objects.

    In [28]: a
    Out[28]:
            one       two
    a       NaN -0.282863
    c       NaN  1.212112
    e  0.119209 -1.044236
    f -2.104569 -0.494929
    h -2.104569 -0.706771

    In [29]: b
    Out[29]:
            one       two     three
    a       NaN -0.282863 -1.509059
    c       NaN  1.212112 -0.173215
    e  0.119209 -1.044236 -0.861849
    f -2.104569 -0.494929  1.071804
    h       NaN -0.706771 -1.039575

    In [30]: a + b
    Out[30]:
            one  three       two
    a       NaN    NaN -0.565727
    c       NaN    NaN  2.424224
    e  0.238417    NaN -2.088472
    f -4.209138    NaN -0.989859
    h       NaN    NaN -1.413542

The descriptive statistics and computational methods discussed in the data structure overview (and listed here and here) are all written to account for missing data. For example:

  • When summing data, NA (missing) values will be treated as zero.
  • If the data are all NA, the result will be 0.
  • Cumulative methods like cumsum() and cumprod() ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include NA values, use skipna=False.
    In [31]: df
    Out[31]:
            one       two     three
    a       NaN -0.282863 -1.509059
    c       NaN  1.212112 -0.173215
    e  0.119209 -1.044236 -0.861849
    f -2.104569 -0.494929  1.071804
    h       NaN -0.706771 -1.039575

    In [32]: df['one'].sum()
    Out[32]: -1.9853605075978744

    In [33]: df.mean(1)
    Out[33]:
    a   -0.895961
    c    0.519449
    e   -0.595625
    f   -0.509232
    h   -0.873173
    dtype: float64

    In [34]: df.cumsum()
    Out[34]:
            one       two     three
    a       NaN -0.282863 -1.509059
    c       NaN  0.929249 -1.682273
    e  0.119209 -0.114987 -2.544122
    f -1.985361 -0.609917 -1.472318
    h       NaN -1.316688 -2.511893

    In [35]: df.cumsum(skipna=False)
    Out[35]:
        one       two     three
    a   NaN -0.282863 -1.509059
    c   NaN  0.929249 -1.682273
    e   NaN -0.114987 -2.544122
    f   NaN -0.609917 -1.472318
    h   NaN -1.316688 -2.511893
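The skipna=False override applies to the reductions as well as the cumulative methods; a minimal sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# By default, NA values are skipped (treated as zero when summing)...
total = s.sum()

# ...but skipna=False lets the NaN propagate into the result.
total_na = s.sum(skipna=False)
```
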

Sum/prod of empties/nans

Warning

This behavior is now standard as of v0.22.0 and is consistent with the default in NumPy; previously, sum/prod of all-NA or empty Series/DataFrames would return NaN. See v0.22.0 whatsnew for more.

The sum of an empty or all-NA Series or column of a DataFrame is 0.

    In [36]: pd.Series([np.nan]).sum()
    Out[36]: 0.0

    In [37]: pd.Series([]).sum()
    Out[37]: 0.0

The product of an empty or all-NA Series or column of a DataFrame is 1.

    In [38]: pd.Series([np.nan]).prod()
    Out[38]: 1.0

    In [39]: pd.Series([]).prod()
    Out[39]: 1.0

NA values in GroupBy

NA groups in GroupBy are automatically excluded. This behavior is consistent with R, for example:

    In [40]: df
    Out[40]:
            one       two     three
    a       NaN -0.282863 -1.509059
    c       NaN  1.212112 -0.173215
    e  0.119209 -1.044236 -0.861849
    f -2.104569 -0.494929  1.071804
    h       NaN -0.706771 -1.039575

    In [41]: df.groupby('one').mean()
    Out[41]:
                     two     three
    one
    -2.104569  -0.494929  1.071804
     0.119209  -1.044236 -0.861849

See the groupby section here for more information.
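In newer pandas versions (1.1 and later), groupby() also accepts a dropna parameter, so the exclusion shown above can be switched off. A sketch assuming such a version is installed:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'key': ['x', 'x', np.nan, 'y'], 'val': [1, 2, 3, 4]})

# Default behavior: the NaN key is excluded from the groups.
default_groups = df.groupby('key').sum()

# dropna=False (pandas >= 1.1) keeps the NaN group.
all_groups = df.groupby('key', dropna=False).sum()
```
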

Cleaning / filling missing data

pandas objects are equipped with various data manipulation methods for dealing with missing data.

Filling missing values: fillna

fillna() can “fill in” NA values with non-NA data in a couple of ways, which we illustrate:

Replace NA with a scalar value

    In [42]: df2
    Out[42]:
            one       two     three four   five  timestamp
    a       NaN -0.282863 -1.509059  bar   True        NaT
    c       NaN  1.212112 -0.173215  bar  False        NaT
    e  0.119209 -1.044236 -0.861849  bar   True 2012-01-01
    f -2.104569 -0.494929  1.071804  bar  False 2012-01-01
    h       NaN -0.706771 -1.039575  bar   True        NaT

    In [43]: df2.fillna(0)
    Out[43]:
            one       two     three four   five            timestamp
    a  0.000000 -0.282863 -1.509059  bar   True                    0
    c  0.000000  1.212112 -0.173215  bar  False                    0
    e  0.119209 -1.044236 -0.861849  bar   True  2012-01-01 00:00:00
    f -2.104569 -0.494929  1.071804  bar  False  2012-01-01 00:00:00
    h  0.000000 -0.706771 -1.039575  bar   True                    0

    In [44]: df2['one'].fillna('missing')
    Out[44]:
    a     missing
    c     missing
    e    0.119209
    f    -2.10457
    h     missing
    Name: one, dtype: object

Fill gaps forward or backward

Using the same filling arguments as reindexing, we can propagate non-NA values forward or backward:

    In [45]: df
    Out[45]:
            one       two     three
    a       NaN -0.282863 -1.509059
    c       NaN  1.212112 -0.173215
    e  0.119209 -1.044236 -0.861849
    f -2.104569 -0.494929  1.071804
    h       NaN -0.706771 -1.039575

    In [46]: df.fillna(method='pad')
    Out[46]:
            one       two     three
    a       NaN -0.282863 -1.509059
    c       NaN  1.212112 -0.173215
    e  0.119209 -1.044236 -0.861849
    f -2.104569 -0.494929  1.071804
    h -2.104569 -0.706771 -1.039575

Limit the amount of filling

If we only want consecutive gaps filled up to a certain number of data points, we can use the limit keyword:

    In [47]: df
    Out[47]:
       one       two     three
    a  NaN -0.282863 -1.509059
    c  NaN  1.212112 -0.173215
    e  NaN       NaN       NaN
    f  NaN       NaN       NaN
    h  NaN -0.706771 -1.039575

    In [48]: df.fillna(method='pad', limit=1)
    Out[48]:
       one       two     three
    a  NaN -0.282863 -1.509059
    c  NaN  1.212112 -0.173215
    e  NaN  1.212112 -0.173215
    f  NaN       NaN       NaN
    h  NaN -0.706771 -1.039575

To remind you, these are the available filling methods:

Method              Action
pad / ffill         Fill values forward
bfill / backfill    Fill values backward

With time series data, using pad/ffill is extremely common so that the “last known value” is available at every time point.

ffill() is equivalent to fillna(method='ffill') and bfill() is equivalent to fillna(method='bfill').
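A minimal sketch of the two convenience methods:

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 2.0, np.nan, 4.0, np.nan])

# Propagate the last valid observation forward...
forward = s.ffill()

# ...or the next valid observation backward.
backward = s.bfill()
```
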

Filling with a PandasObject

You can also fillna using a dict or Series that is alignable. The labels of the dict or index of the Series must match the columns of the frame you wish to fill. The use case of this is to fill a DataFrame with the mean of that column.

    In [49]: dff = pd.DataFrame(np.random.randn(10, 3), columns=list('ABC'))

    In [50]: dff.iloc[3:5, 0] = np.nan

    In [51]: dff.iloc[4:6, 1] = np.nan

    In [52]: dff.iloc[5:8, 2] = np.nan

    In [53]: dff
    Out[53]:
              A         B         C
    0  0.271860 -0.424972  0.567020
    1  0.276232 -1.087401 -0.673690
    2  0.113648 -1.478427  0.524988
    3       NaN  0.577046 -1.715002
    4       NaN       NaN -1.157892
    5 -1.344312       NaN       NaN
    6 -0.109050  1.643563       NaN
    7  0.357021 -0.674600       NaN
    8 -0.968914 -1.294524  0.413738
    9  0.276662 -0.472035 -0.013960

    In [54]: dff.fillna(dff.mean())
    Out[54]:
              A         B         C
    0  0.271860 -0.424972  0.567020
    1  0.276232 -1.087401 -0.673690
    2  0.113648 -1.478427  0.524988
    3 -0.140857  0.577046 -1.715002
    4 -0.140857 -0.401419 -1.157892
    5 -1.344312 -0.401419 -0.293543
    6 -0.109050  1.643563 -0.293543
    7  0.357021 -0.674600 -0.293543
    8 -0.968914 -1.294524  0.413738
    9  0.276662 -0.472035 -0.013960

    In [55]: dff.fillna(dff.mean()['B':'C'])
    Out[55]:
              A         B         C
    0  0.271860 -0.424972  0.567020
    1  0.276232 -1.087401 -0.673690
    2  0.113648 -1.478427  0.524988
    3       NaN  0.577046 -1.715002
    4       NaN -0.401419 -1.157892
    5 -1.344312 -0.401419 -0.293543
    6 -0.109050  1.643563 -0.293543
    7  0.357021 -0.674600 -0.293543
    8 -0.968914 -1.294524  0.413738
    9  0.276662 -0.472035 -0.013960

This gives the same result as above, but aligns the ‘fill’ value, which is a Series in this case.

    In [56]: dff.where(pd.notna(dff), dff.mean(), axis='columns')
    Out[56]:
              A         B         C
    0  0.271860 -0.424972  0.567020
    1  0.276232 -1.087401 -0.673690
    2  0.113648 -1.478427  0.524988
    3 -0.140857  0.577046 -1.715002
    4 -0.140857 -0.401419 -1.157892
    5 -1.344312 -0.401419 -0.293543
    6 -0.109050  1.643563 -0.293543
    7  0.357021 -0.674600 -0.293543
    8 -0.968914 -1.294524  0.413738
    9  0.276662 -0.472035 -0.013960

Dropping axis labels with missing data: dropna

You may wish to simply exclude labels from a data set which refer to missing data. To do this, use dropna():

    In [57]: df
    Out[57]:
       one       two     three
    a  NaN -0.282863 -1.509059
    c  NaN  1.212112 -0.173215
    e  NaN  0.000000  0.000000
    f  NaN  0.000000  0.000000
    h  NaN -0.706771 -1.039575

    In [58]: df.dropna(axis=0)
    Out[58]:
    Empty DataFrame
    Columns: [one, two, three]
    Index: []

    In [59]: df.dropna(axis=1)
    Out[59]:
            two     three
    a -0.282863 -1.509059
    c  1.212112 -0.173215
    e  0.000000  0.000000
    f  0.000000  0.000000
    h -0.706771 -1.039575

    In [60]: df['one'].dropna()
    Out[60]: Series([], Name: one, dtype: float64)

An equivalent dropna() is available for Series. DataFrame.dropna has considerably more options than Series.dropna, which can be examined in the API.
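Among those extra options are how and thresh, which control how many missing values it takes to drop a row; a minimal sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'one': [np.nan, np.nan, 1.0],
                   'two': [np.nan, 2.0, 2.0]})

# how='all' drops only rows where every value is missing.
all_na_dropped = df.dropna(how='all')

# thresh=2 keeps rows with at least two non-NA values.
thresh_kept = df.dropna(thresh=2)
```
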

Interpolation

New in version 0.23.0: The limit_area keyword argument was added.

Both Series and DataFrame objects have interpolate() that, by default, performs linear interpolation at missing data points.

    In [61]: ts
    Out[61]:
    2000-01-31    0.469112
    2000-02-29         NaN
    2000-03-31         NaN
    2000-04-28         NaN
    2000-05-31         NaN
                    ...
    2007-12-31   -6.950267
    2008-01-31   -7.904475
    2008-02-29   -6.441779
    2008-03-31   -8.184940
    2008-04-30   -9.011531
    Freq: BM, Length: 100, dtype: float64

    In [62]: ts.count()
    Out[62]: 66

    In [63]: ts.plot()
    Out[63]: <matplotlib.axes._subplots.AxesSubplot at 0x7f450959b710>

[Figure: series_before_interpolate.png]

    In [64]: ts.interpolate()
    Out[64]:
    2000-01-31    0.469112
    2000-02-29    0.434469
    2000-03-31    0.399826
    2000-04-28    0.365184
    2000-05-31    0.330541
                    ...
    2007-12-31   -6.950267
    2008-01-31   -7.904475
    2008-02-29   -6.441779
    2008-03-31   -8.184940
    2008-04-30   -9.011531
    Freq: BM, Length: 100, dtype: float64

    In [65]: ts.interpolate().count()
    Out[65]: 100

    In [66]: ts.interpolate().plot()
    Out[66]: <matplotlib.axes._subplots.AxesSubplot at 0x7f450953b690>

[Figure: series_interpolate.png]

Index-aware interpolation is available via the method keyword:

    In [67]: ts2
    Out[67]:
    2000-01-31    0.469112
    2000-02-29         NaN
    2002-07-31   -5.785037
    2005-01-31         NaN
    2008-04-30   -9.011531
    dtype: float64

    In [68]: ts2.interpolate()
    Out[68]:
    2000-01-31    0.469112
    2000-02-29   -2.657962
    2002-07-31   -5.785037
    2005-01-31   -7.398284
    2008-04-30   -9.011531
    dtype: float64

    In [69]: ts2.interpolate(method='time')
    Out[69]:
    2000-01-31    0.469112
    2000-02-29    0.270241
    2002-07-31   -5.785037
    2005-01-31   -7.190866
    2008-04-30   -9.011531
    dtype: float64

For a floating-point index, use method='values':

    In [70]: ser
    Out[70]:
    0.0      0.0
    1.0      NaN
    10.0    10.0
    dtype: float64

    In [71]: ser.interpolate()
    Out[71]:
    0.0      0.0
    1.0      5.0
    10.0    10.0
    dtype: float64

    In [72]: ser.interpolate(method='values')
    Out[72]:
    0.0      0.0
    1.0      1.0
    10.0    10.0
    dtype: float64

You can also interpolate with a DataFrame:

    In [73]: df = pd.DataFrame({'A': [1, 2.1, np.nan, 4.7, 5.6, 6.8],
       ....:                    'B': [.25, np.nan, np.nan, 4, 12.2, 14.4]})
       ....:

    In [74]: df
    Out[74]:
         A      B
    0  1.0   0.25
    1  2.1    NaN
    2  NaN    NaN
    3  4.7   4.00
    4  5.6  12.20
    5  6.8  14.40

    In [75]: df.interpolate()
    Out[75]:
         A      B
    0  1.0   0.25
    1  2.1   1.50
    2  3.4   2.75
    3  4.7   4.00
    4  5.6  12.20
    5  6.8  14.40

The method argument gives access to fancier interpolation methods. If you have scipy installed, you can pass the name of a 1-d interpolation routine to method. You’ll want to consult the full scipy interpolation documentation and reference guide for details. The appropriate interpolation method will depend on the type of data you are working with.

  • If you are dealing with a time series that is growing at an increasing rate, method='quadratic' may be appropriate.
  • If you have values approximating a cumulative distribution function, then method='pchip' should work well.
  • To fill missing values with the goal of smooth plotting, consider method='akima'.

Warning

These methods require scipy.

    In [76]: df.interpolate(method='barycentric')
    Out[76]:
          A       B
    0  1.00   0.250
    1  2.10  -7.660
    2  3.53  -4.515
    3  4.70   4.000
    4  5.60  12.200
    5  6.80  14.400

    In [77]: df.interpolate(method='pchip')
    Out[77]:
             A          B
    0  1.00000   0.250000
    1  2.10000   0.672808
    2  3.43454   1.928950
    3  4.70000   4.000000
    4  5.60000  12.200000
    5  6.80000  14.400000

    In [78]: df.interpolate(method='akima')
    Out[78]:
              A          B
    0  1.000000   0.250000
    1  2.100000  -0.873316
    2  3.406667   0.320034
    3  4.700000   4.000000
    4  5.600000  12.200000
    5  6.800000  14.400000

When interpolating via a polynomial or spline approximation, you must also specify the degree or order of the approximation:

    In [79]: df.interpolate(method='spline', order=2)
    Out[79]:
              A          B
    0  1.000000   0.250000
    1  2.100000  -0.428598
    2  3.404545   1.206900
    3  4.700000   4.000000
    4  5.600000  12.200000
    5  6.800000  14.400000

    In [80]: df.interpolate(method='polynomial', order=2)
    Out[80]:
              A          B
    0  1.000000   0.250000
    1  2.100000  -2.703846
    2  3.451351  -1.453846
    3  4.700000   4.000000
    4  5.600000  12.200000
    5  6.800000  14.400000

Compare several methods:

    In [81]: np.random.seed(2)

    In [82]: ser = pd.Series(np.arange(1, 10.1, .25) ** 2 + np.random.randn(37))

    In [83]: missing = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])

    In [84]: ser[missing] = np.nan

    In [85]: methods = ['linear', 'quadratic', 'cubic']

    In [86]: df = pd.DataFrame({m: ser.interpolate(method=m) for m in methods})

    In [87]: df.plot()
    Out[87]: <matplotlib.axes._subplots.AxesSubplot at 0x7f450951ac10>

[Figure: compare_interpolations.png]

Another use case is interpolation at new values. Suppose you have 100 observations from some distribution, and let’s suppose that you’re particularly interested in what’s happening around the middle. You can mix pandas’ reindex and interpolate methods to interpolate at the new values.

    In [88]: ser = pd.Series(np.sort(np.random.uniform(size=100)))

    # interpolate at new_index
    In [89]: new_index = ser.index | pd.Index([49.25, 49.5, 49.75, 50.25, 50.5, 50.75])

    In [90]: interp_s = ser.reindex(new_index).interpolate(method='pchip')

    In [91]: interp_s[49:51]
    Out[91]:
    49.00    0.471410
    49.25    0.476841
    49.50    0.481780
    49.75    0.485998
    50.00    0.489266
    50.25    0.491814
    50.50    0.493995
    50.75    0.495763
    51.00    0.497074
    dtype: float64

Interpolation limits

Like other pandas fill methods, interpolate() accepts a limit keyword argument. Use this argument to limit the number of consecutive NaN values filled since the last valid observation:

    In [92]: ser = pd.Series([np.nan, np.nan, 5, np.nan, np.nan,
       ....:                  np.nan, 13, np.nan, np.nan])
       ....:

    In [93]: ser
    Out[93]:
    0     NaN
    1     NaN
    2     5.0
    3     NaN
    4     NaN
    5     NaN
    6    13.0
    7     NaN
    8     NaN
    dtype: float64

    # fill all consecutive values in a forward direction
    In [94]: ser.interpolate()
    Out[94]:
    0     NaN
    1     NaN
    2     5.0
    3     7.0
    4     9.0
    5    11.0
    6    13.0
    7    13.0
    8    13.0
    dtype: float64

    # fill one consecutive value in a forward direction
    In [95]: ser.interpolate(limit=1)
    Out[95]:
    0     NaN
    1     NaN
    2     5.0
    3     7.0
    4     NaN
    5     NaN
    6    13.0
    7    13.0
    8     NaN
    dtype: float64

By default, NaN values are filled in a forward direction. Use the limit_direction parameter to fill backward or from both directions.

    # fill one consecutive value backwards
    In [96]: ser.interpolate(limit=1, limit_direction='backward')
    Out[96]:
    0     NaN
    1     5.0
    2     5.0
    3     NaN
    4     NaN
    5    11.0
    6    13.0
    7     NaN
    8     NaN
    dtype: float64

    # fill one consecutive value in both directions
    In [97]: ser.interpolate(limit=1, limit_direction='both')
    Out[97]:
    0     NaN
    1     5.0
    2     5.0
    3     7.0
    4     NaN
    5    11.0
    6    13.0
    7    13.0
    8     NaN
    dtype: float64

    # fill all consecutive values in both directions
    In [98]: ser.interpolate(limit_direction='both')
    Out[98]:
    0     5.0
    1     5.0
    2     5.0
    3     7.0
    4     9.0
    5    11.0
    6    13.0
    7    13.0
    8    13.0
    dtype: float64

By default, NaN values are filled whether they are inside (surrounded by) existing valid values, or outside existing valid values. The limit_area parameter, introduced in v0.23, restricts filling to either inside or outside values.

    # fill one consecutive inside value in both directions
    In [99]: ser.interpolate(limit_direction='both', limit_area='inside', limit=1)
    Out[99]:
    0     NaN
    1     NaN
    2     5.0
    3     7.0
    4     NaN
    5    11.0
    6    13.0
    7     NaN
    8     NaN
    dtype: float64

    # fill all consecutive outside values backward
    In [100]: ser.interpolate(limit_direction='backward', limit_area='outside')
    Out[100]:
    0     5.0
    1     5.0
    2     5.0
    3     NaN
    4     NaN
    5     NaN
    6    13.0
    7     NaN
    8     NaN
    dtype: float64

    # fill all consecutive outside values in both directions
    In [101]: ser.interpolate(limit_direction='both', limit_area='outside')
    Out[101]:
    0     5.0
    1     5.0
    2     5.0
    3     NaN
    4     NaN
    5     NaN
    6    13.0
    7    13.0
    8    13.0
    dtype: float64

Replacing generic values

Oftentimes we want to replace arbitrary values with other values.

replace() in Series and replace() in DataFrame provide an efficient yet flexible way to perform such replacements.

For a Series, you can replace a single value or a list of values by another value:

    In [102]: ser = pd.Series([0., 1., 2., 3., 4.])

    In [103]: ser.replace(0, 5)
    Out[103]:
    0    5.0
    1    1.0
    2    2.0
    3    3.0
    4    4.0
    dtype: float64

You can replace a list of values by a list of other values:

    In [104]: ser.replace([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
    Out[104]:
    0    4.0
    1    3.0
    2    2.0
    3    1.0
    4    0.0
    dtype: float64

You can also specify a mapping dict:

    In [105]: ser.replace({0: 10, 1: 100})
    Out[105]:
    0     10.0
    1    100.0
    2      2.0
    3      3.0
    4      4.0
    dtype: float64

For a DataFrame, you can specify individual values by column:

    In [106]: df = pd.DataFrame({'a': [0, 1, 2, 3, 4], 'b': [5, 6, 7, 8, 9]})

    In [107]: df.replace({'a': 0, 'b': 5}, 100)
    Out[107]:
         a    b
    0  100  100
    1    1    6
    2    2    7
    3    3    8
    4    4    9

Instead of replacing with specified values, you can treat all given values asmissing and interpolate over them:

    In [108]: ser.replace([1, 2, 3], method='pad')
    Out[108]:
    0    0.0
    1    0.0
    2    0.0
    3    0.0
    4    4.0
    dtype: float64

String/regular expression replacement

Note

Python strings prefixed with the r character such as r'hello world' are so-called “raw” strings. They have different semantics regarding backslashes than strings without this prefix: backslashes in raw strings are interpreted literally, so, for example, r'\n' == '\\n' (a backslash followed by the letter n, rather than a newline). You should read about them if this is unclear.

Replace the ‘.’ with NaN (str -> str):

    In [109]: d = {'a': list(range(4)), 'b': list('ab..'), 'c': ['a', 'b', np.nan, 'd']}

    In [110]: df = pd.DataFrame(d)

    In [111]: df.replace('.', np.nan)
    Out[111]:
       a    b    c
    0  0    a    a
    1  1    b    b
    2  2  NaN  NaN
    3  3  NaN    d

Now do it with a regular expression that removes surrounding whitespace (regex -> regex):

    In [112]: df.replace(r'\s*\.\s*', np.nan, regex=True)
    Out[112]:
       a    b    c
    0  0    a    a
    1  1    b    b
    2  2  NaN  NaN
    3  3  NaN    d

Replace a few different values (list -> list):

    In [113]: df.replace(['a', '.'], ['b', np.nan])
    Out[113]:
       a    b    c
    0  0    b    b
    1  1    b    b
    2  2  NaN  NaN
    3  3  NaN    d

list of regex -> list of regex:

    In [114]: df.replace([r'\.', r'(a)'], ['dot', r'\1stuff'], regex=True)
    Out[114]:
       a       b       c
    0  0  astuff  astuff
    1  1       b       b
    2  2     dot     NaN
    3  3     dot       d

Only search in column 'b' (dict -> dict):

    In [115]: df.replace({'b': '.'}, {'b': np.nan})
    Out[115]:
       a    b    c
    0  0    a    a
    1  1    b    b
    2  2  NaN  NaN
    3  3  NaN    d

Same as the previous example, but use a regular expression forsearching instead (dict of regex -> dict):

    In [116]: df.replace({'b': r'\s*\.\s*'}, {'b': np.nan}, regex=True)
    Out[116]:
       a    b    c
    0  0    a    a
    1  1    b    b
    2  2  NaN  NaN
    3  3  NaN    d

You can pass nested dictionaries of regular expressions that use regex=True:

    In [117]: df.replace({'b': {'b': r''}}, regex=True)
    Out[117]:
       a  b    c
    0  0  a    a
    1  1       b
    2  2  .  NaN
    3  3  .    d

Alternatively, you can pass the nested dictionary like so:

    In [118]: df.replace(regex={'b': {r'\s*\.\s*': np.nan}})
    Out[118]:
       a    b    c
    0  0    a    a
    1  1    b    b
    2  2  NaN  NaN
    3  3  NaN    d

You can also use the group of a regular expression match when replacing (dict of regex -> dict of regex); this works for lists as well.

    In [119]: df.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True)
    Out[119]:
       a    b    c
    0  0    a    a
    1  1    b    b
    2  2  .ty  NaN
    3  3  .ty    d

You can pass a list of regular expressions, of which those that match will be replaced with a scalar (list of regex -> regex).

    In [120]: df.replace([r'\s*\.\s*', r'a|b'], np.nan, regex=True)
    Out[120]:
       a    b    c
    0  0  NaN  NaN
    1  1  NaN  NaN
    2  2  NaN  NaN
    3  3  NaN    d

All of the regular expression examples can also be passed with the to_replace argument as the regex argument. In this case the value argument must be passed explicitly by name, or regex must be a nested dictionary. The previous example, in this case, would then be:

    In [121]: df.replace(regex=[r'\s*\.\s*', r'a|b'], value=np.nan)
    Out[121]:
       a    b    c
    0  0  NaN  NaN
    1  1  NaN  NaN
    2  2  NaN  NaN
    3  3  NaN    d

This can be convenient if you do not want to pass regex=True every time you want to use a regular expression.

Note

Anywhere in the above replace examples that you see a regular expression, a compiled regular expression is valid as well.
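For instance, the whitespace-stripping pattern above could be precompiled and passed in place of the string; a sketch (the compiled pattern stands in for the regex string, still with regex=True):

```python
import re
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': list(range(4)), 'b': list('ab..'),
                   'c': ['a', 'b', np.nan, 'd']})

# A precompiled pattern can stand in for the regex string.
pattern = re.compile(r'\s*\.\s*')
result = df.replace(pattern, np.nan, regex=True)
```
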

Numeric replacement

replace() is similar to fillna().

    In [122]: df = pd.DataFrame(np.random.randn(10, 2))

    In [123]: df[np.random.rand(df.shape[0]) > 0.5] = 1.5

    In [124]: df.replace(1.5, np.nan)
    Out[124]:
              0         1
    0 -0.844214 -1.021415
    1  0.432396 -0.323580
    2  0.423825  0.799180
    3  1.262614  0.751965
    4       NaN       NaN
    5       NaN       NaN
    6 -0.498174 -1.060799
    7  0.591667 -0.183257
    8  1.019855 -1.482465
    9       NaN       NaN

Replacing more than one value is possible by passing a list.

    In [125]: df00 = df.iloc[0, 0]

    In [126]: df.replace([1.5, df00], [np.nan, 'a'])
    Out[126]:
              0         1
    0         a  -1.02141
    1  0.432396  -0.32358
    2  0.423825   0.79918
    3   1.26261  0.751965
    4       NaN       NaN
    5       NaN       NaN
    6 -0.498174   -1.0608
    7  0.591667 -0.183257
    8   1.01985  -1.48247
    9       NaN       NaN

    In [127]: df[1].dtype
    Out[127]: dtype('float64')

You can also operate on the DataFrame in place:

    In [128]: df.replace(1.5, np.nan, inplace=True)

Warning

When replacing multiple bool or datetime64 objects, the first argument to replace (to_replace) must match the type of the value being replaced. For example,

    >>> s = pd.Series([True, False, True])
    >>> s.replace({'a string': 'new value', True: False})  # raises
    TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'

will raise a TypeError because one of the dict keys is not of the correct type for replacement.

However, when replacing a single object such as,

    In [129]: s = pd.Series([True, False, True])

    In [130]: s.replace('a string', 'another string')
    Out[130]:
    0     True
    1    False
    2     True
    dtype: bool

the original NDFrame object will be returned untouched. We’re working on unifying this API, but for backwards compatibility reasons we cannot break the latter behavior. See GH6354 for more details.

Missing data casting rules and indexing

While pandas supports storing arrays of integer and boolean type, these types are not capable of storing missing data. Until we can switch to using a native NA type in NumPy, we’ve established some “casting rules”. When a reindexing operation introduces missing data, the Series will be cast according to the rules introduced in the table below.

data type    Cast to
integer      float
boolean      object
float        no cast
object       no cast

For example:

    In [131]: s = pd.Series(np.random.randn(5), index=[0, 2, 4, 6, 7])

    In [132]: s > 0
    Out[132]:
    0    True
    2    True
    4    True
    6    True
    7    True
    dtype: bool

    In [133]: (s > 0).dtype
    Out[133]: dtype('bool')

    In [134]: crit = (s > 0).reindex(list(range(8)))

    In [135]: crit
    Out[135]:
    0    True
    1     NaN
    2    True
    3     NaN
    4    True
    5     NaN
    6    True
    7    True
    dtype: object

    In [136]: crit.dtype
    Out[136]: dtype('O')

Ordinarily NumPy will complain if you try to use an object array (even if it contains boolean values) instead of a boolean array to get or set values from an ndarray (e.g. selecting values based on some criteria). If a boolean vector contains NAs, an exception will be generated:

    In [137]: reindexed = s.reindex(list(range(8))).fillna(0)

    In [138]: reindexed[crit]
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-138-0dac417a4890> in <module>
    ----> 1 reindexed[crit]

    /pandas/pandas/core/series.py in __getitem__(self, key)
       1108             key = list(key)
       1109
    -> 1110         if com.is_bool_indexer(key):
       1111             key = check_bool_indexer(self.index, key)
       1112

    /pandas/pandas/core/common.py in is_bool_indexer(key)
        128         if not lib.is_bool_array(key):
        129             if isna(key).any():
    --> 130                 raise ValueError(na_msg)
        131             return False
        132         return True

    ValueError: cannot index with vector containing NA / NaN values

However, these can be filled in using fillna() and it will work fine:

    In [139]: reindexed[crit.fillna(False)]
    Out[139]:
    0    0.126504
    2    0.696198
    4    0.697416
    6    0.601516
    7    0.003659
    dtype: float64

    In [140]: reindexed[crit.fillna(True)]
    Out[140]:
    0    0.126504
    1    0.000000
    2    0.696198
    3    0.000000
    4    0.697416
    5    0.000000
    6    0.601516
    7    0.003659
    dtype: float64

pandas provides a nullable integer dtype, but you must explicitly request it when creating the series or column. Notice that we use a capital "I" in the dtype="Int64".

    In [141]: s = pd.Series([0, 1, np.nan, 3, 4], dtype="Int64")

    In [142]: s
    Out[142]:
    0      0
    1      1
    2    NaN
    3      3
    4      4
    dtype: Int64

See Nullable integer data type for more.