11.2 Time Series Basics

The most basic kind of time series object in pandas is a Series indexed by timestamps, which are often represented as Python strings or datetime objects:

  In [39]: from datetime import datetime

  In [40]: dates = [datetime(2011, 1, 2), datetime(2011, 1, 5),
     ....:          datetime(2011, 1, 7), datetime(2011, 1, 8),
     ....:          datetime(2011, 1, 10), datetime(2011, 1, 12)]

  In [41]: ts = pd.Series(np.random.randn(6), index=dates)

  In [42]: ts
  Out[42]:
  2011-01-02   -0.204708
  2011-01-05    0.478943
  2011-01-07   -0.519439
  2011-01-08   -0.555730
  2011-01-10    1.965781
  2011-01-12    1.393406
  dtype: float64

Under the hood, these datetime objects have been put in a DatetimeIndex:

  In [43]: ts.index
  Out[43]:
  DatetimeIndex(['2011-01-02', '2011-01-05', '2011-01-07', '2011-01-08',
                 '2011-01-10', '2011-01-12'],
                dtype='datetime64[ns]', freq=None)

Like other Series, arithmetic operations between differently indexed time series automatically align on the dates:

  In [44]: ts + ts[::2]
  Out[44]:
  2011-01-02   -0.409415
  2011-01-05         NaN
  2011-01-07   -1.038877
  2011-01-08         NaN
  2011-01-10    3.931561
  2011-01-12         NaN
  dtype: float64

Here, ts[::2] selects every second element of ts.

pandas stores timestamps using NumPy's datetime64 data type at nanosecond resolution:

  In [45]: ts.index.dtype
  Out[45]: dtype('<M8[ns]')
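
To see this relationship directly, the index's underlying values are just a NumPy array with dtype datetime64[ns] (a small illustrative sketch; the variable name arr is mine):

  import numpy as np

  arr = ts.index.values                          # numpy.ndarray of dtype datetime64[ns]
  print(arr.dtype)                               # datetime64[ns]
  print(np.datetime64('2011-01-02') == arr[0])   # True: same instant, different unit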

Scalar values from a DatetimeIndex are pandas Timestamp objects:

  In [46]: stamp = ts.index[0]

  In [47]: stamp
  Out[47]: Timestamp('2011-01-02 00:00:00')

A Timestamp can be converted to a datetime object whenever needed. Additionally, it can store frequency information (if any) and understands how to do time zone conversions and other kinds of manipulations. More on both of these later.
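
For example (a minimal sketch; the time zone names here are arbitrary illustrations, not part of the book's example), a Timestamp can be turned back into a plain datetime or localized and converted between time zones:

  stamp.to_pydatetime()                   # datetime.datetime(2011, 1, 2, 0, 0)
  stamp.tz_localize('UTC')                # attach a time zone to a naive Timestamp
  stamp.tz_localize('UTC').tz_convert('America/New_York')  # convert to another zone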

Indexing, Selection, Subsetting

A time series behaves like any other pandas.Series when you are indexing and selecting data based on label:

  In [48]: stamp = ts.index[2]

  In [49]: ts[stamp]
  Out[49]: -0.51943871505673811

As a convenience, you can also pass a string that is interpretable as a date:

  In [50]: ts['1/10/2011']
  Out[50]: 1.9657805725027142

  In [51]: ts['20110110']
  Out[51]: 1.9657805725027142

For longer time series, a year or only a year and month can be passed to easily select slices of data:

  In [52]: longer_ts = pd.Series(np.random.randn(1000),
     ....:                       index=pd.date_range('1/1/2000', periods=1000))

  In [53]: longer_ts
  Out[53]:
  2000-01-01    0.092908
  2000-01-02    0.281746
  2000-01-03    0.769023
  2000-01-04    1.246435
  2000-01-05    1.007189
  2000-01-06   -1.296221
  2000-01-07    0.274992
  2000-01-08    0.228913
  2000-01-09    1.352917
  2000-01-10    0.886429
                  ...
  2002-09-17   -0.139298
  2002-09-18   -1.159926
  2002-09-19    0.618965
  2002-09-20    1.373890
  2002-09-21   -0.983505
  2002-09-22    0.930944
  2002-09-23   -0.811676
  2002-09-24   -1.830156
  2002-09-25   -0.138730
  2002-09-26    0.334088
  Freq: D, Length: 1000, dtype: float64

  In [54]: longer_ts['2001']
  Out[54]:
  2001-01-01    1.599534
  2001-01-02    0.474071
  2001-01-03    0.151326
  2001-01-04   -0.542173
  2001-01-05   -0.475496
  2001-01-06    0.106403
  2001-01-07   -1.308228
  2001-01-08    2.173185
  2001-01-09    0.564561
  2001-01-10   -0.190481
                  ...
  2001-12-22    0.000369
  2001-12-23    0.900885
  2001-12-24   -0.454869
  2001-12-25   -0.864547
  2001-12-26    1.129120
  2001-12-27    0.057874
  2001-12-28   -0.433739
  2001-12-29    0.092698
  2001-12-30   -1.397820
  2001-12-31    1.457823
  Freq: D, Length: 365, dtype: float64

Here, the string '2001' is interpreted as a year and selects that time period. This also works if you specify the month:

  In [55]: longer_ts['2001-05']
  Out[55]:
  2001-05-01   -0.622547
  2001-05-02    0.936289
  2001-05-03    0.750018
  2001-05-04   -0.056715
  2001-05-05    2.300675
  2001-05-06    0.569497
  2001-05-07    1.489410
  2001-05-08    1.264250
  2001-05-09   -0.761837
  2001-05-10   -0.331617
                  ...
  2001-05-22    0.503699
  2001-05-23   -1.387874
  2001-05-24    0.204851
  2001-05-25    0.603705
  2001-05-26    0.545680
  2001-05-27    0.235477
  2001-05-28    0.111835
  2001-05-29   -1.251504
  2001-05-30   -2.949343
  2001-05-31    0.634634
  Freq: D, Length: 31, dtype: float64

Slicing with datetime objects works as well:

  In [56]: ts[datetime(2011, 1, 7):]
  Out[56]:
  2011-01-07   -0.519439
  2011-01-08   -0.555730
  2011-01-10    1.965781
  2011-01-12    1.393406
  dtype: float64

Because most time series data is ordered chronologically, you can slice with timestamps that are not contained in the time series to perform a range query:

  In [57]: ts
  Out[57]:
  2011-01-02   -0.204708
  2011-01-05    0.478943
  2011-01-07   -0.519439
  2011-01-08   -0.555730
  2011-01-10    1.965781
  2011-01-12    1.393406
  dtype: float64

  In [58]: ts['1/6/2011':'1/11/2011']
  Out[58]:
  2011-01-07   -0.519439
  2011-01-08   -0.555730
  2011-01-10    1.965781
  dtype: float64

As before, you can pass either a string date, datetime, or Timestamp. Remember that slicing in this manner produces a view on the source time series, just like slicing NumPy arrays. This means that no data is copied, and modifications on the slice will be reflected in the original data.
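
One hedged way to check this view behavior is to compare the underlying buffers (a small sketch; note that with copy-on-write enabled in newer pandas versions, writes through a slice may no longer propagate back to the original):

  import numpy as np

  sliced = ts['1/6/2011':'1/11/2011']
  # Typically True: the slice shares its data buffer with the original Series
  print(np.shares_memory(sliced.values, ts.values))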

There is also an equivalent instance method, truncate, that slices a Series between two dates:

  In [59]: ts.truncate(after='1/9/2011')
  Out[59]:
  2011-01-02   -0.204708
  2011-01-05    0.478943
  2011-01-07   -0.519439
  2011-01-08   -0.555730
  dtype: float64
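
truncate also accepts a before argument, and the two can be combined to keep only the observations between two dates (a quick sketch using the same ts):

  ts.truncate(before='1/5/2011', after='1/9/2011')   # keeps 2011-01-05 through 2011-01-08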

All of the above holds true for DataFrame as well, indexing on its rows:

  In [60]: dates = pd.date_range('1/1/2000', periods=100, freq='W-WED')

  In [61]: long_df = pd.DataFrame(np.random.randn(100, 4),
     ....:                        index=dates,
     ....:                        columns=['Colorado', 'Texas',
     ....:                                 'New York', 'Ohio'])

  In [62]: long_df.loc['5-2001']
  Out[62]:
              Colorado     Texas  New York      Ohio
  2001-05-02 -0.006045  0.490094 -0.277186 -0.707213
  2001-05-09 -0.560107  2.735527  0.927335  1.513906
  2001-05-16  0.538600  1.273768  0.667876 -0.969206
  2001-05-23  1.676091 -0.817649  0.050188  1.951312
  2001-05-30  3.260383  0.963301  1.201206 -1.852001
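
The same partial-string indexing can be combined with column selection through loc (a brief sketch using the long_df created above; the column subset is an arbitrary choice):

  # May 2001 rows, restricted to two of the columns
  long_df.loc['2001-05', ['Texas', 'Ohio']]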

Time Series with Duplicate Indices

In some applications, there may be multiple data observations falling on a particular timestamp. Here is an example:

  In [63]: dates = pd.DatetimeIndex(['1/1/2000', '1/2/2000', '1/2/2000',
     ....:                           '1/2/2000', '1/3/2000'])

  In [64]: dup_ts = pd.Series(np.arange(5), index=dates)

  In [65]: dup_ts
  Out[65]:
  2000-01-01    0
  2000-01-02    1
  2000-01-02    2
  2000-01-02    3
  2000-01-03    4
  dtype: int64

We can tell that the index is not unique by checking its is_unique property:

  In [66]: dup_ts.index.is_unique
  Out[66]: False

Indexing into this time series will now either produce scalar values or slices, depending on whether the selected timestamp is duplicated:

  In [67]: dup_ts['1/3/2000']  # not duplicated
  Out[67]: 4

  In [68]: dup_ts['1/2/2000']  # duplicated
  Out[68]:
  2000-01-02    1
  2000-01-02    2
  2000-01-02    3
  dtype: int64

Suppose you wanted to aggregate the data having non-unique timestamps. One way to do this is to use groupby and pass level=0:

  In [69]: grouped = dup_ts.groupby(level=0)

  In [70]: grouped.mean()
  Out[70]:
  2000-01-01    0
  2000-01-02    2
  2000-01-03    4
  dtype: int64

  In [71]: grouped.count()
  Out[71]:
  2000-01-01    1
  2000-01-02    3
  2000-01-03    1
  dtype: int64
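
If, rather than aggregating, you only want to drop the repeated observations, one hedged alternative (not part of the book's example) is to keep the first value at each timestamp using Index.duplicated:

  # Keep only the first observation for each duplicated timestamp
  deduped = dup_ts[~dup_ts.index.duplicated(keep='first')]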