Data In/Out (dataio)

Performance comparison of SQL vs HDF5

CSV

The CSV docs

read_csv in action

appending to a csv
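
A minimal sketch of the append pattern (the file and column names are illustrative): write the header once, then append later frames with mode='a' and header=False.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(5, 2), columns=['A', 'B'])

    # the first write creates the file and the header row
    df.to_csv('log.csv', index=False)

    # later writes append rows without repeating the header
    df.to_csv('log.csv', mode='a', header=False, index=False)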

Reading a csv chunk-by-chunk

Reading only certain rows of a csv chunk-by-chunk

Reading the first few lines of a frame
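
A minimal sketch covering the three items above (the file name and filter condition are illustrative): chunksize turns read_csv into an iterator of frames, each chunk can be filtered before concatenation, and nrows reads just the first few lines.

    import pandas as pd

    # read the whole file in chunks of 10,000 rows
    df = pd.concat(pd.read_csv('data.csv', chunksize=10000), ignore_index=True)

    # keep only the rows of interest from each chunk (hypothetical condition)
    pieces = [chunk[chunk['value'] > 0]
              for chunk in pd.read_csv('data.csv', chunksize=10000)]
    filtered = pd.concat(pieces, ignore_index=True)

    # peek at the first few lines without loading the rest of the file
    head = pd.read_csv('data.csv', nrows=5)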

Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but the pattern is general: open the file within a context manager and use that handle to read. See here
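
A sketch of that context-manager pattern using the standard-library zipfile module (the archive and member names are illustrative):

    import zipfile

    import pandas as pd

    with zipfile.ZipFile('archive.zip') as zf:
        # open a handle on the archive member and pass it straight to read_csv
        with zf.open('data.csv') as fh:
            df = pd.read_csv(fh)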

Inferring dtypes from a file
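
One common approach, sketched here: let pandas infer dtypes from a sample of rows, then pass them explicitly when reading the full file (the sample size and file name are illustrative; date columns are better handled via parse_dates than via dtype).

    import pandas as pd

    # infer dtypes from the first 1,000 rows only
    sample = pd.read_csv('data.csv', nrows=1000)
    dtypes = sample.dtypes.to_dict()

    # read the full file with the inferred dtypes fixed up front
    df = pd.read_csv('data.csv', dtype=dtypes)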

Dealing with bad lines

Dealing with bad lines II
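
A minimal sketch of skipping malformed lines (the inline data is illustrative). On pandas >= 1.3 this is the on_bad_lines argument; older versions used error_bad_lines=False and warn_bad_lines=True.

    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"  # second data row has an extra field

    # drop the malformed line; 'warn' would report it instead
    df = pd.read_csv(StringIO(data), on_bad_lines='skip')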

Reading CSV with Unix timestamps and converting to local timezone
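
A sketch of the timestamp conversion (the values and target timezone are illustrative): parse seconds-since-epoch as UTC, then convert.

    import pandas as pd

    df = pd.DataFrame({'ts': [1349720105, 1349806505]})

    # interpret the integers as seconds since the epoch, in UTC,
    # then convert to a local timezone
    df['dt'] = pd.to_datetime(df['ts'], unit='s', utc=True).dt.tz_convert('US/Eastern')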

Write a multi-row index CSV without writing duplicates

Reading multiple files to create a single DataFrame

The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat():

    In [159]: for i in range(3):
       .....:     data = pd.DataFrame(np.random.randn(10, 4))
       .....:     data.to_csv('file_{}.csv'.format(i))
       .....:

    In [160]: files = ['file_0.csv', 'file_1.csv', 'file_2.csv']

    In [161]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

You can use the same approach to read all files matching a pattern. Here is an example using glob:

    In [162]: import glob

    In [163]: files = glob.glob('file_*.csv')

    In [164]: result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)

Finally, this strategy will work with the other pd.read_*(…) functions described in the io docs.

Parsing date components in multiple columns

Parsing date components spread across multiple columns is faster with a format string:

    In [30]: i = pd.date_range('20000101', periods=10000)

    In [31]: df = pd.DataFrame(dict(year=i.year, month=i.month, day=i.day))

    In [32]: df.head()
    Out[32]:
       day  month  year
    0    1      1  2000
    1    2      1  2000
    2    3      1  2000
    3    4      1  2000
    4    5      1  2000

    In [33]: %timeit pd.to_datetime(df.year * 10000 + df.month * 100 + df.day, format='%Y%m%d')
    100 loops, best of 3: 7.08 ms per loop

    # simulate combining into a string, then parsing
    In [34]: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'], x['month'], x['day']), axis=1)

    In [35]: ds.head()
    Out[35]:
    0    20000101
    1    20000102
    2    20000103
    3    20000104
    4    20000105
    dtype: object

    In [36]: %timeit pd.to_datetime(ds)
    1 loops, best of 3: 488 ms per loop

Skip row between header and data

    In [165]: data = """;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: ;;;;
       .....: date;Param1;Param2;Param4;Param5
       .....: ;m²;°C;m²;m
       .....: ;;;;
       .....: 01.01.1990 00:00;1;1;2;3
       .....: 01.01.1990 01:00;5;3;4;5
       .....: 01.01.1990 02:00;9;5;6;7
       .....: 01.01.1990 03:00;13;7;8;9
       .....: 01.01.1990 04:00;17;9;10;11
       .....: 01.01.1990 05:00;21;11;12;13
       .....: """
       .....:

Option 1: pass rows explicitly to skiprows

    In [166]: pd.read_csv(StringIO(data), sep=';', skiprows=[11, 12],
       .....:             index_col=0, parse_dates=True, header=10)
       .....:
    Out[166]:
                         Param1  Param2  Param4  Param5
    date
    1990-01-01 00:00:00       1       1       2       3
    1990-01-01 01:00:00       5       3       4       5
    1990-01-01 02:00:00       9       5       6       7
    1990-01-01 03:00:00      13       7       8       9
    1990-01-01 04:00:00      17       9      10      11
    1990-01-01 05:00:00      21      11      12      13

Option 2: read column names and then data

    In [167]: pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
    Out[167]: Index(['date', 'Param1', 'Param2', 'Param4', 'Param5'], dtype='object')

    In [168]: columns = pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns

    In [169]: pd.read_csv(StringIO(data), sep=';', index_col=0,
       .....:             header=12, parse_dates=True, names=columns)
       .....:
    Out[169]:
                         Param1  Param2  Param4  Param5
    date
    1990-01-01 00:00:00       1       1       2       3
    1990-01-01 01:00:00       5       3       4       5
    1990-01-01 02:00:00       9       5       6       7
    1990-01-01 03:00:00      13       7       8       9
    1990-01-01 04:00:00      17       9      10      11
    1990-01-01 05:00:00      21      11      12      13

SQL

The SQL docs

Reading from databases with SQL
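
A self-contained sketch with the standard-library sqlite3 driver (the table and query are illustrative):

    import sqlite3

    import pandas as pd

    con = sqlite3.connect(':memory:')
    con.execute('CREATE TABLE t (a INTEGER, b REAL)')
    con.execute('INSERT INTO t VALUES (1, 1.5), (2, 2.5)')

    # read a query result straight into a DataFrame
    df = pd.read_sql_query('SELECT * FROM t', con)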

Excel

The Excel docs

Reading from a filelike handle
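
A minimal sketch (the file name is illustrative): read_excel accepts any seekable binary handle, e.g. from open(), io.BytesIO, or a web response body.

    import pandas as pd

    with open('report.xlsx', 'rb') as fh:
        df = pd.read_excel(fh, sheet_name=0)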

Modifying formatting in XlsxWriter output

HTML

Reading HTML tables from a server that cannot handle the default request header
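
A sketch of the workaround (the URL and User-Agent string are illustrative): fetch the page yourself with the third-party requests library and a custom header, then hand the HTML text to read_html.

    import pandas as pd
    import requests

    url = 'http://example.com/tables.html'
    resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})

    # read_html returns a list of DataFrames, one per <table> in the page
    tables = pd.read_html(resp.text)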

HDFStore

The HDFStore docs

Simple Queries with a Timestamp Index
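
A minimal sketch (the key and dates are illustrative): store the frame in table format via append so it is queryable, then select with a where expression on the index.

    import numpy as np
    import pandas as pd

    df = pd.DataFrame(np.random.randn(100, 2), columns=['A', 'B'],
                      index=pd.date_range('2013-01-01', periods=100))

    with pd.HDFStore('test.h5') as store:
        store.append('df', df)  # table format, hence queryable
        subset = store.select('df', "index >= '2013-02-01' & index < '2013-03-01'")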

Managing heterogeneous data using a linked multiple table hierarchy

Merging on-disk tables with millions of rows

Avoiding inconsistencies when writing to a store from multiple processes/threads

De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from a csv file and creating a store by chunks, with date parsing as well. See here

Creating a store chunk-by-chunk from a csv file
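
A sketch of the chunked-load pattern (file names and chunk size are illustrative); see the later min_itemsize item if the file has string columns of varying width.

    import pandas as pd

    with pd.HDFStore('store.h5') as store:
        for chunk in pd.read_csv('data.csv', chunksize=100000):
            # each append grows the same on-disk table
            store.append('df', chunk, data_columns=True)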

Appending to a store, while creating a unique index

Large Data work flows

Reading in a sequence of files, then providing a global unique index to a store while appending

Groupby on a HDFStore with low group density

Groupby on a HDFStore with high group density
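
For aggregations that decompose over chunks (sum, count, min, max), one sketch, not taken from the linked recipes, is to compute partial results per chunk via select(chunksize=...) and then combine them (the key and column names are illustrative; this assumes 'df' was stored in table format):

    import pandas as pd

    partials = []
    with pd.HDFStore('store.h5') as store:
        for chunk in store.select('df', chunksize=50000):
            partials.append(chunk.groupby('key')['value'].sum())

    # combine the per-chunk sums into the final per-group sums
    result = pd.concat(partials).groupby(level=0).sum()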

Hierarchical queries on a HDFStore

Counting with a HDFStore

Troubleshoot HDFStore exceptions

Setting min_itemsize with strings
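
A minimal sketch (the column name and width are illustrative): reserve room up front so later appends with longer strings do not raise.

    import pandas as pd

    df = pd.DataFrame({'A': ['foo', 'bar']})

    with pd.HDFStore('test.h5') as store:
        # allow strings up to 30 characters in column A
        store.append('df', df, min_itemsize={'A': 30})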

Using ptrepack to create a completely-sorted-index on a store

Storing Attributes to a group node

    In [170]: df = pd.DataFrame(np.random.randn(8, 3))

    In [171]: store = pd.HDFStore('test.h5')

    In [172]: store.put('df', df)

    # you can store an arbitrary Python object via pickle
    In [173]: store.get_storer('df').attrs.my_attribute = dict(A=10)

    In [174]: store.get_storer('df').attrs.my_attribute
    Out[174]: {'A': 10}

Binary Files

pandas readily accepts NumPy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c compiled with gcc main.c -std=gnu99 on a 64-bit machine,

    #include <stdio.h>
    #include <stdint.h>

    typedef struct _Data
    {
        int32_t count;
        double avg;
        float scale;
    } Data;

    int main(int argc, const char *argv[])
    {
        size_t n = 10;
        Data d[n];

        for (int i = 0; i < n; ++i)
        {
            d[i].count = i;
            d[i].avg = i + 1.0;
            d[i].scale = (float) i + 2.0f;
        }

        FILE *file = fopen("binary.dat", "wb");
        fwrite(&d, sizeof(Data), n, file);
        fclose(file);

        return 0;
    }

the following Python code will read the binary file ‘binary.dat’ into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:

    names = 'count', 'avg', 'scale'

    # note that the offsets are larger than the size of the type because of
    # struct padding
    offsets = 0, 8, 16
    formats = 'i4', 'f8', 'f4'

    dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
                  align=True)

    df = pd.DataFrame(np.fromfile('binary.dat', dt))

Note: The offsets of the structure elements may differ depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross-platform. We recommend either HDF5 or msgpack, both of which are supported by pandas' IO facilities.