You can run this notebook in a live session on Binder or view it on GitHub.

Bag: Parallel Lists for semi-structured data

Dask-bag excels in processing data that can be represented as a sequence of arbitrary inputs. We’ll refer to this as “messy” data, because it can contain complex nested structures, missing fields, mixtures of data types, etc. The functional programming style fits very nicely with standard Python iteration, such as can be found in the itertools module.

Messy data is often encountered at the beginning of data processing pipelines when large volumes of raw data are first consumed. The initial set of data might be JSON, CSV, XML, or any other format that does not enforce strict structure and datatypes. For this reason, the initial data massaging and processing is often done with Python lists, dicts, and sets.

These core data structures are optimized for general-purpose storage and processing. Adding streaming computation with iterators/generator expressions or libraries like itertools or toolz (https://toolz.readthedocs.io/en/latest/) lets us process large volumes of data in a small memory footprint. If we combine this with parallel processing, then we can churn through a fair amount of data.
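
For instance, a generator expression can stream over a million elements while holding only one at a time. Here is a minimal pure-Python sketch of that streaming style (standard library only; the numbers are made up for illustration):

    from itertools import count, islice

    # lazily produce the first million integers, keep only the even ones,
    # and sum their squares -- nothing is ever materialized as a list
    evens = (n for n in islice(count(), 1_000_000) if n % 2 == 0)
    total = sum(n ** 2 for n in evens)
    print(total)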

Dask.bag is a high-level Dask collection that automates common workloads of this form. In a nutshell:

    dask.bag = map, filter, toolz + parallel execution

Related Documentation

Create data

[1]:
%run prep.py -d accounts

Setup

Again, we’ll use the distributed scheduler. Schedulers will be explained in depth later.

[2]:
from dask.distributed import Client

client = Client(n_workers=4)

Creation

You can create a Bag from a Python sequence, from files, from data on S3, etc. We demonstrate using .take() to show elements of the data. (Doing .take(1) results in a tuple with one element.)

Note that the data are partitioned into blocks, and there are many items per block. In the first example, the two partitions contain five elements each, and in the following two, each file is partitioned into one or more blocks of bytes.
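
If you want to see the partitioning for yourself, a Bag reports its partition count via .npartitions, and .map_partitions() applies a function to each whole block. A quick inspection sketch (not part of the original flow, using only public Bag methods):

    import dask.bag as db

    b = db.from_sequence(range(10), npartitions=2)
    print(b.npartitions)                    # 2
    print(b.map_partitions(len).compute())  # items per block, here [5, 5]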

[3]:
# each element is an integer
import dask.bag as db
b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], npartitions=2)
b.take(3)
[3]:
(1, 2, 3)
[4]:
# each element is a text file, where each line is a JSON object
# note that the compression is handled automatically
import os
b = db.read_text(os.path.join('data', 'accounts.*.json.gz'))
b.take(1)
[4]:
('{"id": 0, "name": "Oliver", "transactions": [{"transaction-id": 233, "amount": 137}, {"transaction-id": 459, "amount": 73}, {"transaction-id": 2030, "amount": 112}, {"transaction-id": 2769, "amount": 89}, {"transaction-id": 3027, "amount": 59}, {"transaction-id": 4647, "amount": 40}, {"transaction-id": 4672, "amount": 76}, {"transaction-id": 4850, "amount": 112}, {"transaction-id": 5376, "amount": 109}, {"transaction-id": 5473, "amount": 70}, {"transaction-id": 5677, "amount": 91}, {"transaction-id": 5783, "amount": 82}, {"transaction-id": 5986, "amount": 65}, {"transaction-id": 6583, "amount": 121}, {"transaction-id": 6657, "amount": 61}, {"transaction-id": 6797, "amount": 110}, {"transaction-id": 7660, "amount": 112}, {"transaction-id": 8530, "amount": 128}, {"transaction-id": 8547, "amount": 131}, {"transaction-id": 8657, "amount": 85}, {"transaction-id": 8723, "amount": 38}, {"transaction-id": 8779, "amount": 46}, {"transaction-id": 8955, "amount": 122}, {"transaction-id": 9086, "amount": 78}, {"transaction-id": 9194, "amount": 80}, {"transaction-id": 9859, "amount": 114}, {"transaction-id": 9977, "amount": 95}]}\n',)
[5]:
# Requires `s3fs` library
# each partition is a remote CSV text file
b = db.read_text('s3://dask-data/nyc-taxi/2015/yellow_tripdata_2015-01.csv',
                 storage_options={'anon': True})
b.take(1)
[5]:
('VendorID,tpep_pickup_datetime,tpep_dropoff_datetime,passenger_count,trip_distance,pickup_longitude,pickup_latitude,RateCodeID,store_and_fwd_flag,dropoff_longitude,dropoff_latitude,payment_type,fare_amount,extra,mta_tax,tip_amount,tolls_amount,improvement_surcharge,total_amount\n',)

Manipulation

Bag objects hold the standard functional API found in projects like the Python standard library, toolz, or pyspark, including map, filter, groupby, etc.

Operations on Bag objects create new bags. Call the .compute() method to trigger execution, as we saw for Delayed objects.

[6]:
def is_even(n):
    return n % 2 == 0

b = db.from_sequence([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
c = b.filter(is_even).map(lambda x: x ** 2)
c
[6]:
dask.bag<lambda, npartitions=10>
[7]:
# blocking form: wait for completion (which is very fast in this case)
c.compute()
[7]:
[4, 16, 36, 64, 100]

Example: Accounts JSON data

We’ve created a fake dataset of gzipped JSON data in your data directory. This is like the dataset used in the DataFrame example we will see later, except that it bundles all of the entries for each individual id into a single record. This is similar to data that you might collect from a document-store database or from a web API.

Each line is a JSON-encoded dictionary with the following keys:

  • id: Unique identifier of the customer

  • name: Name of the customer

  • transactions: List of transaction-id, amount pairs, one for each transaction for the customer in that file

[8]:
filename = os.path.join('data', 'accounts.*.json.gz')
lines = db.read_text(filename)
lines.take(3)
[8]:
('{"id": 0, "name": "Oliver", "transactions": [{"transaction-id": 233, "amount": 137}, {"transaction-id": 459, "amount": 73}, {"transaction-id": 2030, "amount": 112}, {"transaction-id": 2769, "amount": 89}, {"transaction-id": 3027, "amount": 59}, {"transaction-id": 4647, "amount": 40}, {"transaction-id": 4672, "amount": 76}, {"transaction-id": 4850, "amount": 112}, {"transaction-id": 5376, "amount": 109}, {"transaction-id": 5473, "amount": 70}, {"transaction-id": 5677, "amount": 91}, {"transaction-id": 5783, "amount": 82}, {"transaction-id": 5986, "amount": 65}, {"transaction-id": 6583, "amount": 121}, {"transaction-id": 6657, "amount": 61}, {"transaction-id": 6797, "amount": 110}, {"transaction-id": 7660, "amount": 112}, {"transaction-id": 8530, "amount": 128}, {"transaction-id": 8547, "amount": 131}, {"transaction-id": 8657, "amount": 85}, {"transaction-id": 8723, "amount": 38}, {"transaction-id": 8779, "amount": 46}, {"transaction-id": 8955, "amount": 122}, {"transaction-id": 9086, "amount": 78}, {"transaction-id": 9194, "amount": 80}, {"transaction-id": 9859, "amount": 114}, {"transaction-id": 9977, "amount": 95}]}\n',
 '{"id": 1, "name": "Oliver", "transactions": [{"transaction-id": 168, "amount": 156}, {"transaction-id": 448, "amount": 162}, {"transaction-id": 637, "amount": 153}, {"transaction-id": 1068, "amount": 167}, {"transaction-id": 1257, "amount": 146}, {"transaction-id": 1464, "amount": 160}, {"transaction-id": 1534, "amount": 161}, {"transaction-id": 1651, "amount": 166}, {"transaction-id": 2333, "amount": 172}, {"transaction-id": 3121, "amount": 166}, {"transaction-id": 3360, "amount": 170}, {"transaction-id": 3541, "amount": 155}, {"transaction-id": 3770, "amount": 175}, {"transaction-id": 3818, "amount": 157}, {"transaction-id": 4803, "amount": 174}, {"transaction-id": 5174, "amount": 147}, {"transaction-id": 5580, "amount": 158}, {"transaction-id": 6474, "amount": 164}, {"transaction-id": 6577, "amount": 156}, {"transaction-id": 6952, "amount": 159}, {"transaction-id": 7157, "amount": 151}, {"transaction-id": 7254, "amount": 163}, {"transaction-id": 7340, "amount": 153}, {"transaction-id": 7570, "amount": 166}, {"transaction-id": 7723, "amount": 155}, {"transaction-id": 7875, "amount": 159}, {"transaction-id": 7923, "amount": 164}, {"transaction-id": 8448, "amount": 163}, {"transaction-id": 8789, "amount": 163}, {"transaction-id": 8997, "amount": 159}, {"transaction-id": 9004, "amount": 159}, {"transaction-id": 9605, "amount": 176}, {"transaction-id": 9697, "amount": 168}, {"transaction-id": 9842, "amount": 161}, {"transaction-id": 9994, "amount": 155}]}\n',
 '{"id": 3, "name": "Patricia", "transactions": [{"transaction-id": 640, "amount": 401}, {"transaction-id": 776, "amount": 387}, {"transaction-id": 1092, "amount": 370}, {"transaction-id": 1160, "amount": 376}, {"transaction-id": 1335, "amount": 404}, {"transaction-id": 1533, "amount": 403}, {"transaction-id": 1573, "amount": 398}, {"transaction-id": 1749, "amount": 381}, {"transaction-id": 2134, "amount": 417}, {"transaction-id": 2439, "amount": 410}, {"transaction-id": 3653, "amount": 402}, {"transaction-id": 4392, "amount": 407}, {"transaction-id": 4785, "amount": 411}, {"transaction-id": 5720, "amount": 376}, {"transaction-id": 5870, "amount": 392}, {"transaction-id": 6007, "amount": 392}, {"transaction-id": 6180, "amount": 392}, {"transaction-id": 6253, "amount": 420}, {"transaction-id": 6521, "amount": 382}, {"transaction-id": 6769, "amount": 382}, {"transaction-id": 7037, "amount": 363}, {"transaction-id": 7076, "amount": 382}, {"transaction-id": 7370, "amount": 405}, {"transaction-id": 7437, "amount": 378}, {"transaction-id": 7514, "amount": 411}, {"transaction-id": 7554, "amount": 385}, {"transaction-id": 7952, "amount": 391}, {"transaction-id": 8416, "amount": 380}, {"transaction-id": 8425, "amount": 374}, {"transaction-id": 8488, "amount": 396}, {"transaction-id": 8828, "amount": 378}, {"transaction-id": 8947, "amount": 374}, {"transaction-id": 9119, "amount": 378}, {"transaction-id": 9856, "amount": 375}, {"transaction-id": 9982, "amount": 405}]}\n')

Our data comes out of the file as lines of text. Notice that file decompression happened automatically. We can make this data look more reasonable by mapping the json.loads function onto our bag.

[9]:
import json
js = lines.map(json.loads)
# take: inspect first few elements
js.take(3)
[9]:
({'id': 0,
  'name': 'Oliver',
  'transactions': [{'transaction-id': 233, 'amount': 137},
   {'transaction-id': 459, 'amount': 73},
   {'transaction-id': 2030, 'amount': 112},
   {'transaction-id': 2769, 'amount': 89},
   {'transaction-id': 3027, 'amount': 59},
   {'transaction-id': 4647, 'amount': 40},
   {'transaction-id': 4672, 'amount': 76},
   {'transaction-id': 4850, 'amount': 112},
   {'transaction-id': 5376, 'amount': 109},
   {'transaction-id': 5473, 'amount': 70},
   {'transaction-id': 5677, 'amount': 91},
   {'transaction-id': 5783, 'amount': 82},
   {'transaction-id': 5986, 'amount': 65},
   {'transaction-id': 6583, 'amount': 121},
   {'transaction-id': 6657, 'amount': 61},
   {'transaction-id': 6797, 'amount': 110},
   {'transaction-id': 7660, 'amount': 112},
   {'transaction-id': 8530, 'amount': 128},
   {'transaction-id': 8547, 'amount': 131},
   {'transaction-id': 8657, 'amount': 85},
   {'transaction-id': 8723, 'amount': 38},
   {'transaction-id': 8779, 'amount': 46},
   {'transaction-id': 8955, 'amount': 122},
   {'transaction-id': 9086, 'amount': 78},
   {'transaction-id': 9194, 'amount': 80},
   {'transaction-id': 9859, 'amount': 114},
   {'transaction-id': 9977, 'amount': 95}]},
 {'id': 1,
  'name': 'Oliver',
  'transactions': [{'transaction-id': 168, 'amount': 156},
   {'transaction-id': 448, 'amount': 162},
   {'transaction-id': 637, 'amount': 153},
   {'transaction-id': 1068, 'amount': 167},
   {'transaction-id': 1257, 'amount': 146},
   {'transaction-id': 1464, 'amount': 160},
   {'transaction-id': 1534, 'amount': 161},
   {'transaction-id': 1651, 'amount': 166},
   {'transaction-id': 2333, 'amount': 172},
   {'transaction-id': 3121, 'amount': 166},
   {'transaction-id': 3360, 'amount': 170},
   {'transaction-id': 3541, 'amount': 155},
   {'transaction-id': 3770, 'amount': 175},
   {'transaction-id': 3818, 'amount': 157},
   {'transaction-id': 4803, 'amount': 174},
   {'transaction-id': 5174, 'amount': 147},
   {'transaction-id': 5580, 'amount': 158},
   {'transaction-id': 6474, 'amount': 164},
   {'transaction-id': 6577, 'amount': 156},
   {'transaction-id': 6952, 'amount': 159},
   {'transaction-id': 7157, 'amount': 151},
   {'transaction-id': 7254, 'amount': 163},
   {'transaction-id': 7340, 'amount': 153},
   {'transaction-id': 7570, 'amount': 166},
   {'transaction-id': 7723, 'amount': 155},
   {'transaction-id': 7875, 'amount': 159},
   {'transaction-id': 7923, 'amount': 164},
   {'transaction-id': 8448, 'amount': 163},
   {'transaction-id': 8789, 'amount': 163},
   {'transaction-id': 8997, 'amount': 159},
   {'transaction-id': 9004, 'amount': 159},
   {'transaction-id': 9605, 'amount': 176},
   {'transaction-id': 9697, 'amount': 168},
   {'transaction-id': 9842, 'amount': 161},
   {'transaction-id': 9994, 'amount': 155}]},
 {'id': 3,
  'name': 'Patricia',
  'transactions': [{'transaction-id': 640, 'amount': 401},
   {'transaction-id': 776, 'amount': 387},
   {'transaction-id': 1092, 'amount': 370},
   {'transaction-id': 1160, 'amount': 376},
   {'transaction-id': 1335, 'amount': 404},
   {'transaction-id': 1533, 'amount': 403},
   {'transaction-id': 1573, 'amount': 398},
   {'transaction-id': 1749, 'amount': 381},
   {'transaction-id': 2134, 'amount': 417},
   {'transaction-id': 2439, 'amount': 410},
   {'transaction-id': 3653, 'amount': 402},
   {'transaction-id': 4392, 'amount': 407},
   {'transaction-id': 4785, 'amount': 411},
   {'transaction-id': 5720, 'amount': 376},
   {'transaction-id': 5870, 'amount': 392},
   {'transaction-id': 6007, 'amount': 392},
   {'transaction-id': 6180, 'amount': 392},
   {'transaction-id': 6253, 'amount': 420},
   {'transaction-id': 6521, 'amount': 382},
   {'transaction-id': 6769, 'amount': 382},
   {'transaction-id': 7037, 'amount': 363},
   {'transaction-id': 7076, 'amount': 382},
   {'transaction-id': 7370, 'amount': 405},
   {'transaction-id': 7437, 'amount': 378},
   {'transaction-id': 7514, 'amount': 411},
   {'transaction-id': 7554, 'amount': 385},
   {'transaction-id': 7952, 'amount': 391},
   {'transaction-id': 8416, 'amount': 380},
   {'transaction-id': 8425, 'amount': 374},
   {'transaction-id': 8488, 'amount': 396},
   {'transaction-id': 8828, 'amount': 378},
   {'transaction-id': 8947, 'amount': 374},
   {'transaction-id': 9119, 'amount': 378},
   {'transaction-id': 9856, 'amount': 375},
   {'transaction-id': 9982, 'amount': 405}]})

Basic Queries

Once we parse our JSON data into proper Python objects (dicts, lists, etc.) we can perform more interesting queries by creating small Python functions to run on our data.

[10]:
# filter: keep only some elements of the sequence
js.filter(lambda record: record['name'] == 'Alice').take(5)
[10]:
({'id': 5,
  'name': 'Alice',
  'transactions': [{'transaction-id': 1535, 'amount': 335},
   {'transaction-id': 1792, 'amount': 350},
   {'transaction-id': 2554, 'amount': 367},
   {'transaction-id': 2560, 'amount': 347},
   {'transaction-id': 3063, 'amount': 340},
   {'transaction-id': 3445, 'amount': 362},
   {'transaction-id': 3467, 'amount': 352},
   {'transaction-id': 4260, 'amount': 324},
   {'transaction-id': 4334, 'amount': 328},
   {'transaction-id': 4654, 'amount': 318},
   {'transaction-id': 5900, 'amount': 347},
   {'transaction-id': 6856, 'amount': 333},
   {'transaction-id': 7445, 'amount': 308},
   {'transaction-id': 7744, 'amount': 356},
   {'transaction-id': 8117, 'amount': 353},
   {'transaction-id': 8587, 'amount': 343}]},
 {'id': 23,
  'name': 'Alice',
  'transactions': [{'transaction-id': 11, 'amount': 2316},
   {'transaction-id': 228, 'amount': 2522},
   {'transaction-id': 319, 'amount': 2204},
   {'transaction-id': 577, 'amount': 2558},
   {'transaction-id': 737, 'amount': 2772},
   {'transaction-id': 797, 'amount': 2526},
   {'transaction-id': 989, 'amount': 2294},
   {'transaction-id': 1214, 'amount': 2653},
   {'transaction-id': 1365, 'amount': 2266},
   {'transaction-id': 1435, 'amount': 2245},
   {'transaction-id': 1452, 'amount': 2535},
   {'transaction-id': 1553, 'amount': 2496},
   {'transaction-id': 1776, 'amount': 2684},
   {'transaction-id': 2027, 'amount': 2650},
   {'transaction-id': 2167, 'amount': 2590},
   {'transaction-id': 2404, 'amount': 2562},
   {'transaction-id': 2414, 'amount': 2497},
   {'transaction-id': 2591, 'amount': 2704},
   {'transaction-id': 2686, 'amount': 2688},
   {'transaction-id': 2781, 'amount': 2421},
   {'transaction-id': 2813, 'amount': 2562},
   {'transaction-id': 2865, 'amount': 2705},
   {'transaction-id': 2879, 'amount': 2540},
   {'transaction-id': 3139, 'amount': 2586},
   {'transaction-id': 3188, 'amount': 2663},
   {'transaction-id': 3366, 'amount': 2258},
   {'transaction-id': 3476, 'amount': 2371},
   {'transaction-id': 3618, 'amount': 2438},
   {'transaction-id': 3676, 'amount': 2610},
   {'transaction-id': 3741, 'amount': 2272},
   {'transaction-id': 3936, 'amount': 2432},
   {'transaction-id': 4231, 'amount': 2292},
   {'transaction-id': 4737, 'amount': 2699},
   {'transaction-id': 4922, 'amount': 2470},
   {'transaction-id': 4959, 'amount': 2338},
   {'transaction-id': 5190, 'amount': 2400},
   {'transaction-id': 5465, 'amount': 2511},
   {'transaction-id': 5471, 'amount': 2439},
   {'transaction-id': 5971, 'amount': 2520},
   {'transaction-id': 6072, 'amount': 2441},
   {'transaction-id': 6152, 'amount': 2604},
   {'transaction-id': 6154, 'amount': 2347},
   {'transaction-id': 6200, 'amount': 2224},
   {'transaction-id': 6314, 'amount': 2553},
   {'transaction-id': 6579, 'amount': 2474},
   {'transaction-id': 7142, 'amount': 2269},
   {'transaction-id': 7231, 'amount': 2446},
   {'transaction-id': 7281, 'amount': 2509},
   {'transaction-id': 7379, 'amount': 2657},
   {'transaction-id': 7776, 'amount': 2424},
   {'transaction-id': 7922, 'amount': 2577},
   {'transaction-id': 8319, 'amount': 2524},
   {'transaction-id': 8666, 'amount': 2489},
   {'transaction-id': 8873, 'amount': 2374},
   {'transaction-id': 9167, 'amount': 2301},
   {'transaction-id': 9197, 'amount': 2244},
   {'transaction-id': 9429, 'amount': 2249},
   {'transaction-id': 9695, 'amount': 2410}]},
 {'id': 33,
  'name': 'Alice',
  'transactions': [{'transaction-id': 1, 'amount': -311},
   {'transaction-id': 262, 'amount': -300},
   {'transaction-id': 766, 'amount': -400},
   {'transaction-id': 1607, 'amount': -355},
   {'transaction-id': 4148, 'amount': -254},
   {'transaction-id': 4507, 'amount': -302},
   {'transaction-id': 5692, 'amount': -250},
   {'transaction-id': 5861, 'amount': -294},
   {'transaction-id': 6351, 'amount': -355},
   {'transaction-id': 8661, 'amount': -374},
   {'transaction-id': 9132, 'amount': -384},
   {'transaction-id': 9321, 'amount': -337}]},
 {'id': 34,
  'name': 'Alice',
  'transactions': [{'transaction-id': 224, 'amount': 1011},
   {'transaction-id': 1059, 'amount': 929},
   {'transaction-id': 1995, 'amount': 963},
   {'transaction-id': 3191, 'amount': 1002},
   {'transaction-id': 3864, 'amount': 948},
   {'transaction-id': 4924, 'amount': 961},
   {'transaction-id': 5529, 'amount': 887},
   {'transaction-id': 6230, 'amount': 935},
   {'transaction-id': 6914, 'amount': 968},
   {'transaction-id': 7716, 'amount': 963},
   {'transaction-id': 7810, 'amount': 942},
   {'transaction-id': 7995, 'amount': 980},
   {'transaction-id': 8068, 'amount': 912},
   {'transaction-id': 8508, 'amount': 894},
   {'transaction-id': 8717, 'amount': 942},
   {'transaction-id': 9718, 'amount': 875},
   {'transaction-id': 9787, 'amount': 967}]},
 {'id': 38,
  'name': 'Alice',
  'transactions': [{'transaction-id': 404, 'amount': 3592},
   {'transaction-id': 747, 'amount': 3113},
   {'transaction-id': 993, 'amount': 3502},
   {'transaction-id': 1125, 'amount': 3689},
   {'transaction-id': 1226, 'amount': 4087},
   {'transaction-id': 2259, 'amount': 3561},
   {'transaction-id': 2406, 'amount': 3650},
   {'transaction-id': 2616, 'amount': 3985},
   {'transaction-id': 2998, 'amount': 2988},
   {'transaction-id': 3277, 'amount': 3626},
   {'transaction-id': 3516, 'amount': 4007},
   {'transaction-id': 3775, 'amount': 4236},
   {'transaction-id': 4167, 'amount': 4045},
   {'transaction-id': 4947, 'amount': 3702},
   {'transaction-id': 5037, 'amount': 3229},
   {'transaction-id': 5287, 'amount': 4199},
   {'transaction-id': 5323, 'amount': 3867},
   {'transaction-id': 5563, 'amount': 3809},
   {'transaction-id': 6087, 'amount': 3604},
   {'transaction-id': 6292, 'amount': 3463},
   {'transaction-id': 6489, 'amount': 3914},
   {'transaction-id': 6807, 'amount': 3272},
   {'transaction-id': 7731, 'amount': 3517},
   {'transaction-id': 7996, 'amount': 3396},
   {'transaction-id': 8695, 'amount': 3611},
   {'transaction-id': 8927, 'amount': 3710}]})
[11]:
def count_transactions(d):
    return {'name': d['name'], 'count': len(d['transactions'])}

# map: apply a function to each element
(js.filter(lambda record: record['name'] == 'Alice')
   .map(count_transactions)
   .take(5))
[11]:
({'name': 'Alice', 'count': 16},
 {'name': 'Alice', 'count': 58},
 {'name': 'Alice', 'count': 12},
 {'name': 'Alice', 'count': 17},
 {'name': 'Alice', 'count': 26})
[12]:
# pluck: select a field, as from a dictionary, element[field]
(js.filter(lambda record: record['name'] == 'Alice')
   .map(count_transactions)
   .pluck('count')
   .take(5))
[12]:
(16, 58, 12, 17, 26)
[13]:
# Average number of transactions for all of the Alice entries
(js.filter(lambda record: record['name'] == 'Alice')
   .map(count_transactions)
   .pluck('count')
   .mean()
   .compute())
[13]:
56.9440353460972

Use flatten to de-nest

In the example below we see the use of .flatten() to flatten results. We compute the average amount for all transactions for all Alices.

[14]:
js.filter(lambda record: record['name'] == 'Alice').pluck('transactions').take(3)
[14]:
([{'transaction-id': 1535, 'amount': 335},
  {'transaction-id': 1792, 'amount': 350},
  {'transaction-id': 2554, 'amount': 367},
  {'transaction-id': 2560, 'amount': 347},
  {'transaction-id': 3063, 'amount': 340},
  {'transaction-id': 3445, 'amount': 362},
  {'transaction-id': 3467, 'amount': 352},
  {'transaction-id': 4260, 'amount': 324},
  {'transaction-id': 4334, 'amount': 328},
  {'transaction-id': 4654, 'amount': 318},
  {'transaction-id': 5900, 'amount': 347},
  {'transaction-id': 6856, 'amount': 333},
  {'transaction-id': 7445, 'amount': 308},
  {'transaction-id': 7744, 'amount': 356},
  {'transaction-id': 8117, 'amount': 353},
  {'transaction-id': 8587, 'amount': 343}],
 [{'transaction-id': 11, 'amount': 2316},
  {'transaction-id': 228, 'amount': 2522},
  {'transaction-id': 319, 'amount': 2204},
  {'transaction-id': 577, 'amount': 2558},
  {'transaction-id': 737, 'amount': 2772},
  {'transaction-id': 797, 'amount': 2526},
  {'transaction-id': 989, 'amount': 2294},
  {'transaction-id': 1214, 'amount': 2653},
  {'transaction-id': 1365, 'amount': 2266},
  {'transaction-id': 1435, 'amount': 2245},
  {'transaction-id': 1452, 'amount': 2535},
  {'transaction-id': 1553, 'amount': 2496},
  {'transaction-id': 1776, 'amount': 2684},
  {'transaction-id': 2027, 'amount': 2650},
  {'transaction-id': 2167, 'amount': 2590},
  {'transaction-id': 2404, 'amount': 2562},
  {'transaction-id': 2414, 'amount': 2497},
  {'transaction-id': 2591, 'amount': 2704},
  {'transaction-id': 2686, 'amount': 2688},
  {'transaction-id': 2781, 'amount': 2421},
  {'transaction-id': 2813, 'amount': 2562},
  {'transaction-id': 2865, 'amount': 2705},
  {'transaction-id': 2879, 'amount': 2540},
  {'transaction-id': 3139, 'amount': 2586},
  {'transaction-id': 3188, 'amount': 2663},
  {'transaction-id': 3366, 'amount': 2258},
  {'transaction-id': 3476, 'amount': 2371},
  {'transaction-id': 3618, 'amount': 2438},
  {'transaction-id': 3676, 'amount': 2610},
  {'transaction-id': 3741, 'amount': 2272},
  {'transaction-id': 3936, 'amount': 2432},
  {'transaction-id': 4231, 'amount': 2292},
  {'transaction-id': 4737, 'amount': 2699},
  {'transaction-id': 4922, 'amount': 2470},
  {'transaction-id': 4959, 'amount': 2338},
  {'transaction-id': 5190, 'amount': 2400},
  {'transaction-id': 5465, 'amount': 2511},
  {'transaction-id': 5471, 'amount': 2439},
  {'transaction-id': 5971, 'amount': 2520},
  {'transaction-id': 6072, 'amount': 2441},
  {'transaction-id': 6152, 'amount': 2604},
  {'transaction-id': 6154, 'amount': 2347},
  {'transaction-id': 6200, 'amount': 2224},
  {'transaction-id': 6314, 'amount': 2553},
  {'transaction-id': 6579, 'amount': 2474},
  {'transaction-id': 7142, 'amount': 2269},
  {'transaction-id': 7231, 'amount': 2446},
  {'transaction-id': 7281, 'amount': 2509},
  {'transaction-id': 7379, 'amount': 2657},
  {'transaction-id': 7776, 'amount': 2424},
  {'transaction-id': 7922, 'amount': 2577},
  {'transaction-id': 8319, 'amount': 2524},
  {'transaction-id': 8666, 'amount': 2489},
  {'transaction-id': 8873, 'amount': 2374},
  {'transaction-id': 9167, 'amount': 2301},
  {'transaction-id': 9197, 'amount': 2244},
  {'transaction-id': 9429, 'amount': 2249},
  {'transaction-id': 9695, 'amount': 2410}],
 [{'transaction-id': 1, 'amount': -311},
  {'transaction-id': 262, 'amount': -300},
  {'transaction-id': 766, 'amount': -400},
  {'transaction-id': 1607, 'amount': -355},
  {'transaction-id': 4148, 'amount': -254},
  {'transaction-id': 4507, 'amount': -302},
  {'transaction-id': 5692, 'amount': -250},
  {'transaction-id': 5861, 'amount': -294},
  {'transaction-id': 6351, 'amount': -355},
  {'transaction-id': 8661, 'amount': -374},
  {'transaction-id': 9132, 'amount': -384},
  {'transaction-id': 9321, 'amount': -337}])
[15]:
(js.filter(lambda record: record['name'] == 'Alice')
   .pluck('transactions')
   .flatten()
   .take(3))
[15]:
({'transaction-id': 1535, 'amount': 335},
 {'transaction-id': 1792, 'amount': 350},
 {'transaction-id': 2554, 'amount': 367})
[16]:
(js.filter(lambda record: record['name'] == 'Alice')
   .pluck('transactions')
   .flatten()
   .pluck('amount')
   .take(3))
[16]:
(335, 350, 367)
[17]:
(js.filter(lambda record: record['name'] == 'Alice')
   .pluck('transactions')
   .flatten()
   .pluck('amount')
   .mean()
   .compute())
[17]:
366.1473166946851
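
Conceptually, .flatten() removes exactly one level of nesting, much as itertools.chain.from_iterable does for plain Python iterables. A tiny local analogy (plain Python, for intuition only, not Dask code):

    from itertools import chain

    nested = [[1, 2], [3], [4, 5, 6]]
    print(list(chain.from_iterable(nested)))  # [1, 2, 3, 4, 5, 6]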

Groupby and Foldby

Often we want to group data by some function or key. We can do this either with the .groupby method, which is straightforward but forces a full shuffle of the data (expensive), or with the harder-to-use but faster .foldby method, which does a streaming combined groupby and reduction.

  • groupby: Shuffles data so that all items with the same key are in the same key-value pair

  • foldby: Walks through the data accumulating a result per key

Note: the full groupby is particularly bad. In actual workloads you would do well to use foldby or switch to DataFrames if possible.

groupby

Groupby collects items in your collection so that all items with the same value under some function are collected together into a key-value pair.

[18]:
b = db.from_sequence(['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'])
b.groupby(len).compute()  # names grouped by length
[18]:
[(7, ['Charlie']), (3, ['Bob', 'Dan']), (5, ['Alice', 'Edith', 'Frank'])]
[19]:
b = db.from_sequence(list(range(10)))
b.groupby(lambda x: x % 2).compute()
[19]:
[(0, [0, 2, 4, 6, 8]), (1, [1, 3, 5, 7, 9])]
[20]:
b.groupby(lambda x: x % 2).starmap(lambda k, v: (k, max(v))).compute()
[20]:
[(0, 8), (1, 9)]

foldby

Foldby can be quite odd at first. It is similar to the following functions from other libraries:

• toolz.reduceby (https://toolz.readthedocs.io/en/latest/streaming-analytics.html#streaming-split-apply-combine)

• pyspark.RDD.combineByKey (https://abshinn.github.io/python/apache-spark/2014/10/11/using-combinebykey-in-apache-spark/)

When using foldby you provide:

  • A key function on which to group elements

  • A binary operator, such as you would pass to reduce, that performs the reduction within each group

  • A combine binary operator that can combine the results of two reduce calls on different parts of your dataset.

Your reduction must be associative. It will happen in parallel in each of the partitions of your dataset. Then all of these intermediate results will be combined by the combine binary operator.
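
To build intuition for how binop and combine cooperate, here is a small pure-Python sketch of the same two-stage pattern, reducing within two hand-made "partitions" and then combining across them, with max playing both roles (an illustration of the mechanism, not Dask code):

    # two pretend partitions of our data; the grouping key is parity
    part1, part2 = [1, 2, 3, 4], [5, 6, 7, 8]

    def reduce_partition(part):
        groups = {}
        for x in part:
            k = x % 2
            # binop: fold each new element into the group's running result
            groups[k] = x if k not in groups else max(groups[k], x)
        return groups

    g1, g2 = reduce_partition(part1), reduce_partition(part2)

    # combine: merge the per-partition results, again with max
    final = {k: max(v for v in (g1.get(k), g2.get(k)) if v is not None)
             for k in g1.keys() | g2.keys()}
    print(final)  # e.g. {0: 8, 1: 7}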

[21]:
is_even = lambda x: x % 2  # note: despite the name, this returns 1 for odd numbers; it serves as the grouping key
b.foldby(is_even, binop=max, combine=max).compute()
[21]:
[(0, 8), (1, 9)]

Example with account data

We find the number of people with the same name.

[22]:
%%time
# Warning, this one takes a while...
result = js.groupby(lambda item: item['name']).starmap(lambda k, v: (k, len(v))).compute()
print(sorted(result))
[('Alice', 679), ('Bob', 400), ('Charlie', 299), ('Dan', 400), ('Edith', 250), ('Frank', 450), ('George', 347), ('Hannah', 344), ('Ingrid', 494), ('Jerry', 485), ('Kevin', 448), ('Laura', 600), ('Michael', 734), ('Norbert', 585), ('Oliver', 633), ('Patricia', 363), ('Quinn', 150), ('Ray', 310), ('Sarah', 500), ('Tim', 559), ('Ursula', 300), ('Victor', 598), ('Wendy', 268), ('Xavier', 1190), ('Yvonne', 395), ('Zelda', 432)]
CPU times: user 4.06 s, sys: 406 ms, total: 4.47 s
Wall time: 1min 12s
[23]:
%%time
# This one is comparatively fast and produces the same result.
from operator import add

def incr(tot, _):
    return tot + 1

result = js.foldby(key='name',
                   binop=incr,
                   initial=0,
                   combine=add,
                   combine_initial=0).compute()
print(sorted(result))
[('Alice', 679), ('Bob', 400), ('Charlie', 299), ('Dan', 400), ('Edith', 250), ('Frank', 450), ('George', 347), ('Hannah', 344), ('Ingrid', 494), ('Jerry', 485), ('Kevin', 448), ('Laura', 600), ('Michael', 734), ('Norbert', 585), ('Oliver', 633), ('Patricia', 363), ('Quinn', 150), ('Ray', 310), ('Sarah', 500), ('Tim', 559), ('Ursula', 300), ('Victor', 598), ('Wendy', 268), ('Xavier', 1190), ('Yvonne', 395), ('Zelda', 432)]
CPU times: user 164 ms, sys: 0 ns, total: 164 ms
Wall time: 596 ms

Exercise: compute total amount per name

We want to groupby (or foldby) the name key, then add up all of the amounts for each name.

Steps

  • Create a small function that, given a dictionary like

    {'name': 'Alice', 'transactions': [{'amount': 1, 'id': 123}, {'amount': 2, 'id': 456}]}

    produces the sum of the amounts, e.g. 3

  • Slightly change the binary operator of the foldby example above so that it accumulates the sum of the amounts instead of counting the number of entries.
[24]:
# Your code here...
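
If you get stuck, one possible solution is sketched below (try the exercise yourself first). It reuses the foldby pattern from cell [23], swapping the counting binop for one that adds each record's transaction total:

    from operator import add

    def sum_amount(d):
        # total of the amounts in one record's transaction list
        return sum(t['amount'] for t in d['transactions'])

    result = js.foldby(key='name',
                       binop=lambda tot, d: tot + sum_amount(d),
                       initial=0,
                       combine=add,
                       combine_initial=0).compute()
    print(sorted(result))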

DataFrames

For the same reasons that Pandas is often faster than pure Python, dask.dataframe can be faster than dask.bag. We will work more with DataFrames later, but from the bag’s point of view, they are frequently the end-point of the “messy” part of data ingestion—once the data can be made into a dataframe, complex split-apply-combine logic becomes much more straightforward and efficient.

You can transform a bag with a simple tuple or flat dictionary structure into a dask.dataframe with the to_dataframe method.

[25]:
df1 = js.to_dataframe()
df1.head()
[25]:
   id      name                                       transactions
0   0    Oliver  [{'transaction-id': 233, 'amount': 137}, {'tra…
1   1    Oliver  [{'transaction-id': 168, 'amount': 156}, {'tra…
2   3  Patricia  [{'transaction-id': 640, 'amount': 401}, {'tra…
3   4     Laura  [{'transaction-id': 255, 'amount': -864}, {'tr…
4   5     Alice  [{'transaction-id': 1535, 'amount': 335}, {'tr…

This now looks like a well-defined DataFrame, and we can apply Pandas-like computations to it efficiently.

Using a Dask DataFrame, how long does it take to do our prior computation of the number of people with the same name? It turns out that dask.dataframe.groupby() beats dask.bag.groupby() by more than an order of magnitude; but it still cannot match dask.bag.foldby() for this case.

[26]:
%time df1.groupby('name').id.count().compute().head()
CPU times: user 235 ms, sys: 19.5 ms, total: 255 ms
Wall time: 2.17 s
[26]:
name
Alice      679
Bob        400
Charlie    299
Dan        400
Edith      250
Name: id, dtype: int64

Denormalization

This DataFrame format is less than optimal because the transactions column is filled with nested data, so Pandas has to revert to object dtype, which is quite slow. Ideally we want to transform to a dataframe only after we have flattened our data so that each record is a single int, string, float, etc.

[27]:
def denormalize(record):
    # returns a list for every nested item, each transaction of each person
    return [{'id': record['id'],
             'name': record['name'],
             'amount': transaction['amount'],
             'transaction-id': transaction['transaction-id']}
            for transaction in record['transactions']]

transactions = js.map(denormalize).flatten()
transactions.take(3)
[27]:
({'id': 0, 'name': 'Oliver', 'amount': 137, 'transaction-id': 233},
 {'id': 0, 'name': 'Oliver', 'amount': 73, 'transaction-id': 459},
 {'id': 0, 'name': 'Oliver', 'amount': 112, 'transaction-id': 2030})
[28]:
df = transactions.to_dataframe()
df.head()
[28]:
   id    name  amount  transaction-id
0   0  Oliver     137             233
1   0  Oliver      73             459
2   0  Oliver     112            2030
3   0  Oliver      89            2769
4   0  Oliver      59            3027
[29]:
%%time
# number of transactions per name
# note that the time here includes the data load and ingestion
df.groupby('name')['transaction-id'].count().compute()
CPU times: user 231 ms, sys: 14.2 ms, total: 245 ms
Wall time: 1.45 s
[29]:
name
Alice       38665
Bob         15852
Charlie     17154
Dan         14775
Edith       10741
Frank       33950
George      11536
Hannah       8527
Ingrid      14609
Jerry       13311
Kevin       14312
Laura       13519
Michael     27488
Norbert     14884
Oliver      18405
Patricia    13752
Quinn        9456
Ray         10627
Sarah       31099
Tim         26600
Ursula      10539
Victor      33491
Wendy       12364
Xavier      50616
Yvonne      16473
Zelda       17255
Name: transaction-id, dtype: int64

Limitations

Bags provide very general computation (any Python function). This generality comes at a cost. Bags have the following known limitations:

  • Bag operations tend to be slower than array/dataframe computations in the same way that Python tends to be slower than NumPy/Pandas

  • Bag.groupby is slow. You should try to use Bag.foldby if possible. Using Bag.foldby requires more thought. Even better, consider creating a normalised dataframe.

Learn More

Shutdown

[30]:
client.shutdown()