API

Client

Client([address, loop, timeout, …]) - Connect to and submit computation to a Dask cluster
Client.call_stack(self[, futures, keys]) - The actively running call stack of all relevant keys
Client.cancel(self, futures[, asynchronous, …]) - Cancel running futures
Client.close(self[, timeout]) - Close this client
Client.compute(self, collections[, sync, …]) - Compute dask collections on cluster
Client.gather(self, futures[, errors, …]) - Gather futures from distributed memory
Client.get(self, dsk, keys[, restrictions, …]) - Compute dask graph
Client.get_dataset(self, name, **kwargs) - Get named dataset from the scheduler
Client.get_executor(self, **kwargs) - Return a concurrent.futures Executor for submitting tasks on this Client
Client.get_metadata(self, keys[, default]) - Get arbitrary metadata from scheduler
Client.get_scheduler_logs(self[, n]) - Get logs from scheduler
Client.get_worker_logs(self[, n, workers, nanny]) - Get logs from workers
Client.get_task_stream(self[, start, stop, …]) - Get task stream data from scheduler
Client.has_what(self[, workers]) - Which keys are held by which workers
Client.list_datasets(self, **kwargs) - List named datasets available on the scheduler
Client.map(self, func, *iterables[, key, …]) - Map a function on a sequence of arguments
Client.nthreads(self[, workers]) - The number of threads/cores available on each worker node
Client.persist(self, collections[, …]) - Persist dask collections on cluster
Client.publish_dataset(self, *args, **kwargs) - Publish named datasets to scheduler
Client.profile(self[, key, start, stop, …]) - Collect statistical profiling information about recent work
Client.rebalance(self[, futures, workers]) - Rebalance data within network
Client.replicate(self, futures[, n, …]) - Set replication of futures within network
Client.restart(self, **kwargs) - Restart the distributed network
Client.retry(self, futures[, asynchronous]) - Retry failed futures
Client.run(self, function, *args, **kwargs) - Run a function on all workers outside of task scheduling system
Client.run_on_scheduler(self, function, …) - Run a function on the scheduler process
Client.scatter(self, data[, workers, …]) - Scatter data into distributed memory
Client.scheduler_info(self, **kwargs) - Basic information about the workers in the cluster
Client.write_scheduler_file(self, scheduler_file) - Write the scheduler information to a json file
Client.set_metadata(self, key, value) - Set arbitrary metadata in the scheduler
Client.start_ipython_workers(self[, …]) - Start IPython kernels on workers
Client.start_ipython_scheduler(self[, …]) - Start IPython kernel on the scheduler
Client.submit(self, func, *args[, key, …]) - Submit a function application to the scheduler
Client.unpublish_dataset(self, name, **kwargs) - Remove named datasets from scheduler
Client.upload_file(self, filename, **kwargs) - Upload local package to workers
Client.who_has(self[, futures]) - The workers storing each future's data
worker_client([timeout, separate_thread]) - Get client for this thread
get_worker() - Get the worker currently running this task
get_client([address, timeout, resolve_address]) - Get a client while within a task
secede() - Have this task secede from the worker's thread pool
rejoin() - Have this thread rejoin the ThreadPoolExecutor
Reschedule - Reschedule this task
ReplayExceptionClient.get_futures_error(…) - Ask the scheduler details of the sub-task of the given failed future
ReplayExceptionClient.recreate_error_locally(…) - For a failed calculation, perform the blamed task locally for debugging

Future

Future(key[, client, inform, state]) - A remotely running computation
Future.add_done_callback(self, fn) - Call callback on future when the future has finished
Future.cancel(self, **kwargs) - Cancel request to run this future
Future.cancelled(self) - Returns True if the future has been cancelled
Future.done(self) - Is the computation complete?
Future.exception(self[, timeout]) - Return the exception of a failed task
Future.result(self[, timeout]) - Wait until computation completes, gather result to local process
Future.retry(self, **kwargs) - Retry this future if it has failed
Future.traceback(self[, timeout]) - Return the traceback of a failed task

Client Coordination

Lock([name, client]) - Distributed Centralized Lock
Queue([name, client, maxsize]) - Distributed Queue
Variable([name, client, maxsize]) - Distributed Global Variable

Other

as_completed([futures, loop, with_results, …]) - Return futures in the order in which they complete
distributed.diagnostics.progress
wait(fs[, timeout, return_when]) - Wait until all/any futures are finished
fire_and_forget(obj) - Run tasks at least once, even if we release the futures
futures_of(o[, client]) - Future objects in a collection
get_task_stream([client, plot, filename]) - Collect task stream within a context block

Asynchronous methods

Most methods and functions can be used equally well within a blocking or asynchronous environment using Tornado coroutines. If used within a Tornado IOLoop then you should yield or await otherwise blocking operations appropriately.

You must tell the client that you intend to use it within an asynchronous environment by passing the asynchronous=True keyword

# blocking
client = Client()
future = client.submit(func, *args)  # immediate, no blocking/async difference
result = client.gather(future)  # blocking

# asynchronous Python 2/3
client = yield Client(asynchronous=True)
future = client.submit(func, *args)  # immediate, no blocking/async difference
result = yield client.gather(future)  # non-blocking/asynchronous

# asynchronous Python 3
client = await Client(asynchronous=True)
future = client.submit(func, *args)  # immediate, no blocking/async difference
result = await client.gather(future)  # non-blocking/asynchronous

The asynchronous variants must be run within a Tornado coroutine. See the Asynchronous documentation for more information.
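As a hedged sketch of the await-based pattern above (assuming a reachable scheduler; the main function and the lambda are illustrative, not part of the API):

# Hedged sketch: drive the asynchronous client from a coroutine.
from dask.distributed import Client

async def main():
    client = await Client(asynchronous=True)    # connect without blocking
    future = client.submit(lambda x: x + 1, 10)
    result = await client.gather(future)        # non-blocking gather
    await client.close()
    return result

# run main() from the already running IOLoop / event loop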

Client

  • class distributed.Client(address=None, loop=None, timeout='no_default', set_as_default=True, scheduler_file=None, security=None, asynchronous=False, name=None, heartbeat_interval=None, serializers=None, deserializers=None, extensions=[], direct_to_workers=None, **kwargs)[source]
  • Connect to and submit computation to a Dask cluster

The Client connects users to a Dask cluster. It provides an asynchronous user interface around functions and futures. This class resembles executors in concurrent.futures but also allows Future objects within submit/map calls. When a Client is instantiated it takes over all dask.compute and dask.persist calls by default.

It is also common to create a Client without specifying the scheduler address, like Client(). In this case the Client creates a LocalCluster in the background and connects to that. Any extra keywords are passed from Client to LocalCluster in this case. See the LocalCluster documentation for more information.

Parameters:

  • address: string, or Cluster
    This can be the address of a Scheduler server like a string '127.0.0.1:8786' or a cluster object like LocalCluster()

  • timeout: int
    Timeout duration for initial connection to the scheduler

  • set_as_default: bool (True)
    Claim this scheduler as the global dask scheduler

  • scheduler_file: string (optional)
    Path to a file with scheduler information if available

  • security: Security or bool, optional
    Optional security information. If creating a local cluster you can also pass in True, in which case temporary self-signed credentials will be created automatically.

  • asynchronous: bool (False by default)
    Set to True if using this client within async/await functions or within Tornado gen.coroutines. Otherwise this should remain False for normal use.

  • name: string (optional)
    Gives the client a name that will be included in logs generated on the scheduler for matters relating to this client

  • direct_to_workers: bool (optional)
    Whether or not to connect directly to the workers, or to ask the scheduler to serve as intermediary.

  • heartbeat_interval: int
    Time in milliseconds between heartbeats to scheduler

  • **kwargs:
    If you do not pass a scheduler address, Client will create a LocalCluster object, passing any extra keyword arguments.

See also

Examples

Provide cluster’s scheduler node address on initialization:

  1. >>> client = Client('127.0.0.1:8786') # doctest: +SKIP

Use submit method to send individual computations to the cluster

  1. >>> a = client.submit(add, 1, 2) # doctest: +SKIP
  2. >>> b = client.submit(add, 10, 20) # doctest: +SKIP

Continue using submit or map on results to build up larger computations

  1. >>> c = client.submit(add, a, b) # doctest: +SKIP

Gather results with the gather method.

  1. >>> client.gather(c) # doctest: +SKIP
  2. 33

You can also call Client with no arguments in order to create your own local cluster.

  1. >>> client = Client() # makes your own local "cluster" # doctest: +SKIP

Extra keywords will be passed directly to LocalCluster

  1. >>> client = Client(processes=False, threads_per_worker=1) # doctest: +SKIP
  • asynchronous
  • Are we running in the event loop?

This is true if the user signaled that we might be when creating the client as in the following:

client = Client(asynchronous=True)

However, we override this expectation if we can definitively tell that we are running from a thread that is not the event loop. This is common when calling get_client() from within a worker task. Even though the client was originally created in asynchronous mode we may find ourselves in contexts when it is better to operate synchronously.

  • call_stack(self, futures=None, keys=None)[source]
  • The actively running call stack of all relevant keys

You can specify data of interest either by providing futures or collections in the futures= keyword or a list of explicit keys in the keys= keyword. If neither are provided then all call stacks will be returned.

Parameters:

  • futures: list (optional)
    List of futures, defaults to all data

  • keys: list (optional)
    List of key names, defaults to all data

Examples

  1. >>> df = dd.read_parquet(...).persist() # doctest: +SKIP
  2. >>> client.call_stack(df) # call on collections
  1. >>> client.call_stack() # Or call with no arguments for all activity # doctest: +SKIP
  • cancel(self, futures, asynchronous=None, force=False)[source]
  • Cancel running futures

This stops future tasks from being scheduled if they have not yet run and deletes them if they have already run. After calling, this result and all dependent results will no longer be accessible

Parameters:

  • futures: list of Futures

  • force: boolean (False)
    Cancel this future even if other clients desire it
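
Examples

A hedged usage sketch (the inc function is an illustrative assumption):

>>> futures = client.map(inc, range(10))  # doctest: +SKIP
>>> client.cancel(futures)                # doctest: +SKIP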

  • close(self, timeout='no_default')[source]
  • Close this client

Clients will also close automatically when your Python session ends

If you started a client without arguments like Client() then this will also close the local cluster that was started at the same time.

See also

  1. - [<code>Client.restart</code>](#distributed.Client.restart)
  2. -
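
The client can also be used as a context manager so that it is closed automatically; a brief sketch (context-manager support is assumed here, not stated in the docstring above):

>>> with Client(processes=False) as client:  # doctest: +SKIP
...     future = client.submit(sum, [1, 2, 3])
...     future.result()
6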
  • compute(self, collections, sync=False, optimize_graph=True, workers=None, allow_other_workers=False, resources=None, retries=0, priority=0, fifo_timeout='60s', actors=None, traverse=True, **kwargs)[source]
  • Compute dask collections on cluster

Parameters:

  • collections: iterable of dask objects or single dask object
    Collections like dask.array or dataframe or dask.value objects

  • sync: bool (optional)
    Returns Futures if False (default) or concrete values if True

  • optimize_graph: bool
    Whether or not to optimize the underlying graphs

  • workers: str, list, dict
    Which workers can run which parts of the computation. If a string or list then the output collections will run on the listed workers, but other sub-computations can run anywhere. If a dict then keys should be (tuples of) collections and values should be addresses or lists.

  • allow_other_workers: bool, list
    If True then all restrictions in workers= are considered loose. If a list then only the keys for the listed collections are loose.

  • retries: int (default to 0)
    Number of allowed automatic retries if computing a result fails

  • priority: Number
    Optional prioritization of task. Zero is default. Higher priorities take precedence

  • fifo_timeout: timedelta str (defaults to '60s')
    Allowed amount of time between calls to consider the same priority

  • traverse: bool (defaults to True)
    By default dask traverses builtin python collections looking for dask objects passed to compute. For large collections this can be expensive. If none of the arguments contain any dask objects, set traverse=False to avoid doing this traversal.

  • resources: dict (defaults to {})
    Defines the resources these tasks require on the worker. Can specify global resources ({'GPU': 2}), or per-task resources ({'x': {'GPU': 1}, 'y': {'SSD': 4}}), but not both. See worker resources for details on defining resources.

  • actors: bool or dict (default None)
    Whether these tasks should exist on the worker as stateful actors. Specified on a global (True/False) or per-task ({'x': True, 'y': False}) basis. See Actors for additional details.

  • **kwargs:
    Options to pass to the graph optimize calls

Returns:

  • List of Futures if input is a sequence, or a single future otherwise

See also

  1. - [<code>Client.get</code>](#distributed.Client.get)
  2. - Normal synchronous dask.get function

Examples

  1. >>> from dask import delayed
  2. >>> from operator import add
  3. >>> x = delayed(add)(1, 2)
  4. >>> y = delayed(add)(x, x)
  5. >>> xx, yy = client.compute([x, y]) # doctest: +SKIP
  6. >>> xx # doctest: +SKIP
  7. <Future: status: finished, key: add-8f6e709446674bad78ea8aeecfee188e>
  8. >>> xx.result() # doctest: +SKIP
  9. 3
  10. >>> yy.result() # doctest: +SKIP
  11. 6

Also support single arguments

  1. >>> xx = client.compute(x) # doctest: +SKIP
  • classmethod current()[source]
  • Return global client if one exists, otherwise raise ValueError

  • gather(self, futures, errors='raise', direct=None, asynchronous=None)[source]

  • Gather futures from distributed memory

Accepts a future, nested container of futures, iterator, or queue. The return type will match the input type.

Parameters:

  • futures: Collection of futures
    This can be a possibly nested collection of Future objects. Collections can be lists, sets, or dictionaries

  • errors: string
    Either 'raise' or 'skip' if we should raise if a future has erred or skip its inclusion in the output collection

  • direct: boolean
    Whether or not to connect directly to the workers, or to ask the scheduler to serve as intermediary. This can also be set when creating the Client.

Returns:

  • results: a collection of the same type as the input, but now with gathered results rather than futures

See also

  1. - [<code>Client.scatter</code>](#distributed.Client.scatter)
  2. - Send data out to cluster

Examples

  1. >>> from operator import add # doctest: +SKIP
  2. >>> c = Client('127.0.0.1:8787') # doctest: +SKIP
  3. >>> x = c.submit(add, 1, 2) # doctest: +SKIP
  4. >>> c.gather(x) # doctest: +SKIP
  5. 3
  6. >>> c.gather([x, [x], x]) # support lists and dicts # doctest: +SKIP
  7. [3, [3], 3]
  • get(self, dsk, keys, restrictions=None, loose_restrictions=None, resources=None, sync=True, asynchronous=None, direct=None, retries=None, priority=0, fifo_timeout='60s', actors=None, **kwargs)[source]
  • Compute dask graph

Parameters:

  • dsk: dict

  • keys: object, or nested lists of objects

  • restrictions: dict (optional)
    A mapping of {key: {set of worker hostnames}} that restricts where jobs can take place

  • retries: int (default to 0)
    Number of allowed automatic retries if computing a result fails

  • priority: Number
    Optional prioritization of task. Zero is default. Higher priorities take precedence

  • sync: bool (optional)
    Returns Futures if False or concrete values if True (default).

  • direct: bool
    Whether or not to connect directly to the workers, or to ask the scheduler to serve as intermediary. This can also be set when creating the Client.

See also

  1. - [<code>Client.compute</code>](#distributed.Client.compute)
  2. - Compute asynchronous collections

Examples

  1. >>> from operator import add # doctest: +SKIP
  2. >>> c = Client('127.0.0.1:8787') # doctest: +SKIP
  3. >>> c.get({'x': (add, 1, 2)}, 'x') # doctest: +SKIP
  4. 3
  • get_dataset(self, name, **kwargs)[source]
  • Get named dataset from the scheduler

See also

  1. - [<code>Client.publish_dataset</code>](#distributed.Client.publish_dataset)
  2. -
  3. - [<code>Client.list_datasets</code>](#distributed.Client.list_datasets)
  4. -
  • get_executor(self, **kwargs)[source]
  • Return a concurrent.futures Executor for submitting tasks on this Client

Parameters:

  • **kwargs:
    Any submit()- or map()-compatible arguments, such as workers or resources.

Returns:

  • An Executor object that is fully compatible with the concurrent.futures API.
  • get_metadata(self, keys, default='no_default')[source]
  • Get arbitrary metadata from scheduler

See set_metadata for the full docstring with examples

Parameters:

  • keys: key or list
    Key to access. If a list then gets within a nested collection

  • default: optional
    If the key does not exist then return this value instead. If not provided then this raises a KeyError if the key is not present

See also

  1. - [<code>Client.set_metadata</code>](#distributed.Client.set_metadata)
  2. -
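
Examples

A hedged sketch (the metadata keys here are illustrative assumptions):

>>> client.set_metadata('project', 'alpha')        # doctest: +SKIP
>>> client.get_metadata('project')                 # doctest: +SKIP
'alpha'
>>> client.get_metadata('missing', default=None)   # doctest: +SKIP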
  • classmethod get_restrictions(collections, workers, allow_other_workers)[source]
  • Get restrictions from inputs to compute/persist

  • get_scheduler_logs(self, n=None)[source]

  • Get logs from scheduler

Parameters:

  • n: int
    Number of logs to retrieve. Maxes out at 10000 by default, configurable in config.yaml::log-length

Returns:

  • Logs in reversed order (newest first)
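
Examples

A minimal usage sketch:

>>> logs = client.get_scheduler_logs(n=5)  # doctest: +SKIP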
  • get_task_stream(self, start=None, stop=None, count=None, plot=False, filename='task-stream.html')[source]
  • Get task stream data from scheduler

This collects the data present in the diagnostic “Task Stream” plot on the dashboard. It includes the start, stop, transfer, and deserialization time of every task for a particular duration.

Note that the task stream diagnostic does not run by default. You may wish to call this function once before you start work to ensure that things start recording, and then again after you have completed.

Parameters:

  • start: Number or string
    When you want to start recording. If a number it should be the result of calling time(). If a string then it should be a time difference before now, like '60s' or '500 ms'

  • stop: Number or string
    When you want to stop recording

  • count: int
    The number of desired records, ignored if both start and stop are specified

  • plot: boolean, str
    If true then also return a Bokeh figure. If plot == 'save' then save the figure to a file

  • filename: str (optional)
    The filename to save to if you set plot='save'

Returns:

  • L: List[Dict]

See also

  1. - [<code>get_task_stream</code>](#distributed.get_task_stream)
  2. - a context manager version of this method

Examples

  1. >>> client.get_task_stream() # prime plugin if not already connected
  2. >>> x.compute() # do some work
  3. >>> client.get_task_stream()
  4. [{'task': ...,
  5. 'type': ...,
  6. 'thread': ...,
  7. ...}]

Pass the plot=True or plot='save' keywords to get back a Bokeh figure

  1. >>> data, figure = client.get_task_stream(plot='save', filename='myfile.html')

Alternatively consider the context manager

  1. >>> from dask.distributed import get_task_stream
  2. >>> with get_task_stream() as ts:
  3. ... x.compute()
  4. >>> ts.data
  5. [...]
  • get_versions(self, check=False, packages=[])[source]
  • Return version info for the scheduler, all workers and myself

Parameters:

  • check: boolean, default False
    raise ValueError if all required & optional packages do not match

  • packages: List[str]
    Extra package names to check

Examples

  1. >>> c.get_versions() # doctest: +SKIP
  1. >>> c.get_versions(packages=['sklearn', 'geopandas']) # doctest: +SKIP
  • get_worker_logs(self, n=None, workers=None, nanny=False)[source]
  • Get logs from workers

Parameters:

  • n: int
    Number of logs to retrieve. Maxes out at 10000 by default, configurable in config.yaml::log-length

  • workers: iterable
    List of worker addresses to retrieve. Gets all workers by default.

  • nanny: bool, default False
    Whether to get the logs from the workers (False) or the nannies (True). If specified, the addresses in workers should still be the worker addresses, not the nanny addresses.

Returns:

  • Dictionary mapping worker address to logs. Logs are returned in reversed order (newest first)
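
Examples

A minimal usage sketch:

>>> logs = client.get_worker_logs(n=5)               # doctest: +SKIP
>>> nanny_logs = client.get_worker_logs(nanny=True)  # doctest: +SKIP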
  • has_what(self, workers=None, **kwargs)[source]
  • Which keys are held by which workers

This returns the keys of the data that are held in each worker’s memory.

Parameters:

  1. - **workers: list (optional)**
  2. -

A list of worker addresses, defaults to all

See also

  1. - [<code>Client.who_has</code>](#distributed.Client.who_has)
  2. -
  3. - [<code>Client.nthreads</code>](#distributed.Client.nthreads)
  4. -
  5. - [<code>Client.processing</code>](#distributed.Client.processing)
  6. -

Examples

  1. >>> x, y, z = c.map(inc, [1, 2, 3]) # doctest: +SKIP
  2. >>> wait([x, y, z]) # doctest: +SKIP
  3. >>> c.has_what() # doctest: +SKIP
  4. {'192.168.1.141:46784': ['inc-1c8dd6be1c21646c71f76c16d09304ea',
  5. 'inc-fd65c238a7ea60f6a01bf4c8a5fcf44b',
  6. 'inc-1e297fc27658d7b67b3a758f16bcf47a']}
  • list_datasets(self, **kwargs)[source]
  • List named datasets available on the scheduler

See also

  1. - [<code>Client.publish_dataset</code>](#distributed.Client.publish_dataset)
  2. -
  3. - [<code>Client.get_dataset</code>](#distributed.Client.get_dataset)
  4. -
  • map(self, func, *iterables, key=None, workers=None, retries=None, resources=None, priority=0, allow_other_workers=False, fifo_timeout='100 ms', actor=False, actors=False, pure=None, **kwargs)[source]
  • Map a function on a sequence of arguments

Arguments can be normal objects or Futures

Parameters:

  • func: callable

  • iterables: Iterables
    List-like objects to map over. They should have the same length.

  • key: str, list
    Prefix for task names if string. Explicit names if list.

  • pure: bool (defaults to True)
    Whether or not the function is pure. Set pure=False for impure functions like np.random.random.

  • workers: set, iterable of sets
    A set of worker hostnames on which computations may be performed. Leave empty to default to all workers (common case)

  • allow_other_workers: bool (defaults to False)
    Used with workers. Indicates whether or not the computations may be performed on workers that are not in the workers set(s).

  • retries: int (default to 0)
    Number of allowed automatic retries if a task fails

  • priority: Number
    Optional prioritization of task. Zero is default. Higher priorities take precedence

  • fifo_timeout: str timedelta (default '100 ms')
    Allowed amount of time between calls to consider the same priority

  • resources: dict (defaults to {})
    Defines the resources each instance of this mapped task requires on the worker; e.g. {'GPU': 2}. See worker resources for details on defining resources.

  • actor: bool (default False)
    Whether these tasks should exist on the worker as stateful actors. See Actors for additional details.

  • actors: bool (default False)
    Alias for actor

  • **kwargs: dict
    Extra keywords to send to the function. Large values will be included explicitly in the task graph.

Returns:

  • List, iterator, or Queue of futures, depending on the type of the inputs.

See also

  1. - [<code>Client.submit</code>](#distributed.Client.submit)
  2. - Submit a single function

Examples

  1. >>> L = client.map(func, sequence) # doctest: +SKIP
  • nbytes(self, keys=None, summary=True, **kwargs)[source]
  • The bytes taken up by each key on the cluster

This is as measured by sys.getsizeof which may not accurately reflect the true cost.

Parameters:

  1. - **keys: list (optional)**
  2. -

A list of keys, defaults to all keys

  1. - **summary: boolean, (optional)**
  2. -

Summarize keys into key types

See also

  1. - [<code>Client.who_has</code>](#distributed.Client.who_has)
  2. -

Examples

  1. >>> x, y, z = c.map(inc, [1, 2, 3]) # doctest: +SKIP
  2. >>> c.nbytes(summary=False) # doctest: +SKIP
  3. {'inc-1c8dd6be1c21646c71f76c16d09304ea': 28,
  4. 'inc-1e297fc27658d7b67b3a758f16bcf47a': 28,
  5. 'inc-fd65c238a7ea60f6a01bf4c8a5fcf44b': 28}
  1. >>> c.nbytes(summary=True) # doctest: +SKIP
  2. {'inc': 84}
  • ncores(self, workers=None, **kwargs)
  • The number of threads/cores available on each worker node

Parameters:

  1. - **workers: list (optional)**
  2. -

A list of workers that we care about specifically. Leave empty to receive information about all workers.

See also

  1. - [<code>Client.who_has</code>](#distributed.Client.who_has)
  2. -
  3. - [<code>Client.has_what</code>](#distributed.Client.has_what)
  4. -

Examples

  1. >>> c.threads() # doctest: +SKIP
  2. {'192.168.1.141:46784': 8,
  3. '192.167.1.142:47548': 8,
  4. '192.167.1.143:47329': 8,
  5. '192.167.1.144:37297': 8}
  • normalize_collection(self, collection)[source]
  • Replace collection’s tasks by already existing futures if they exist

This normalizes the tasks within a collection's task graph against the known futures within the scheduler. It returns a copy of the collection with a task graph that includes the overlapping futures.

See also

  1. - [<code>Client.persist</code>](#distributed.Client.persist)
  2. - trigger computation of collections tasks

Examples

  1. >>> len(x.__dask_graph__()) # x is a dask collection with 100 tasks # doctest: +SKIP
  2. 100
  3. >>> set(client.futures).intersection(x.__dask_graph__()) # some overlap exists # doctest: +SKIP
  4. 10
  1. >>> x = client.normalize_collection(x) # doctest: +SKIP
  2. >>> len(x.__dask_graph__()) # smaller computational graph # doctest: +SKIP
  3. 20
  • nthreads(self, workers=None, **kwargs)[source]
  • The number of threads/cores available on each worker node

Parameters:

  1. - **workers: list (optional)**
  2. -

A list of workers that we care about specifically. Leave empty to receive information about all workers.

See also

  1. - [<code>Client.who_has</code>](#distributed.Client.who_has)
  2. -
  3. - [<code>Client.has_what</code>](#distributed.Client.has_what)
  4. -

Examples

  1. >>> c.threads() # doctest: +SKIP
  2. {'192.168.1.141:46784': 8,
  3. '192.167.1.142:47548': 8,
  4. '192.167.1.143:47329': 8,
  5. '192.167.1.144:37297': 8}
  • persist(self, collections, optimize_graph=True, workers=None, allow_other_workers=None, resources=None, retries=None, priority=0, fifo_timeout='60s', actors=None, **kwargs)[source]
  • Persist dask collections on cluster

Starts computation of the collection on the cluster in the background. Provides a new dask collection that is semantically identical to the previous one, but now based off of futures currently in execution.

Parameters:

  • collections: sequence or single dask object
    Collections like dask.array or dataframe or dask.value objects

  • optimize_graph: bool
    Whether or not to optimize the underlying graphs

  • workers: str, list, dict
    Which workers can run which parts of the computation. If a string or list then the output collections will run on the listed workers, but other sub-computations can run anywhere. If a dict then keys should be (tuples of) collections and values should be addresses or lists.

  • allow_other_workers: bool, list
    If True then all restrictions in workers= are considered loose. If a list then only the keys for the listed collections are loose.

  • retries: int (default to 0)
    Number of allowed automatic retries if computing a result fails

  • priority: Number
    Optional prioritization of task. Zero is default. Higher priorities take precedence

  • fifo_timeout: timedelta str (defaults to '60s')
    Allowed amount of time between calls to consider the same priority

  • resources: dict (defaults to {})
    Defines the resources these tasks require on the worker. Can specify global resources ({'GPU': 2}), or per-task resources ({'x': {'GPU': 1}, 'y': {'SSD': 4}}), but not both. See worker resources for details on defining resources.

  • actors: bool or dict (default None)
    Whether these tasks should exist on the worker as stateful actors. Specified on a global (True/False) or per-task ({'x': True, 'y': False}) basis. See Actors for additional details.

  • **kwargs:
    Options to pass to the graph optimize calls

Returns:

  • List of collections, or single collection, depending on type of input.

See also

  1. - [<code>Client.compute</code>](#distributed.Client.compute)
  2. -

Examples

  1. >>> xx = client.persist(x) # doctest: +SKIP
  2. >>> xx, yy = client.persist([x, y]) # doctest: +SKIP
  • processing(self, workers=None)[source]
  • The tasks currently running on each worker

Parameters:

  1. - **workers: list (optional)**
  2. -

A list of worker addresses, defaults to all

See also

  1. - [<code>Client.who_has</code>](#distributed.Client.who_has)
  2. -
  3. - [<code>Client.has_what</code>](#distributed.Client.has_what)
  4. -
  5. - [<code>Client.nthreads</code>](#distributed.Client.nthreads)
  6. -

Examples

  1. >>> x, y, z = c.map(inc, [1, 2, 3]) # doctest: +SKIP
  2. >>> c.processing() # doctest: +SKIP
  3. {'192.168.1.141:46784': ['inc-1c8dd6be1c21646c71f76c16d09304ea',
  4. 'inc-fd65c238a7ea60f6a01bf4c8a5fcf44b',
  5. 'inc-1e297fc27658d7b67b3a758f16bcf47a']}
  • profile(self, key=None, start=None, stop=None, workers=None, merge_workers=True, plot=False, filename=None, server=False, scheduler=False)[source]
  • Collect statistical profiling information about recent work

Parameters:

  1. - **key: str**
  2. -

Key prefix to select, this is typically a function name like ‘inc’. Leave as None to collect all data

  1. - **start: time**
  2. -
  3. - **stop: time**
  4. -
  5. - **workers: list**
  6. -

List of workers to restrict profile information

  1. - **server**:bool
  2. -

If true, return the profile of the worker’s administrative thread rather than the worker threads. This is useful when profiling Dask itself, rather than user code.

  1. - **scheduler: bool**
  2. -

If true, return the profile information from the scheduler’s administrative thread rather than the workers. This is useful when profiling Dask’s scheduling itself.

  1. - **plot: boolean or string**
  2. -

Whether or not to return a plot object

  1. - **filename: str**
  2. -

Filename to save the plot

Examples

  1. >>> client.profile() # call on collections
  2. >>> client.profile(filename='dask-profile.html') # save to html file
  • publish_dataset(self, *args, **kwargs)[source]
  • Publish named datasets to scheduler

This stores a named reference to a dask collection or list of futures on the scheduler. These references are available to other Clients which can download the collection or futures with get_dataset.

Datasets are not immediately computed. You may wish to call Client.persist prior to publishing a dataset.

Parameters:

  • args: list of objects to publish as name

  • name: optional name of the dataset to publish

  • kwargs: dict
    named collections to publish on the scheduler

Returns:

  • None

See also

  1. - [<code>Client.list_datasets</code>](#distributed.Client.list_datasets)
  2. -
  3. - [<code>Client.get_dataset</code>](#distributed.Client.get_dataset)
  4. -
  5. - [<code>Client.unpublish_dataset</code>](#distributed.Client.unpublish_dataset)
  6. -
  7. - [<code>Client.persist</code>](#distributed.Client.persist)
  8. -

Examples

Publishing client:

  1. >>> df = dd.read_csv('s3://...') # doctest: +SKIP
  2. >>> df = c.persist(df) # doctest: +SKIP
  3. >>> c.publish_dataset(my_dataset=df) # doctest: +SKIP

Alternative invocation:

>>> c.publish_dataset(df, name='my_dataset')

Receiving client:

  1. >>> c.list_datasets() # doctest: +SKIP
  2. ['my_dataset']
  3. >>> df2 = c.get_dataset('my_dataset') # doctest: +SKIP
  • rebalance(self, futures=None, workers=None, **kwargs)[source]
  • Rebalance data within network

Move data between workers to roughly balance memory burden. This either affects a subset of the keys/workers or the entire network, depending on keyword arguments.

This operation is generally not well tested against normal operation of the scheduler. It is not recommended to use it while waiting on computations.

Parameters:

  • futures: list, optional
    A list of futures to balance, defaults to all data

  • workers: list, optional
    A list of workers on which to balance, defaults to all workers
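
Examples

A minimal usage sketch (x, y, and z stand for previously created futures):

>>> client.rebalance()                   # doctest: +SKIP
>>> client.rebalance(futures=[x, y, z])  # doctest: +SKIP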

  • register_worker_callbacks(self, setup=None)[source]
  • Registers a setup callback function for all current and future workers.

This registers a new setup function for workers in this cluster. The function will run immediately on all currently connected workers. It will also be run upon connection by any workers that are added in the future. Multiple setup functions can be registered - these will be called in the order they were added.

If the function takes an input argument named dask_worker then that variable will be populated with the worker itself.

Parameters:

  • setup: callable(dask_worker: Worker) -> None
    Function to register and run on all workers
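
Examples

A hedged sketch of a setup callback (the logging configuration is an illustrative assumption):

>>> def setup_logging(dask_worker):
...     import logging
...     logging.getLogger('distributed.worker').setLevel(logging.DEBUG)
>>> client.register_worker_callbacks(setup=setup_logging)  # doctest: +SKIP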

  • register_worker_plugin(self, plugin=None, name=None)[source]
  • Registers a lifecycle worker plugin for all current and future workers.

This registers a new object to handle setup, task state transitions and teardown for workers in this cluster. The plugin will instantiate itself on all currently connected workers. It will also be run on any worker that connects in the future.

The plugin may include methods setup, teardown, and transition. See the dask.distributed.WorkerPlugin class or the examples below for the interface and docstrings. It must be serializable with the pickle or cloudpickle modules.

If the plugin has a name attribute, or if the name= keyword is used, then that will control idempotency. If a plugin with that name has already been registered then any future plugins will not run.

For alternatives to plugins, you may also wish to look into preload scripts.

Parameters:

  1. - **plugin: WorkerPlugin**
  2. -

The plugin object to pass to the workers

  1. - **name: str, optional**
  2. -

A name for the plugin. Registering a plugin with the same name will have no effect.

See also

  1. - <code>distributed.WorkerPlugin</code>
  2. -

Examples

  1. >>> class MyPlugin(WorkerPlugin):
  2. ... def __init__(self, *args, **kwargs):
  3. ... pass # the constructor is up to you
  4. ... def setup(self, worker: dask.distributed.Worker):
  5. ... pass
  6. ... def teardown(self, worker: dask.distributed.Worker):
  7. ... pass
  8. ... def transition(self, key: str, start: str, finish: str, **kwargs):
  9. ... pass
  1. >>> plugin = MyPlugin(1, 2, 3)
  2. >>> client.register_worker_plugin(plugin)

You can get access to the plugin with the get_worker function

  1. >>> client.register_worker_plugin(other_plugin, name='my-plugin')
  2. >>> def f():
  3. ... worker = get_worker()
  4. ... plugin = worker.plugins['my-plugin']
  5. ... return plugin.my_state
  1. >>> future = client.run(f)
  • replicate(self, futures, n=None, workers=None, branching_factor=2, **kwargs)[source]
  • Set replication of futures within network

Copy data onto many workers. This helps to broadcast frequently accessed data and it helps to improve resilience.

This performs a tree copy of the data throughout the network individually on each piece of data. This operation blocks until complete. It does not guarantee replication of data to future workers.

Parameters:

  • futures: list of futures
    Futures we wish to replicate

  • n: int, optional
    Number of processes on the cluster on which to replicate the data. Defaults to all.

  • workers: list of worker addresses
    Workers on which we want to restrict the replication. Defaults to all.

  • branching_factor: int, optional
    The number of workers that can copy data in each generation

See also

  1. - [<code>Client.rebalance</code>](#distributed.Client.rebalance)
  2. -

Examples

  1. >>> x = c.submit(func, *args) # doctest: +SKIP
  2. >>> c.replicate([x]) # send to all workers # doctest: +SKIP
  3. >>> c.replicate([x], n=3) # send to three workers # doctest: +SKIP
  4. >>> c.replicate([x], workers=['alice', 'bob']) # send to specific # doctest: +SKIP
  5. >>> c.replicate([x], n=1, workers=['alice', 'bob']) # send to one of specific workers # doctest: +SKIP
  6. >>> c.replicate([x], n=1) # reduce replications # doctest: +SKIP
  • restart(self, **kwargs)[source]
  • Restart the distributed network

This kills all active work, deletes all data on the network, and restarts the worker processes.
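
A minimal usage sketch:

>>> client.restart()  # doctest: +SKIP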

  • retire_workers(self, workers=None, close_workers=True, **kwargs)[source]
  • Retire certain workers on the scheduler

See dask.distributed.Scheduler.retire_workers for the full docstring.

See also

  1. - <code>dask.distributed.Scheduler.retire_workers</code>
  2. -

Examples

You can get information about active workers using the following:

>>> workers = client.scheduler_info()['workers']

From that list you may want to select some workers to close:

>>> client.retire_workers(workers=['tcp://address:port', …])

  • retry(self, futures, asynchronous=None)[source]
  • Retry failed futures

Parameters:

  • futures: list of Futures
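
Examples

A hedged sketch (flaky is an illustrative, assumed function name):

>>> futures = client.map(flaky, range(10))                 # doctest: +SKIP
>>> failed = [f for f in futures if f.status == 'error']   # doctest: +SKIP
>>> client.retry(failed)                                   # doctest: +SKIP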
  • run(self, function, *args, **kwargs)[source]
  • Run a function on all workers outside of task scheduling system

This calls a function on all currently known workers immediately, blocks until those results come back, and returns the results asynchronously as a dictionary keyed by worker address. This method is generally used for side effects, such as collecting diagnostic information or installing libraries.

If your function takes an input argument named dask_worker then that variable will be populated with the worker itself.

Parameters:

  • function: callable

  • *args: arguments for remote function

  • **kwargs: keyword arguments for remote function

  • workers: list
    Workers on which to run the function. Defaults to all known workers.

  • wait: boolean (optional)
    If the function is asynchronous whether or not to wait until that function finishes.

  • nanny: bool, default False
    Whether to run function on the nanny. By default, the function is run on the worker process. If specified, the addresses in workers should still be the worker addresses, not the nanny addresses.

Examples

  1. >>> c.run(os.getpid) # doctest: +SKIP
  2. {'192.168.0.100:9000': 1234,
  3. '192.168.0.101:9000': 4321,
  4. '192.168.0.102:9000': 5555}

Restrict computation to particular workers with the workers= keyword argument.

>>> c.run(os.getpid, workers=['192.168.0.100:9000',
...                           '192.168.0.101:9000'])  # doctest: +SKIP
{'192.168.0.100:9000': 1234,
 '192.168.0.101:9000': 4321}

>>> def get_status(dask_worker):
...     return dask_worker.status

>>> c.run(get_status)  # doctest: +SKIP
{'192.168.0.100:9000': 'running',
 '192.168.0.101:9000': 'running'}

Run asynchronous functions in the background:

  1. >>> async def print_state(dask_worker): # doctest: +SKIP
  2. ... while True:
  3. ... print(dask_worker.status)
  4. ... await asyncio.sleep(1)
  1. >>> c.run(print_state, wait=False) # doctest: +SKIP
  • run_coroutine(self, function, *args, **kwargs)[source]
  • Spawn a coroutine on all workers.

This spawns a coroutine on all currently known workers and then waits for the coroutine on each worker. The coroutines’ results are returned as a dictionary keyed by worker address.

Parameters:

  • function: a coroutine function
    (typically a function wrapped in gen.coroutine or a Python 3.5+ async function)

  • *args: arguments for remote function

  • **kwargs: keyword arguments for remote function

  • wait: boolean (default True)
    Whether to wait for coroutines to end.

  • workers: list
    Workers on which to run the function. Defaults to all known workers.

  • run_on_scheduler(self, function, *args, **kwargs)[source]
  • Run a function on the scheduler process

This is typically used for live debugging. The function should take a keyword argument dask_scheduler=, which will be given the scheduler object itself.

See also

  1. - [<code>Client.run</code>](#distributed.Client.run)
  2. - Run a function on all workers
  3. - [<code>Client.start_ipython_scheduler</code>](#distributed.Client.start_ipython_scheduler)
  4. - Start an IPython session on scheduler

Examples

  1. >>> def get_number_of_tasks(dask_scheduler=None):
  2. ... return len(dask_scheduler.tasks)
  1. >>> client.run_on_scheduler(get_number_of_tasks) # doctest: +SKIP
  2. 100

Run asynchronous functions in the background:

  1. >>> async def print_state(dask_scheduler): # doctest: +SKIP
  2. ... while True:
  3. ... print(dask_scheduler.status)
  4. ... await asyncio.sleep(1)
  1. >>> c.run(print_state, wait=False) # doctest: +SKIP
  • scatter(self, data, workers=None, broadcast=False, direct=None, hash=True, timeout='no_default', asynchronous=None)[source]
  • Scatter data into distributed memory

This moves data from the local client process into the workers of the distributed scheduler. Note that it is often better to submit jobs to your workers to have them load the data rather than loading data locally and then scattering it out to them.

Parameters:

  • data: list, dict, or object
    Data to scatter out to workers. Output type matches input type.

  • workers: list of tuples (optional)
    Optionally constrain locations of data. Specify workers as hostname/port pairs, e.g. ('127.0.0.1', 8787).

  • broadcast: bool (defaults to False)
    Whether to send each data element to all workers. By default we round-robin based on number of cores.

  • direct: bool (defaults to automatically check)
    Whether or not to connect directly to the workers, or to ask the scheduler to serve as intermediary. This can also be set when creating the Client.

  • hash: bool (optional)
    Whether or not to hash data to determine key. If False then this uses a random key

Returns:

  • List, dict, iterator, or queue of futures matching the type of input.

See also

  1. - [<code>Client.gather</code>](#distributed.Client.gather)
  2. - Gather data back to local process

Examples

  1. >>> c = Client('127.0.0.1:8787') # doctest: +SKIP
  2. >>> c.scatter(1) # doctest: +SKIP
  3. <Future: status: finished, key: c0a8a20f903a4915b94db8de3ea63195>
  1. >>> c.scatter([1, 2, 3]) # doctest: +SKIP
  2. [<Future: status: finished, key: c0a8a20f903a4915b94db8de3ea63195>,
  3. <Future: status: finished, key: 58e78e1b34eb49a68c65b54815d1b158>,
  4. <Future: status: finished, key: d3395e15f605bc35ab1bac6341a285e2>]
  1. >>> c.scatter({'x': 1, 'y': 2, 'z': 3}) # doctest: +SKIP
  2. {'x': <Future: status: finished, key: x>,
  3. 'y': <Future: status: finished, key: y>,
  4. 'z': <Future: status: finished, key: z>}

Constrain location of data to subset of workers

  1. >>> c.scatter([1, 2, 3], workers=[('hostname', 8788)]) # doctest: +SKIP

Broadcast data to all workers

  1. >>> [future] = c.scatter([element], broadcast=True) # doctest: +SKIP

Send scattered data to parallelized function using client futuresinterface

  1. >>> data = c.scatter(data, broadcast=True) # doctest: +SKIP
  2. >>> res = [c.submit(func, data, i) for i in range(100)]
  • scheduler_info(self, **kwargs)[source]
  • Basic information about the workers in the cluster

Examples

  1. >>> c.scheduler_info() # doctest: +SKIP
  2. {'id': '2de2b6da-69ee-11e6-ab6a-e82aea155996',
  3. 'services': {},
  4. 'type': 'Scheduler',
  5. 'workers': {'127.0.0.1:40575': {'active': 0,
  6. 'last-seen': 1472038237.4845693,
  7. 'name': '127.0.0.1:40575',
  8. 'services': {},
  9. 'stored': 0,
  10. 'time-delay': 0.0061032772064208984}}}
  • set_metadata(self, key, value)[source]
  • Set arbitrary metadata in the scheduler

This allows you to store small amounts of data on the central scheduler process for administrative purposes. Data should be msgpack serializable (ints, strings, lists, dicts)

If the key corresponds to a task then that key will be cleaned up when the task is forgotten by the scheduler.

If the key is a list then it will be assumed that you want to index into a nested dictionary structure using those keys. For example if you call the following:

  1. >>> client.set_metadata(['a', 'b', 'c'], 123)

Then this is the same as setting

  1. >>> scheduler.task_metadata['a']['b']['c'] = 123

The lower level dictionaries will be created on demand.

See also

  1. - [<code>get_metadata</code>](#distributed.Client.get_metadata)
  2. -

Examples

  1. >>> client.set_metadata('x', 123) # doctest: +SKIP
  2. >>> client.get_metadata('x') # doctest: +SKIP
  3. 123
  1. >>> client.set_metadata(['x', 'y'], 123) # doctest: +SKIP
  2. >>> client.get_metadata('x') # doctest: +SKIP
  3. {'y': 123}
  1. >>> client.set_metadata(['x', 'w', 'z'], 456) # doctest: +SKIP
  2. >>> client.get_metadata('x') # doctest: +SKIP
  3. {'y': 123, 'w': {'z': 456}}
  1. >>> client.get_metadata(['x', 'w']) # doctest: +SKIP
  2. {'z': 456}
  • shutdown(self)[source]
  • Shut down the connected scheduler and workers

Note, this may disrupt other clients that may be using the same scheduler and workers.

See also

  1. - [<code>Client.close</code>](#distributed.Client.close)
  2. - close only this client
  • start(self, **kwargs)[source]
  • Start scheduler running in separate thread

  • start_ipython_scheduler(self, magic_name='scheduler_if_ipython', qtconsole=False, qtconsole_args=None)[source]

  • Start IPython kernel on the scheduler

Parameters:

  1. - **magic_name: str or None (optional)**
  2. -

If defined, register IPython magic with this name for executing code on the scheduler. If not defined, register %scheduler magic if IPython is running.

  1. - **qtconsole: bool (optional)**
  2. -

If True, launch a Jupyter QtConsole connected to the worker(s).

  1. - **qtconsole_args: list(str) (optional)**
  2. -

Additional arguments to pass to the qtconsole on startup.

Returns:

  • connection_info: dict
    connection_info dict containing info necessary to connect Jupyter clients to the scheduler.

See also

  1. - [<code>Client.start_ipython_workers</code>](#distributed.Client.start_ipython_workers)
  2. - Start IPython on the workers

Examples

  1. >>> c.start_ipython_scheduler() # doctest: +SKIP
  2. >>> %scheduler scheduler.processing # doctest: +SKIP
  3. {'127.0.0.1:3595': {'inc-1', 'inc-2'},
  4. '127.0.0.1:53589': {'inc-2', 'add-5'}}
  1. >>> c.start_ipython_scheduler(qtconsole=True) # doctest: +SKIP
  • start_ipython_workers(self, workers=None, magic_names=False, qtconsole=False, qtconsole_args=None)[source]
  • Start IPython kernels on workers

Parameters:

  1. - **workers: list (optional)**
  2. -

A list of worker addresses, defaults to all

  1. - **magic_names: str or list(str) (optional)**
  2. -

If defined, register IPython magics with these names for executing code on the workers. If the string has an asterisk then expand the asterisk into 0, 1, …, n for n workers

  1. - **qtconsole: bool (optional)**
  2. -

If True, launch a Jupyter QtConsole connected to the worker(s).

  1. - **qtconsole_args: list(str) (optional)**
  2. -

Additional arguments to pass to the qtconsole on startup.

Returns:

  • iter_connection_info: list
    List of connection_info dicts containing info necessary to connect Jupyter clients to the workers.

See also

  1. - [<code>Client.start_ipython_scheduler</code>](#distributed.Client.start_ipython_scheduler)
  2. - start ipython on the scheduler

Examples

  1. >>> info = c.start_ipython_workers() # doctest: +SKIP
  2. >>> %remote info['192.168.1.101:5752'] worker.data # doctest: +SKIP
  3. {'x': 1, 'y': 100}
  1. >>> c.start_ipython_workers('192.168.1.101:5752', magic_names='w') # doctest: +SKIP
  2. >>> %w worker.data # doctest: +SKIP
  3. {'x': 1, 'y': 100}
  1. >>> c.start_ipython_workers('192.168.1.101:5752', qtconsole=True) # doctest: +SKIP

Add asterisk * in magic names to add one magic per worker

  1. >>> c.start_ipython_workers(magic_names='w_*') # doctest: +SKIP
  2. >>> %w_0 worker.data # doctest: +SKIP
  3. {'x': 1, 'y': 100}
  4. >>> %w_1 worker.data # doctest: +SKIP
  5. {'z': 5}
  • submit(self, func, *args, key=None, workers=None, resources=None, retries=None, priority=0, fifo_timeout='100 ms', allow_other_workers=False, actor=False, actors=False, pure=None, **kwargs)[source]
  • Submit a function application to the scheduler

Parameters:

  • func: callable

  • *args:

  • **kwargs:

  • pure: bool (defaults to True)
    Whether or not the function is pure. Set pure=False for impure functions like np.random.random.

  • workers: set, iterable of sets
    A set of worker hostnames on which computations may be performed. Leave empty to default to all workers (common case)

  • key: str
    Unique identifier for the task. Defaults to function-name and hash

  • allow_other_workers: bool (defaults to False)
    Used with workers. Indicates whether or not the computations may be performed on workers that are not in the workers set(s).

  • retries: int (default to 0)
    Number of allowed automatic retries if the task fails

  • priority: Number
    Optional prioritization of task. Zero is default. Higher priorities take precedence

  • fifo_timeout: str timedelta (default '100 ms')
    Allowed amount of time between calls to consider the same priority

  • resources: dict (defaults to {})
    Defines the resources this job requires on the worker; e.g. {'GPU': 2}. See worker resources for details on defining resources.

  • actor: bool (default False)
    Whether this task should exist on the worker as a stateful actor. See Actors for additional details.

  • actors: bool (default False)
    Alias for actor

Returns:

  • Future

See also

  1. - [<code>Client.map</code>](#distributed.Client.map)
  2. - Submit on many arguments at once

Examples

  1. >>> c = client.submit(add, a, b) # doctest: +SKIP
  • unpublish_dataset(self, name, **kwargs)[source]
  • Remove named datasets from scheduler

See also

  1. - [<code>Client.publish_dataset</code>](#distributed.Client.publish_dataset)
  2. -

Examples

>>> c.list_datasets()  # doctest: +SKIP
['my_dataset']
>>> c.unpublish_dataset('my_dataset')  # doctest: +SKIP
>>> c.list_datasets()  # doctest: +SKIP
[]
  • upload_file(self, filename, **kwargs)[source]
  • Upload local package to workers

This sends a local file up to all worker nodes. This file is placed into a temporary directory on Python’s system path so any .py, .egg or .zip files will be importable.

Parameters:

  1. - **filename: string**
  2. -

Filename of .py, .egg or .zip file to send to workers

Examples

  1. >>> client.upload_file('mylibrary.egg') # doctest: +SKIP
  2. >>> from mylibrary import myfunc # doctest: +SKIP
  3. >>> L = c.map(myfunc, seq) # doctest: +SKIP
  • wait_for_workers(self, n_workers=0)[source]
  • Blocking call to wait for n workers before continuing
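
A minimal usage sketch:

>>> client.wait_for_workers(n_workers=4)  # doctest: +SKIP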

  • who_has(self, futures=None, **kwargs)[source]

  • The workers storing each future’s data

Parameters:

  1. - **futures: list (optional)**
  2. -

A list of futures, defaults to all data

See also

  1. - [<code>Client.has_what</code>](#distributed.Client.has_what)
  2. -
  3. - [<code>Client.nthreads</code>](#distributed.Client.nthreads)
  4. -

Examples

  1. >>> x, y, z = c.map(inc, [1, 2, 3]) # doctest: +SKIP
  2. >>> wait([x, y, z]) # doctest: +SKIP
  3. >>> c.who_has() # doctest: +SKIP
  4. {'inc-1c8dd6be1c21646c71f76c16d09304ea': ['192.168.1.141:46784'],
  5. 'inc-1e297fc27658d7b67b3a758f16bcf47a': ['192.168.1.141:46784'],
  6. 'inc-fd65c238a7ea60f6a01bf4c8a5fcf44b': ['192.168.1.141:46784']}
  1. >>> c.who_has([x, y]) # doctest: +SKIP
  2. {'inc-1c8dd6be1c21646c71f76c16d09304ea': ['192.168.1.141:46784'],
  3. 'inc-1e297fc27658d7b67b3a758f16bcf47a': ['192.168.1.141:46784']}
  • write_scheduler_file(self, scheduler_file)[source]
  • Write the scheduler information to a json file.

This facilitates easy sharing of scheduler information using a file system. The scheduler file can be used to instantiate a second Client using the same scheduler.

Parameters:

  • scheduler_file: str
    Path at which to write the scheduler file.

Examples

  1. >>> client = Client() # doctest: +SKIP
  2. >>> client.write_scheduler_file('scheduler.json') # doctest: +SKIP
  3. # connect to previous client's scheduler
  4. >>> client2 = Client(scheduler_file='scheduler.json') # doctest: +SKIP

  • class distributed.recreate_exceptions.ReplayExceptionClient(client)[source]
  • A plugin for the client allowing replay of remote exceptions locally

Adds the following methods (and their async variants) to the given client:

  • recreate_error_locally: main user method
  • get_futures_error: gets the task, its details and dependencies, responsible for failure of the given future.
  • get_futures_error(self, future)[source]
  • Ask the scheduler details of the sub-task of the given failed future

When a future evaluates to a status of “error”, i.e., an exception was raised in a task within its graph, we can get information from the scheduler. This function gets the details of the specific task that raised the exception and led to the error, but does not fetch data from the cluster or execute the function.

Parameters:

  • future: future that failed, having status=="error", typically after an attempt to gather() shows a stack-trace.

Returns:

  • Tuple:
    - the function that raised an exception
    - argument list (a tuple), may include values and keys
    - keyword arguments (a dictionary), may include values and keys
    - list of keys that the function requires to be fetched to run

See also

  1. - [<code>ReplayExceptionClient.recreate_error_locally</code>](#distributed.recreate_exceptions.ReplayExceptionClient.recreate_error_locally)
  2. -
  • recreate_error_locally(self, future)[source]
  • For a failed calculation, perform the blamed task locally for debugging.

This operation should be performed after a future (result of gather, compute, etc) comes back with a status of “error”, if the stack-trace is not informative enough to diagnose the problem. The specific task (part of the graph pointing to the future) responsible for the error will be fetched from the scheduler, together with the values of its inputs. The function will then be executed, so that pdb can be used for debugging.

Parameters:

  • future: future or collection that failed
    The same thing as was given to gather, but came back with an exception/stack-trace. Can also be a (persisted) dask collection containing any errored futures.

Returns:

  • Nothing; the function runs and should raise an exception, allowing the debugger to run.

Examples

  1. >>> future = c.submit(div, 1, 0) # doctest: +SKIP
  2. >>> future.status # doctest: +SKIP
  3. 'error'
  4. >>> c.recreate_error_locally(future) # doctest: +SKIP
  5. ZeroDivisionError: division by zero

If you’re in IPython you might take this opportunity to use pdb

  1. >>> %pdb # doctest: +SKIP
  2. Automatic pdb calling has been turned ON
  1. >>> c.recreate_error_locally(future) # doctest: +SKIP
  2. ZeroDivisionError: division by zero
  3. 1 def div(x, y):
  4. ----> 2 return x / y
  5. ipdb>

Future

  • class distributed.Future(key, client=None, inform=True, state=None)[source]
  • A remotely running computation

A Future is a local proxy to a result running on a remote worker. A user manages future objects in the local Python process to determine what happens in the larger cluster.

Parameters:

  • key: str, or tuple
  • Key of remote data to which this future refers

  • client: Client

  • Client that should own this future. Defaults to _get_global_client()

  • inform: bool

  • Do we inform the scheduler that we need an update on this future

See also

Examples

Futures typically emerge from Client computations

  1. >>> my_future = client.submit(add, 1, 2) # doctest: +SKIP

We can track the progress and results of a future

  1. >>> my_future # doctest: +SKIP
  2. <Future: status: finished, key: add-8f6e709446674bad78ea8aeecfee188e>

We can get the result or the exception and traceback from the future

  1. >>> my_future.result() # doctest: +SKIP
  • add_done_callback(self, fn)[source]
  • Call callback on future when the future has finished

The callback fn should take the future as its only argument. This will be called regardless of whether the future completes successfully, errs, or is cancelled.

The callback is executed in a separate thread.
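
As a sketch, a callback that simply logs the finished future’s status might look like this (log_status is a hypothetical name):

  1. >>> def log_status(fut): # doctest: +SKIP
  2. ... print(fut.key, fut.status)
  3. >>> my_future.add_done_callback(log_status) # doctest: +SKIP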

  • cancel(self, **kwargs)[source]
  • Cancel request to run this future

See also

  • Client.cancel
  • cancelled(self)[source]
  • Returns True if the future has been cancelled

  • done(self)[source]

  • Is the computation complete?

  • exception(self, timeout=None, **kwargs)[source]

  • Return the exception of a failed task

If timeout seconds are elapsed before returning, a dask.distributed.TimeoutError is raised.

See also

  • Future.traceback
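
Examples

For instance, with the failing div task used elsewhere in this document, the exception can be inspected without raising it locally (a sketch; the exact repr may differ between Python versions):

  1. >>> future = c.submit(div, 1, 0) # doctest: +SKIP
  2. >>> future.exception() # doctest: +SKIP
  3. ZeroDivisionError('division by zero')
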
  • result(self, timeout=None)[source]
  • Wait until computation completes, gather result to local process.

If timeout seconds are elapsed before returning, a dask.distributed.TimeoutError is raised.

  • retry(self, **kwargs)[source]
  • Retry this future if it has failed

See also

  • Client.retry
  • traceback(self, timeout=None, **kwargs)[source]
  • Return the traceback of a failed task

This returns a traceback object. You can inspect this object using the traceback module. Alternatively, if you call future.result(), this traceback will accompany the raised exception.

If timeout seconds are elapsed before returning, a dask.distributed.TimeoutError is raised.

See also

  • Future.exception

Examples

  1. >>> import traceback # doctest: +SKIP
  2. >>> tb = future.traceback() # doctest: +SKIP
  3. >>> traceback.format_tb(tb) # doctest: +SKIP
  4. [...]

Other

  • class distributed.as_completed(futures=None, loop=None, with_results=False, raise_errors=True)[source]
  • Return futures in the order in which they complete

This returns an iterator that yields the input future objects in the order in which they complete. Calling next on the iterator will block until the next future completes, irrespective of order.

Additionally, you can also add more futures to this object during computation with the .add method

Parameters:

  • futures: Collection of futures
  • A list of Future objects to be iterated over in the order in which they complete

  • with_results: bool (False)

  • Whether to wait and include results of futures as well; in this case as_completed yields a tuple of (future, result)

  • raise_errors: bool (True)

  • Whether we should raise when the result of a future raises an exception; only affects behavior when with_results=True.

Examples

  1. >>> x, y, z = client.map(inc, [1, 2, 3]) # doctest: +SKIP
  2. >>> for future in as_completed([x, y, z]): # doctest: +SKIP
  3. ... print(future.result()) # doctest: +SKIP
  4. 3
  5. 2
  6. 4

Add more futures during computation

  1. >>> x, y, z = client.map(inc, [1, 2, 3]) # doctest: +SKIP
  2. >>> ac = as_completed([x, y, z]) # doctest: +SKIP
  3. >>> for future in ac: # doctest: +SKIP
  4. ... print(future.result()) # doctest: +SKIP
  5. ... if random.random() < 0.5: # doctest: +SKIP
  6. ... ac.add(c.submit(double, future)) # doctest: +SKIP
  7. 4
  8. 2
  9. 8
  10. 3
  11. 6
  12. 12
  13. 24

Optionally wait until the result has been gathered as well

  1. >>> ac = as_completed([x, y, z], with_results=True) # doctest: +SKIP
  2. >>> for future, result in ac: # doctest: +SKIP
  3. ... print(result) # doctest: +SKIP
  4. 2
  5. 4
  6. 3
  • add(self, future)[source]
  • Add a future to the collection

This future will emit from the iterator once it finishes

  • batches(self)[source]
  • Yield all finished futures at once rather than one-by-one

This returns an iterator of lists of futures or lists of (future, result) tuples rather than individual futures or individual (future, result) tuples. It will yield these as soon as possible without waiting.

Examples

  1. >>> for batch in as_completed(futures).batches(): # doctest: +SKIP
  2. ... results = client.gather(batch)
  3. ... print(results)
  4. [4, 2]
  5. [1, 3, 7]
  6. [5]
  7. [6]
  • count(self)[source]
  • Return the number of futures yet to be returned

This includes both the number of futures still computing, as well as those that are finished, but have not yet been returned from this iterator.

  • has_ready(self)[source]
  • Returns True if there are completed futures available.

  • is_empty(self)[source]

  • Returns True if there are no completed or computing futures

  • next_batch(self, block=True)[source]

  • Get the next batch of completed futures.

Parameters:

  • block: bool, optional
  • If True then wait until we have some result, otherwise return immediately, even with an empty list. Defaults to True.

Returns:

  • List of futures or (future, result) tuples

Examples

  1. >>> ac = as_completed(futures) # doctest: +SKIP
  2. >>> client.gather(ac.next_batch()) # doctest: +SKIP
  3. [4, 1, 3]
  1. >>> client.gather(ac.next_batch(block=False)) # doctest: +SKIP
  2. []
  • update(self, futures)[source]
  • Add multiple futures to the collection.

The added futures will emit from the iterator once they finish
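
For example (a sketch, reusing futures from the class-level examples above):

  1. >>> ac = as_completed([x, y]) # doctest: +SKIP
  2. >>> ac.update([z]) # doctest: +SKIP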

  • distributed.diagnostics.progress()
  • distributed.wait(fs, timeout=None, return_when='ALL_COMPLETED')[source]
  • Wait until all/any futures are finished

Parameters:

  • fs: list of futures
  • timeout: number, optional
  • Time in seconds after which to raise a dask.distributed.TimeoutError

  • return_when: str, optional

  • One of ALL_COMPLETED or FIRST_COMPLETED

Returns:

  • Named tuple of completed, not completed
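
Examples

For example, the returned named tuple can be unpacked into done and not-done sets (a sketch; inc is assumed as in the other examples):

  1. >>> futures = client.map(inc, range(3)) # doctest: +SKIP
  2. >>> done, not_done = wait(futures) # doctest: +SKIP
  3. >>> len(done), len(not_done) # doctest: +SKIP
  4. (3, 0)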
  • distributed.fire_and_forget(obj)[source]
  • Run tasks at least once, even if we release the futures

Under normal operation Dask will not run any tasks for which there is not an active future (this avoids unnecessary work in many situations). However sometimes you want to just fire off a task, not track its future, and expect it to finish eventually. You can use this function on a future or collection of futures to ask Dask to complete the task even if no active client is tracking it.

The results will not be kept in memory after the task completes (unless there is an active future) so this is only useful for tasks that depend on side effects.

Parameters:

  • obj: Future, list, dict, dask collection
  • The futures that you want to run at least once

Examples

  1. >>> fire_and_forget(client.submit(func, *args)) # doctest: +SKIP
  • distributed.futures_of(o, client=None)[source]
  • Future objects in a collection

Parameters:

  • o: collection
  • A possibly nested collection of Dask objects

Returns:

  • futures: List[Future]
  • A list of futures held by those collections

Examples

  1. >>> futures_of(my_dask_dataframe)
  2. [<Future: finished key: ...>,
  3. <Future: pending key: ...>]
  • distributed.worker_client(timeout=3, separate_thread=True)[source]
  • Get client for this thread

This context manager is intended to be called within functions that we run on workers. When run as a context manager it delivers a Client object that can submit other tasks directly from that worker.

Parameters:

  • timeout: Number
  • Timeout after which to err

  • separate_thread: bool, optional

  • Whether to run this function outside of the normal thread pool; defaults to True

See also

Examples

  1. >>> def func(x):
  2. ... with worker_client() as c: # connect from worker back to scheduler
  3. ... a = c.submit(inc, x) # this task can submit more tasks
  4. ... b = c.submit(dec, x)
  5. ... result = c.gather([a, b]) # and gather results
  6. ... return result
  1. >>> future = client.submit(func, 1) # submit func(1) on cluster
  • distributed.get_worker()[source]
  • Get the worker currently running this task

See also

Examples

  1. >>> def f():
  2. ... worker = get_worker() # The worker on which this task is running
  3. ... return worker.address
  1. >>> future = client.submit(f) # doctest: +SKIP
  2. >>> future.result() # doctest: +SKIP
  3. 'tcp://127.0.0.1:47373'
  • distributed.get_client(address=None, timeout=3, resolve_address=True)[source]
  • Get a client while within a task.

This client connects to the same scheduler to which the worker is connected

Parameters:

  • address: str, optional
  • The address of the scheduler to connect to. Defaults to the scheduler the worker is connected to.

  • timeout: int, default 3

  • Timeout (in seconds) for getting the Client

  • resolve_address: bool, default True

  • Whether to resolve address to its canonical form.

Returns:

  • Client

See also

Examples

  1. >>> def f():
  2. ... client = get_client()
  3. ... futures = client.map(lambda x: x + 1, range(10)) # spawn many tasks
  4. ... results = client.gather(futures)
  5. ... return sum(results)
  1. >>> future = client.submit(f) # doctest: +SKIP
  2. >>> future.result() # doctest: +SKIP
  3. 55
  • distributed.secede()[source]
  • Have this task secede from the worker’s thread pool

This opens up a new scheduling slot and a new thread for a new task. This enables the client to schedule tasks on this node, which is especially useful while waiting for other jobs to finish (e.g., with client.gather).

See also

Examples

  1. >>> def mytask(x):
  2. ... # do some work
  3. ... client = get_client()
  4. ... futures = client.map(...) # do some remote work
  5. ... secede() # while that work happens, remove ourself from the pool
  6. ... return client.gather(futures) # return gathered results
  • distributed.rejoin()[source]
  • Have this thread rejoin the ThreadPoolExecutor

This will block until a new slot opens up in the executor. The next thread to finish a task will leave the pool to allow this one to join.

See also

  • class distributed.Reschedule[source]
  • Reschedule this task

Raising this exception will stop the current execution of the task and ask the scheduler to reschedule this task, possibly on a different machine.

This does not guarantee that the task will move onto a different machine. The scheduler will proceed through its normal heuristics to determine the optimal machine to accept this task. The machine will likely change if the load across the cluster has significantly changed since first scheduling the task.
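
Examples

As a sketch, a task can raise Reschedule when some locally observed condition suggests that another worker would be a better fit (should_defer and do_work are hypothetical helpers):

  1. >>> from dask.distributed import Reschedule # doctest: +SKIP
  2. >>> def task(x): # doctest: +SKIP
  3. ... if should_defer(): # hypothetical local check
  4. ... raise Reschedule() # hand the task back to the scheduler
  5. ... return do_work(x) # hypothetical computation
  6. >>> future = client.submit(task, 1) # doctest: +SKIP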

  • class distributed.get_task_stream(client=None, plot=False, filename='task-stream.html')[source]
  • Collect task stream within a context block

This provides diagnostic information about every task that was run during the time when this block was active.

This must be used as a context manager.

Parameters:

  • plot: boolean, str
  • If true then also return a Bokeh figure. If plot == ‘save’ then save the figure to a file

  • filename: str (optional)

  • The filename to save to if you set plot='save'

See also

Examples

  1. >>> with get_task_stream() as ts:
  2. ... x.compute()
  3. >>> ts.data
  4. [...]

Get back a Bokeh figure and optionally save to a file

  1. >>> with get_task_stream(plot='save', filename='task-stream.html') as ts:
  2. ... x.compute()
  3. >>> ts.figure
  4. <Bokeh Figure>

To share this file with others you may wish to upload and serve it online. A common way to do this is to upload the file as a gist, and then serve it on https://raw.githack.com

  1. $ pip install gist
  2. $ gist task-stream.html
  3. https://gist.github.com/8a5b3c74b10b413f612bb5e250856ceb

You can then navigate to that site, click the “Raw” button to the right of the task-stream.html file, and then provide that URL to https://raw.githack.com. This process should provide a sharable link that others can use to see your task stream plot.

  • class distributed.Lock(name=None, client=None)[source]
  • Distributed Centralized Lock

Parameters:

  • name: string
  • Name of the lock to acquire. Choosing the same name allows two disconnected processes to coordinate a lock.

Examples

  1. >>> lock = Lock('x') # doctest: +SKIP
  2. >>> lock.acquire(timeout=1) # doctest: +SKIP
  3. >>> # do things with protected resource
  4. >>> lock.release() # doctest: +SKIP
  • acquire(self, blocking=True, timeout=None)[source]
  • Acquire the lock

Parameters:

  • blocking: bool, optional
  • If false, don’t wait on the lock in the scheduler at all.

  • timeout: number, optional
  • Seconds to wait on the lock in the scheduler. This does not include local coroutine time, network transfer time, etc. It is forbidden to specify a timeout when blocking is false.

Returns:

  • True or False, whether or not it successfully acquired the lock

Examples

  1. >>> lock = Lock('x') # doctest: +SKIP
  2. >>> lock.acquire(timeout=1) # doctest: +SKIP
  • release(self)[source]
  • Release the lock if already acquired
  • class distributed.Queue(name=None, client=None, maxsize=0)[source]
  • Distributed Queue

This allows multiple clients to share futures or small bits of data between each other with a multi-producer/multi-consumer queue. All metadata is sequentialized through the scheduler.

Elements of the Queue must be either Futures or msgpack-encodable data (ints, strings, lists, dicts). All data is sent through the scheduler so it is wise not to send large objects. To share large objects, scatter the data and share the future instead.

Warning

This object is experimental and has known issues in Python 2

See also

  • Variable
  • shared variable between clients

Examples

  1. >>> from dask.distributed import Client, Queue # doctest: +SKIP
  2. >>> client = Client() # doctest: +SKIP
  3. >>> queue = Queue('x') # doctest: +SKIP
  4. >>> future = client.submit(f, x) # doctest: +SKIP
  5. >>> queue.put(future) # doctest: +SKIP
  • get(self, timeout=None, batch=False, **kwargs)[source]
  • Get data from the queue

Parameters:

  • timeout: Number (optional)
  • Time in seconds to wait before timing out

  • batch: boolean, int (optional)
  • If True then return all elements currently waiting in the queue. If an integer, then return that many elements from the queue. If False (default) then return one item at a time.
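
For example, under the batch semantics described above (a sketch, continuing the queue from the class-level example):

  1. >>> queue.get(timeout=1) # wait at most one second # doctest: +SKIP
  2. >>> queue.get(batch=True) # everything currently waiting # doctest: +SKIP
  3. >>> queue.get(batch=2) # the next two elements # doctest: +SKIP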

  • put(self, value, timeout=None, **kwargs)[source]
  • Put data into the queue

  • qsize(self, **kwargs)[source]

  • Current number of elements in the queue
  • class distributed.Variable(name=None, client=None, maxsize=0)[source]
  • Distributed Global Variable

This allows multiple clients to share futures and data between each other with a single mutable variable. All metadata is sequentialized through the scheduler. Race conditions can occur.

Values must be either Futures or msgpack-encodable data (ints, lists, strings, etc.). All data will be kept and sent through the scheduler, so it is wise not to send too much. If you want to share a large amount of data then scatter it and share the future instead.

Warning

This object is experimental and has known issues in Python 2

See also

  • Queue
  • shared multi-producer/multi-consumer queue between clients

Examples

  1. >>> from dask.distributed import Client, Variable # doctest: +SKIP
  2. >>> client = Client() # doctest: +SKIP
  3. >>> x = Variable('x') # doctest: +SKIP
  4. >>> x.set(123) # doctest: +SKIP
  5. >>> x.get() # doctest: +SKIP
  6. 123
  7. >>> future = client.submit(f, x) # doctest: +SKIP
  8. >>> x.set(future) # doctest: +SKIP
  • delete(self)[source]
  • Delete this variable

Caution, this affects all clients currently pointing to this variable.

  • get(self, timeout=None, **kwargs)[source]
  • Get the value of this variable

  • set(self, value, **kwargs)[source]

  • Set the value of this variable

Parameters:

  • value: Future or object
  • Must be either a Future or a msgpack-encodable value

Adaptive

  • class distributed.deploy.Adaptive(cluster=None, interval='1s', minimum=0, maximum=inf, wait_count=3, target_duration='5s', worker_key=None, **kwargs)[source]
  • Adaptively allocate workers based on scheduler load. A superclass.

Contains logic to dynamically resize a Dask cluster based on current use. This class needs to be paired with a system that can create and destroy Dask workers using a cluster resource manager. Typically it is built into already existing solutions, rather than used directly by users. It is most commonly used from the .adapt(…) method of various Dask cluster classes.

Parameters:

  • cluster: object
  • Must have scale and scale_down methods/coroutines

  • interval: timedelta or str, default “1000 ms”

  • Milliseconds between checks

  • wait_count: int, default 3

  • Number of consecutive times that a worker should be suggested for removal before we remove it.

  • target_duration: timedelta or str, default “5s”

  • Amount of time we want a computation to take. This affects how aggressively we scale up.

  • worker_key: Callable[WorkerState]

  • Function to group workers together when scaling down. See Scheduler.workers_to_close for more information.

  • minimum: int

  • Minimum number of workers to keep around

  • maximum: int

  • Maximum number of workers to keep around

  • **kwargs:

  • Extra parameters to pass to Scheduler.workers_to_close

Notes

Subclasses can override Adaptive.should_scale_up() and Adaptive.workers_to_close() to control when the cluster should be resized. The default implementation checks if there are too many tasks per worker or too little memory available (see Adaptive.needs_cpu() and Adaptive.needs_memory()).

Examples

This is commonly used from existing Dask classes, like KubeCluster

  1. >>> from dask_kubernetes import KubeCluster
  2. >>> cluster = KubeCluster()
  3. >>> cluster.adapt(minimum=10, maximum=100)

Alternatively you can use it from your own Cluster class by subclassing from Dask’s Cluster superclass

  1. >>> from distributed.deploy import Cluster
  2. >>> class MyCluster(Cluster):
  3. ... def scale_up(self, n):
  4. ... """ Bring worker count up to n """
  5. ... def scale_down(self, workers):
  6. ... """ Remove worker addresses from cluster """
  1. >>> cluster = MyCluster()
  2. >>> cluster.adapt(minimum=10, maximum=100)
  • recommendations(self, target: int) → dict[source]
  • Make scale up/down recommendations based on current state and target

  • target(self)[source]

  • The target number of workers that should exist

  • workers_to_close(self, target: int)[source]

  • Determine which, if any, workers should potentially be removed from the cluster.

Returns:

  • List of worker addresses to close, if any

See also

  • Scheduler.workers_to_close

Notes

Adaptive.workers_to_close dispatches to Scheduler.workers_to_close(), but may be overridden in subclasses.
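
As a minimal sketch of such an override (CautiousAdaptive is a hypothetical subclass; this assumes workers_to_close is synchronous in your version and returns a plain list of worker addresses):

  1. >>> from distributed.deploy import Adaptive # doctest: +SKIP
  2. >>> class CautiousAdaptive(Adaptive): # doctest: +SKIP
  3. ... def workers_to_close(self, target):
  4. ... # defer to the default logic, but retire at most two workers per cycle
  5. ... return super().workers_to_close(target)[:2]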