result

Task results/state and results for groups of tasks.

Module Contents

Classes

ResultBase() Base class for results.
AsyncResult(id, backend=None, task_name=None, app=None, parent=None) Query task state.
ResultSet(results, app=None, ready_barrier=None, **kwargs) A collection of results.
GroupResult(id=None, results=None, parent=None, **kwargs) Like ResultSet, but with an associated id.
EagerResult(id, ret_value, state, traceback=None) Result that we know has already been executed.

Functions

assert_will_not_block()
allow_join_result()
denied_join_result()
result_from_tuple(r,app=None) Deserialize result from tuple.
class ResultBase

Base class for results.

class AsyncResult(id, backend=None, task_name=None, app=None, parent=None)

Query task state.

Arguments:
id (str): See id.
backend (Backend): See backend.
__init__(id, backend=None, task_name=None, app=None, parent=None)
ignored()

If True, task result retrieval is disabled.

ignored(value)

Enable/disable task result retrieval.

then(callback, on_error=None, weak=False)
_on_fulfilled(result)
as_tuple()
forget()

Forget about (and possibly remove the result of) this task.

revoke(connection=None, terminate=False, signal=None, wait=False, timeout=None)

Send revoke signal to all workers.

Any worker receiving the task, or having reserved the task, must ignore it.

Arguments:
terminate (bool): Also terminate the process currently working
on the task (if any).
signal (str): Name of signal to send to process if terminate.
Default is TERM.
wait (bool): Wait for replies from workers.
The timeout argument specifies the seconds to wait. Disabled by default.
timeout (float): Time in seconds to wait for replies when
wait is enabled.
get(timeout=None, propagate=True, interval=0.5, no_ack=True, follow_parents=True, callback=None, on_message=None, on_interval=None, disable_sync_subtasks=True, EXCEPTION_STATES=None, PROPAGATE_STATES=None)

Wait until task is ready, and return its result.

Warning:
Waiting for tasks within a task may lead to deadlocks. Please read task-synchronous-subtasks.
Warning:
Backends use resources to store and transmit results. To ensure that resources are released, you must eventually call get() or forget() on EVERY @AsyncResult instance returned after calling a task.
Arguments:
timeout (float): How long to wait, in seconds, before the
operation times out.

propagate (bool): Re-raise exception if the task failed.
interval (float): Time to wait (in seconds) before retrying to
retrieve the result. Note that this does not have any effect when using the RPC/redis result store backends, as they don’t use polling.
no_ack (bool): Enable amqp no ack (automatically acknowledge
message). If this is False then the message will not be acked.
follow_parents (bool): Re-raise any exception raised by
parent tasks.
disable_sync_subtasks (bool): Disallow waiting for subtasks from
within another task; this is the default configuration. CAUTION: do not disable this unless you must.
Raises:
celery.exceptions.TimeoutError: if timeout isn’t
None and the result does not arrive within timeout seconds.
Exception: If the remote call raised an exception then that
exception will be re-raised in the caller process.
_maybe_reraise_parent_error()
_parents()
collect(intermediate=False, **kwargs)

Collect results as they return.

An iterator that, like get(), waits for the task to complete, but also follows any AsyncResult and ResultSet instances returned by the task, yielding (result, value) tuples for each result in the tree.

An example would be having the following tasks:

from celery import group
from proj.celery import app

@app.task(trail=True)
def A(how_many):
    return group(B.s(i) for i in range(how_many))()

@app.task(trail=True)
def B(i):
    return pow2.delay(i)

@app.task(trail=True)
def pow2(i):
    return i ** 2

>>> from celery.result import ResultBase
>>> from proj.tasks import A

>>> result = A.delay(10)
>>> [v for v in result.collect()
...  if not isinstance(v, (ResultBase, tuple))]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Note:
The Task.trail option must be enabled so that the list of children is stored in result.children. This is the default but enabled explicitly for illustration.
Yields:
Tuple[AsyncResult, Any]: tuples containing the result instance of the child task, and the return value of that task.
get_leaf()
iterdeps(intermediate=False)
ready()

Return True if the task has executed.

If the task is still running, pending, or is waiting for retry then False is returned.

successful()

Return True if the task executed successfully.

failed()

Return True if the task failed.

throw(*args, **kwargs)
maybe_throw(propagate=True, callback=None)
_to_remote_traceback(tb)
build_graph(intermediate=False, formatter=None)
__str__()

str(self) -> self.id.

__hash__()

hash(self) -> hash(self.id).

__repr__()
__eq__(other)
__ne__(other)
__copy__()
__reduce__()
__reduce_args__()
__del__()

Cancel pending operations when the instance is destroyed.

graph()
supports_native_join()
children()
_maybe_set_cache(meta)
_get_task_meta()
_iter_meta()
_set_cache(d)
result()

Task return value.

Note:
When the task has been executed, this contains the return value. If the task raised an exception, this will be the exception instance.
traceback()

Get the traceback of a failed task.

state()

The task’s current state.

Possible values include:

PENDING

The task is waiting for execution.

STARTED

The task has been started.

RETRY

The task is to be retried, possibly because of failure.

FAILURE

The task raised an exception, or has exceeded the retry limit. The result attribute then contains the exception raised by the task.

SUCCESS

The task executed successfully. The result attribute then contains the task’s return value.
task_id()

Compat. alias to id.

task_id(id)
class ResultSet(results, app=None, ready_barrier=None, **kwargs)

A collection of results.

Arguments:
results (Sequence[AsyncResult]): List of result instances.
__init__(results, app=None, ready_barrier=None, **kwargs)
add(result)

Add AsyncResult as a new member of the set.

Does nothing if the result is already a member.

_on_ready()
remove(result)

Remove result from the set; it must be a member.

Raises:
KeyError: if the result isn’t a member.
discard(result)

Remove result from the set if it is a member.

Does nothing if it’s not a member.

update(results)

Extend from iterable of results.

clear()

Remove all results from this set.

successful()

Return true if all tasks were successful.

Returns:
bool: true if all of the tasks finished
successfully (i.e. didn’t raise an exception).
failed()

Return true if any of the tasks failed.

Returns:
bool: true if one of the tasks failed
(i.e., raised an exception).
maybe_throw(callback=None, propagate=True)
waiting()

Return true if any of the tasks are incomplete.

Returns:
bool: true if one of the tasks is still
waiting for execution.
ready()

Did all of the tasks complete? (either by success or failure).

Returns:
bool: true if all of the tasks have been executed.
completed_count()

Task completion count.

Returns:
int: the number of tasks completed.
forget()

Forget about (and possibly remove the result of) all the tasks.

revoke(connection=None, terminate=False, signal=None, wait=False, timeout=None)

Send revoke signal to all workers for all tasks in the set.

Arguments:
terminate (bool): Also terminate the process currently working
on the task (if any).
signal (str): Name of signal to send to process if terminate.
Default is TERM.
wait (bool): Wait for replies from worker.
The timeout argument specifies the number of seconds to wait. Disabled by default.
timeout (float): Time in seconds to wait for replies when
the wait argument is enabled.
__iter__()
__getitem__(index)

res[i] -> res.results[i].

iterate(timeout=None, propagate=True, interval=0.5)

Deprecated method, use get() with a callback argument.

get(timeout=None, propagate=True, interval=0.5, callback=None, no_ack=True, on_message=None, disable_sync_subtasks=True, on_interval=None)

See join().

This is here for API compatibility with AsyncResult; in addition, it uses join_native() if available for the current result backend.

join(timeout=None, propagate=True, interval=0.5, callback=None, no_ack=True, on_message=None, disable_sync_subtasks=True, on_interval=None)

Gather the results of all tasks as a list in order.

Note:

This can be an expensive operation for result store backends that must resort to polling (e.g., database).

You should consider using join_native() if your backend supports it.

Warning:
Waiting for tasks within a task may lead to deadlocks. Please see task-synchronous-subtasks.
Arguments:
timeout (float): The number of seconds to wait for results
before the operation times out.
propagate (bool): If any of the tasks raises an exception,
the exception will be re-raised when this flag is set.
interval (float): Time to wait (in seconds) before retrying to
retrieve a result from the set. Note that this does not have any effect when using the amqp result store backend, as it does not use polling.
callback (Callable): Optional callback to be called for every
result received. Must have signature (task_id, value). No results will be returned by this function if a callback is specified, and the order of results is arbitrary when a callback is used. To get access to the result object for a particular id you’ll have to generate an index first: index = {r.id: r for r in gres.results}. Or you can create new result objects on the fly: result = app.AsyncResult(task_id) (both will take advantage of the backend cache anyway).
no_ack (bool): Automatic message acknowledgment (Note that if this
is set to False then the messages will not be acknowledged).
disable_sync_subtasks (bool): Disallow waiting for subtasks from
within another task; this is the default configuration. CAUTION: do not disable this unless you must.
Raises:
celery.exceptions.TimeoutError: if timeout isn’t
None and the operation takes longer than timeout seconds.
then(callback, on_error=None, weak=False)
iter_native(timeout=None, interval=0.5, no_ack=True, on_message=None, on_interval=None)

Backend optimized version of iterate().

New in version 2.2.

Note that this does not support collecting the results for different task types using different backends.

This is currently only supported by the amqp, Redis and cache result backends.

join_native(timeout=None, propagate=True, interval=0.5, callback=None, no_ack=True, on_message=None, on_interval=None, disable_sync_subtasks=True)

Backend optimized version of join().

New in version 2.2.

Note that this does not support collecting the results for different task types using different backends.

This is currently only supported by the amqp, Redis and cache result backends.

_iter_meta()
_failed_join_report()
__len__()
__eq__(other)
__ne__(other)
__repr__()
supports_native_join()
app()
app(app)
backend()
class GroupResult(id=None, results=None, parent=None, **kwargs)

Like ResultSet, but with an associated id.

This type is returned by group.

It enables inspection of the tasks’ states and return values as a single entity.

Arguments:
id (str): The id of the group.
results (Sequence[AsyncResult]): List of result instances.
parent (ResultBase): Parent result of this group.
__init__(id=None, results=None, parent=None, **kwargs)
_on_ready()
save(backend=None)

Save group-result for later retrieval using restore().

Example:
>>> def save_and_restore(result):
...     result.save()
...     result = GroupResult.restore(result.id)
delete(backend=None)

Remove this result if it was previously saved.

__reduce__()
__reduce_args__()
__bool__()
__eq__(other)
__ne__(other)
__repr__()
__str__()

str(self) -> self.id.

__hash__()

hash(self) -> hash(self.id).

as_tuple()
children()
restore(id, backend=None, app=None)

Restore previously saved group result.

class EagerResult(id, ret_value, state, traceback=None)

Result that we know has already been executed.

__init__(id, ret_value, state, traceback=None)
then(callback, on_error=None, weak=False)
_get_task_meta()
__reduce__()
__reduce_args__()
__copy__()
ready()
get(timeout=None, propagate=True, disable_sync_subtasks=True, **kwargs)
forget()
revoke(*args, **kwargs)
__repr__()
_cache()
result()

The task’s return value.

state()

The task’s state.

traceback()

The traceback if the task failed.

supports_native_join()
result_from_tuple(r, app=None)

Deserialize result from tuple.