Program used to start a Celery worker instance.

The celery worker command was previously known as celeryd.

See also: preload-options.

-c, --concurrency

Number of child processes processing the queue. The default is the number of CPUs available on your system.

-P, --pool

Pool implementation:

prefork (default), eventlet, gevent or solo.
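For example, starting a CPU-bound worker with the default prefork pool next to an I/O-bound worker with the eventlet pool (proj is a hypothetical app name; the eventlet package must be installed for the eventlet pool):

$ celery worker -A proj -P prefork -c 4
$ celery worker -A proj -P eventlet -c 1000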

-n, --hostname

Set custom hostname (e.g., ‘w1@%%h’). Expands: %%h (hostname), %%n (name) and %%d (domain).
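For example, giving each of two workers on the same machine a distinct name (proj is a hypothetical app name; %h expands to the hostname, and depending on your shell or init system the percent sign may need to be doubled, as in the %%h form shown above):

$ celery worker -A proj -n worker1@%h
$ celery worker -A proj -n worker2@%h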

-B, --beat

Also run the celery beat periodic task scheduler. Please note that there must only be one instance of this service.


-B is meant for development use. In a production environment, start celery beat as a separate service.
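For example, an embedded beat scheduler for development, versus the production setup with beat as its own process (proj is a hypothetical app name):

$ celery worker -A proj -B -l info    # development only
$ celery worker -A proj -l info       # production: the worker...
$ celery beat -A proj -l info         # ...and beat, run separately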

-Q, --queues

Comma-separated list of queues to enable for this worker. By default all configured queues are enabled. Example: -Q video,image

-X, --exclude-queues

Comma-separated list of queues to disable for this worker. By default all configured queues are enabled. Example: -X video,image

-I, --include

Comma-separated list of additional modules to import. Example: -I foo.tasks,bar.tasks

-s, --schedule

Path to the schedule database if running with the -B option. Defaults to celerybeat-schedule. The extension “.db” may be appended to the filename.
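For example, keeping the schedule database under a dedicated directory when using the embedded scheduler (proj and the path are illustrative):

$ celery worker -A proj -B -s /var/run/celery/celerybeat-schedule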


-O

Apply optimization profile. Supported: default, fair.


--prefetch-multiplier

Set custom prefetch multiplier value for this worker instance.


--scheduler

Scheduler class to use. Default is celery.beat.PersistentScheduler.

-S, --statedb

Path to the state database. The extension ‘.db’ may be appended to the filename.

-E, --task-events

Send task-related events that can be captured by monitors like celery events, celerymon, and others.


--without-gossip

Don’t subscribe to other workers’ events.


--without-mingle

Don’t synchronize with other workers at start-up.


--without-heartbeat

Don’t send event heartbeats.


--heartbeat-interval

Interval in seconds at which to send worker heartbeat.


--purge

Purges all waiting tasks before the daemon is started. WARNING: This is unrecoverable, and the tasks will be deleted from the messaging server.


--time-limit

Enables a hard time limit (in seconds, int/float) for tasks.


--soft-time-limit

Enables a soft time limit (in seconds, int/float) for tasks.
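For example, enforcing a hard limit of 300 seconds with a soft limit of 240 seconds (proj is a hypothetical app name; at the soft limit the task receives a SoftTimeLimitExceeded exception it can catch to clean up, at the hard limit it is killed):

$ celery worker -A proj --time-limit=300 --soft-time-limit=240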


--max-tasks-per-child

Maximum number of tasks a pool worker can execute before it’s terminated and replaced by a new worker.


--max-memory-per-child

Maximum amount of resident memory, in KiB, that may be consumed by a child process before it will be replaced by a new one. If a single task causes a child process to exceed this limit, the task will be completed and the child process will be replaced afterwards. Default: no limit.


--autoscale

Enable autoscaling by providing max_concurrency,min_concurrency. Example:

--autoscale=10,3

(always keep 3 processes, but grow to 10 if necessary)


-D, --detach

Start worker as a background process.

-f, --logfile

Path to log file. If no logfile is specified, stderr is used.

-l, --loglevel

Logging level, choose between DEBUG, INFO, WARNING, ERROR, CRITICAL, or FATAL.
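For example, detaching into the background with explicit log and pid files (proj and the paths are illustrative):

$ celery worker -A proj --detach -f /var/log/celery/worker.log -l INFO --pidfile=/var/run/celery/worker.pid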


--pidfile

Optional file used to store the process pid.

The program won’t start if this file already exists and the pid is still alive.


--uid

User id, or user name of the user to run as after detaching.


--gid

Group id, or group name of the main group to change to after detaching.


--umask

Effective umask(1) (in octal) of the process after detaching. Inherits the umask(1) of the parent process by default.


--workdir

Optional directory to change to after detaching.


--executable

Executable to use for the detached process.

Module Contents

worker()
    Start worker instance.

main(app=None)
    Start worker.

class worker

Start worker instance. Examples:

$ celery worker --app=proj -l info
$ celery worker -A proj -l info -Q hipri,lopri

$ celery worker -A proj --concurrency=4
$ celery worker -A proj --concurrency=1000 -P eventlet
$ celery worker --autoscale=10,0
run_from_argv(prog_name, argv=None, command=None)

maybe_detach(argv, dopts=list)

run(hostname=None, pool_cls=None, app=None, uid=None, gid=None, loglevel=None, logfile=None, pidfile=None, statedb=None, **kwargs)
    Start worker.