Celery
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well. It is used to handle long-running asynchronous tasks. RabbitMQ, on the other hand, is a message broker, which is used by Celery to send and receive messages. Celery is therefore well suited for tasks that take some time to execute, when we don't want our request to be blocked while those tasks are processed.
Choose a Broker
Celery requires a solution to send and receive messages; usually this comes in the form of a separate service called a message broker.
Example of using Celery with RabbitMQ
BROKER_URL = 'amqp://guest:guest@localhost//'
CELERY_ACCEPT_CONTENT = ['json']        # content types the worker will accept
CELERY_TASK_SERIALIZER = 'json'         # default serialization method for task messages
CELERY_RESULT_SERIALIZER = 'json'       # serialization format for task results
Application
The first thing you need is a Celery instance. We call this the Celery application, or just app for short. It is the entry point for everything you want to do in Celery, such as creating tasks and managing workers.
Running the Celery worker server
$ celery -A tasks worker --loglevel=info
In production, you want to run the worker in the background as a daemon.
Calling the task
To call our task, use the delay() method. This is a handy shortcut to the apply_async() method, which gives greater control over task execution.
>>> from tasks import add
>>> add.delay(4, 6)
Calling a task returns an AsyncResult instance: this can be used to check the state of the task, wait for the task to finish, or get its return value.
Keeping results
If you want to keep track of the tasks' states, Celery needs to store or send the states somewhere.
app = Celery('tasks', backend='rpc://', broker='pyamqp://')
An AsyncResult instance is returned when you call a task:
>>> result = add.delay(4, 5)
The ready() method returns whether the task has finished processing or not:
>>> result = add.delay(4, 4)
>>> result.ready()
You can wait for the result to complete, but this is rarely used, since it turns the asynchronous call
into a synchronous one:
>>> result.get(timeout=1)
In case the task raised an exception, get() will re-raise the exception, but you can override this by
specifying the propagate argument.
>>> result.get(propagate=False)
If the task raised an exception, you can also gain access to the original traceback.
>>> result.traceback
Disable result
Results can be disabled for an individual task by passing the ignore_result option to the task decorator: @app.task(ignore_result=True)