Installing and Using Celery + RabbitMQ

Celery and RabbitMQ are widely used in production as a combination for asynchronous task processing. This article briefly introduces how to install and use them.

Installing RabbitMQ

Installing Erlang

RabbitMQ depends on the Erlang runtime, so install Erlang first; without it, starting RabbitMQ fails with

erl: command not found

Download the source package otp_src_19.0.tar.gz:
https://www.erlang.org/downloads/19.0

Extract it, then configure, build, and install:

$ ./configure
$ make
$ make install

Installing the RabbitMQ server

For CentOS 6, download the generic Unix build:
https://www.rabbitmq.com/install-generic-unix.html

Extract the archive and it is ready to use:

$ cd rabbitmq_server-3.6.10
# start the server
$ sbin/rabbitmq-server

Start it as a background daemon (note: only one dash):

$ sbin/rabbitmq-server -detached

Check the status:

$ sbin/rabbitmqctl status

Stopping the service

Never use kill to stop the RabbitMQ server; use the rabbitmqctl command instead:

$ sbin/rabbitmqctl stop

The default user is guest with password guest, and it may only connect from localhost; remote access requires additional configuration.

Add a user:

$ sbin/rabbitmqctl add_user test 123456
Creating user "test"

Add a virtual host and grant the test user full permissions on it:

$ sbin/rabbitmqctl add_vhost myvhost
Creating vhost "myvhost"

$ sbin/rabbitmqctl set_permissions -p myvhost test ".*" ".*" ".*"
Setting permissions for user "test" in vhost "myvhost"

Installing Celery

Install Celery with pip:

$ pip install celery

Using Celery

Create a Celery instance, defining a task and the broker (MQ) address.

# tasks.py
from celery import Celery

app = Celery('tasks', broker='amqp://test:123456@localhost/myvhost')

@app.task
def add(x, y):
    return x + y

Starting the Celery worker

In the directory containing tasks.py, run:

$ /data/happy_env/bin/celery worker -A tasks  --loglevel=info
 
 -------------- celery@<hostname> v4.1.0 (latentcall)
---- **** ----- 
--- * ***  * -- Linux-2.6.32-642.el6.x86_64-x86_64-with-centos-6.8-Final 2017-08-05 17:20:25
-- * - **** --- 
- ** ---------- [config]
- ** ---------- .> app:         tasks:0x7f1e73342190
- ** ---------- .> transport:   amqp://test:**@localhost:5672/myvhost
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . tasks.add

[2017-08-05 17:20:25,519: INFO/MainProcess] Connected to amqp://test:**@127.0.0.1:5672/myvhost
[2017-08-05 17:20:25,535: INFO/MainProcess] mingle: searching for neighbors
[2017-08-05 17:20:26,576: INFO/MainProcess] mingle: all alone
[2017-08-05 17:20:26,692: INFO/MainProcess] celery@<hostname> ready.

where:

  • --app or -A specifies the app instance
  • --loglevel or -l specifies the log level

The default queue name is celery.

Calling tasks

Calling a task means putting a message on the MQ; the worker takes messages off the MQ and consumes them.
Open a Python shell:

$ /data/happy_env/bin/python 
Python 2.7.13 (default, Aug  5 2017, 14:56:29) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-17)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from tasks import add
>>> add.delay(6, 6)

>>>

add.delay(6, 6) is shorthand for add.apply_async((6, 6)); the default queue is celery.
apply_async also accepts additional parameters, such as queue and countdown:

>>> add.apply_async((2, 2), queue='lopri', countdown=10)

Output in the Celery worker terminal:

[2017-08-05 17:32:37,289: INFO/MainProcess] Received task: tasks.add[6f270cbd-4e90-4d21-842c-50f8488f8216]
[2017-08-05 17:32:37,300: INFO/ForkPoolWorker-3] Task tasks.add[6f270cbd-4e90-4d21-842c-50f8488f8216] succeeded in 0.0021863639995s: 12

Reading configuration from a file

The configuration can be kept in a separate file.

# tasks.py
from celery import Celery

app = Celery("orange")
app.config_from_object("celeryconfig")

@app.task
def add(x, y):
    return x + y

Contents of the configuration file celeryconfig.py:

RABBIT_MQ = {
    'HOST': '127.0.0.1',
    'PORT': 5672,
    'USER': 'test',
    'PASSWORD': '123456'
}

# broker
BROKER_URL = 'amqp://%s:%s@%s:%s/myvhost' % (RABBIT_MQ['USER'], RABBIT_MQ['PASSWORD'], RABBIT_MQ['HOST'], RABBIT_MQ['PORT'])

# Celery log format
CELERYD_LOG_FORMAT = '[%(asctime)s] [%(levelname)s] %(message)s'
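One caveat with the raw % formatting above: if the user name or password ever contains characters such as @ or /, the resulting URL breaks. A small stdlib-only sketch that percent-encodes the components first (the helper name build_broker_url is hypothetical, not part of Celery):

```python
try:
    from urllib.parse import quote   # Python 3
except ImportError:
    from urllib import quote         # Python 2

def build_broker_url(user, password, host, port, vhost):
    # Percent-encode credentials and vhost so special characters
    # such as '@' or '/' survive inside the URL.
    return 'amqp://%s:%s@%s:%s/%s' % (
        quote(user, safe=''), quote(password, safe=''),
        host, port, quote(vhost, safe=''))

print(build_broker_url('test', '123456', '127.0.0.1', 5672, 'myvhost'))
# amqp://test:123456@127.0.0.1:5672/myvhost
```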

Specify a file to save Celery's log to:

$ /data/happy_env/bin/celery worker -A tasks --loglevel=info -f /home/lanyang/python_ex/celery_test/test.log

Storing results

The result backend is specified via Celery's backend argument or, if you use a configuration module, via the CELERY_RESULT_BACKEND setting.

Routing

By default, the worker consumes messages from the queue named celery. You can direct tasks to a different queue via the CELERY_ROUTES setting.

Configuration file celeryconfig.py:

RABBIT_MQ = {
    'HOST': '127.0.0.1',
    'PORT': 5672,
    'USER': 'test',
    'PASSWORD': '123456'
}

BROKER_URL = 'amqp://%s:%s@%s:%s/myvhost' % (RABBIT_MQ['USER'], RABBIT_MQ['PASSWORD'], RABBIT_MQ['HOST'], RABBIT_MQ['PORT'])

CELERYD_LOG_FORMAT = '[%(asctime)s] [%(levelname)s] %(message)s'

CELERY_ROUTES = {
        'tasks.add': {'queue': 'sunday'},
}

Start the producer:

>>> from tasks import add
>>> add.apply_async((2, 2))

The queue configured for tasks.add is sunday, so the message is sent to the sunday queue.

Start the consumer (celery worker):

$ /data/happy_env/bin/celery worker -A tasks --loglevel=info -Q sunday -f /home/lanyang/python_ex/celery_test/test.log
-------------- celery@<hostname> v4.1.0 (latentcall)
---- **** ----- 
--- * ***  * -- Linux-2.6.32-642.el6.x86_64-x86_64-with-centos-6.8-Final 2017-08-06 11:14:35
-- * - **** --- 
- ** ---------- [config]
- ** ---------- .> app:         assetv2:0x7f76f3049590
- ** ---------- .> transport:   amqp://test:**@127.0.0.1:5672/myvhost
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 4 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> sunday           exchange=sunday(direct) key=sunday
                

[tasks]
  . tasks.add

where:

  • -Q specifies the queue to consume from
  • -f specifies the log file

Log file:

[2017-08-06 11:14:35,940] [INFO] Connected to amqp://test:**@127.0.0.1:5672/myvhost
[2017-08-06 11:14:35,952] [INFO] mingle: searching for neighbors
[2017-08-06 11:14:37,019] [INFO] mingle: all alone
[2017-08-06 11:14:37,033] [INFO] celery@<hostname> ready.
[2017-08-06 11:14:37,034] [INFO] Received task: tasks.add[384b604f-ed50-4066-be3a-681120520d1a]
[2017-08-06 11:14:37,136] [INFO] logger this is a message from arm…
[2017-08-06 11:14:37,137] [INFO] logging from arm…
[2017-08-06 11:14:37,137] [WARNING] logger warning…
[2017-08-06 11:14:37,137] [INFO] Task tasks.add[384b604f-ed50-4066-be3a-681120520d1a] succeeded in 0.00111318299969s: 4

Exchanges, queues and routing keys

The official documentation describes it as follows:

  1. Messages are sent to exchanges.
  2. An exchange routes messages to one or more queues. Several exchange types exist, providing different ways to do routing, or implementing different messaging scenarios. The exchange uses the routing key to find the matching queue; the queue name used in Celery acts as the routing key here.
  3. The message waits in the queue until someone consumes it.
  4. The message is deleted from the queue when it has been acknowledged.

The steps required to send and receive messages are:

  1. Create an exchange
  2. Create a queue
  3. Bind the queue to the exchange.

Celery automatically creates the entities necessary for the queues in CELERY_QUEUES to work (enabled by default, except if the queue's auto_declare setting is set to False).

Translating the above, the relationship between exchanges, queues and routing keys is roughly:

  1. Messages are sent to an exchange
  2. The exchange routes each message to a specific queue
  3. A message stays in the queue until someone consumes it
  4. Once acknowledged, the message is deleted from the queue

The steps to produce and consume messages are:

  1. Create an exchange
  2. Create a queue
  3. Bind the queue to the exchange

Celery automatically creates the exchange and queue and binds the queue to the exchange.

More information:
http://docs.jinkan.org/docs/celery/userguide/routing.html

References

celery
http://docs.jinkan.org/docs/celery/getting-started/first-steps-with-celery.html#first-steps

RabbitMQ
https://www.rabbitmq.com/install-generic-unix.html

On daemonizing Celery:
http://docs.jinkan.org/docs/celery/tutorials/daemonizing.html#centos

Getting started with supervisor + celery + celerybeat

https://github.com/importcjj/notes/issues/2
