Starting the celery daemon reports an error: OSError: [Errno 5] Input/output error

When stopping celery with the following command (run as the root user):

C_FAKEFORK=1 sh -x /etc/init.d/celeryd stop

The following error appears:

[2021-07-10 13:54:45,258: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/celery/worker/consumer/consumer.py", line 311, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.8/dist-packages/celery/bootsteps.py", line 116, in start
    step.start(parent)
  File "/usr/local/lib/python3.8/dist-packages/celery/worker/consumer/consumer.py", line 592, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.8/dist-packages/celery/worker/loops.py", line 81, in asynloop
    next(loop)
  File "/usr/local/lib/python3.8/dist-packages/kombu/asynchronous/hub.py", line 305, in create_loop
    events = poll(poll_timeout)
  File "/usr/local/lib/python3.8/dist-packages/kombu/utils/eventio.py", line 82, in poll
    return self._epoll.poll(timeout if timeout is not None else -1)
  File "/usr/local/lib/python3.8/dist-packages/celery/apps/worker.py", line 290, in _handle_request
    safe_say(f'worker: {how} shutdown (MainProcess)')
  File "/usr/local/lib/python3.8/dist-packages/celery/apps/worker.py", line 82, in safe_say
    print(f'\n{msg}', file=sys.__stderr__, flush=True)
OSError: [Errno 5] Input/output error
[2021-07-10 13:54:45,259: DEBUG/MainProcess] Canceling task consumer...
[2021-07-10 13:54:45,266: INFO/MainProcess] Connected to redis://:**@127.0.0.1:6579/3
[2021-07-10 13:54:45,271: INFO/MainProcess] mingle: searching for neighbors
[2021-07-10 13:54:46,274: INFO/MainProcess] mingle: all alone

Cause:

This is likely because the Xshell session was closed abnormally, leaving some file descriptors in a bad state that can no longer be read or written. The traceback above shows the failure happens when the worker tries to print to sys.__stderr__ (the safe_say call), which no longer points at a usable terminal.

Fix:

Delete celery's pid file(s). They normally live under /var/run/celery/ (e.g. /var/run/celery/worker1.pid, matching CELERYD_PID_FILE=/var/run/celery/%n.pid); the simplest approach is to delete the /var/run/celery directory entirely, since the init script recreates it on the next start.
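A minimal cleanup sketch, assuming the default pid location /var/run/celery/%n.pid shown in the init-script trace below (PID_DIR is an illustrative variable, not part of the init script), and only removing pid files whose process is actually gone:

```shell
# Clean up stale Celery pid files left behind after an abnormal exit.
# PID_DIR is a hypothetical variable defaulting to the directory
# implied by CELERYD_PID_FILE=/var/run/celery/%n.pid.
PID_DIR=${PID_DIR:-/var/run/celery}

for f in "$PID_DIR"/*.pid; do
    [ -e "$f" ] || continue            # no pid files at all
    pid=$(cat "$f")
    if kill -0 "$pid" 2>/dev/null; then
        echo "$f: process $pid still running, leaving it alone"
    else
        echo "$f: stale (process $pid is gone), removing"
        rm -f "$f"
    fi
done
```

kill -0 sends no signal; it only checks whether the pid still exists, so a live worker is never touched.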

Then run again:

C_FAKEFORK=1 sh -x /etc/init.d/celeryd start

A successful start looks like this:

root@hecs01:~# C_FAKEFORK=1 sh -x /etc/init.d/celeryd start
+ VERSION=10.1
+ echo celery init v10.1.
celery init v10.1.
+ id -u
+ [ 0 -ne 0 ]
+ origin_is_runlevel_dir
+ set +e
+ dirname /etc/init.d/celeryd
+ grep -q /etc/rc.\.d
+ echo 1
+ [ 1 -eq 0 ]
+ SCRIPT_FILE=/etc/init.d/celeryd
+ basename /etc/init.d/celeryd
+ SCRIPT_NAME=celeryd
+ DEFAULT_USER=celery
+ DEFAULT_PID_FILE=/var/run/celery/%n.pid
+ DEFAULT_LOG_FILE=/var/log/celery/%n%I.log
+ DEFAULT_LOG_LEVEL=INFO
+ DEFAULT_NODES=celery
+ DEFAULT_CELERYD=-m celery worker --detach
+ [ -d /etc/default ]
+ CELERY_CONFIG_DIR=/etc/default
+ CELERY_DEFAULTS=/etc/default/celeryd
+ [ -f /etc/default/celeryd ]
+ _config_sanity /etc/default/celeryd
+ local path=/etc/default/celeryd
+ ls -ld /etc/default/celeryd
+ awk {print $3}
+ local owner=root
+ ls -ld /etc/default/celeryd
+ cut -b 6
+ local iwgrp=-
+ ls -ld /etc/default/celeryd
+ cut -b 9
+ local iwoth=-
+ id -u root
+ [ 0 != 0 ]
+ [ - != - ]
+ [ - != - ]
+ echo Using config script: /etc/default/celeryd
Using config script: /etc/default/celeryd
+ . /etc/default/celeryd
+ CELERYD_NODES=worker1
+ CELERY_BIN=/usr/local/bin/celery
+ CELERY_APP=runyi
+ CELERYD_CHDIR=/home/runyi/runyi/runyi/
+ CELERYD_OPTS= --time-limit=300 --concurrency=8
+ CELERYD_LOG_LEVEL=DEBUG
+ CELERYD_LOG_FILE=/var/log/celery/%n%I.log
+ CELERYD_PID_FILE=/var/run/celery/%n.pid
+ CELERYD_USER=celery
+ CELERYD_GROUP=celery
+ CELERY_CREATE_DIRS=1
+ CELERYD_ULIMIT=65535
+ export DJANGO_SETTINGS_MODULE=settings
+ export PYTHONPATH=:/home/runyi/runyi
+ CELERY_APP_ARG=
+ [ ! -z runyi ]
+ CELERY_APP_ARG=--app=runyi
+ CELERYD_SU=su
+ CELERYD_SU_ARGS=
+ CELERYD_USER=celery
+ CELERY_CREATE_DIRS=1
+ CELERY_CREATE_RUNDIR=1
+ CELERY_CREATE_LOGDIR=1
+ [ -z /var/run/celery/%n.pid ]
+ [ -z /var/log/celery/%n%I.log ]
+ CELERYD_LOG_LEVEL=DEBUG
+ CELERY_BIN=/usr/local/bin/celery
+ CELERYD_MULTI=/usr/local/bin/celery multi
+ CELERYD_NODES=worker1
+ export CELERY_LOADER
+ [ -n  ]
+ dirname /var/log/celery/%n%I.log
+ CELERYD_LOG_DIR=/var/log/celery
+ dirname /var/run/celery/%n.pid
+ CELERYD_PID_DIR=/var/run/celery
+ [ -n /home/runyi/runyi/runyi/ ]
+ DAEMON_OPTS= --workdir=/home/runyi/runyi/runyi/
+ export PATH=.:/usr/local/jdk1.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/sbin:/sbin
+ check_dev_null
+ [ ! -c /dev/null ]
+ check_paths
+ [ 1 -eq 1 ]
+ create_default_dir /var/log/celery
+ [ ! -d /var/log/celery ]
+ [ 1 -eq 1 ]
+ create_default_dir /var/run/celery
+ [ ! -d /var/run/celery ]
+ start_workers
+ [ ! -z 65535 ]
+ ulimit -n 65535
+ _chuid start worker1 --workdir=/home/runyi/runyi/runyi/ --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log --loglevel=DEBUG --app=runyi --time-limit=300 --concurrency=8
+ su celery -c /usr/local/bin/celery multi start worker1 --workdir=/home/runyi/runyi/runyi/ --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n%I.log --loglevel=DEBUG --app=runyi --time-limit=300 --concurrency=8
celery multi v5.0.4 (singularity)
> Starting nodes...
 
 -------------- worker1@hecs01 v5.0.4 (singularity)
--- ***** ----- 
-- ******* ---- Linux-5.4.0-47-generic-x86_64-with-glibc2.29 2021-07-10 14:10:32
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         runyi:0x7f6b31bff100
- ** ---------- .> transport:   redis://:**@127.0.0.1:6579/3
- ** ---------- .> results:     redis://:**@127.0.0.1:6579/3
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . aliyun.utils.large_file_to_oss
  . celery.accumulate
  . celery.backend_cleanup
  . celery.chain
  . celery.chord
  . celery.chord_unlock
  . celery.chunks
  . celery.group
  . celery.map
  . celery.starmap
  . course.tasks.async_browse_data
  . course.tasks.course_notify
  . course.tasks.search_job
  . get_has_account_lock
  . get_has_notify_course
  . get_live_num
  . questionnaire.tasks.get_address_by_ip
  . runyi.celery_app.debug_task
  . user.tasks.image_to_oss_task
  . user.tasks.test_celery

Testing the stop command again: it stops successfully

root@hecs01:~# C_FAKEFORK=1 sh -x /etc/init.d/celeryd stop
+ VERSION=10.1
+ echo celery init v10.1.
celery init v10.1.
+ id -u
+ [ 0 -ne 0 ]
+ origin_is_runlevel_dir
+ set +e
+ dirname /etc/init.d/celeryd
+ grep -q /etc/rc.\.d
+ echo 1
+ [ 1 -eq 0 ]
+ SCRIPT_FILE=/etc/init.d/celeryd
+ basename /etc/init.d/celeryd
+ SCRIPT_NAME=celeryd
+ DEFAULT_USER=celery
+ DEFAULT_PID_FILE=/var/run/celery/%n.pid
+ DEFAULT_LOG_FILE=/var/log/celery/%n%I.log
+ DEFAULT_LOG_LEVEL=INFO
+ DEFAULT_NODES=celery
+ DEFAULT_CELERYD=-m celery worker --detach
+ [ -d /etc/default ]
+ CELERY_CONFIG_DIR=/etc/default
+ CELERY_DEFAULTS=/etc/default/celeryd
+ [ -f /etc/default/celeryd ]
+ _config_sanity /etc/default/celeryd
+ local path=/etc/default/celeryd
+ ls -ld /etc/default/celeryd
+ awk {print $3}
+ local owner=root
+ ls -ld /etc/default/celeryd
+ cut -b 6
+ local iwgrp=-
+ ls -ld /etc/default/celeryd
+ cut -b 9
+ local iwoth=-
+ id -u root
+ [ 0 != 0 ]
+ [ - != - ]
+ [ - != - ]
+ echo Using config script: /etc/default/celeryd
Using config script: /etc/default/celeryd
+ . /etc/default/celeryd
+ CELERYD_NODES=worker1
+ CELERY_BIN=/usr/local/bin/celery
+ CELERY_APP=runyi
+ CELERYD_CHDIR=/home/runyi/runyi/runyi/
+ CELERYD_OPTS= --time-limit=300 --concurrency=8
+ CELERYD_LOG_LEVEL=DEBUG
+ CELERYD_LOG_FILE=/var/log/celery/%n%I.log
+ CELERYD_PID_FILE=/var/run/celery/%n.pid
+ CELERYD_USER=celery
+ CELERYD_GROUP=celery
+ CELERY_CREATE_DIRS=1
+ CELERYD_ULIMIT=65535
+ export DJANGO_SETTINGS_MODULE=settings
+ export PYTHONPATH=:/home/runyi/runyi
+ CELERY_APP_ARG=
+ [ ! -z runyi ]
+ CELERY_APP_ARG=--app=runyi
+ CELERYD_SU=su
+ CELERYD_SU_ARGS=
+ CELERYD_USER=celery
+ CELERY_CREATE_DIRS=1
+ CELERY_CREATE_RUNDIR=1
+ CELERY_CREATE_LOGDIR=1
+ [ -z /var/run/celery/%n.pid ]
+ [ -z /var/log/celery/%n%I.log ]
+ CELERYD_LOG_LEVEL=DEBUG
+ CELERY_BIN=/usr/local/bin/celery
+ CELERYD_MULTI=/usr/local/bin/celery multi
+ CELERYD_NODES=worker1
+ export CELERY_LOADER
+ [ -n  ]
+ dirname /var/log/celery/%n%I.log
+ CELERYD_LOG_DIR=/var/log/celery
+ dirname /var/run/celery/%n.pid
+ CELERYD_PID_DIR=/var/run/celery
+ [ -n /home/runyi/runyi/runyi/ ]
+ DAEMON_OPTS= --workdir=/home/runyi/runyi/runyi/
+ export PATH=.:/usr/local/jdk1.8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/sbin:/sbin
+ check_dev_null
+ [ ! -c /dev/null ]
+ check_paths
+ [ 1 -eq 1 ]
+ create_default_dir /var/log/celery
+ [ ! -d /var/log/celery ]
+ [ 1 -eq 1 ]
+ create_default_dir /var/run/celery
+ [ ! -d /var/run/celery ]
+ stop_workers
+ _chuid stopwait worker1 --pidfile=/var/run/celery/%n.pid
+ su celery -c /usr/local/bin/celery multi stopwait worker1 --pidfile=/var/run/celery/%n.pid
celery multi v5.0.4 (singularity)
> Stopping nodes...
	> worker1@hecs01: TERM -> 191329

worker: Warm shutdown (MainProcess)
> Waiting for 1 node -> 191329.....
	> worker1@hecs01: OK
> worker1@hecs01: DOWN
> Waiting for 1 node -> None...
+ exit 0

Notes:

  1. Commands:
C_FAKEFORK=1 sh -x /etc/init.d/celeryd stop
C_FAKEFORK=1 sh -x /etc/init.d/celeryd start

The two commands above are used by root only for troubleshooting when celery misbehaves: C_FAKEFORK=1 keeps the worker from detaching and sh -x traces each step of the init script, so errors are printed to the terminal.

  2. Commands:
/etc/init.d/celeryd start
/etc/init.d/celeryd stop
/etc/init.d/celeryd restart

The three commands above are what root normally uses to control celery.

  3. Commands:
celery -A proj multi start worker1 --pidfile="$HOME/run/celery/%n.pid" --logfile="$HOME/log/celery/%n%I.log"
celery -A proj multi restart worker1 --pidfile="$HOME/run/celery/%n.pid" --logfile="$HOME/log/celery/%n%I.log"
celery multi stopwait worker1 --pidfile="$HOME/run/celery/%n.pid"

The commands above are for use by the celery user.

For example:

celery@hecs01:/home/runyi/runyi$ celery -A runyi multi start worker1 --pidfile="$HOME/run/celery/%n.pid" --logfile="$HOME/log/celery/%n%I.log"
celery multi v5.0.4 (singularity)
> Starting nodes...
	> worker1@hecs01: OK
celery@hecs01:/home/runyi/runyi$ 
celery@hecs01:/home/runyi/runyi$ 
celery@hecs01:/home/runyi/runyi$ 
celery@hecs01:/home/runyi/runyi$ celery -A runyi multi restart worker1 --pidfile="$HOME/run/celery/%n.pid" --logfile="$HOME/log/celery/%n%I.log"
celery multi v5.0.4 (singularity)
> Stopping nodes...
	> worker1@hecs01: TERM -> 192393
> Waiting for 1 node -> 192393.....
	> worker1@hecs01: OK
> Restarting node worker1@hecs01: OK
> Waiting for 1 node -> None...
celery@hecs01:/home/runyi/runyi$ celery multi stopwait worker1 --pidfile="$HOME/run/celery/%n.pid"
celery multi v5.0.4 (singularity)
> Stopping nodes...
	> worker1@hecs01: TERM -> 192406
> Waiting for 1 node -> 192406.....
	> worker1@hecs01: OK
> worker1@hecs01: DOWN
> Waiting for 1 node -> None...
celery@hecs01:/home/runyi/runyi$ 
