repo (stringclasses, 856 values) | pull_number (int64, 3-127k) | instance_id (stringlengths, 12-58) | issue_numbers (sequencelengths, 1-5) | base_commit (stringlengths, 40) | patch (stringlengths, 67-1.54M) | test_patch (stringlengths, 0-107M) | problem_statement (stringlengths, 3-307k) | hints_text (stringlengths, 0-908k) | created_at (timestamp[s]) |
---|---|---|---|---|---|---|---|---|---|
celery/celery | 3,997 | celery__celery-3997 | [
"4412"
] | 8c8354f77eb5f1639bd4c314131ab123c1e5ad53 | diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -285,7 +285,7 @@ def __repr__(self):
'WARNING', old={'celery_redirect_stdouts_level'},
),
send_task_events=Option(
- False, type='bool', old={'celeryd_send_events'},
+ False, type='bool', old={'celery_send_events'},
),
state_db=Option(),
task_log_format=Option(DEFAULT_TASK_LOG_FMT),
| Request on_timeout should ignore soft time limit exception
When Request.on_timeout receives a soft timeout from billiard, it does the same as if it were receiving a hard time limit exception. This is run by the controller.
But the task may catch this exception and e.g. return (this is what soft timeouts are for).
This causes:
1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task
2. the task status to be set to failure and then to success in the same manner
3. if the task is participating in a chord, the chord result counter (at least with redis) is incremented twice (instead of once), making the chord return prematurely and eventually lose tasks…
1, 2 and 3 can of course lead to strange race conditions…
## Steps to reproduce (Illustration)
with the program in test_timeout.py:
```python
import time

import celery

app = celery.Celery('test_timeout')
app.conf.update(
    result_backend="redis://localhost/0",
    broker_url="amqp://celery:celery@localhost:5672/host",
)


@app.task(soft_time_limit=1)
def test():
    try:
        time.sleep(2)
    except Exception:
        return 1


@app.task()
def add(args):
    print("### adding", args)
    return sum(args)


@app.task()
def on_error(context, exception, traceback, **kwargs):
    print("### on_error: ", exception)


if __name__ == "__main__":
    result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())
    result.get()
```
start a worker and the program:
```
$ celery -A test_timeout worker -l WARNING
$ python3 test_timeout.py
```
## Expected behavior
The add method is called with `[1, 1]` as argument and test_timeout.py returns normally.
## Actual behavior
test_timeout.py fails with
```
celery.backends.base.ChordError: Callback error: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",
```
On the worker side, **on_error is called, but the add method is as well!**
```
[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]
[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]
[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in <listcomp>
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 243, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)
[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding
[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]
```
Of course, I chose on purpose to call test.s() twice, to show that the count in the chord continues. In fact:
- the chord result is incremented twice by the soft time limit error
- the chord result is again incremented twice by the correct return of the `test` task
## Conclusion
Request.on_timeout should not process soft time limit exceptions.
Here is a quick monkey patch (the correction in celery is trivial):
```python
def patch_celery_request_on_timeout():
    from celery.worker import request
    orig = request.Request.on_timeout

    def patched_on_timeout(self, soft, timeout):
        if not soft:
            orig(self, soft, timeout)

    request.Request.on_timeout = patched_on_timeout


patch_celery_request_on_timeout()
```
## version info
software -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://10.0.3.253/0
| 2017-04-25T13:21:05 |
||
celery/celery | 4,037 | celery__celery-4037 | [
"4036"
] | 897f2a7270308e0d60f13d895ba49158dae8105c | diff --git a/celery/contrib/sphinx.py b/celery/contrib/sphinx.py
--- a/celery/contrib/sphinx.py
+++ b/celery/contrib/sphinx.py
@@ -69,5 +69,5 @@ def get_signature_prefix(self, sig):
def setup(app):
"""Setup Sphinx extension."""
app.add_autodocumenter(TaskDocumenter)
- app.domains['py'].directives['task'] = TaskDirective
+ app.add_directive_to_domain('py', 'task', TaskDirective)
app.add_config_value('celery_task_prefix', '(task)', True)
| celery.contrib.sphinx fails with Sphinx 1.6.1
When using the `celery.contrib.sphinx` extension with Sphinx 1.6.1 and Celery 4.0.2, the following occurs:
```
Exception occurred:
File "/home/ubuntu/virtualenvs/venv-system/lib/python2.7/site-packages/celery/contrib/sphinx.py", line 72, in setup
app.domains['py'].directives['task'] = TaskDirective
AttributeError: 'Sphinx' object has no attribute 'domains'
The full traceback has been saved in /tmp/sphinx-err-oOWabx.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
make: *** [html] Error 1
```
The `domains` property seems to have been removed in sphinx-doc/sphinx#3656 and I think this line needs to be replaced with the [`add_directive` method](http://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_directive) (or more likely the [`add_directive_to_domain` method](http://www.sphinx-doc.org/en/stable/extdev/appapi.html#sphinx.application.Sphinx.add_directive_to_domain)).
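For illustration, a version-tolerant variant of the extension's `setup()` is sketched below. This is only a sketch, not the shipped fix; it assumes `TaskDocumenter` and `TaskDirective` are importable from `celery.contrib.sphinx` and simply prefers the newer API when it exists:
```python
from celery.contrib.sphinx import TaskDirective, TaskDocumenter


def setup(app):
    """Setup Sphinx extension (sketch)."""
    app.add_autodocumenter(TaskDocumenter)
    if hasattr(app, 'add_directive_to_domain'):
        # Public API, available since Sphinx 1.0 and still present in 1.6.x.
        app.add_directive_to_domain('py', 'task', TaskDirective)
    else:
        # Fallback for very old Sphinx versions that lack the helper.
        app.domains['py'].directives['task'] = TaskDirective
    app.add_config_value('celery_task_prefix', '(task)', True)
```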
| 2017-05-18T13:14:38 |
||
celery/celery | 4,131 | celery__celery-4131 | [
"3813"
] | 63c747889640bdea7753e83373a3a3e0dffc4bd9 | diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -99,7 +99,7 @@ def __init__(self, id, backend=None,
self.id = id
self.backend = backend or self.app.backend
self.parent = parent
- self.on_ready = promise(self._on_fulfilled, weak=True)
+ self.on_ready = promise(self._on_fulfilled)
self._cache = None
def then(self, callback, on_error=None, weak=False):
@@ -180,7 +180,7 @@ def get(self, timeout=None, propagate=True, interval=0.5,
assert_will_not_block()
_on_interval = promise()
if follow_parents and propagate and self.parent:
- on_interval = promise(self._maybe_reraise_parent_error, weak=True)
+ on_interval = promise(self._maybe_reraise_parent_error)
self._maybe_reraise_parent_error()
if on_interval:
_on_interval.then(on_interval)
@@ -474,7 +474,7 @@ def __init__(self, results, app=None, ready_barrier=None, **kwargs):
self.on_ready = promise(args=(self,))
self._on_full = ready_barrier or barrier(results)
if self._on_full:
- self._on_full.then(promise(self.on_ready, weak=True))
+ self._on_full.then(promise(self._on_ready))
def add(self, result):
"""Add :class:`AsyncResult` as a new member of the set.
| Redis Backend has a memory leak with AsyncResult.get()
## Checklist
- [X] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:3.5.2
billiard:3.5.0.2 redis:2.10.5
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://[redacted]
```
### Celery Settings
```
BROKER_BACKEND = 'redis'
BROKER_URL = environ.get('REDIS_URL')
CELERY_RESULT_BACKEND = environ.get('REDIS_URL')
CELERYD_CONCURRENCY = 1
BROKER_POOL_LIMIT = 0
CELERY_TASK_RESULT_EXPIRES = 60
CELERY_MAX_CACHED_RESULTS = 1
CELERY_DISABLE_RATE_LIMITS = True
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
```
- [X] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
1) Create celery app with Redis as transport and Backend (same cache for both)
2) Create a task within the celery app that returns a result
``` python
@CELERY_APP.task
def add():
    return 1 + 2
```
3) Execute `add.apply_async().get()` somehow (I have a Django app on Heroku so I made an endpoint for it)
4) Use `objgraph.show_growth()` to see that `AsyncResult` objects are leaking
## Expected behavior
`AsyncResult` objects get released once the result has been received via the `AsyncResult.get()` call. The `AsyncResult` is properly released when getting the result from `AsyncResult.result`.
## Actual behavior
`AsyncResult` objects accumulate in the `AsyncResult.backend._pending_results.concrete` dict. If the `AsyncResult` is popped from the dict, it is released as expected.
Note that the leak does not occur when executing tasks eagerly.
## Code sample
``` python
@CELERY_APP.task
def add():
    return 1 + 2


def add_request():
    response = add.apply_async().get()
    import objgraph
    objgraph.show_growth()
```
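A possible interim workaround, sketched from the observation above that popping the entry releases the object (`_pending_results` is a private backend attribute, so this may change between releases):
```python
def add_request():
    result = add.apply_async()
    response = result.get()
    # Drop the strong reference kept in backend._pending_results.concrete,
    # mirroring the manual "pop it from the dict" experiment described above.
    result.backend._pending_results.concrete.pop(result.id, None)
    return response
```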
| +1
I'm seeing leaks with AsyncResult with RPC results backend as well on Ubuntu 16.04.
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.12
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:rpc://admin:**@localhost/
```
Seeing this behavior as well.
@thedrow @ask [celery.result.ResultSet._on_ready](https://github.com/celery/celery/blob/master/celery/result.py#L482) no longer appears to be used anywhere, after [this change](https://github.com/celery/celery/commit/072ad1937f7d445a496369f0370033a0ba558ddf#diff-58b5816dbb95159ae9810e56eac54312L440). The promise directly calls `self.on_ready` now. Would that contribute to why the pending result is never removed? | 2017-07-11T19:39:23 |
|
celery/celery | 4,173 | celery__celery-4173 | [
"4160"
] | 9d345f583631709cfdd38b77d2947ec74d83a562 | diff --git a/celery/app/base.py b/celery/app/base.py
--- a/celery/app/base.py
+++ b/celery/app/base.py
@@ -2,6 +2,7 @@
"""Actual App instance implementation."""
from __future__ import absolute_import, unicode_literals
+from datetime import datetime
import os
import threading
import warnings
@@ -36,7 +37,8 @@
from celery.utils.collections import AttributeDictMixin
from celery.utils.dispatch import Signal
from celery.utils.functional import first, maybe_list, head_from_fun
-from celery.utils.time import timezone, get_exponential_backoff_interval
+from celery.utils.time import timezone, \
+ get_exponential_backoff_interval, to_utc
from celery.utils.imports import gen_task_name, instantiate, symbol_by_name
from celery.utils.log import get_logger
from celery.utils.objects import FallbackContext, mro_lookup
@@ -880,8 +882,8 @@ def prepare_config(self, c):
def now(self):
"""Return the current time and date as a datetime."""
- from datetime import datetime
- return datetime.utcnow().replace(tzinfo=self.timezone)
+ now_in_utc = to_utc(datetime.utcnow())
+ return now_in_utc.astimezone(self.timezone)
def select_queues(self, queues=None):
"""Select subset of queues.
| diff --git a/t/unit/app/test_app.py b/t/unit/app/test_app.py
--- a/t/unit/app/test_app.py
+++ b/t/unit/app/test_app.py
@@ -79,10 +79,10 @@ def test_now(self):
tz_utc = timezone.get_timezone('UTC')
tz_us_eastern = timezone.get_timezone(timezone_setting_value)
- now = datetime.utcnow().replace(tzinfo=tz_utc)
+ now = to_utc(datetime.utcnow())
app_now = self.app.now()
- assert app_now.tzinfo == tz_utc
+ assert app_now.tzinfo is tz_utc
assert app_now - now <= timedelta(seconds=1)
# Check that timezone conversion is applied from configuration
@@ -92,7 +92,8 @@ def test_now(self):
del self.app.timezone
app_now = self.app.now()
- assert app_now.tzinfo == tz_us_eastern
+
+ assert app_now.tzinfo.zone == tz_us_eastern.zone
diff = to_utc(datetime.utcnow()) - localize(app_now, tz_utc)
assert diff <= timedelta(seconds=1)
@@ -102,7 +103,7 @@ def test_now(self):
del self.app.timezone
app_now = self.app.now()
assert self.app.timezone == tz_us_eastern
- assert app_now.tzinfo == tz_us_eastern
+ assert app_now.tzinfo.zone == tz_us_eastern.zone
@patch('celery.app.base.set_default_app')
def test_set_default(self, set_default_app):
| Timezone calculations in app.now() are incorrect
From `celery/app/base.py`
```
def now(self):
    """Return the current time and date as a datetime."""
    from datetime import datetime
    return datetime.utcnow().replace(tzinfo=self.timezone)
```
This is just semantically wrong. It takes the current UTC time and attaches a different timezone label to it, _without actually ensuring it's the same point in time_.
So 9 AM UTC becomes 9 AM JST, even though 9 AM UTC is 6 PM JST.
The "proper" way of doing this is `self.timezone.normalize(datetime.utcnow())`
This bug breaks things like `crontab` for celery beat, for example, since the schedules get shifted around. And it's the wrong time too
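For illustration, the difference can be seen with pytz (timezone names are just examples):
```python
from datetime import datetime

import pytz

utc_now = pytz.utc.localize(datetime.utcnow())   # e.g. 09:00 UTC
tokyo = pytz.timezone('Asia/Tokyo')

wrong = utc_now.replace(tzinfo=tokyo)    # still "09:00", merely relabelled as Tokyo time
right = utc_now.astimezone(tokyo)        # 18:00 in Tokyo, the same instant as 09:00 UTC

assert wrong != right   # replace() produced a different point in time
```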
| This may be caused by the changes in this [Pull Request](https://github.com/celery/celery/pull/3867/files).
Perhaps also related: https://github.com/celery/celery/issues/4145
@rtpg Based on your suggestion, I think this might be the correct implementation:
```
def now(self):
    """Return the current time and date as a datetime."""
    from celery.utils.time import to_utc
    from datetime import datetime
    now_in_utc = to_utc(datetime.utcnow())
    return self.timezone.normalize(now_in_utc)
```
I need to test it some more though.
one detail (I'm not sure how it applies here, but it feels relevant) is whether `self.timezone` differs from the system timezone.
For example if self.timezone is UTC+1 but the system timezone is UTC+9
If those differ, then `datetime.utcnow()` will return the current timestamp (probably) by subtracting 9 hours from the `now`, but then `self.timezone.normalize(now_in_utc)` will shift it forward by only 1 hour?
Though... maybe the system timezone is "always right". This might be rambling based on an imprecise understanding of this.
@rtpg I think it is actually the other way round, the system clock counts in UTC (Universal Coordinated Time) and when you set the system timezone, it changes the representation only. Thus UTC is an absolute measure of time, whereas timezones are relative offsets to UTC. | 2017-07-28T05:50:57 |
celery/celery | 4,192 | celery__celery-4192 | [
"4104"
] | afebe7a6e0e4320b87d6a73e8514d206d7ccf564 | diff --git a/celery/utils/dispatch/signal.py b/celery/utils/dispatch/signal.py
--- a/celery/utils/dispatch/signal.py
+++ b/celery/utils/dispatch/signal.py
@@ -5,10 +5,12 @@
import threading
import weakref
import warnings
+from kombu.utils.functional import retry_over_time
from celery.exceptions import CDeprecationWarning
from celery.five import python_2_unicode_compatible, range, text_t
from celery.local import PromiseProxy, Proxy
from celery.utils.functional import fun_accepts_kwargs
+from celery.utils.time import humanize_seconds
from celery.utils.log import get_logger
try:
from weakref import WeakMethod
@@ -36,6 +38,10 @@ def _make_id(target): # pragma: no cover
NO_RECEIVERS = object()
+RECEIVER_RETRY_ERROR = """\
+Could not process signal receiver %(receiver)s. Retrying %(when)s...\
+"""
+
@python_2_unicode_compatible
class Signal(object): # pragma: no cover
@@ -103,12 +109,49 @@ def connect(self, *args, **kwargs):
dispatch_uid (Hashable): An identifier used to uniquely identify a
particular instance of a receiver. This will usually be a
string, though it may be anything hashable.
+
+ retry (bool): If the signal receiver raises an exception
+ (e.g. ConnectionError), the receiver will be retried until it
+ runs successfully. A strong ref to the receiver will be stored
+ and the `weak` option will be ignored.
"""
- def _handle_options(sender=None, weak=True, dispatch_uid=None):
+ def _handle_options(sender=None, weak=True, dispatch_uid=None,
+ retry=False):
def _connect_signal(fun):
- self._connect_signal(fun, sender, weak, dispatch_uid)
+
+ options = {'dispatch_uid': dispatch_uid,
+ 'weak': weak}
+
+ def _retry_receiver(retry_fun):
+
+ def _try_receiver_over_time(*args, **kwargs):
+ def on_error(exc, intervals, retries):
+ interval = next(intervals)
+ err_msg = RECEIVER_RETRY_ERROR % \
+ {'receiver': retry_fun,
+ 'when': humanize_seconds(interval, 'in', ' ')}
+ logger.error(err_msg)
+ return interval
+
+ return retry_over_time(retry_fun, Exception, args,
+ kwargs, on_error)
+
+ return _try_receiver_over_time
+
+ if retry:
+ options['weak'] = False
+ if not dispatch_uid:
+ # if there's no dispatch_uid then we need to set the
+ # dispatch uid to the original func id so we can look
+ # it up later with the original func id
+ options['dispatch_uid'] = _make_id(fun)
+ fun = _retry_receiver(fun)
+
+ self._connect_signal(fun, sender, options['weak'],
+ options['dispatch_uid'])
return fun
+
return _connect_signal
if args and callable(args[0]):
@@ -158,6 +201,7 @@ def _connect_signal(self, receiver, sender, weak, dispatch_uid):
else:
self.receivers.append((lookup_key, receiver))
self.sender_receivers_cache.clear()
+
return receiver
def disconnect(self, receiver=None, sender=None, weak=None,
| diff --git a/t/unit/utils/test_dispatcher.py b/t/unit/utils/test_dispatcher.py
--- a/t/unit/utils/test_dispatcher.py
+++ b/t/unit/utils/test_dispatcher.py
@@ -143,3 +143,32 @@ def test_disconnection(self):
finally:
a_signal.disconnect(receiver_3)
self._testIsClean(a_signal)
+
+ def test_retry(self):
+
+ class non_local:
+ counter = 1
+
+ def succeeds_eventually(val, **kwargs):
+ non_local.counter += 1
+ if non_local.counter < 3:
+ raise ValueError('this')
+
+ return val
+
+ a_signal.connect(succeeds_eventually, sender=self, retry=True)
+ try:
+ result = a_signal.send(sender=self, val='test')
+ assert non_local.counter == 3
+ assert result[0][1] == 'test'
+ finally:
+ a_signal.disconnect(succeeds_eventually, sender=self)
+ self._testIsClean(a_signal)
+
+ def test_retry_with_dispatch_uid(self):
+ uid = 'abc123'
+ a_signal.connect(receiver_1_arg, sender=self, retry=True,
+ dispatch_uid=uid)
+ assert a_signal.receivers[0][0][0] == uid
+ a_signal.disconnect(receiver_1_arg, sender=self, dispatch_uid=uid)
+ self._testIsClean(a_signal)
| revoke() does not support wait-and-retry
This is potentially not an actual issue, but it is behavior which is not ideal and which could potentially be fixed.
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
```
software -> celery:3.1.20 (Cipater) kombu:3.0.33 py:2.7.13
billiard:3.3.0.23 py-amqp:1.4.9
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled
```
Save this code as reproducer.py. Ensure the broker is down.
```
#!/usr/env python
from celery import Celery
from celery.app import control
from celery.signals import celeryd_after_setup
from uuid import uuid4

app = Celery(broker='amqp://')


@celeryd_after_setup.connect
def delete_worker(*args, **kwargs):
    """
    Imagine if we wanted to do some cleanup, supposing that maybe things
    had previously been left in a dirty state for some reason.
    """
    controller = control.Control(app=app)
    controller.revoke(uuid4(), terminate=True)


# @worker_shutdown.connect
# def delete_worker(*args, **kwargs):
#     controller = control.Control(app=app)
#     controller.revoke(uuid4(), terminate=True)
```
We're attempting to do a bit of cleanup in case a previous task was not cleaned up properly, and to do so we're calling Control.revoke(...., terminate=True).
## Expected behavior
The celeryd_after_setup signal supposedly takes place after the queue setup has been handled. Therefore, I would expect that if it cannot communicate with the broker during this period it would attempt to wait and retry.
Whether the worker_shutdown case should attempt to retry is more questionable, but I still wouldn't necessarily expect it to immediately fail and traceback.
My question is: is this intentional behavior, and if it is intentional behavior, would the Celery devs consider adding retry functionality to either of these cases to be an improvement?
## Actual behavior
**celeryd_after_setup** signal:
```
[2017-06-21 22:42:52,145: ERROR/MainProcess] Unrecoverable error: error(error(111, 'Connection refused'),)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/usr/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
self.on_start()
File "/usr/lib/python2.7/site-packages/celery/apps/worker.py", line 158, in on_start
sender=self.hostname, instance=self, conf=self.app.conf,
File "/usr/lib/python2.7/site-packages/celery/utils/dispatch/signal.py", line 166, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/vagrant/devel/reproducer.py", line 14, in delete_worker
controller.revoke(uuid4(), terminate=True)
File "/usr/lib/python2.7/site-packages/celery/app/control.py", line 172, in revoke
'signal': signal}, **kwargs)
File "/usr/lib/python2.7/site-packages/celery/app/control.py", line 316, in broadcast
limit, callback, channel=channel,
File "/usr/lib/python2.7/site-packages/kombu/pidbox.py", line 283, in _broadcast
chan = channel or self.connection.default_channel
File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 756, in default_channel
self.connection
File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 741, in connection
self._connection = self._establish_connection()
File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 696, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
conn = self.Connection(**opts)
File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
self.transport = self.Transport(host, connect_timeout, ssl)
File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
return create_transport(host, connect_timeout, ssl)
File "/usr/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
return TCPTransport(host, connect_timeout)
File "/usr/lib/python2.7/site-packages/amqp/transport.py", line 95, in __init__
raise socket.error(last_err)
error: [Errno 111] Connection refused
```
**worker_shutdown** signal:
```
[2017-06-21 22:40:47,542: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds...
[2017-06-21 22:40:49,554: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 4.00 seconds...
^C
worker: Hitting Ctrl+C again will terminate all running tasks!
worker: Warm shutdown (MainProcess)
[2017-06-21 22:40:52,860: WARNING/MainProcess] Traceback (most recent call last):
[2017-06-21 22:40:52,861: WARNING/MainProcess] File "/usr/lib64/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
[2017-06-21 22:40:52,861: WARNING/MainProcess] finalizer()
[2017-06-21 22:40:52,861: WARNING/MainProcess] File "/usr/lib64/python2.7/multiprocessing/util.py", line 207, in __call__
[2017-06-21 22:40:52,862: WARNING/MainProcess] res = self._callback(*self._args, **self._kwargs)
[2017-06-21 22:40:52,862: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/celery/worker/__init__.py", line 201, in _send_worker_shutdown
[2017-06-21 22:40:52,862: WARNING/MainProcess] signals.worker_shutdown.send(sender=self)
[2017-06-21 22:40:52,862: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/celery/utils/dispatch/signal.py", line 166, in send
[2017-06-21 22:40:52,863: WARNING/MainProcess] response = receiver(signal=self, sender=sender, **named)
[2017-06-21 22:40:52,863: WARNING/MainProcess] File "/home/vagrant/devel/reproducer.py", line 20, in delete_worker
[2017-06-21 22:40:52,864: WARNING/MainProcess] controller.revoke(uuid4(), terminate=True)
[2017-06-21 22:40:52,864: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/celery/app/control.py", line 172, in revoke
[2017-06-21 22:40:52,864: WARNING/MainProcess] 'signal': signal}, **kwargs)
[2017-06-21 22:40:52,864: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/celery/app/control.py", line 316, in broadcast
[2017-06-21 22:40:52,865: WARNING/MainProcess] limit, callback, channel=channel,
[2017-06-21 22:40:52,865: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/kombu/pidbox.py", line 283, in _broadcast
[2017-06-21 22:40:52,865: WARNING/MainProcess] chan = channel or self.connection.default_channel
[2017-06-21 22:40:52,865: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 756, in default_channel
[2017-06-21 22:40:52,866: WARNING/MainProcess] self.connection
[2017-06-21 22:40:52,866: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 741, in connection
[2017-06-21 22:40:52,866: WARNING/MainProcess] self._connection = self._establish_connection()
[2017-06-21 22:40:52,866: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/kombu/connection.py", line 696, in _establish_connection
[2017-06-21 22:40:52,867: WARNING/MainProcess] conn = self.transport.establish_connection()
[2017-06-21 22:40:52,867: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 116, in establish_connection
[2017-06-21 22:40:52,867: WARNING/MainProcess] conn = self.Connection(**opts)
[2017-06-21 22:40:52,867: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 165, in __init__
[2017-06-21 22:40:52,868: WARNING/MainProcess] self.transport = self.Transport(host, connect_timeout, ssl)
[2017-06-21 22:40:52,868: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 186, in Transport
[2017-06-21 22:40:52,868: WARNING/MainProcess] return create_transport(host, connect_timeout, ssl)
[2017-06-21 22:40:52,869: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/amqp/transport.py", line 299, in create_transport
[2017-06-21 22:40:52,869: WARNING/MainProcess] return TCPTransport(host, connect_timeout)
[2017-06-21 22:40:52,869: WARNING/MainProcess] File "/usr/lib/python2.7/site-packages/amqp/transport.py", line 95, in __init__
[2017-06-21 22:40:52,869: WARNING/MainProcess] raise socket.error(last_err)
[2017-06-21 22:40:52,870: WARNING/MainProcess] error: [Errno 111] Connection refused
```
| If the `revoke()` is being run by a worker, I expect it to have this wait-and-retry behavior.
Imagine that a webhandler, e.g. httpd, imports the Celery code and calls `revoke()`. In that case having the wait-and-retry behavior could be problematic because after some time, e.g. 500 seconds, the WSGI request will time-out. So revoke has different usages and I'm not sure if those usages affect the expected wait-and-retry behavior.
I'd like to hear some opinions from the current core maintainers about their thoughts on this expectation. I'm also looking for insight into why this isn't working given that our usage is running on the worker during its startup and shutdown signals.
@auvipy Do you or another core dev have an opinion on this? We are trying to determine if this is the intended behavior and we should implement a workaround or if this is expected to work and just not working correctly.
Sorry for being late. I couldn't remember any reason for the exception, but I am on the side of inclusion if it improves the dev experience.
@thedrow, what do you say?
I'm willing to accept a PR that adds a retry parameter to `signal.connect` defaulting to **false**. Other functionality such as backoff can also be implemented.
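For illustration, the reproducer's receiver could then opt in roughly like this (a sketch of the proposed, not yet final, API):
```python
from uuid import uuid4

from celery.app import control
from celery.signals import celeryd_after_setup


@celeryd_after_setup.connect(retry=True)  # hypothetical flag being discussed
def delete_worker(sender=None, instance=None, **kwargs):
    controller = control.Control(app=instance.app)
    controller.revoke(uuid4(), terminate=True)
```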
@thedrow That idea looks clear and would resolve the problem for us. Thanks.
Hopefully someone from the Pulp team can contribute a PR. We want to target the 4.0 branch right? We should also submit docs with it. Ask used to put together the release notes so we left that out of our commits. Do you want this PR to include a release note for it too?
Also we want to backport the feature to the 3.1 branch so that we can carry the patch downstream for 3.1. Will there be any more 3.1 releases?
No one except me has time or interest in the 3.x branch. You should send it against master first; after that things could be backported. | 2017-08-11T17:09:08 |
celery/celery | 4,203 | celery__celery-4203 | [
"4201"
] | 2394e738a7d0c0c736f5d689de4d32325ba54f48 | diff --git a/celery/platforms.py b/celery/platforms.py
--- a/celery/platforms.py
+++ b/celery/platforms.py
@@ -86,7 +86,7 @@
You're running the worker with superuser privileges: this is
absolutely not recommended!
-Please specify a different user using the -u option.
+Please specify a different user using the --uid option.
User information: uid={uid} euid={euid} gid={gid} egid={egid}
"""
| -u option does not exist
## Steps to reproduce
Start -> celery -A application worker -l info
## Actual behavior
RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the -u option.
User information: uid=0 euid=0 gid=0 egid=0
uid=uid, euid=euid, gid=gid, egid=egid
## Fixes
When displaying the help menu -> celery -A application worker -l info -h
There is currently no -u option, and the warning should be changed to refer to the --uid / --gid options.
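For reference, the options the message should point to are used like this (user and group names are placeholders):
```
$ celery -A application worker -l info --uid=celery_user --gid=celery_group
```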
| Nice observation. Would you like to open a pull request for this? | 2017-08-15T13:31:58 |
|
celery/celery | 4,205 | celery__celery-4205 | [
"4106"
] | 2394e738a7d0c0c736f5d689de4d32325ba54f48 | diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -815,6 +815,7 @@ class GroupResult(ResultSet):
Arguments:
id (str): The id of the group.
results (Sequence[AsyncResult]): List of result instances.
+ parent (ResultBase): Parent result of this group.
"""
#: The UUID of the group.
@@ -823,8 +824,9 @@ class GroupResult(ResultSet):
#: List/iterator of results in the group
results = None
- def __init__(self, id=None, results=None, **kwargs):
+ def __init__(self, id=None, results=None, parent=None, **kwargs):
self.id = id
+ self.parent = parent
ResultSet.__init__(self, results, **kwargs)
def save(self, backend=None):
@@ -853,7 +855,11 @@ def __bool__(self):
def __eq__(self, other):
if isinstance(other, GroupResult):
- return other.id == self.id and other.results == self.results
+ return (
+ other.id == self.id and
+ other.results == self.results and
+ other.parent == self.parent
+ )
return NotImplemented
def __ne__(self, other):
@@ -865,7 +871,7 @@ def __repr__(self):
', '.join(r.id for r in self.results))
def as_tuple(self):
- return self.id, [r.as_tuple() for r in self.results]
+ return (self.id, self.parent), [r.as_tuple() for r in self.results]
@property
def children(self):
@@ -969,13 +975,15 @@ def result_from_tuple(r, app=None):
Result = app.AsyncResult
if not isinstance(r, ResultBase):
res, nodes = r
- if nodes:
- return app.GroupResult(
- res, [result_from_tuple(child, app) for child in nodes],
- )
- # previously didn't include parent
id, parent = res if isinstance(res, (list, tuple)) else (res, None)
if parent:
parent = result_from_tuple(parent, app)
+
+ if nodes:
+ return app.GroupResult(
+ id, [result_from_tuple(child, app) for child in nodes],
+ parent=parent,
+ )
+
return Result(id, parent=parent)
return r
| diff --git a/t/unit/tasks/test_result.py b/t/unit/tasks/test_result.py
--- a/t/unit/tasks/test_result.py
+++ b/t/unit/tasks/test_result.py
@@ -595,6 +595,21 @@ def test_len(self):
def test_eq_other(self):
assert self.ts != 1
+ def test_eq_with_parent(self):
+ # GroupResult instances with different .parent are not equal
+ grp_res = self.app.GroupResult(
+ uuid(), [self.app.AsyncResult(uuid()) for _ in range(10)],
+ parent=self.app.AsyncResult(uuid())
+ )
+ grp_res_2 = self.app.GroupResult(grp_res.id, grp_res.results)
+ assert grp_res != grp_res_2
+
+ grp_res_2.parent = self.app.AsyncResult(uuid())
+ assert grp_res != grp_res_2
+
+ grp_res_2.parent = grp_res.parent
+ assert grp_res == grp_res_2
+
@pytest.mark.usefixtures('depends_on_current_app')
def test_pickleable(self):
assert pickle.loads(pickle.dumps(self.ts))
@@ -892,3 +907,29 @@ def test_GroupResult(self):
)
assert x, result_from_tuple(x.as_tuple() == self.app)
assert x, result_from_tuple(x == self.app)
+
+ def test_GroupResult_with_parent(self):
+ parent = self.app.AsyncResult(uuid())
+ result = self.app.GroupResult(
+ uuid(), [self.app.AsyncResult(uuid()) for _ in range(10)],
+ parent
+ )
+ second_result = result_from_tuple(result.as_tuple(), self.app)
+ assert second_result == result
+ assert second_result.parent == parent
+
+ def test_GroupResult_as_tuple(self):
+ parent = self.app.AsyncResult(uuid())
+ result = self.app.GroupResult(
+ 'group-result-1',
+ [self.app.AsyncResult('async-result-{}'.format(i))
+ for i in range(2)],
+ parent
+ )
+ (result_id, parent_id), group_results = result.as_tuple()
+ assert result_id == result.id
+ assert parent_id == parent.id
+ assert isinstance(group_results, list)
+ expected_grp_res = [(('async-result-{}'.format(i), None), None)
+ for i in range(2)]
+ assert group_results == expected_grp_res
| GroupResult.as_tuple forgets to serialize the group parent
```
celery -A ct report
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.9
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:cache+memcached://127.0.0.1:11211/
broker_url: u'amqp://guest:********@localhost:5672//'
result_backend: u'cache+memcached://127.0.0.1:11211/'
```
**Steps to reproduce**
```
# ct.py
from __future__ import absolute_import, unicode_literals

from celery import Celery

app = Celery(
    'ct',
    broker='amqp://guest@localhost//',
    backend='cache+memcached://127.0.0.1:11211/'
)


@app.task
def nothing(*args):
    return args


if __name__ == '__main__':
    app.start()
```
Start worker
`python ct.py worker -l INFO`
In `python` console
```
>>> from celery import chain, group
>>> from celery.result import result_from_tuple
>>> from ct import nothing
>>> c = nothing.s() | group(nothing.s(), nothing.s())
>>> r = c.delay()
>>> r
<GroupResult: f46d7bcc-3140-40e8-9e21-d2f74eef6f28 [4854af6e-8e4a-48d8-98e9-d5f0eefcce20, 5c8d4ec1-84ff-4845-a9b2-9f07fd9bed19]>
>>> r.parent
<AsyncResult: a359ae60-ed18-4876-a67b-fc8cd83759a2>
>>> t = r.as_tuple()
>>> r2 = result_from_tuple(t)
>>> r2
<GroupResult: f46d7bcc-3140-40e8-9e21-d2f74eef6f28 [4854af6e-8e4a-48d8-98e9-d5f0eefcce20, 5c8d4ec1-84ff-4845-a9b2-9f07fd9bed19]>
>>> r2.parent
>>> r2.parent is None
True
```
**Expected behavior**
```
>>> r2 = result_from_tuple(t)
>>> r2.parent
<AsyncResult: a359ae60-ed18-4876-a67b-fc8cd83759a2>
>>>
```
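With the patch above, `as_tuple()` serializes `(id, parent)` for the group, so the round trip keeps the parent; a minimal check continuing the session above (sketch):
```python
t = r.as_tuple()               # now ((group_id, parent_tuple), [child tuples])
r2 = result_from_tuple(t)
assert r2.parent is not None and r2.parent.id == r.parent.id
```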
| Does this reproduce against the master branch as well?
> Does this reproduce against the master branch as well?
Yes, it does. | 2017-08-16T09:55:32 |
celery/celery | 4,240 | celery__celery-4240 | [
"4232"
] | 9b2a1720781930f8eed87bce2c3396e40a99529e | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -395,23 +395,19 @@ def __or__(self, other):
other = maybe_unroll_group(other)
if isinstance(self, _chain):
# chain | group() -> chain
- sig = self.clone()
- sig.tasks.append(other)
- return sig
+ return _chain(seq_concat_item(
+ self.unchain_tasks(), other), app=self._app)
# task | group() -> chain
return _chain(self, other, app=self.app)
if not isinstance(self, _chain) and isinstance(other, _chain):
# task | chain -> chain
- return _chain(
- seq_concat_seq((self,), other.tasks), app=self._app)
+ return _chain(seq_concat_seq(
+ (self,), other.unchain_tasks()), app=self._app)
elif isinstance(other, _chain):
# chain | chain -> chain
- sig = self.clone()
- if isinstance(sig.tasks, tuple):
- sig.tasks = list(sig.tasks)
- sig.tasks.extend(other.tasks)
- return sig
+ return _chain(seq_concat_seq(
+ self.unchain_tasks(), other.unchain_tasks()), app=self._app)
elif isinstance(self, chord):
# chord(ONE, body) | other -> ONE | body | other
# chord with one header task is unecessary.
@@ -436,8 +432,8 @@ def __or__(self, other):
return sig
else:
# chain | task -> chain
- return _chain(
- seq_concat_item(self.tasks, other), app=self._app)
+ return _chain(seq_concat_item(
+ self.unchain_tasks(), other), app=self._app)
# task | task -> chain
return _chain(self, other, app=self._app)
return NotImplemented
@@ -557,6 +553,15 @@ def clone(self, *args, **kwargs):
]
return s
+ def unchain_tasks(self):
+ # Clone chain's tasks assigning sugnatures from link_error
+ # to each task
+ tasks = [t.clone() for t in self.tasks]
+ for sig in self.options.get('link_error', []):
+ for task in tasks:
+ task.link_error(sig)
+ return tasks
+
def apply_async(self, args=(), kwargs={}, **options):
# python is best at unpacking kwargs, so .run is here to do that.
app = self.app
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -188,6 +188,52 @@ def test_apply_async_when_not_registered(self):
s = signature('xxx.not.registered', app=self.app)
assert s._apply_async
+ def test_keeping_link_error_on_chaining(self):
+ x = self.add.s(2, 2) | self.mul.s(4)
+ assert isinstance(x, _chain)
+ x.link_error(SIG)
+ assert SIG in x.options['link_error']
+
+ t = signature(SIG)
+ z = x | t
+ assert isinstance(z, _chain)
+ assert t in z.tasks
+ assert not z.options.get('link_error')
+ assert SIG in z.tasks[0].options['link_error']
+ assert not z.tasks[2].options.get('link_error')
+ assert SIG in x.options['link_error']
+ assert t not in x.tasks
+ assert not x.tasks[0].options.get('link_error')
+
+ z = t | x
+ assert isinstance(z, _chain)
+ assert t in z.tasks
+ assert not z.options.get('link_error')
+ assert SIG in z.tasks[1].options['link_error']
+ assert not z.tasks[0].options.get('link_error')
+ assert SIG in x.options['link_error']
+ assert t not in x.tasks
+ assert not x.tasks[0].options.get('link_error')
+
+ y = self.add.s(4, 4) | self.div.s(2)
+ assert isinstance(y, _chain)
+
+ z = x | y
+ assert isinstance(z, _chain)
+ assert not z.options.get('link_error')
+ assert SIG in z.tasks[0].options['link_error']
+ assert not z.tasks[2].options.get('link_error')
+ assert SIG in x.options['link_error']
+ assert not x.tasks[0].options.get('link_error')
+
+ z = y | x
+ assert isinstance(z, _chain)
+ assert not z.options.get('link_error')
+ assert SIG in z.tasks[3].options['link_error']
+ assert not z.tasks[1].options.get('link_error')
+ assert SIG in x.options['link_error']
+ assert not x.tasks[0].options.get('link_error')
+
class test_xmap_xstarmap(CanvasCase):
| Joining two chains loses the link_error of the last chain
https://github.com/celery/celery/blob/cbbf481801079f0e2cfbfe464c9ecfe3ccc7a067/celery/canvas.py#L408-L414
Should be something like
```python
link_error_sigs = other._with_list_option('link_error')
sig.tasks.extend(
    reduce(
        lambda t, s: t.on_error(s), link_error_sigs, t.clone())
    for t in other.tasks)
```
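For illustration, a sketch of the symptom using placeholder tasks (`add`, `mul`, `on_error`):
```python
x = add.s(2, 2) | mul.s(4)
y = add.s(8, 8) | mul.s(2)
y.link_error(on_error.s())   # errback recorded on y's own options

z = x | y
# Before the fix, z is built as x.clone() with y's tasks appended, so the
# errback stored on y is neither kept in z.options nor re-attached to
# y's tasks: it is silently dropped.
```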
| 2017-08-31T07:44:24 |
|
celery/celery | 4,251 | celery__celery-4251 | [
"3888"
] | 10adb99538b23621418a69d674c1c01de267e045 | diff --git a/celery/worker/consumer/consumer.py b/celery/worker/consumer/consumer.py
--- a/celery/worker/consumer/consumer.py
+++ b/celery/worker/consumer/consumer.py
@@ -305,6 +305,12 @@ def _limit_task(self, request, bucket, tokens):
return bucket.add(request)
return self._schedule_bucket_request(request, bucket, tokens)
+ def _limit_post_eta(self, request, bucket, tokens):
+ self.qos.decrement_eventually()
+ if bucket.contents:
+ return bucket.add(request)
+ return self._schedule_bucket_request(request, bucket, tokens)
+
def start(self):
blueprint = self.blueprint
while blueprint.state not in STOP_CONDITIONS:
diff --git a/celery/worker/strategy.py b/celery/worker/strategy.py
--- a/celery/worker/strategy.py
+++ b/celery/worker/strategy.py
@@ -84,6 +84,7 @@ def default(task, app, consumer,
get_bucket = consumer.task_buckets.__getitem__
handle = consumer.on_task_request
limit_task = consumer._limit_task
+ limit_post_eta = consumer._limit_post_eta
body_can_be_buffer = consumer.pool.body_can_be_buffer
Request = symbol_by_name(task.Request)
Req = create_request_cls(Request, task, consumer.pool, hostname, eventer)
@@ -123,6 +124,8 @@ def task_message_handler(message, body, ack, reject, callbacks,
expires=req.expires and req.expires.isoformat(),
)
+ bucket = None
+ eta = None
if req.eta:
try:
if req.utc:
@@ -133,17 +136,22 @@ def task_message_handler(message, body, ack, reject, callbacks,
error("Couldn't convert ETA %r to timestamp: %r. Task: %r",
req.eta, exc, req.info(safe=True), exc_info=True)
req.reject(requeue=False)
- else:
- consumer.qos.increment_eventually()
- call_at(eta, apply_eta_task, (req,), priority=6)
- else:
- if rate_limits_enabled:
- bucket = get_bucket(task.name)
- if bucket:
- return limit_task(req, bucket, 1)
- task_reserved(req)
- if callbacks:
- [callback(req) for callback in callbacks]
- handle(req)
-
+ if rate_limits_enabled:
+ bucket = get_bucket(task.name)
+
+ if eta and bucket:
+ consumer.qos.increment_eventually()
+ return call_at(eta, limit_post_eta, (req, bucket, 1),
+ priority=6)
+ if eta:
+ consumer.qos.increment_eventually()
+ call_at(eta, apply_eta_task, (req,), priority=6)
+ return task_message_handler
+ if bucket:
+ return limit_task(req, bucket, 1)
+
+ task_reserved(req)
+ if callbacks:
+ [callback(req) for callback in callbacks]
+ handle(req)
return task_message_handler
| diff --git a/t/unit/worker/test_strategy.py b/t/unit/worker/test_strategy.py
--- a/t/unit/worker/test_strategy.py
+++ b/t/unit/worker/test_strategy.py
@@ -98,6 +98,14 @@ def was_rate_limited(self):
assert not self.was_reserved()
return self.consumer._limit_task.called
+ def was_limited_with_eta(self):
+ assert not self.was_reserved()
+ called = self.consumer.timer.call_at.called
+ if called:
+ assert self.consumer.timer.call_at.call_args[0][1] == \
+ self.consumer._limit_post_eta
+ return called
+
def was_scheduled(self):
assert not self.was_reserved()
assert not self.was_rate_limited()
@@ -186,6 +194,13 @@ def test_when_rate_limited(self):
C()
assert C.was_rate_limited()
+ def test_when_rate_limited_with_eta(self):
+ task = self.add.s(2, 2).set(countdown=10)
+ with self._context(task, rate_limits=True, limit='1/m') as C:
+ C()
+ assert C.was_limited_with_eta()
+ C.consumer.qos.increment_eventually.assert_called_with()
+
def test_when_rate_limited__limits_disabled(self):
task = self.add.s(2, 2)
with self._context(task, rate_limits=False, limit='1/m') as C:
| Rate limit ignored when ETA specified
While I accept this is a limitation, it's not a very clear one from the documentation. It would be good to add it as a note/warning somewhere to clarify that ETA and rate limit are incompatible.
http://docs.celeryproject.org/en/latest/internals/worker.html#id1
http://docs.celeryproject.org/en/latest/userguide/tasks.html?highlight=delay#Task.rate_limit
## Checklist
- [ ] I have included the output of ``celery -A proj report`` in the issue.
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
This is explained [here](http://stackoverflow.com/questions/30804857/celery-rate-limit-not-respected-when-scheduling-via-eta-option).
## Expected behavior
I was expecting that the task would be delayed as per the ETA, and then be executed sometime after that, respecting the rate limit.
## Actual behavior
The rate limit is ignored entirely by Celery.
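A minimal sketch of the behaviour (task, app and numbers are placeholders):
```python
@app.task(rate_limit='1/m')
def ping(i):
    return i


for i in range(10):
    # With an ETA/countdown the token bucket is bypassed entirely, so all
    # ten tasks run about 5 seconds later instead of one per minute.
    ping.apply_async(args=(i,), countdown=5)
```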
| I don't think the documentation changes fix this issue but it's a start until someone implements the support for using both features at the same time. | 2017-09-06T06:00:58 |
celery/celery | 4,260 | celery__celery-4260 | [
"4259"
] | d59518f5fb68957b2d179aa572af6f58cd02de40 | diff --git a/celery/app/amqp.py b/celery/app/amqp.py
--- a/celery/app/amqp.py
+++ b/celery/app/amqp.py
@@ -398,7 +398,8 @@ def as_task_v1(self, task_id, name, args=None, kwargs=None,
chord=None, callbacks=None, errbacks=None, reply_to=None,
time_limit=None, soft_time_limit=None,
create_sent_event=False, root_id=None, parent_id=None,
- shadow=None, now=None, timezone=None):
+ shadow=None, now=None, timezone=None,
+ **compat_kwargs):
args = args or ()
kwargs = kwargs or {}
utc = self.utc
diff --git a/celery/app/base.py b/celery/app/base.py
--- a/celery/app/base.py
+++ b/celery/app/base.py
@@ -739,6 +739,8 @@ def send_task(self, name, args=None, kwargs=None, countdown=None,
reply_to or self.oid, time_limit, soft_time_limit,
self.conf.task_send_sent_event,
root_id, parent_id, shadow, chain,
+ argsrepr=options.get('argsrepr'),
+ kwargsrepr=options.get('kwargsrepr'),
)
if connection:
| diff --git a/t/unit/tasks/test_tasks.py b/t/unit/tasks/test_tasks.py
--- a/t/unit/tasks/test_tasks.py
+++ b/t/unit/tasks/test_tasks.py
@@ -41,7 +41,6 @@ def apply_async(self, *args, **kwargs):
class TasksCase:
def setup(self):
- self.app.conf.task_protocol = 1 # XXX Still using proto1
self.mytask = self.app.task(shared=False)(return_True)
@self.app.task(bind=True, count=0, shared=False)
@@ -412,20 +411,28 @@ def test_AsyncResult(self):
def assert_next_task_data_equal(self, consumer, presult, task_name,
test_eta=False, test_expires=False,
- **kwargs):
+ properties=None, headers=None, **kwargs):
next_task = consumer.queues[0].get(accept=['pickle', 'json'])
- task_data = next_task.decode()
- assert task_data['id'] == presult.id
- assert task_data['task'] == task_name
- task_kwargs = task_data.get('kwargs', {})
+ task_properties = next_task.properties
+ task_headers = next_task.headers
+ task_body = next_task.decode()
+ task_args, task_kwargs, embed = task_body
+ assert task_headers['id'] == presult.id
+ assert task_headers['task'] == task_name
if test_eta:
- assert isinstance(task_data.get('eta'), string_t)
- to_datetime = parse_iso8601(task_data.get('eta'))
+ assert isinstance(task_headers.get('eta'), string_t)
+ to_datetime = parse_iso8601(task_headers.get('eta'))
assert isinstance(to_datetime, datetime)
if test_expires:
- assert isinstance(task_data.get('expires'), string_t)
- to_datetime = parse_iso8601(task_data.get('expires'))
+ assert isinstance(task_headers.get('expires'), string_t)
+ to_datetime = parse_iso8601(task_headers.get('expires'))
assert isinstance(to_datetime, datetime)
+ properties = properties or {}
+ for arg_name, arg_value in items(properties):
+ assert task_properties.get(arg_name) == arg_value
+ headers = headers or {}
+ for arg_name, arg_value in items(headers):
+ assert task_headers.get(arg_name) == arg_value
for arg_name, arg_value in items(kwargs):
assert task_kwargs.get(arg_name) == arg_value
@@ -500,6 +507,27 @@ def test_regular_task(self):
name='George Costanza', test_eta=True, test_expires=True,
)
+ # Default argsrepr/kwargsrepr behavior
+ presult2 = self.mytask.apply_async(
+ args=('spam',), kwargs={'name': 'Jerry Seinfeld'}
+ )
+ self.assert_next_task_data_equal(
+ consumer, presult2, self.mytask.name,
+ headers={'argsrepr': "('spam',)",
+ 'kwargsrepr': "{'name': 'Jerry Seinfeld'}"},
+ )
+
+ # With argsrepr/kwargsrepr
+ presult2 = self.mytask.apply_async(
+ args=('secret',), argsrepr="'***'",
+ kwargs={'password': 'foo'}, kwargsrepr="{'password': '***'}",
+ )
+ self.assert_next_task_data_equal(
+ consumer, presult2, self.mytask.name,
+ headers={'argsrepr': "'***'",
+ 'kwargsrepr': "{'password': '***'}"},
+ )
+
# Discarding all tasks.
consumer.purge()
self.mytask.apply_async()
| Celery not using kwargsrepr when creating task message
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
```
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:3.6.1
billiard:3.5.0.3 py-amqp:2.2.1
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://redis:6379/
```
## Steps to reproduce
1. Ensure that you are using task protocol v2 (this is the default)
1. Call a task with:
```
my_task.apply_async(kwargs={'foo': 'bar'}, kwargsrepr='')
```
## Expected behavior
The `kwargsrepr` task header should contain the empty string `''`
## Actual behavior
The `kwargsrepr` header uses the automatically generated `safe_repr` of the kwargs:
```
kwargsrepr: {'foo': 'bar'}
```
This also shows up in the worker logging if the level is set to DEBUG:
```
TaskPool: Apply <function _trace_task_ret at 0x7f52e2b887b8> (args:('my_app.my_task', 'dfc0d5b7-dcef-4ee4-aa9f-a3d3581345a0', {'lang': 'py', 'task': 'my_app.my_task', 'id': 'dfc0d5b7-dcef-4ee4-aa9f-a3d3581345a0', 'eta': None, 'expires': '2017-09-12T15:33:38.453707+00:00', 'group': None, 'retries': 0, 'timelimit': [None, None], 'root_id': 'dfc0d5b7-dcef-4ee4-aa9f-a3d3581345a0', 'parent_id': None, 'argsrepr': '()', 'kwargsrepr': "{'foo': 'bar'}", 'origin': 'gen8@d8d88575bcec', 'reply_to': 'b836236d-0ff5-31b7-b0fa-e4948dccfcf4', 'correlation_id': 'dfc0d5b7-dcef-4ee4-aa9f-a3d3581345a0', 'delivery_info': {'exchange': '', 'routing_key': 'my_app', 'priority': None, 'redelivered': False}}, '[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]', 'application/json', 'utf-8') kwargs:{})
```
It appears this can be traced to https://github.com/celery/celery/blob/master/celery/app/base.py#L735 - no code actually passes the `kwargsrepr` (or `argsrepr` for that matter) to the task creating method.
I will submit a PR shortly that addresses this by passing the arguments if the task protocol is v2 or higher. I looked through the unit tests, and it appears that the task-focused unit tests all use task protocol v1, meaning this isn't something that could be tested in the existing test cases.
| 2017-09-12T15:52:16 |
|
celery/celery | 4,278 | celery__celery-4278 | [
"4223"
] | 06c6cfefb5948286e5c634cfc5b575dafe9dc98d | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -1174,8 +1174,9 @@ class chord(Signature):
@classmethod
def from_dict(cls, d, app=None):
- args, d['kwargs'] = cls._unpack_args(**d['kwargs'])
- return _upgrade(d, cls(*args, app=app, **d))
+ options = d.copy()
+ args, options['kwargs'] = cls._unpack_args(**options['kwargs'])
+ return _upgrade(d, cls(*args, app=app, **options))
@staticmethod
def _unpack_args(header=None, body=None, **kwargs):
| Chaining chords causes TypeError: 'NoneType' object is not iterable,
```chord_unlock``` task fails after retry with ```TypeError: 'NoneType' object is not iterable``` Exception.
Am I doing something wrong?
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
```
celery -A tasks report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.12+
billiard:3.5.0.3 py-amqp:2.2.1
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:db+mysql://celery:**@localhost/celery
broker_url: u'amqp://celery:********@rbn-box:5672//'
result_backend: u'db+mysql://celery:********@localhost/celery'
```
## Steps to reproduce
Tasks file
```python
from celery import Celery

app = Celery('tasks', broker='pyamqp://celery:celery@rbn-box//', backend='db+mysql://celery:celery@localhost/celery')


@app.task
def dummy(data):
    print(data)
    import time
    time.sleep(7)
```
Problematic chain
```python
from celery import chain, chord
from tasks import dummy

c1 = chord(
    header=chain([
        dummy.subtask(args=('c1-{}'.format(i),), immutable=True)
        for i in range(0, 3)]),
    body=dummy.subtask(args=('c1 body',), immutable=True))

c2 = chord(
    header=chain([
        dummy.subtask(args=('c2-{}'.format(i),), immutable=True)
        for i in range(0, 3)]),
    body=dummy.subtask(args=('c2 body',), immutable=True))

sig = c1 | c2
sig.apply_async()
```
## Expected behavior
It's possible to chain chords.
## Actual behavior
```
celery worker -A tasks --loglevel INFO -c 10
-------------- celery@rbn-box v4.1.0 (latentcall)
---- **** -----
--- * *** * -- Linux-4.8.0-51-generic-x86_64-with-Ubuntu-16.10-yakkety 2017-08-22 21:47:22
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x7f2bd97f73d0
- ** ---------- .> transport: amqp://celery:**@rbn-box:5672//
- ** ---------- .> results: mysql://celery:**@localhost/celery
- *** --- * --- .> concurrency: 10 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. tasks.dummy
[2017-08-22 21:47:23,343: INFO/MainProcess] Connected to amqp://celery:**@rbn-box:5672//
[2017-08-22 21:47:23,352: INFO/MainProcess] mingle: searching for neighbors
[2017-08-22 21:47:24,376: INFO/MainProcess] mingle: all alone
[2017-08-22 21:47:24,407: INFO/MainProcess] celery@rbn-box ready.
[2017-08-22 21:47:36,462: INFO/MainProcess] Received task: tasks.dummy[831ff49c-cd08-4aa8-8ca5-2a3a553a5567]
[2017-08-22 21:47:36,464: INFO/MainProcess] Received task: tasks.dummy[0e1cf302-aa23-4a10-835f-76496940bd0f]
[2017-08-22 21:47:36,465: INFO/MainProcess] Received task: tasks.dummy[d02014c5-0b34-4a9c-a0d6-f8d1dc0e4a66]
[2017-08-22 21:47:36,468: INFO/MainProcess] Received task: celery.chord_unlock[5658c4ce-07dc-4208-acf2-ae00d64247b8] ETA:[2017-08-22 19:47:37.458625+00:00]
[2017-08-22 21:47:36,473: WARNING/ForkPoolWorker-10] c1-1
[2017-08-22 21:47:36,479: WARNING/ForkPoolWorker-9] c1-2
[2017-08-22 21:47:36,484: WARNING/ForkPoolWorker-8] c1-0
[2017-08-22 21:47:38,617: INFO/MainProcess] Received task: celery.chord_unlock[5658c4ce-07dc-4208-acf2-ae00d64247b8] ETA:[2017-08-22 19:47:39.597538+00:00]
[2017-08-22 21:47:38,623: INFO/ForkPoolWorker-6] Task celery.chord_unlock[5658c4ce-07dc-4208-acf2-ae00d64247b8] retry: Retry in 1s
[2017-08-22 21:47:40,504: ERROR/ForkPoolWorker-1] Task celery.chord_unlock[5658c4ce-07dc-4208-acf2-ae00d64247b8] raised unexpected: TypeError("'NoneType' object is not iterable",)
Traceback (most recent call last):
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/app/builtins.py", line 59, in unlock_chord
callback = maybe_signature(callback, app)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 1390, in maybe_signature
d = signature(d)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 1365, in signature
return Signature.from_dict(varies, app=app)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 147, in from_dict
return target_cls.from_dict(d, app=app)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 534, in from_dict
tasks = [maybe_signature(task, app=app) for task in tasks]
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 1390, in maybe_signature
d = signature(d)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 1365, in signature
return Signature.from_dict(varies, app=app)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 147, in from_dict
return target_cls.from_dict(d, app=app)
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 1178, in from_dict
return _upgrade(d, cls(*args, app=app, **d))
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 1190, in __init__
dict(kwargs=kwargs, header=_maybe_group(header, app),
File "/home/rbn/.virtualenvs/celery-4.1/local/lib/python2.7/site-packages/celery-4.1.0-py2.7.egg/celery/canvas.py", line 904, in _maybe_group
tasks = [signature(t, app=app) for t in tasks]
TypeError: 'NoneType' object is not iterable
[2017-08-22 21:47:43,647: INFO/ForkPoolWorker-9] Task tasks.dummy[d02014c5-0b34-4a9c-a0d6-f8d1dc0e4a66] succeeded in 7.16881482498s: None
[2017-08-22 21:47:43,682: INFO/ForkPoolWorker-10] Task tasks.dummy[0e1cf302-aa23-4a10-835f-76496940bd0f] succeeded in 7.20971017997s: None
[2017-08-22 21:47:43,695: INFO/ForkPoolWorker-8] Task tasks.dummy[831ff49c-cd08-4aa8-8ca5-2a3a553a5567] succeeded in 7.21184302299s: None
```
| I've done some investigating and found that it incorrectly reconstructs the `celery.chord` task in the `body`.
Initial record:
```json
{
"task": "celery.chord",
"args": [],
"kwargs": {
"kwargs": {
"kwargs": {}
},
"header": {
"task": "celery.group",
"args": [],
"kwargs": {
"tasks": [
{
"task": "tasks.dummy",
"args": [
"c2-0"
],
"kwargs": {},
"options": {
"task_id": "37e8d72c-d3b0-47d7-819d-6418309211ca",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae",
"chord": {
"task": "tasks.dummy",
"args": [
"c2 body"
],
"kwargs": {},
"options": {
"task_id": "9d4861c6-0a91-47fe-a1ba-5b5ecaa31f32",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae"
},
"subtask_type": null,
"chord_size": null,
"immutable": true
}
},
"subtask_type": null,
"chord_size": null,
"immutable": true
},
{
"task": "tasks.dummy",
"args": [
"c2-1"
],
"kwargs": {},
"options": {
"task_id": "df022257-cbb1-4ad7-ac67-9a2c64972ca2",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae",
"chord": {
"task": "tasks.dummy",
"args": [
"c2 body"
],
"kwargs": {},
"options": {
"task_id": "9d4861c6-0a91-47fe-a1ba-5b5ecaa31f32",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae"
},
"subtask_type": null,
"chord_size": null,
"immutable": true
}
},
"subtask_type": null,
"chord_size": null,
"immutable": true
},
{
"task": "tasks.dummy",
"args": [
"c2-2"
],
"kwargs": {},
"options": {
"task_id": "dcce66b7-b932-4f69-bfd4-0f464606031e",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae",
"chord": {
"task": "tasks.dummy",
"args": [
"c2 body"
],
"kwargs": {},
"options": {
"task_id": "9d4861c6-0a91-47fe-a1ba-5b5ecaa31f32",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae"
},
"subtask_type": null,
"chord_size": null,
"immutable": true
}
},
"subtask_type": null,
"chord_size": null,
"immutable": true
}
]
},
"options": {
"task_id": "4e338e07-4754-45dd-a7ac-439ff955d7f8",
"chord": {
"task": "tasks.dummy",
"args": [
"c2 body"
],
"kwargs": {},
"options": {
"task_id": "9d4861c6-0a91-47fe-a1ba-5b5ecaa31f32",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae"
},
"subtask_type": null,
"chord_size": null,
"immutable": true
},
"root_id": null,
"parent_id": null
},
"subtask_type": "group",
"immutable": false,
"chord_size": null
},
"body": {
"task": "tasks.dummy",
"args": [
"c2 body"
],
"kwargs": {},
"options": {
"task_id": "9d4861c6-0a91-47fe-a1ba-5b5ecaa31f32",
"reply_to": "0498cd30-a435-3c83-99ef-ff2cf7c1d6ae"
},
"subtask_type": null,
"chord_size": null,
"immutable": true
}
},
"options": {
"chord_size": null,
"task_id": "4e338e07-4754-45dd-a7ac-439ff955d7f8"
},
"subtask_type": "chord",
"immutable": false,
"chord_size": null
}
```
Retry record:
```json
{
"task": "celery.chord",
"args": [],
"kwargs": {
"kwargs": {
"kwargs": {}
}
},
"options": {
"chord_size": null,
"task_id": "4e338e07-4754-45dd-a7ac-439ff955d7f8"
},
"subtask_type": "chord",
"immutable": false,
"chord_size": null
}
```
I've found it.
https://github.com/celery/celery/blob/06c6cfefb5948286e5c634cfc5b575dafe9dc98d/celery/canvas.py#L1175-L1178
Here it modifies the task's request context (removing `header` and `body` from `d['kwargs']`), which `task.retry` then uses to build the new signature for the retry. | 2017-09-22T15:02:38 |
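In other words, the fix needs to stop mutating the message dict in place. A minimal sketch of the idea (hypothetical helper, not the actual Celery patch):

```python
from copy import deepcopy

def unpack_chord_kwargs(message_body):
    """Pop 'header'/'body' from a copy so the original request context,
    which task.retry() reuses to rebuild the signature, stays intact."""
    body = deepcopy(message_body)      # work on a copy, not the request dict
    kwargs = body['kwargs']
    header = kwargs.pop('header', None)
    chord_body = kwargs.pop('body', None)
    return header, chord_body, kwargs
```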
|
celery/celery | 4,280 | celery__celery-4280 | [
"4255"
] | 06c6cfefb5948286e5c634cfc5b575dafe9dc98d | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -950,6 +950,8 @@ def __init__(self, *tasks, **options):
tasks = tasks[0]
if isinstance(tasks, group):
tasks = tasks.tasks
+ if isinstance(tasks, abstract.CallableSignature):
+ tasks = [tasks.clone()]
if not isinstance(tasks, _regen):
tasks = regen(tasks)
Signature.__init__(
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -547,6 +547,14 @@ def test_iter(self):
g = group([self.add.s(i, i) for i in range(10)])
assert list(iter(g)) == list(g.keys())
+ def test_single_task(self):
+ g = group([self.add.s(1, 1)])
+ assert isinstance(g, group)
+ assert len(g.tasks) == 1
+ g = group(self.add.s(1, 1))
+ assert isinstance(g, group)
+ assert len(g.tasks) == 1
+
@staticmethod
def helper_test_get_delay(result):
import time
| [BUG] Exception on single task in group.
## Checklist
```
// I have included the output of ``celery -A proj report`` in the issue.
// (if you are not able to do this, then at least specify the Celery
// version affected).
//
// Doesn't make sense as it is a client-side issue.
```
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
On the client, send a group with a single task in it:
```python
def fta_add_to_pool(applications_id: List[int]):
return group(
*[
app.signature('fta_tasks.add_to_pool', args=[application_id])
for application_id in applications_id
]
).apply_async().get()
```
## Expected behavior
I expected the group to execute its single task as if there were no group.
## Actual behavior
Exception:
```
File "***/src/celery/celery/canvas.py", line 976, in apply_async
app = self.app
File "***/src/celery/celery/canvas.py", line 1140, in app
app = self.tasks[0].app
File "***/src/celery/celery/utils/functional.py", line 213, in __getitem__
self.__consumed.append(next(self.__it))
TypeError: 'Signature' object is not an iterator
```
## Comment
It should be easy to fix, but I'm not sure about Celery's typing conventions and so on, so I decided to open the issue.
https://github.com/celery/celery/blob/39d86e7dbe7f3a0a43e973910be880146de96fb7/celery/canvas.py#L948-L958
We just need to check whether this single task is a Task and not something else.
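For reference, a minimal reproduction of the expected behaviour, mirroring the unit test added above (app/broker details are placeholders):

```python
from celery import Celery, group

app = Celery('repro')   # broker/backend settings don't matter for this check

@app.task
def add(x, y):
    return x + y

# Before the patch, passing a bare signature (instead of a list) made
# group.apply_async() fail with "TypeError: 'Signature' object is not an
# iterator"; with the patch both spellings build a one-task group.
assert len(group([add.s(1, 1)]).tasks) == 1
assert len(group(add.s(1, 1)).tasks) == 1
```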
| 2017-09-23T16:59:33 |
|
celery/celery | 4,292 | celery__celery-4292 | [
"4116",
"4116"
] | be55de622381816d087993f1c7f9afcf7f44ab33 | diff --git a/celery/fixups/django.py b/celery/fixups/django.py
--- a/celery/fixups/django.py
+++ b/celery/fixups/django.py
@@ -183,7 +183,7 @@ def close_database(self, **kwargs):
def _close_database(self):
for conn in self._db.connections.all():
try:
- conn.close()
+ conn.close_if_unusable_or_obsolete()
except self.interface_errors:
pass
except self.DatabaseError as exc:
| diff --git a/t/unit/fixups/test_django.py b/t/unit/fixups/test_django.py
--- a/t/unit/fixups/test_django.py
+++ b/t/unit/fixups/test_django.py
@@ -216,11 +216,12 @@ def test__close_database(self):
f._db.connections.all.side_effect = lambda: conns
f._close_database()
- conns[0].close.assert_called_with()
- conns[1].close.assert_called_with()
- conns[2].close.assert_called_with()
+ conns[0].close_if_unusable_or_obsolete.assert_called_with()
+ conns[1].close_if_unusable_or_obsolete.assert_called_with()
+ conns[2].close_if_unusable_or_obsolete.assert_called_with()
- conns[1].close.side_effect = KeyError('omg')
+ conns[1].close_if_unusable_or_obsolete.side_effect = KeyError(
+ 'omg')
with pytest.raises(KeyError):
f._close_database()
| Django celery fixup doesn't respect Django settings for PostgreSQL connections
When using the Django-Celery fixup to run background tasks for a Django web service, the tasks in the background do not respect the settings in Django for PostgreSQL (possibly other) connections. Every task will always create a new connection no matter the Django settings. Although it is possible to bypass this with the environment variable CELERY_DB_REUSE_MAX, it is preferred for it to follow the settings given in Django.
## Checklist
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
Celery 4.0.2 with potentially all versions of Django (tested on 1.10.3 and 1.11.2)
- [X] I have verified that the issue exists against the `master` branch of Celery.
This line causes this "issue":
https://github.com/celery/celery/blob/master/celery/fixups/django.py#L186
## Steps to reproduce
Note that these steps require some monitoring service to be used, we have New Relic.
Note also that we use Heroku for this app in question.
1) Have a web facing process with Django that connects to your PostgreSQL database for ORM purposes
2) Have a worker process that also connects to the PostgreSQL for ORM purposes
3) Have the DATABASES['default']['CONN_MAX_AGE'] setting set to anything that isn't 0 (easiest to see with `None` for persistent connections)
4) Make multiple requests to the web portion of Django to cause some ORM activity (easiest to see if it happens on every request)
5) Get multiple tasks to execute on the worker that will cause some ORM activity (easiest to see if it happens on every task)
6) Use your monitoring service (New Relic in our case) to view a breakdown of all of the requests and worker activity. In New Relic you can check this using the transaction tracing; select the endpoint/task that made the db queries and check the breakdown.
## Expected behavior
psycopg2:connect would occur rarely with an average calls per transaction <<< 1
## Actual behavior
psycopg2:connect occurs very rarely with an average calls per transaction of <<< 1 for the web processes.
psycopg2:connect occurs every time with an average calls per transaction of 1 for the worker processes.
## Potential Resolution
With my limited knowledge of Celery's inner workings, it feels like a fairly simple fix that I could make on a PR myself, but I wanted some input before I spend the time setting that all up.
This fix seems to work when monkey patched into the `DjangoWorkerFixup` class.
``` Python
def _close_database(self):
try:
# Use Django's built in method of closing old connections.
# This ensures that the database settings are respected.
self._db.close_old_connections()
except AttributeError:
# Legacy functionality if we can't use the old connections for whatever reason.
for conn in self._db.connections.all():
try:
conn.close()
except self.interface_errors:
pass
except self.DatabaseError as exc:
str_exc = str(exc)
if 'closed' not in str_exc and 'not connected' not in str_exc:
raise
celery.fixups.django.DjangoWorkerFixup._close_database = _close_database
```
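For context, the relevant Django setting looks roughly like this (illustrative values); with a non-zero `CONN_MAX_AGE`, Django's own connection-aging logic decides when a connection should be closed, which is what the worker should defer to:

```python
# Django settings excerpt (hypothetical project values).
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',          # placeholder database name
        'CONN_MAX_AGE': 300,     # seconds; None means persistent connections
    }
}
```

The merged fix calls `conn.close_if_unusable_or_obsolete()` (see the diff above), which honours this setting instead of closing every connection unconditionally.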
| plz proceed with the PR | 2017-09-27T01:38:43 |
celery/celery | 4,357 | celery__celery-4357 | [
"4356"
] | d08b1057234bb7774623108fa343af7a1c658e7a | diff --git a/celery/worker/consumer.py b/celery/worker/consumer.py
--- a/celery/worker/consumer.py
+++ b/celery/worker/consumer.py
@@ -450,10 +450,10 @@ def create_task_handler(self):
def on_task_received(body, message):
headers = message.headers
try:
- type_, is_proto2 = headers['task'], 1
+ type_, is_proto2 = body['task'], 0
except (KeyError, TypeError):
try:
- type_, is_proto2 = body['task'], 0
+ type_, is_proto2 = headers['task'], 1
except (KeyError, TypeError):
return on_unknown_message(body, message)
| Task loss on retry when using a hybrid/staged Celery 3->4 deployment
If you have a Celery 3.1.25 deployment involving many workers, and you want to upgrade to Celery 4, you may wish to do "canary" testing of a limited subset of workers to validate that the upgrade won't introduce any problems, prior to upgrading your entire worker fleet to Celery4. This "canary" mode involves having both Celery 3.1.25 and Celery 4 workers running at the same time.
However, if you do this, and you have tasks that retry, you experience problems if a task is attempted on a Celery 3.1.25 node, then a Celery 4 node, and then a Celery 3.1.25 node.
When the Celery 3.1.25 task is executed on a Celery 4 worker, the task message is upgraded to Protocol 2. However, the upgrade results in a hybrid message that complies with *both* formats, and when the task fails and is retried on a Celery 3.1.25 worker, the protocol check mis-identifies the "hybrid" message (its Protocol 2 headers win even though the body is still in Protocol 1 format), resulting in a hard crash and message loss.
## Checklist
- [X] I have included the output of ``celery -A proj report`` in the issue.
- [X] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
A full reproduction case can be found in this gist:
https://gist.github.com/ewdurbin/ddf4b0f0c0a4b190251a4a23859dd13c
In local testing, the following two versions were used:
### Celery 3.1.25:
```
software -> celery:3.1.25 (Cipater) kombu:3.0.37 py:2.7.13
billiard:3.3.0.23 redis:2.10.6
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:disabled
BROKER_URL: 'redis://localhost:6379//'
CELERY_ENABLE_UTC: True
CELERY_RESULT_SERIALIZER: 'json'
CELERY_ACCEPT_CONTENT: ['json']
CELERY_TIMEZONE: 'UTC'
CELERY_TASK_SERIALIZER: 'json'
```
### Celery 4.1.0:
```
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.13
billiard:3.5.0.3 redis:2.10.6
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:disabled
task_serializer: 'json'
result_serializer: 'json'
CELERY_ENABLE_UTC: True
accept_content: ['json']
enable_utc: True
timezone: 'UTC'
broker_url: u'redis://localhost:6379//'
```
Although these test results were obtained on a Mac running Sierra, the problem has also been observed in production on AWS EC2 Linux machines.
## Expected behavior
A task *should* be able to move back and forth between a 3.1.25 worker and a 4.1.0 worker without any problems.
## Actual behavior
A task can be executed on a Celery 3.1.25 worker, then on a Celery 4.1.0 worker; but when the task is then run on a Celery 3.1.25 worker, the following error is produced:
```
Traceback (most recent call last):
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/worker/__init__.py", line 206, in start
self.blueprint.start(self)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/bootsteps.py", line 374, in start
return self.obj.start()
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 280, in start
blueprint.start(self)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/bootsteps.py", line 123, in start
step.start(parent)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 884, in start
c.loop(*c.loop_args())
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/worker/loops.py", line 76, in asynloop
next(loop)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/kombu/async/hub.py", line 340, in create_loop
cb(*cbargs)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/kombu/transport/redis.py", line 1019, in on_readable
self._callbacks[queue](message)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 535, in _callback
return callback(message)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/kombu/messaging.py", line 598, in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/kombu/messaging.py", line 564, in receive
[callback(body, message) for callback in callbacks]
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 462, in on_task_received
self.app, type_, body, message, headers)
File "/Users/rkm/zapier/celery/celery3-venv/lib/python2.7/site-packages/celery/worker/consumer.py", line 483, in proto2_to_proto1
args, kwargs, embed = body
ValueError: too many values to unpack
```
This kills the worker, and the task message is lost.
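The patch above addresses this by checking for the Protocol 1 body before the Protocol 2 headers, so a hybrid message falls back to the format both versions understand. A simplified sketch of the detection order (not the exact worker code):

```python
def detect_protocol(body, headers):
    # A hybrid message carries 'task' both in the body (protocol 1 style)
    # and in the headers (protocol 2 style); checking the body first keeps
    # it readable by 3.1.x workers.
    try:
        return body['task'], False      # treat as protocol 1
    except (KeyError, TypeError):
        return headers['task'], True    # protocol 2
```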
| 2017-11-01T04:46:21 |
||
celery/celery | 4,369 | celery__celery-4369 | [
"4368",
"4368"
] | 5eba340aae2e994091afb7a0ed7839e7d944ee13 | diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -11,7 +11,8 @@
from celery import current_app, group, states
from celery._state import _task_stack
from celery.canvas import signature
-from celery.exceptions import Ignore, MaxRetriesExceededError, Reject, Retry
+from celery.exceptions import (Ignore, ImproperlyConfigured,
+ MaxRetriesExceededError, Reject, Retry)
from celery.five import items, python_2_unicode_compatible
from celery.local import class_property
from celery.result import EagerResult, denied_join_result
@@ -839,27 +840,26 @@ def replace(self, sig):
"""
chord = self.request.chord
if 'chord' in sig.options:
- if chord:
- chord = sig.options['chord'] | chord
- else:
- chord = sig.options['chord']
+ raise ImproperlyConfigured(
+ "A signature replacing a task must not be part of a chord"
+ )
if isinstance(sig, group):
sig |= self.app.tasks['celery.accumulate'].s(index=0).set(
- chord=chord,
link=self.request.callbacks,
link_error=self.request.errbacks,
)
- chord = None
if self.request.chain:
for t in reversed(self.request.chain):
sig |= signature(t, app=self.app)
- sig.freeze(self.request.id,
- group_id=self.request.group,
- chord=chord,
- root_id=self.request.root_id)
+ sig.set(
+ chord=chord,
+ group_id=self.request.group,
+ root_id=self.request.root_id,
+ )
+ sig.freeze(self.request.id)
sig.delay()
raise Ignore('Replaced by new task')
@@ -878,9 +878,12 @@ def add_to_chord(self, sig, lazy=False):
"""
if not self.request.chord:
raise ValueError('Current task is not member of any chord')
- result = sig.freeze(group_id=self.request.group,
- chord=self.request.chord,
- root_id=self.request.root_id)
+ sig.set(
+ group_id=self.request.group,
+ chord=self.request.chord,
+ root_id=self.request.root_id,
+ )
+ result = sig.freeze()
self.backend.add_to_chord(self.request.group, result)
return sig.delay() if not lazy else sig
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -10,6 +10,11 @@
logger = get_task_logger(__name__)
+@shared_task
+def identity(x):
+ return x
+
+
@shared_task
def add(x, y):
"""Add two numbers."""
@@ -35,6 +40,12 @@ def delayed_sum_with_soft_guard(numbers, pause_time=1):
return 0
+@shared_task
+def tsum(nums):
+ """Sum an iterable of numbers"""
+ return sum(nums)
+
+
@shared_task(bind=True)
def add_replaced(self, x, y):
"""Add two numbers (via the add task)."""
@@ -48,6 +59,20 @@ def add_to_all(self, nums, val):
raise self.replace(group(*subtasks))
+@shared_task(bind=True)
+def add_to_all_to_chord(self, nums, val):
+ for num in nums:
+ self.add_to_chord(add.s(num, val))
+ return 0
+
+
+@shared_task(bind=True)
+def add_chord_to_chord(self, nums, val):
+ subtasks = [add.s(num, val) for num in nums]
+ self.add_to_chord(group(subtasks) | tsum.s())
+ return 0
+
+
@shared_task
def print_unicode(log_message='hå它 valmuefrø', print_message='hiöäüß'):
"""Task that both logs and print strings containing funny characters."""
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -10,9 +10,10 @@
from celery.result import AsyncResult, GroupResult
from .conftest import flaky
-from .tasks import (add, add_replaced, add_to_all, collect_ids, delayed_sum,
- delayed_sum_with_soft_guard, ids, redis_echo,
- second_order_replace1)
+from .tasks import (add, add_chord_to_chord, add_replaced, add_to_all,
+ add_to_all_to_chord, collect_ids, delayed_sum,
+ delayed_sum_with_soft_guard, identity, ids, redis_echo,
+ second_order_replace1, tsum)
TIMEOUT = 120
@@ -211,6 +212,44 @@ def test_redis_subscribed_channels_leak(self, manager):
len(redis_client.execute_command('PUBSUB CHANNELS'))
assert channels_after < channels_before
+ @flaky
+ def test_replaced_nested_chord(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c1 = chord([
+ chord(
+ [add.s(1, 2), add_replaced.s(3, 4)],
+ add_to_all.s(5),
+ ) | tsum.s(),
+ chord(
+ [add_replaced.s(6, 7), add.s(0, 0)],
+ add_to_all.s(8),
+ ) | tsum.s(),
+ ], add_to_all.s(9))
+ res1 = c1()
+ assert res1.get(timeout=TIMEOUT) == [29, 38]
+
+ @flaky
+ def test_add_to_chord(self, manager):
+ if not manager.app.conf.result_backend.startswith('redis'):
+ raise pytest.skip('Requires redis result backend.')
+
+ c = group([add_to_all_to_chord.s([1, 2, 3], 4)]) | identity.s()
+ res = c()
+ assert res.get() == [0, 5, 6, 7]
+
+ @flaky
+ def test_add_chord_to_chord(self, manager):
+ if not manager.app.conf.result_backend.startswith('redis'):
+ raise pytest.skip('Requires redis result backend.')
+
+ c = group([add_chord_to_chord.s([1, 2, 3], 4)]) | identity.s()
+ res = c()
+ assert res.get() == [0, 5 + 6 + 7]
+
@flaky
def test_group_chain(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
diff --git a/t/unit/tasks/test_tasks.py b/t/unit/tasks/test_tasks.py
--- a/t/unit/tasks/test_tasks.py
+++ b/t/unit/tasks/test_tasks.py
@@ -10,7 +10,7 @@
from celery import Task, group, uuid
from celery.app.task import _reprtask
-from celery.exceptions import Ignore, Retry
+from celery.exceptions import Ignore, ImproperlyConfigured, Retry
from celery.five import items, range, string_t
from celery.result import EagerResult
from celery.utils.time import parse_iso8601
@@ -589,6 +589,12 @@ def test_replace(self):
with pytest.raises(Ignore):
self.mytask.replace(sig1)
+ def test_replace_with_chord(self):
+ sig1 = Mock(name='sig1')
+ sig1.options = {'chord': None}
+ with pytest.raises(ImproperlyConfigured):
+ self.mytask.replace(sig1)
+
@pytest.mark.usefixtures('depends_on_current_app')
def test_replace_callback(self):
c = group([self.mytask.s()], app=self.app)
@@ -617,7 +623,6 @@ def reprcall(self, *args, **kwargs):
self.mytask.replace(c)
except Ignore:
mocked_signature.return_value.set.assert_called_with(
- chord=None,
link='callbacks',
link_error='errbacks',
)
| task replace chord inside chord not working.
## Steps to reproduce
If we try to replace a task in a chord with another chord, we have a problem: the outer chord body is not called.
add_to_chord does not work either.
For example:
```python
@app.task(bind=True, ignore_result=True)
def some_task(self, args=None, **kwargs):
#add_to_chord not working too
#self.add_to_chord(chord([echo.s(name='t1'), echo_s.s(name='t2'), echo.s(name='t3')], echo.s(name='inner_chord_end')))
raise self.replace(chord([echo.s(name='t1'), echo.s(name='t2'), echo.s(name='t3')], echo.s(name='inner_chord_end')))
@app.task(bind=True, ignore_result=True)
def echo(self, args=None, **kwargs):
print(kwargs['name'])
return kwargs['name']
result = chord([some_task.s(), echo.s(name='task')], echo.s(name='total end')).apply_async()
```
## Expected behavior
echo.s(name='total end') called
## Actual behavior
echo.s(name='total end') SKIPPED
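With the patch above, this failure mode becomes explicit rather than silent: if the signature handed to `self.replace()` already carries a `chord` option (i.e. it is itself wired up as a chord member), the worker now raises `ImproperlyConfigured` instead of quietly dropping the outer chord's body. Condensed from the diff (for reference, not a drop-in):

```python
from celery.exceptions import ImproperlyConfigured

def replace(self, sig):
    if 'chord' in sig.options:
        raise ImproperlyConfigured(
            'A signature replacing a task must not be part of a chord')
    # ... group handling, chain handling, freeze() and delay() as in the diff
```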
| 2017-11-06T21:17:21 |
|
celery/celery | 4,399 | celery__celery-4399 | [
"3993",
"4412"
] | e14ecd9a3a7f77bdf53b9e763a1acd47d566223c | diff --git a/celery/contrib/sphinx.py b/celery/contrib/sphinx.py
--- a/celery/contrib/sphinx.py
+++ b/celery/contrib/sphinx.py
@@ -29,11 +29,13 @@
Use ``.. autotask::`` to manually document a task.
"""
from __future__ import absolute_import, unicode_literals
-from inspect import formatargspec
from sphinx.domains.python import PyModulelevel
from sphinx.ext.autodoc import FunctionDocumenter
from celery.app.task import BaseTask
-from celery.five import getfullargspec
+try: # pragma: no cover
+ from inspect import formatargspec, getfullargspec
+except ImportError: # Py2
+ from inspect import formatargspec, getargspec as getfullargspec # noqa
class TaskDocumenter(FunctionDocumenter):
| Build task documentation with sphinx fails (error while formatting arguments)
## Checklist
This has been tested with both version 4.0.2 and master (8c8354f).
## Steps to reproduce
```bash
$ git clone https://github.com/inveniosoftware/invenio-indexer.git
$ cd invenio-indexer/
$ pip install -e .[all]
$ sphinx-build -qnNW docs docs/_build/html
```
You can see that `invenio-indexer` correctly implements the requirements to document a celery task:
- https://github.com/inveniosoftware/invenio-indexer/blob/master/docs/conf.py#L52
- https://github.com/inveniosoftware/invenio-indexer/blob/master/docs/api.rst#celery-tasks
## Expected behavior
It should build the documentation of the tasks. This is **working** in Celery 3.1.25.
## Actual behavior
I get the following error:
```
invenio-indexer/docs/api.rst:54: WARNING: error while formatting arguments for invenio_indexer.tasks.index_record: 'NoneType' object is not callable
```
Am I missing something? Should it work differently than Celery 3?
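For context, enabling the extension only requires listing it next to autodoc in `docs/conf.py` (roughly what the linked configuration does):

```python
# docs/conf.py (excerpt)
extensions = [
    'sphinx.ext.autodoc',
    'celery.contrib.sphinx',   # documents Celery tasks via autodoc
]
```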
Request on_timeout should ignore soft time limit exception
When Request.on_timeout receives a soft timeout from billiard, it does the same as if it were receiving a hard time limit exception. This is run by the controller.
But the task may catch this exception and e.g. return (this is what soft timeouts are for).
This causes:
1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task
2. the task status to be set to failure and then to success in the same manner
3. if the task is participating in a chord, the chord result counter (at least with redis) to be incremented twice (instead of once), making the chord return prematurely and eventually lose tasks…
1, 2 and 3 can of course lead to strange race conditions…
## Steps to reproduce (Illustration)
with the program in test_timeout.py:
```python
import time
import celery
app = celery.Celery('test_timeout')
app.conf.update(
result_backend="redis://localhost/0",
broker_url="amqp://celery:celery@localhost:5672/host",
)
@app.task(soft_time_limit=1)
def test():
try:
time.sleep(2)
except Exception:
return 1
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())
result.get()
```
start a worker and the program:
```
$ celery -A test_timeout worker -l WARNING
$ python3 test_timeout.py
```
## Expected behavior
add method is called with `[1, 1]` as argument and test_timeout.py return normally
## Actual behavior
The test_timeout.py fails, with
```
celery.backends.base.ChordError: Callback error: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",
```
On the worker side, the **on_error is called but the add method as well !**
```
[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]
[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]
[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in <listcomp>
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 243, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)
[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding
[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]
```
Of course, I chose on purpose to call test.s() twice, to show that the count in the chord keeps growing. In fact:
- the chord counter is incremented twice by the soft time limit error
- the chord counter is incremented twice again when the `test` task returns correctly
## Conclusion
Request.on_timeout should not process the soft time limit exception.
Here is a quick monkey patch (the correction in Celery itself is trivial):
```python
def patch_celery_request_on_timeout():
from celery.worker import request
orig = request.Request.on_timeout
def patched_on_timeout(self, soft, timeout):
if not soft:
orig(self, soft, timeout)
request.Request.on_timeout = patched_on_timeout
patch_celery_request_on_timeout()
```
## version info
software -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://10.0.3.253/0
| It looks like the problem is [format_args()](https://github.com/celery/celery/blob/3.1/celery/contrib/sphinx.py#L54) with python 2.
The [getfullargspec()](https://github.com/celery/vine/blob/master/vine/five.py#L349) shim does not look equivalent to the Python 3 version.
This is an example of a [task (with at least one argument)](https://github.com/inveniosoftware/invenio-indexer/blob/master/invenio_indexer/tasks.py#L44) that will produce the warning.
| 2017-11-20T13:30:09 |
|
celery/celery | 4,402 | celery__celery-4402 | [
"4047"
] | f51204f13a6efafd746ad4f61d0ec8ce4229b355 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -419,13 +419,13 @@ def __or__(self, other):
return sig
elif isinstance(other, Signature):
if isinstance(self, _chain):
- if isinstance(self.tasks[-1], group):
+ if self.tasks and isinstance(self.tasks[-1], group):
# CHAIN [last item is group] | TASK -> chord
sig = self.clone()
sig.tasks[-1] = chord(
sig.tasks[-1], other, app=self._app)
return sig
- elif isinstance(self.tasks[-1], chord):
+ elif self.tasks and isinstance(self.tasks[-1], chord):
# CHAIN [last item is chord] -> chain with chord body.
sig = self.clone()
sig.tasks[-1].body = sig.tasks[-1].body | other
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -478,6 +478,16 @@ def test_chord_sets_result_parent(self):
seen.add(node.id)
node = node.parent
+ def test_append_to_empty_chain(self):
+ x = chain()
+ x |= self.add.s(1, 1)
+ x |= self.add.s(1)
+ x.freeze()
+ tasks, _ = x._frozen
+ assert len(tasks) == 2
+
+ assert x.apply().get() == 3
+
class test_group(CanvasCase):
| Exception raised when appending to a chain
celery==3.1.20
django-redis==4.0.0
redis==2.10.5
websocket-client==0.35.0
Using the above versions of celery, redis and websocket: in production, when I pass nearly 200 tasks to Celery, the worker accepts only one task and then freezes without generating any error logs. It will not even process the current one, although it works fine with 150 tasks.
I have also gone through some solutions and suggestions but have not been able to fix the issue.
Am I using the correct versions of celery and redis in terms of compatibility?
Is there any fix for this issue? Please let me know if I need to provide any other details.
| Is there a reason that you prefer the 3.1.20 version? I believe the latest version 4 release has much better handling for Redis brokers.
Because I'm also using django-celery==3.1.16, which is not compatible with the new version of Celery (4.0). Is that the only way to fix it?
If you have Django 1.8 or newer you can use Celery 4.0 without the django-celery package http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html#using-celery-with-django
Well, I tried to implement it without django-celery and it gives me the error "Tuple index out of range" on the line marked by the comment below. I'm using a Celery signature here and trying to append the result to "part_process". Here is a snapshot of the code.
```python
from celery import chain
part_process = chain()
for part_obj in queryset:
    data = {
        'id': part_obj.id,
        'user_id': user_id,
        'customer_id': customer_id,
        'part_id': part_obj.id,
        'username': username,
        'part_file': part_obj.file_name,
        'part_name': part_obj.part_name,
        'machine_id': part_obj.dataset.id,
        'ppi_type': part_obj.ppi_type,
        'revision': part_obj.revision,
        'queue_id': pending_task.id,
        'ip_addr': ip_addr
    }
    part_process |= process_part_library.si(parts=data)  # in this line I get error
lib_dataset_id = list(queryset.values_list('dataset_id', flat=True))
Part.update_machine_flat_file(customer_id, lib_dataset_id)
```
Do I need to pass them in a different way? These patterns are also in the official Celery documentation. I would appreciate any suggestions.
Could you post the versions of Celery you're using and the full traceback?
@r3m0t I'm using Celery 4.0 right now. Following is the traceback -
Traceback:
File "/home/user/.virtualenvs/opti/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
132. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/user/.virtualenvs/opti/local/lib/python2.7/site-packages/django/views/decorators/csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "/home/user/.virtualenvs/opti/local/lib/python2.7/site-packages/rest_framework/viewsets.py" in view
87. return self.dispatch(request, *args, **kwargs)
File "/home/user/.virtualenvs/opti/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
466. response = self.handle_exception(exc)
File "/home/user/.virtualenvs/opti/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
463. response = handler(request, *args, **kwargs)
File "/home/user/worksapce/proj/app/views.py" in create
1625. username=username)
File "/home/user/worksapce/proj/app/views.py" in process_applib_queue_task
1377. part_process |= process_app_library.si(parts=data)
File "/home/user/.virtualenvs/opti/local/lib/python2.7/site-packages/celery/canvas.py" in __or__
426. if isinstance(self.tasks[-1], group):
Exception Type: IndexError at /api/v1/app
Exception Value: tuple index out of range
Although, It was working fine upto Celery 3.1.25, If I tried with less than 200 tasks. Is there any other alternative for si function in newer version? or I am missing something?
Thanks for your help :)
Can you please provide a test case that we can use to reproduce this problem?
I have the same problem as @gauravt1 with v4.1.0, in v3.1.25 it was still working.
The following testcase shows the problem:
```
from celery import Celery, Task, chain
app = Celery()
@app.task
def add(x, y):
return x + y
#works
c = chain(add.s(2,2) | add.s(3))
#7
print c.apply().get()
#fails, used to work in 3.1.25
c = chain()
c |= add.s(2,2)
c |= add.s(3)
print c.apply().get()
```
Resulting in:
```
7
Traceback (most recent call last):
File "/home/jurrian/Development/python/celerytest.py", line 16, in <module>
c |= add.s(2,2)
File "/usr/local/lib/python2.7/dist-packages/celery/canvas.py", line 426, in __or__
if isinstance(self.tasks[-1], group):
IndexError: tuple index out of range
``` | 2017-11-23T07:25:32 |
celery/celery | 4,403 | celery__celery-4403 | [
"1604"
] | f148709062f728104dcca32d475d11af4a496be3 | diff --git a/celery/utils/time.py b/celery/utils/time.py
--- a/celery/utils/time.py
+++ b/celery/utils/time.py
@@ -209,6 +209,9 @@ def remaining(start, ends_in, now=None, relative=False):
~datetime.timedelta: Remaining time.
"""
now = now or datetime.utcnow()
+ if now.utcoffset() != start.utcoffset():
+ # Timezone has changed, or DST started/ended
+ start = start.replace(tzinfo=now.tzinfo)
end_date = start + ends_in
if relative:
end_date = delta_resolution(end_date, ends_in)
| diff --git a/t/unit/app/test_schedules.py b/t/unit/app/test_schedules.py
--- a/t/unit/app/test_schedules.py
+++ b/t/unit/app/test_schedules.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import, unicode_literals
import time
+import pytz
from contextlib import contextmanager
from datetime import datetime, timedelta
from pickle import dumps, loads
@@ -439,6 +440,40 @@ def test_leapyear(self):
)
assert next == datetime(2016, 2, 29, 14, 30)
+ def test_day_after_dst_end(self):
+ # Test for #1604 issue with region configuration using DST
+ tzname = "Europe/Paris"
+ self.app.timezone = tzname
+ tz = pytz.timezone(tzname)
+ crontab = self.crontab(minute=0, hour=9)
+
+ # Set last_run_at Before DST end
+ last_run_at = tz.localize(datetime(2017, 10, 28, 9, 0))
+ # Set now after DST end
+ now = tz.localize(datetime(2017, 10, 29, 7, 0))
+ crontab.nowfun = lambda: now
+ next = now + crontab.remaining_estimate(last_run_at)
+
+ assert next.utcoffset().seconds == 3600
+ assert next == tz.localize(datetime(2017, 10, 29, 9, 0))
+
+ def test_day_after_dst_start(self):
+ # Test for #1604 issue with region configuration using DST
+ tzname = "Europe/Paris"
+ self.app.timezone = tzname
+ tz = pytz.timezone(tzname)
+ crontab = self.crontab(minute=0, hour=9)
+
+ # Set last_run_at Before DST start
+ last_run_at = tz.localize(datetime(2017, 3, 25, 9, 0))
+ # Set now after DST start
+ now = tz.localize(datetime(2017, 3, 26, 7, 0))
+ crontab.nowfun = lambda: now
+ next = now + crontab.remaining_estimate(last_run_at)
+
+ assert next.utcoffset().seconds == 7200
+ assert next == tz.localize(datetime(2017, 3, 26, 9, 0))
+
class test_crontab_is_due:
| Possible DST issue?
I just noticed that a crontab task set up to run at 12:00 today for some reason ran at both 11:00 and then again at 12:00. I've checked the server date and time and it does seem to have made the switch to winter time last night, so I'm not sure if this was a fluke or really related to DST, but I don't see any other reason for it to run the task twice...
My crontab setup, for this particular task, looks like this:
```
from celery.schedules import crontab
CELERYBEAT_SCHEDULE = {
'event_suggestions': {
'task': 'events.event_suggestions',
'schedule': crontab(hour=12, minute=0, day_of_week=1)
}
}
```
Versions are:
celery==3.0.24
django-celery==3.0.23
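The behaviour can also be exercised without running beat, by asking the schedule for its next-run estimate across the DST boundary, which is what the regression tests added above do (sketch; timezone and dates are illustrative):

```python
from datetime import datetime
import pytz
from celery.schedules import crontab

tz = pytz.timezone('Europe/Amsterdam')      # any DST-observing zone
entry = crontab(minute=0, hour=12)

# Last run before the autumn DST change, "now" shortly after it.
last_run_at = tz.localize(datetime(2013, 10, 26, 12, 0))
entry.nowfun = lambda: tz.localize(datetime(2013, 10, 27, 7, 0))

# Before the fix the estimate can land an hour off (hence the double run);
# the tests added above assert it falls back on the scheduled local hour.
print(entry.nowfun() + entry.remaining_estimate(last_run_at))
```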
| We've experienced the same issue last night. Our task scheduled at 01:01 running both at 00:01 and 01:01.
```
'daily_notifications': {
'task': 'notifications.tasks.daily_notifications',
'schedule': crontab(minute=1, hour=1),
},
```
Versions:
celery==3.0.23
django-celery==3.0.11
We have Django's TIME_ZONE setting set to 'Europe/London', as well as the system's /etc/timezone file. USE_TZ is set to False.
My TIME_ZONE setting is 'Europe/Amsterdam' as is /etc/timezone. USE_TZ is True in my case.
Do you have pytz installed? Celery 3.0.x doesn't depend on it, so could be you're running without it. I'm not sure if python does account for dst changes when comparing, but you probably would need the timezone information present (which pytz provides)
@ask `pytz==2013.7`; I think it's actually installed as part of the requirements of django-celery-with-redis?
@tijs Yeah, django-celery installs it, and celery 3.1 will depend on pytz (replacing the python-dateutil dependency).
Thanks for the info though, important to know it doesn't work even if pytz is installed.
Yes, also pytz installed here. In case it helps: pytz==2013b
Hi, we saw this problem today as well. We are on heroku...
celery==3.0.23
django-celery==3.0.23
pytz==2013.7
amqp==1.0.13
We also saw this issue this week.
celery==3.0.21
django-celery==3.0.21
pytz==2013b
amqp==1.0.13
I will look into this soon, probably there is some way to handle this in pytz.
Yep, just experienced the issue live! I was affected slightly differently; here is a description: http://stackoverflow.com/questions/26568990/celery-django-rabbitmq-dst-bug
Is this a django specific issue?
We also ran into this issue during the last DST changeover. We have a daily task that executes at 5:00pm, however it was triggered at both 4:00pm and 5:00 pm. I'm wondering if we'll see similar behavior when DST starts in March.
Using celery version 3.1.18
Django settings roughly look like the following:
``` py
USE_TZ = True
TIME_ZONE = 'America/New_York'
CELERY_TIMEZONE = 'America/New_York'
# probably not relevant?
CELERY_ENABLE_UTC = True
CELERYBEAT_SCHEDULE = {
'send_reminder_emails': {
'task': 'app.tasks.send_reminder_emails',
'schedule': crontab(hour=17, minute=0, day_of_week=[1, 2, 3, 4, 5]),
},
# snip
# ...
}
```
@auvipy
Maybe, but I would doubt it. In our case, there isn't any Django integration beyond loading settings/discovering tasks from the Django project. django-celery isn't being used, so it's not an issue w/ the database-based scheduler.
Closing this, as we don't have the resources to complete this task.
I just hit this issue too. Caused significant confusion and disruption. How can this issue just be closed?!
So another DST change, bitten again.
In my opinion this is a showstopper bug, and if (sadly) the project does not have the resources
at the moment to fix it, it should own up to that **openly** so users have the chance to factor
this into their workflow/risk management.
What is wrong is not _not fixing_ this bug for lack of resources, but _burying_ it in a closed bug report.
> What is wrong is not not fixing this bug for lack of resources, but burying it in a closed bug report.
I kind of have to agree with this. I understand (although disagree with) the idea of closing issues that cannot be tackled. However, by closing the issue and removing bug labels, these issues cannot be tracked. There is no way for potential users to audit the health of a package, understand its caveats etc...
A few ideas:
- Create a new label to indicate the lack of resources such as "help wanted"
- Create a separate issue to track these issues.
I'm going to reopen this for now. But if it won't be resolved by 5.0 it won't be resolved at all.
| 2017-11-26T15:21:08 |
celery/celery | 4,432 | celery__celery-4432 | [
"4274"
] | 28c2c09d380c62f9e17776811735a5c8c4ed8320 | diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -628,7 +628,7 @@ def iterate(self, timeout=None, propagate=True, interval=0.5):
def get(self, timeout=None, propagate=True, interval=0.5,
callback=None, no_ack=True, on_message=None,
- disable_sync_subtasks=True):
+ disable_sync_subtasks=True, on_interval=None):
"""See :meth:`join`.
This is here for API compatibility with :class:`AsyncResult`,
@@ -640,7 +640,8 @@ def get(self, timeout=None, propagate=True, interval=0.5,
return (self.join_native if self.supports_native_join else self.join)(
timeout=timeout, propagate=propagate,
interval=interval, callback=callback, no_ack=no_ack,
- on_message=on_message, disable_sync_subtasks=disable_sync_subtasks
+ on_message=on_message, disable_sync_subtasks=disable_sync_subtasks,
+ on_interval=on_interval,
)
def join(self, timeout=None, propagate=True, interval=0.5,
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -130,6 +130,24 @@ def test_parent_ids(self, manager):
assert parent_id == expected_parent_id
assert value == i + 2
+ @flaky
+ def test_nested_group(self, manager):
+ assert manager.inspect().ping()
+
+ c = group(
+ add.si(1, 10),
+ group(
+ add.si(1, 100),
+ group(
+ add.si(1, 1000),
+ add.si(1, 2000),
+ ),
+ ),
+ )
+ res = c()
+
+ assert res.get(timeout=TIMEOUT) == [11, 101, 1001, 2001]
+
def assert_ids(r, expected_value, expected_root_id, expected_parent_id):
root_id, parent_id, value = r.get(timeout=TIMEOUT)
@@ -152,6 +170,32 @@ def test_group_chain(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [12, 13, 14, 15]
+ @flaky
+ def test_nested_group_chain(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ if not manager.app.backend.supports_native_join:
+ raise pytest.skip('Requires native join support.')
+ c = chain(
+ add.si(1, 0),
+ group(
+ add.si(1, 100),
+ chain(
+ add.si(1, 200),
+ group(
+ add.si(1, 1000),
+ add.si(1, 2000),
+ ),
+ ),
+ ),
+ add.si(1, 10),
+ )
+ res = c()
+ assert res.get(timeout=TIMEOUT) == 11
+
@flaky
def test_parent_ids(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
diff --git a/t/unit/tasks/test_result.py b/t/unit/tasks/test_result.py
--- a/t/unit/tasks/test_result.py
+++ b/t/unit/tasks/test_result.py
@@ -519,12 +519,16 @@ def get(self, propagate=True, **kwargs):
class MockAsyncResultSuccess(AsyncResult):
forgotten = False
+ def __init__(self, *args, **kwargs):
+ self._result = kwargs.pop('result', 42)
+ super(MockAsyncResultSuccess, self).__init__(*args, **kwargs)
+
def forget(self):
self.forgotten = True
@property
def result(self):
- return 42
+ return self._result
@property
def state(self):
@@ -622,6 +626,37 @@ def test_forget(self):
for sub in subs:
assert sub.forgotten
+ def test_get_nested_without_native_join(self):
+ backend = SimpleBackend()
+ backend.supports_native_join = False
+ ts = self.app.GroupResult(uuid(), [
+ MockAsyncResultSuccess(uuid(), result='1.1',
+ app=self.app, backend=backend),
+ self.app.GroupResult(uuid(), [
+ MockAsyncResultSuccess(uuid(), result='2.1',
+ app=self.app, backend=backend),
+ self.app.GroupResult(uuid(), [
+ MockAsyncResultSuccess(uuid(), result='3.1',
+ app=self.app, backend=backend),
+ MockAsyncResultSuccess(uuid(), result='3.2',
+ app=self.app, backend=backend),
+ ]),
+ ]),
+ ])
+ ts.app.backend = backend
+
+ vals = ts.get()
+ assert vals == [
+ '1.1',
+ [
+ '2.1',
+ [
+ '3.1',
+ '3.2',
+ ]
+ ],
+ ]
+
def test_getitem(self):
subs = [MockAsyncResultSuccess(uuid(), app=self.app),
MockAsyncResultSuccess(uuid(), app=self.app)]
| TypeError: get() got an unexpected keyword argument 'on_interval' when nested chain ends in group
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [ ] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
Given this task…
```python
from myapp.celery import app
@app.task
def say_something(msg):
print(msg)
```
…this moderately complex workflow…
```python
import celery
from myapp.tasks import say_something
workflow = celery.chain(
say_something.si('outer chain start'),
celery.group(
say_something.si('outer group'),
celery.chain(
say_something.si('inner chain start'),
celery.group(
say_something.si('inner group a'),
say_something.si('inner group b'),
),
# say_something.si('inner chain end'),
),
),
say_something.si('outer chain end'),
)
workflow.delay()
```
…causes this error…
ERROR:celery.app.builtins:Chord '70cb9fbe-4843-49ba-879e-6ffd76d63226' raised: TypeError("get() got an unexpected keyword argument 'on_interval'",)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/app/builtins.py", line 80, in unlock_chord
ret = j(timeout=3.0, propagate=True)
File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 698, in join
interval=interval, no_ack=no_ack, on_interval=on_interval,
TypeError: get() got an unexpected keyword argument 'on_interval'
INFO:celery.app.trace:Task celery.chord_unlock[cac53aaf-c193-442c-b060-73577be77d0f] succeeded in 0.04921864200150594s: None
…and the `outer chain end` task is never executed.
If I un-comment the `inner chain end` task, the whole workflow executes as expected.
Below is a sample of the output of `celery -A proj report`:
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:3.6.1
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:django-db
BROKER_URL: 'amqp://admin:********@rabbitmq:5672//'
CELERY_ACCEPT_CONTENT: ['json']
CELERY_RESULT_BACKEND: 'django-db'
CELERY_TASK_SERIALIZER: 'json'
CELERY_WORKER_HIJACK_ROOT_LOGGER: False
## Expected behaviour
The `outer chain end` should be executed once all other tasks have completed.
## Actual behavior
ERROR:celery.app.builtins:Chord '70cb9fbe-4843-49ba-879e-6ffd76d63226' raised: TypeError("get() got an unexpected keyword argument 'on_interval'",)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/app/builtins.py", line 80, in unlock_chord
ret = j(timeout=3.0, propagate=True)
File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 698, in join
interval=interval, no_ack=no_ack, on_interval=on_interval,
TypeError: get() got an unexpected keyword argument 'on_interval'
INFO:celery.app.trace:Task celery.chord_unlock[cac53aaf-c193-442c-b060-73577be77d0f] succeeded in 0.04921864200150594s: None
| I believe there may be a similar issue when a `celery.group` is comprised solely of `celery.chain` tasks.
I can work around this issue by adding a `noop` task.
@app.task
def noop():
"""No-operation task, used to address https://github.com/celery/celery/issues/4274."""
pass
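Presumably the `noop` task is appended as the final link of the inner chain, so the chain no longer ends in a bare group (mirroring the commented-out `inner chain end` task in the issue above). Sketch:

```python
celery.chain(
    say_something.si('inner chain start'),
    celery.group(
        say_something.si('inner group a'),
        say_something.si('inner group b'),
    ),
    noop.si(),   # workaround: keep the inner chain from ending in a group
)
```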
| 2017-12-09T20:51:20 |
celery/celery | 4,448 | celery__celery-4448 | [
"4337"
] | 25f5e29610b2224122cf10d5252de92b4efe3e81 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -415,8 +415,11 @@ def on_chord_part_return(self, request, state, result, **kwargs):
def fallback_chord_unlock(self, header_result, body, countdown=1,
**kwargs):
kwargs['result'] = [r.as_tuple() for r in header_result]
+ queue = body.options.get('queue', getattr(body.type, 'queue', None))
self.app.tasks['celery.chord_unlock'].apply_async(
- (header_result.id, body,), kwargs, countdown=countdown,
+ (header_result.id, body,), kwargs,
+ countdown=countdown,
+ queue=queue,
)
def ensure_chords_allowed(self):
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -63,6 +63,12 @@ class test_BaseBackend_interface:
def setup(self):
self.b = BaseBackend(self.app)
+ @self.app.task(shared=False)
+ def callback(result):
+ pass
+
+ self.callback = callback
+
def test__forget(self):
with pytest.raises(NotImplementedError):
self.b._forget('SOMExx-N0Nex1stant-IDxx-')
@@ -80,9 +86,33 @@ def test_apply_chord(self, unlock='celery.chord_unlock'):
uuid(),
[self.app.AsyncResult(x) for x in range(3)],
)
- self.b.apply_chord(header_result, None)
+ self.b.apply_chord(header_result, self.callback.s())
assert self.app.tasks[unlock].apply_async.call_count
+ def test_chord_unlock_queue(self, unlock='celery.chord_unlock'):
+ self.app.tasks[unlock] = Mock()
+ header_result = self.app.GroupResult(
+ uuid(),
+ [self.app.AsyncResult(x) for x in range(3)],
+ )
+ body = self.callback.s()
+
+ self.b.apply_chord(header_result, body)
+ called_kwargs = self.app.tasks[unlock].apply_async.call_args[1]
+ assert called_kwargs['queue'] is None
+
+ self.b.apply_chord(header_result, body.set(queue='test_queue'))
+ called_kwargs = self.app.tasks[unlock].apply_async.call_args[1]
+ assert called_kwargs['queue'] == 'test_queue'
+
+ @self.app.task(shared=False, queue='test_queue_two')
+ def callback_queue(result):
+ pass
+
+ self.b.apply_chord(header_result, callback_queue.s())
+ called_kwargs = self.app.tasks[unlock].apply_async.call_args[1]
+ assert called_kwargs['queue'] == 'test_queue_two'
+
class test_exception_pickle:
| chord_unlock isn't using the same queue as chord
I've created a tasks_queue and a priority_task_queue, and set tasks_queue as the default queue. Whenever I want a task to be performed right away I use priority_task_queue, which is generally empty most of the time. The problem arises whenever I'm using a chord: I noticed that chords are unable to complete because their chord_unlock counterpart always runs on the default queue (tasks_queue).
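For reference, a minimal sketch of that kind of two-queue setup (the queue names mirror the description above; the routed task name is invented):
```python
# celeryconfig.py -- a default queue plus a separate "priority" queue,
# used to emulate priorities because SQS has none of its own.
task_default_queue = 'tasks_queue'
task_routes = {
    'myapp.tasks.urgent': {'queue': 'priority_task_queue'},  # hypothetical task
}
```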
## Steps to reproduce
- Ubuntu 16 server
- celery==4.1.0
- kombu==4.1.0
- boto==2.48.0
- boto3==1.4.7
- botocore==1.7.31
- Using SQS as the broker.
## Expected behavior
I expected the chord_unlock to be executed on the same queue the chord was sent to. This causes a major problem for me: since SQS does not support priority tasks, I'm forced to use two queues to get the effect of task priorities, but since chord_unlock always uses the default queue, priority tasks using chords will never be able to finish whenever the default queue is packed.
## Actual behavior
The chord was consumed from priority_task_queue while the chord_unlock got stuck in line on the regular tasks_queue.
| So after doing some tests I realized that the problem resided within my code: I had forgotten to set the queue for one of my chords, so it was not running on the priority queue.
My bad. Celery is awesome. Peace!
Okay, so apparently it wasn't my coding error after all: a chord_unlock will always run in the default queue instead of the queue its counterpart chord runs in. If you are using Celery with SQS as a broker this is somewhat problematic, since priority is a non-existent feature and you are forced to use a different queue for priority tasks; but if your task canvas uses a chord, its unlock could be stuck behind a pile of tasks in the default queue.
The way I eventually overcame this problem was by using a custom router to route chord_unlock to the queue where its group (chord) was sent,
using the following simple code:
```python
def route_chord_unlock_to_chord_queue(name, args, kwargs, options, task=None, **kw):
    # Route celery.chord_unlock to the same queue as the chord's callback signature.
    if name == 'celery.chord_unlock':
        callback_sig = args[1]
        return {'queue': callback_sig.get("options").get('queue')}
```
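For completeness, a router function like the one above is normally enabled through the `task_routes` setting; a minimal sketch, assuming the function lives in a hypothetical `myapp.routers` module:
```python
# celeryconfig.py -- string paths to router functions are accepted here.
task_routes = ('myapp.routers.route_chord_unlock_to_chord_queue',)
```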
It worked, but I still think the default behaviour is not what the author intended. I might be wrong, but I leave this up to you guys. | 2017-12-13T15:32:38 |
celery/celery | 4,456 | celery__celery-4456 | [
"4008"
] | 47ca2b462f22a8d48ed8d80c2f9bf8b9dc4a4de6 | diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -5,6 +5,7 @@
import sys
from billiard.einfo import ExceptionInfo
+from kombu import serialization
from kombu.exceptions import OperationalError
from kombu.utils.uuid import uuid
@@ -514,6 +515,17 @@ def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
app = self._get_app()
if app.conf.task_always_eager:
+ with app.producer_or_acquire(producer) as eager_producer:
+ serializer = options.get(
+ 'serializer', eager_producer.serializer
+ )
+ body = args, kwargs
+ content_type, content_encoding, data = serialization.dumps(
+ body, serializer
+ )
+ args, kwargs = serialization.loads(
+ data, content_type, content_encoding
+ )
with denied_join_result():
return self.apply(args, kwargs, task_id=task_id or uuid(),
link=link, link_error=link_error, **options)
| diff --git a/t/unit/app/test_builtins.py b/t/unit/app/test_builtins.py
--- a/t/unit/app/test_builtins.py
+++ b/t/unit/app/test_builtins.py
@@ -94,7 +94,9 @@ def setup(self):
self.maybe_signature = self.patching('celery.canvas.maybe_signature')
self.maybe_signature.side_effect = pass1
self.app.producer_or_acquire = Mock()
- self.app.producer_or_acquire.attach_mock(ContextMock(), 'return_value')
+ self.app.producer_or_acquire.attach_mock(
+ ContextMock(serializer='json'), 'return_value'
+ )
self.app.conf.task_always_eager = True
self.task = builtins.add_group_task(self.app)
BuiltinsCase.setup(self)
diff --git a/t/unit/tasks/test_tasks.py b/t/unit/tasks/test_tasks.py
--- a/t/unit/tasks/test_tasks.py
+++ b/t/unit/tasks/test_tasks.py
@@ -7,6 +7,7 @@
import pytest
from case import ANY, ContextMock, MagicMock, Mock, patch
from kombu import Queue
+from kombu.exceptions import EncodeError
from celery import Task, group, uuid
from celery.app.task import _reprtask
@@ -824,6 +825,13 @@ def common_send_task_arguments(self):
ignore_result=False
)
+ def test_eager_serialization_failure(self):
+ @self.app.task
+ def task(*args, **kwargs):
+ pass
+ with pytest.raises(EncodeError):
+ task.apply_async((1, 2, 3, 4, {1}))
+
def test_task_with_ignored_result(self):
with patch.object(self.app, 'send_task') as send_task:
self.task_with_ignored_result.apply_async()
| Eager mode hides serialization side-effects
Running a task in eager mode doesn't pass the arguments through serialization, and therefore hides some of its side-effects. For example, calling a celery task in eager mode with UUID arguments will leave them as UUIDs, whereas passing them through serialization (with eager mode switched off) converts them to strings. This hid a bunch of bugs in our application code.
Do you agree it would be more reasonable to pass the arguments through the serialization backends, even in eager mode, to have a more consistently behaving API?
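A minimal sketch of the mismatch being described, assuming the pre-fix always-eager behaviour (the `argument_type` task is invented for illustration):
```python
import uuid

from celery import Celery

app = Celery('example')
app.conf.task_always_eager = True   # typical test-suite setting
app.conf.task_serializer = 'json'   # a real broker round trip would not hand the task a UUID

@app.task
def argument_type(value):
    return type(value).__name__

# Eagerly, the uuid.UUID object reaches the task unchanged; through a real
# broker, the JSON serialization step would have stringified it (or refused
# to encode it) before the task ever saw it.
print(argument_type.delay(uuid.uuid4()).get())   # -> 'UUID' when always-eager
```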
| Sounds totally crazy but it's true. When passing a native date that I usually need to parse from a string, the date remains native in eager mode, breaking the functionality!
Just got bitten by that as well with a nested data structure containing a non-JSON-serializable object.
Any update on this?
FWIW we worked around the issue by making `app.Task` a subclass that performs a round trip when `CELERY_ALWAYS_EAGER` is True.
```python
from kombu.serialization import registry
app = Celery('app')
class EagerSerializationTask(app.Task):
abstract = True
@classmethod
def apply_async(cls, args=None, kwargs=None, *args_, **kwargs_):
app = cls._get_app()
if app.conf.CELERY_ALWAYS_EAGER and kwargs_.pop('serialize', True):
# Perform a noop serialization backtrip to assert args and kwargs
# will be serialized appropriately when an async call through kombu
# is actually performed. This is done to make sure we catch the
# serializations errors with our test suite which runs with the
# CELERY_ALWAYS_EAGER setting set to True. See the following Celery
# issue for details https://github.com/celery/celery/issues/4008.
producer = kwargs.get('producer') if kwargs else None
with app.producer_or_acquire(producer) as producer:
serializer = kwargs_.get('serializer', producer.serializer)
args_content_type, args_content_encoding, args_data = registry.encode(args, serializer)
kwargs_content_type, kwargs_content_encoding, kwargs_data = registry.encode(kwargs, serializer)
args = registry.decode(args_data, args_content_type, args_content_encoding)
kwargs = registry.decode(kwargs_data, kwargs_content_type, kwargs_content_encoding)
return super(EagerSerializationTask, cls).apply_async(args=args, kwargs=kwargs, *args_, **kwargs_)
app.Task = EagerSerializationTask
```
Just had an issue with it today : / Big problem.
I was just debugging this issue myself with passing a datetime.date object and am now wondering how many other places I have passing tests which don't test what is actually happening in real world use.
I ran into this issue as well (with celery 4.x). Working on a monkey patch for `apply_async` for tests until this can get handled.
Do any of the commenters here have the knowledge, time and motivation to open a PR to remediate this issue?
@Korijn I'd happily submit a PR with the above solution if it's deemed appropriate by the maintainers of the project.
Hi Simon, I've only been a maintainer for about a week, but for what it's worth I support this 100% for calls to `delay` and `apply_async` with ALWAYS_EAGER enabled.
I'll see if I can swing some time at work to take a crack at it next week. Not a ton of free time outside of work this time of year, unfortunately.
I was digging a bit myself to understand it all better and, not surprisingly, the conclusion I came to was that I pretty much just needed to drop @charettes' code into `apply_async()` before it calls `self.apply()`. I do wonder if `apply()` should be doing it instead so that when called directly the data it passes to tasks will be consistent with `apply_async()`.
@jmichalicek I think the change should only be done for `delay()`/`apply_async()` because AFAIK `task.apply()` is an equivalent of `task()` and doesn't perform a serialization roundtrip for the sake of preserving provided `args`/`kwargs` identity.
I agree with leaving the existing behaviour of apply() alone. A patch should only address the case of using delay() and apply_async() while always_eager is enabled.
On Sun, 17 Dec 2017 at 00:28 Simon Charette <[email protected]>
wrote:
> @jmichalicek <https://github.com/jmichalicek> I think the change should
> only be done for delay()/apply_async() because AFAIK task.apply() is an
> equivalent of task() and doesn't perform a serialization roundtrip for
> the sake of preserving provided args/kwargs identity.
>
@charettes @AlexHill Makes sense if `apply()` is intended to be conceptually different from `apply_async()` beyond just that one is async and the other is not.
I have always thought of them as "do the exact same thing, except one of these runs async in a separate process". They are calling the same task with the same inputs so to me should pass the same data into the task. But if I misunderstood the use case, then leaving `apply()` itself alone totally makes sense. I pretty much never use `apply()`, anyway. If I wanted it to run inline, I'd just call the function directly - I pretty much always just use the decorator on a function and that is usually just a light wrapper around calling a different function or method. | 2017-12-16T17:48:05 |
celery/celery | 4,473 | celery__celery-4473 | [
"4412",
"4412"
] | a7915054d0e1e896c9ccf5ff0497dd8e3d5ed541 | diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -18,8 +18,8 @@
from celery import signals
from celery.app.trace import trace_task, trace_task_ret
from celery.exceptions import (Ignore, InvalidTaskError, Reject, Retry,
- SoftTimeLimitExceeded, TaskRevokedError,
- Terminated, TimeLimitExceeded, WorkerLostError)
+ TaskRevokedError, Terminated,
+ TimeLimitExceeded, WorkerLostError)
from celery.five import python_2_unicode_compatible, string
from celery.platforms import signals as _signals
from celery.utils.functional import maybe, noop
@@ -299,22 +299,21 @@ def on_accepted(self, pid, time_accepted):
def on_timeout(self, soft, timeout):
"""Handler called if the task times out."""
- task_ready(self)
if soft:
warn('Soft time limit (%ss) exceeded for %s[%s]',
timeout, self.name, self.id)
- exc = SoftTimeLimitExceeded(soft)
else:
+ task_ready(self)
error('Hard time limit (%ss) exceeded for %s[%s]',
timeout, self.name, self.id)
exc = TimeLimitExceeded(timeout)
- self.task.backend.mark_as_failure(
- self.id, exc, request=self, store_result=self.store_errors,
- )
+ self.task.backend.mark_as_failure(
+ self.id, exc, request=self, store_result=self.store_errors,
+ )
- if self.task.acks_late:
- self.acknowledge()
+ if self.task.acks_late:
+ self.acknowledge()
def on_success(self, failed__retval__runtime, **kwargs):
"""Handler called if the task was successfully processed."""
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -4,6 +4,7 @@
from time import sleep
from celery import chain, group, shared_task
+from celery.exceptions import SoftTimeLimitExceeded
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
@@ -24,6 +25,16 @@ def delayed_sum(numbers, pause_time=1):
return sum(numbers)
+@shared_task
+def delayed_sum_with_soft_guard(numbers, pause_time=1):
+ """Sum the iterable of numbers."""
+ try:
+ sleep(pause_time)
+ return sum(numbers)
+ except SoftTimeLimitExceeded:
+ return 0
+
+
@shared_task(bind=True)
def add_replaced(self, x, y):
"""Add two numbers (via the add task)."""
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -11,7 +11,8 @@
from .conftest import flaky
from .tasks import (add, add_replaced, add_to_all, collect_ids, delayed_sum,
- ids, redis_echo, second_order_replace1)
+ delayed_sum_with_soft_guard, ids, redis_echo,
+ second_order_replace1)
TIMEOUT = 120
@@ -110,6 +111,26 @@ def assert_ids(self, res, size):
node = node.parent
i -= 1
+ def test_chord_soft_timeout_recuperation(self, manager):
+ """Test that if soft timeout happens in task but is managed by task,
+ chord still get results normally
+ """
+ if not manager.app.conf.result_backend.startswith('redis'):
+ raise pytest.skip('Requires redis result backend.')
+
+ c = chord([
+ # return 3
+ add.s(1, 2),
+ # return 0 after managing soft timeout
+ delayed_sum_with_soft_guard.s(
+ [100], pause_time=2
+ ).set(
+ soft_time_limit=1
+ ),
+ ])
+ result = c(delayed_sum.s(pause_time=0)).get()
+ assert result == 3
+
class test_group:
diff --git a/t/unit/worker/test_request.py b/t/unit/worker/test_request.py
--- a/t/unit/worker/test_request.py
+++ b/t/unit/worker/test_request.py
@@ -599,31 +599,39 @@ def test_from_message_invalid_kwargs(self):
with pytest.raises(InvalidTaskError):
raise req.execute().exception
- def test_on_timeout(self, patching):
- warn = patching('celery.worker.request.warn')
+ def test_on_hard_timeout(self, patching):
error = patching('celery.worker.request.error')
job = self.xRequest()
job.acknowledge = Mock(name='ack')
job.task.acks_late = True
- job.on_timeout(soft=True, timeout=1337)
- assert 'Soft time limit' in warn.call_args[0][0]
job.on_timeout(soft=False, timeout=1337)
assert 'Hard time limit' in error.call_args[0][0]
assert self.mytask.backend.get_status(job.id) == states.FAILURE
job.acknowledge.assert_called_with()
- self.mytask.ignore_result = True
job = self.xRequest()
- job.on_timeout(soft=True, timeout=1336)
- assert self.mytask.backend.get_status(job.id) == states.PENDING
+ job.acknowledge = Mock(name='ack')
+ job.task.acks_late = False
+ job.on_timeout(soft=False, timeout=1335)
+ job.acknowledge.assert_not_called()
+
+ def test_on_soft_timeout(self, patching):
+ warn = patching('celery.worker.request.warn')
job = self.xRequest()
job.acknowledge = Mock(name='ack')
- job.task.acks_late = False
- job.on_timeout(soft=True, timeout=1335)
+ job.task.acks_late = True
+ job.on_timeout(soft=True, timeout=1337)
+ assert 'Soft time limit' in warn.call_args[0][0]
+ assert self.mytask.backend.get_status(job.id) == states.PENDING
job.acknowledge.assert_not_called()
+ self.mytask.ignore_result = True
+ job = self.xRequest()
+ job.on_timeout(soft=True, timeout=1336)
+ assert self.mytask.backend.get_status(job.id) == states.PENDING
+
def test_fast_trace_task(self):
from celery.app import trace
setup_worker_optimizations(self.app)
| Request on_timeout should ignore soft time limit exception
When Request.on_timeout receives a soft timeout from billiard, it does the same as if it were receiving a hard time limit exception. This is run by the controller.
But the task may catch this exception and e.g. return (this is what soft timeouts are for).
This causes:
1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task
2. the task status to be set to failure and then to success in the same manner
3. if the task is participating in a chord, the chord result counter (at least with redis) to be incremented twice (instead of once), making the chord return prematurely and eventually lose tasks…
1, 2 and 3 can of course lead to strange race conditions…
## Steps to reproduce (Illustration)
with the program in test_timeout.py:
```python
import time
import celery
app = celery.Celery('test_timeout')
app.conf.update(
result_backend="redis://localhost/0",
broker_url="amqp://celery:celery@localhost:5672/host",
)
@app.task(soft_time_limit=1)
def test():
try:
time.sleep(2)
except Exception:
return 1
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())
result.get()
```
start a worker and the program:
```
$ celery -A test_timeout worker -l WARNING
$ python3 test_timeout.py
```
## Expected behavior
add method is called with `[1, 1]` as argument and test_timeout.py return normally
## Actual behavior
The test_timeout.py fails, with
```
celery.backends.base.ChordError: Callback error: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",
```
On the worker side, the **on_error is called but the add method as well !**
```
[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]
[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]
[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in <listcomp>
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 243, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)
[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding
[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]
```
Of course, I chose on purpose to call test.s() twice, to show that the count in the chord keeps increasing. In fact:
- the chord result is incremented twice by the soft time limit error
- the chord result is again incremented twice by the correct return of the `test` task
## Conclusion
Request.on_timeout should not process the soft time limit exception.
Here is a quick monkey patch (the correction in Celery is trivial):
```python
def patch_celery_request_on_timeout():
from celery.worker import request
orig = request.Request.on_timeout
def patched_on_timeout(self, soft, timeout):
if not soft:
orig(self, soft, timeout)
request.Request.on_timeout = patched_on_timeout
patch_celery_request_on_timeout()
```
## version info
software -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://10.0.3.253/0
| @ask I think this is quite a big problem (with a trivial fix).
It requires attention though, as it brings a new behaviour (but the previous behaviour is not well documented and, in my opinion, the new behaviour is the one that was expected).
This change in behaviour is what kept my team from upgrading to celery 4. Indeed, the chord callback was often not called at all.
I don't know if it is related, but I modified your code sample and it resulted in some `Exception raised outside body` errors and multiple other errors if you try running `python test_timeout.py` multiple times.
Here is my script:
```python
import time
import celery
app = celery.Celery(
'test_timeout',
broker='amqp://localhost',
backend='redis://localhost')
@app.task(soft_time_limit=1)
def test(nb_seconds):
try:
time.sleep(nb_seconds)
return nb_seconds
except:
print("### error handled")
return nb_seconds
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([
test.s(i).set(link_error=on_error.s()) for i in range(0, 2)
])(add.s())
result.get()
```
NB: if you update the range to range(2, 4), for instance, the `Exception raised outside body` error does not seem to happen. It seems this particular issue happens when the `SoftTimeLimitExceeded` is raised exactly during the `return`.
Could you please send a PR with your proposed fix/workaround against the master branch?
Hi @auvipy, I won't have the time until January. I'll also need help on how to write the tests (that's the reason why I didn't propose a PR).
OK. dont get afraid of sending logical changes just for the reason you don't know how to write test. we will certainly try to help you | 2018-01-04T14:09:04 |
celery/celery | 4,540 | celery__celery-4540 | [
"4539",
"4539"
] | 1a4497e94791e6662c328638db7cf0513534a7ac | diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -487,7 +487,6 @@ def add(self, result):
self._on_full.add(result)
def _on_ready(self):
- self.backend.remove_pending_result(self)
if self.backend.is_async:
self._cache = [r.get() for r in self.results]
self.on_ready()
@@ -845,6 +844,10 @@ def __init__(self, id=None, results=None, parent=None, **kwargs):
self.parent = parent
ResultSet.__init__(self, results, **kwargs)
+ def _on_ready(self):
+ self.backend.remove_pending_result(self)
+ ResultSet._on_ready(self)
+
def save(self, backend=None):
"""Save group-result for later retrieval using :meth:`restore`.
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -7,7 +7,7 @@
from celery import chain, chord, group
from celery.exceptions import TimeoutError
-from celery.result import AsyncResult, GroupResult
+from celery.result import AsyncResult, GroupResult, ResultSet
from .conftest import flaky, get_redis_connection
from .tasks import (add, add_chord_to_chord, add_replaced, add_to_all,
@@ -186,6 +186,16 @@ def test_chain_error_handler_with_eta(self, manager):
assert result == 10
+class test_result_set:
+
+ @flaky
+ def test_result_set(self, manager):
+ assert manager.inspect().ping()
+
+ rs = ResultSet([add.delay(1, 1), add.delay(2, 2)])
+ assert rs.get(timeout=TIMEOUT) == [2, 4]
+
+
class test_group:
@flaky
| ResultSet incorrectly calls backend.remove_pending_result
The master branch currently defines `ResultSet` with:

    class ResultSet(...):
        ...
        def _on_ready(self):
            self.backend.remove_pending_result(self)
            if self.backend.is_async:
                self._cache = [r.get() for r in self.results]
                self.on_ready()

However, a typical implementation of `backend.remove_pending_result` in `celery.backends.async` will assume that the value has an `id`:

    def remove_pending_result(self, result):
        self._remove_pending_result(result.id)
        self.on_result_fulfilled(result)
        return result

Presumably, this call to `self.backend.remove_pending_result(self)` is really concerned with deregistering groups from the backend, in which case it should appear in `GroupResult._on_ready`.
This problem was exposed by #4131. Prior to that, `_on_ready` was not called (which was a bug).
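The suggested relocation is essentially what the patch at the top of this record does; a small sketch in isolation:
```python
from celery.result import ResultSet

class GroupResult(ResultSet):
    def _on_ready(self):
        # Only saved group results register themselves as pending with the
        # backend (a plain ResultSet has no id), so only they deregister here.
        self.backend.remove_pending_result(self)
        ResultSet._on_ready(self)
```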
| 2018-02-15T00:39:51 |
|
celery/celery | 4,545 | celery__celery-4545 | [
"3723"
] | b46bea25539cc26a76f0a491b95f4899f5b32c34 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -12,6 +12,7 @@
import time
from collections import namedtuple
from datetime import timedelta
+from functools import partial
from weakref import WeakValueDictionary
from billiard.einfo import ExceptionInfo
@@ -163,7 +164,12 @@ def _call_task_errbacks(self, request, exc, traceback):
old_signature = []
for errback in request.errbacks:
errback = self.app.signature(errback)
- if arity_greater(errback.type.__header__, 1):
+ if (
+ # workaround to support tasks with bind=True executed as
+ # link errors. Otherwise retries can't be used
+ not isinstance(errback.type.__header__, partial) and
+ arity_greater(errback.type.__header__, 1)
+ ):
errback(request, exc, traceback)
else:
old_signature.append(errback)
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -218,6 +218,17 @@ class test_BaseBackend_dict:
def setup(self):
self.b = DictBackend(app=self.app)
+ @self.app.task(shared=False, bind=True)
+ def bound_errback(self, result):
+ pass
+
+ @self.app.task(shared=False)
+ def errback(arg1, arg2):
+ errback.last_result = arg1 + arg2
+
+ self.bound_errback = bound_errback
+ self.errback = errback
+
def test_delete_group(self):
self.b.delete_group('can-delete')
assert 'can-delete' not in self.b._data
@@ -303,6 +314,28 @@ def test_mark_as_done__chord(self):
b.mark_as_done('id', 10, request=request)
b.on_chord_part_return.assert_called_with(request, states.SUCCESS, 10)
+ def test_mark_as_failure__bound_errback(self):
+ b = BaseBackend(app=self.app)
+ b._store_result = Mock()
+ request = Mock(name='request')
+ request.errbacks = [
+ self.bound_errback.subtask(args=[1], immutable=True)]
+ exc = KeyError()
+ group = self.patching('celery.backends.base.group')
+ b.mark_as_failure('id', exc, request=request)
+ group.assert_called_with(request.errbacks, app=self.app)
+ group.return_value.apply_async.assert_called_with(
+ (request.id, ), parent_id=request.id, root_id=request.root_id)
+
+ def test_mark_as_failure__errback(self):
+ b = BaseBackend(app=self.app)
+ b._store_result = Mock()
+ request = Mock(name='request')
+ request.errbacks = [self.errback.subtask(args=[2, 3], immutable=True)]
+ exc = KeyError()
+ b.mark_as_failure('id', exc, request=request)
+ assert self.errback.last_result == 5
+
def test_mark_as_failure__chord(self):
b = BaseBackend(app=self.app)
b._store_result = Mock()
| Bound tasks as link error raises TypeError exception (celery 4.0.2, 4.2.0)
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.6
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:db+mysql://celery:**@127.0.0.1/celery
CELERY_ACKS_LATE: True
BROKER_URL: u'amqp://celery:********@127.0.0.1:****/celery'
CELERY_RESULT_ENGINE_OPTIONS: {
'isolation_level': 'READ_COMMITTED'}
CELERY_RESULT_SERIALIZER: 'json'
CELERY_ACCEPT_CONTENT: ['json']
CELERYD_MAX_TASKS_PER_CHILD: 100
CELERYD_PREFETCH_MULTIPLIER: 1
CELERY_REDIRECT_STDOUTS_LEVEL: 'INFO'
CELERY_TRACK_STARTED: True
CELERYD_TASK_SOFT_TIME_LIMIT: 900
CELERYD_TASK_TIME_LIMIT: 910
CELERY_TASK_SERIALIZER: 'json'
CELERY_RESULT_BACKEND: u'db+mysql://celery:********@127.0.0.1/celery'
CELERY_SEND_TASK_ERROR_EMAILS: True
```
## Steps to reproduce
Try to apply_async (or async, it doesn't matter) any task with `link_error` set to a bound task. Until now I was using celery 3.1.18 and bound tasks were working fine as link errors.
See example below:
```python
@app.task(name="raise_exception", bind=True)
def raise_exception(self):
raise Exception("Bad things happened")
@app.task(name="handle_task_exception", bind=True)
def handle_task_exception(self):
print("Exception detected")
```
```python
subtask = raise_exception.subtask()
subtask.apply_async(link_error=handle_task_exception.s())
```
## Expected behavior
Bound tasks can be used as link errors.
## Actual behavior
The worker fails with the following exception:
```
[2016-12-28 07:36:12,692: INFO/MainProcess] Received task: raise_exception[b10b8451-3f4d-4cf0-b4b0-f964105cf849]
[2016-12-28 07:36:12,847: INFO/PoolWorker-5] /usr/local/lib/python2.7/dist-packages/celery/app/trace.py:542: RuntimeWarning: Exception raised outside body: TypeError('<functools.partial object at 0x7f74848dffc8> is not a Python function',):
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 381, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 323, in on_error
task, request, eager=eager, call_errbacks=call_errbacks,
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 157, in handle_error_state
call_errbacks=call_errbacks)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 202, in handle_failure
call_errbacks=call_errbacks,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 168, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 174, in _call_task_errbacks
if arity_greater(errback.type.__header__, 1):
File "/usr/local/lib/python2.7/dist-packages/celery/utils/functional.py", line 292, in arity_greater
argspec = getfullargspec(fun)
File "/usr/local/lib/python2.7/dist-packages/vine/five.py", line 350, in getfullargspec
s = _getargspec(fun)
File "/usr/lib/python2.7/inspect.py", line 816, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x7f74848dffc8> is not a Python function
exc, exc_info.traceback)))
[2016-12-28 07:36:12,922: ERROR/MainProcess] Pool callback raised exception: TypeError('<functools.partial object at 0x7f74848dffc8> is not a Python function',)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/billiard/pool.py", line 1748, in safe_apply_callback
fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/worker/request.py", line 366, in on_failure
self.id, exc, request=self, store_result=self.store_errors,
File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 168, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/base.py", line 174, in _call_task_errbacks
if arity_greater(errback.type.__header__, 1):
File "/usr/local/lib/python2.7/dist-packages/celery/utils/functional.py", line 292, in arity_greater
argspec = getfullargspec(fun)
File "/usr/local/lib/python2.7/dist-packages/vine/five.py", line 350, in getfullargspec
s = _getargspec(fun)
File "/usr/lib/python2.7/inspect.py", line 816, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x7f74848dffc8> is not a Python function
```
| Also encountering this issue when using a celery task with `on_error` set to a signature object; just adding `bind=True` causes the error to occur.
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.13
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
```
Any workaround? I've got the same problem.
Maybe a different approach to failing a task instead of raising an exception?
I think the celery code should be patched.
As far as I remember there is an if statement before launching the link error handler function.
The if checks whether the handler is a function, and it fails when the function is a "partial".
The handler should be launched whether it is a function or a partial.
I decided to omit `bind=True` and reference my `base` task class via the task name.
Hi, folks! I've also encountered this issue on Mac OS X 10.11.6 (El Capitan).
I have bound tasks also, they are linked together via task signature special method -
```
fail_job_si = fail_job.si(*args)
run_job_si = run_job.si(*args)
run_job_si.link_error(fail_job_si)
```
Here is my configuration:
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.13
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:rpc:///
```
So I did some digging lately... It turns out that when vine/five.py is patched to handle partials
```python
try: # pragma: no cover
from inspect import formatargspec, getfullargspec
except ImportError: # Py2
from collections import namedtuple
from inspect import formatargspec, getargspec as _getargspec # noqa
FullArgSpec = namedtuple('FullArgSpec', (
'args', 'varargs', 'varkw', 'defaults',
'kwonlyargs', 'kwonlydefaults', 'annotations',
))
def getfullargspec(fun, _fill=(None, ) * 3): # noqa
"""For compatibility with Python 3."""
# HERE GOES THE AWFUL IF
if type(fun) is partial:
s = _getargspec(fun.func)
else:
s = _getargspec(fun)
return FullArgSpec(*s + _fill)
```
Bound link errors are executed properly, although when I try to do self.retry inside link error I get
```
/env/local/lib/python2.7/site-packages/celery/app/trace.py:549: RuntimeWarning: Exception raised outside body: Retry(Retry(...), None, None):
Traceback (most recent call last):
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 388, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 330, in on_error
task, request, eager=eager, call_errbacks=call_errbacks,
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 164, in handle_error_state
call_errbacks=call_errbacks)
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 209, in handle_failure
call_errbacks=call_errbacks,
File "/env/local/lib/python2.7/site-packages/celery/backends/base.py", line 168, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/env/local/lib/python2.7/site-packages/celery/backends/base.py", line 175, in _call_task_errbacks
errback(request, exc, traceback)
File "/env/local/lib/python2.7/site-packages/celery/canvas.py", line 178, in __call__
return self.type(*args, **kwargs)
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 630, in __protected_call__
return orig(self, *args, **kwargs)
File "/env/local/lib/python2.7/site-packages/celery/app/task.py", line 380, in __call__
return self.run(*args, **kwargs)
File "/lib/tasks.py", line 1225, in handle_task_exception
return self.retry(countdown=1, max_retries=10, throw=False)
File "/env/local/lib/python2.7/site-packages/celery/app/task.py", line 653, in retry
raise_with_context(exc or Retry('Task can be retried', None))
File "/env/local/lib/python2.7/site-packages/celery/utils/serialization.py", line 276, in raise_with_context
reraise(type(exc), exc, exc_info[2]
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/env/local/lib/python2.7/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/lib/tasks.py", line 1218, in raise_exception
raise Exception("My custom exception")
Retry: Task can be retried
```
Perhaps @ask could give us some hint? :)
Getting exact same error using celery 4.10 using on_error to a bound task.
We were experiencing the same thing. Our workaround is here: https://github.com/WikiWatershed/model-my-watershed/pull/2287/commits/0871277f665b301ad42b5309e74fd70ddde70e69
Previously we would use the `self.app` reference to get the `AsyncResult` from the given `uuid` and get the exception and traceback through that. But now with Celery 4 it turns out we didn't need the error handler to be a bound task anymore, because it now gets `request`, `exc`, and `traceback` as arguments. See the documentation here: http://docs.celeryproject.org/en/latest/whatsnew-4.0.html#canvas-refactor and https://github.com/celery/celery/pull/2538#issuecomment-227865509.
I think if you _still_ need access to `self.app` within the task, you could just import `app` from wherever it is defined in the file, and use that.
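For anyone landing here, a small sketch of the unbound Celery 4-style errback being described (the task body is illustrative):
```python
from celery import Celery

app = Celery('example')

@app.task
def handle_task_exception(request, exc, traceback):
    # Celery 4 passes the failing task's context as plain arguments, so
    # bind=True / self.app is not needed just to inspect the failure.
    print('Task {0!r} failed with {1!r}'.format(request.id, exc))

# Usage (illustrative): some_task.apply_async(link_error=handle_task_exception.s())
```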
I've just tested against master, unfortunately the bug still exists.
```
software -> celery:4.2.0 (latentcall) kombu:4.1.0 py:2.7.6
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:db+mysql://celery:**@127.0.0.1/celery
task_serializer: 'json'
broker_heartbeat: 360
result_serializer: 'json'
task_time_limit: 910
event_queue_ttl: 60
worker_max_tasks_per_child: 100
worker_prefetch_multiplier: 1
task_acks_late: True
task_soft_time_limit: 900
task_track_started: True
```
The result is
```
[2018-01-31 11:40:20,556: INFO/MainProcess] Received task: raise_exception[21dfd0c5-1aa1-4e18-9326-b7f8326c441b]
[2018-01-31 11:40:20,719: INFO/ForkPoolWorker-1] /localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/app/trace.py:561: RuntimeWarning: Exception raised outside body: TypeError('<functools.partial object at 0x7ff86fe95c58> is not a Python function',):
Traceback (most recent call last):
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 396, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 338, in on_error
task, request, eager=eager, call_errbacks=call_errbacks,
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 172, in handle_error_state
call_errbacks=call_errbacks)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 217, in handle_failure
call_errbacks=call_errbacks,
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 160, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 166, in _call_task_errbacks
if arity_greater(errback.type.__header__, 1):
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/utils/functional.py", line 293, in arity_greater
argspec = getfullargspec(fun)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/vine/five.py", line 357, in getfullargspec
s = _getargspec(fun)
File "/usr/lib/python2.7/inspect.py", line 816, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x7ff86fe95c58> is not a Python function
exc, exc_info.traceback)))
[2018-01-31 11:40:20,771: ERROR/MainProcess] Pool callback raised exception: TypeError('<functools.partial object at 0x7ff86fe95c58> is not a Python function',)
Traceback (most recent call last):
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/billiard/pool.py", line 1747, in safe_apply_callback
fun(*args, **kwargs)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/worker/request.py", line 366, in on_failure
self.id, exc, request=self, store_result=self.store_errors,
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 160, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 166, in _call_task_errbacks
if arity_greater(errback.type.__header__, 1):
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/celery/utils/functional.py", line 293, in arity_greater
argspec = getfullargspec(fun)
File "/localscripts/celery-test/venv/local/lib/python2.7/site-packages/vine/five.py", line 357, in getfullargspec
s = _getargspec(fun)
File "/usr/lib/python2.7/inspect.py", line 816, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x7ff86fe95c58> is not a Python function
```
@auvipy could you reopen this task?
Could you please also try upgrading the dependencies to check, like kombu etc.?
Tested with kombu and billiard versions built from master:
```
Processing ./kombu-4.1.0-py2.py3-none-any.whl
Requirement already satisfied: amqp<3.0,>=2.1.4 in /localscripts/celery-test/venv/lib/python2.7/site-packages (from kombu==4.1.0)
Requirement already satisfied: vine>=1.1.3 in /localscripts/celery-test/venv/lib/python2.7/site-packages (from amqp<3.0,>=2.1.4->kombu==4.1.0)
Installing collected packages: kombu
Successfully installed kombu-4.1.0
```
```
Processing ./billiard-3.5.0.3-cp27-none-linux_x86_64.whl
Installing collected packages: billiard
Successfully installed billiard-3.5.0.3
```
Bug still exists.
Hmm I just noticed that I didn't test with vine 1.1.4 while the issue seems to be related to it!
I'll check tomorrow morning.
Nope. False hope.
Tested with vine 1.1.4 too, got the same error.
I think I got it...
```diff
diff --git a/celery/backends/base.py b/celery/backends/base.py
index fc49105..3725452 100644
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -12,6 +12,7 @@ import sys
import time
from collections import namedtuple
from datetime import timedelta
+from functools import partial
from weakref import WeakValueDictionary
from billiard.einfo import ExceptionInfo
@@ -163,7 +164,10 @@ class Backend(object):
old_signature = []
for errback in request.errbacks:
errback = self.app.signature(errback)
- if arity_greater(errback.type.__header__, 1):
+ if (
+ type(errback.type.__header__) is not partial
+ and arity_greater(errback.type.__header__, 1)
+ ):
errback(request, exc, traceback)
else:
old_signature.append(errback)
```
This patch fixes my problem although it looks like a workaround and I don't know how to fix it properly...
@auvipy any ideas?
Please use isinstance instead of type() is ...
So just to sum up, the problem was introduced with the following commit:
https://github.com/celery/celery/commit/ae3f36f7dadde455098963ecdada12b043fdfe41 (see also https://github.com/celery/celery/pull/2538#issuecomment-227865509)
Unfortunately, in my case I need a bound task as a link error because I want to use the retry mechanism during error handling - in distributed systems it's common that different components are not 100% available.
Is there any straightforward way to use retries machinery without bound tasks? | 2018-02-19T19:44:21 |
celery/celery | 4,549 | celery__celery-4549 | [
"3050"
] | 03ed93f675b3029824098928c51d0058a0a10434 | diff --git a/celery/app/utils.py b/celery/app/utils.py
--- a/celery/app/utils.py
+++ b/celery/app/utils.py
@@ -102,6 +102,13 @@ def broker_url(self):
self.first('broker_url', 'broker_host')
)
+ @property
+ def result_backend(self):
+ return (
+ os.environ.get('CELERY_RESULT_BACKEND') or
+ self.get('CELERY_RESULT_BACKEND')
+ )
+
@property
def task_default_exchange(self):
return self.first(
diff --git a/celery/bin/base.py b/celery/bin/base.py
--- a/celery/bin/base.py
+++ b/celery/bin/base.py
@@ -298,6 +298,7 @@ def add_preload_arguments(self, parser):
group = parser.add_argument_group('Global Options')
group.add_argument('-A', '--app', default=None)
group.add_argument('-b', '--broker', default=None)
+ group.add_argument('--result-backend', default=None)
group.add_argument('--loader', default=None)
group.add_argument('--config', default=None)
group.add_argument('--workdir', default=None)
@@ -467,6 +468,9 @@ def setup_app_from_commandline(self, argv):
broker = preload_options.get('broker', None)
if broker:
os.environ['CELERY_BROKER_URL'] = broker
+ result_backend = preload_options.get('result_backend', None)
+ if result_backend:
+ os.environ['CELERY_RESULT_BACKEND'] = result_backend
config = preload_options.get('config')
if config:
os.environ['CELERY_CONFIG_MODULE'] = config
| diff --git a/t/unit/bin/test_base.py b/t/unit/bin/test_base.py
--- a/t/unit/bin/test_base.py
+++ b/t/unit/bin/test_base.py
@@ -166,6 +166,18 @@ def test_with_custom_broker(self, app):
else:
os.environ.pop('CELERY_BROKER_URL', None)
+ def test_with_custom_result_backend(self, app):
+ prev = os.environ.pop('CELERY_RESULT_BACKEND', None)
+ try:
+ cmd = MockCommand(app=app)
+ cmd.setup_app_from_commandline(['--result-backend=xyzza://'])
+ assert os.environ.get('CELERY_RESULT_BACKEND') == 'xyzza://'
+ finally:
+ if prev:
+ os.environ['CELERY_RESULT_BACKEND'] = prev
+ else:
+ os.environ.pop('CELERY_RESULT_BACKEND', None)
+
def test_with_custom_app(self, app):
cmd = MockCommand(app=app)
appstr = '.'.join([__name__, 'APP'])
@@ -276,8 +288,10 @@ def test_with_cmdline_config(self, app):
cmd.namespace = 'worker'
rest = cmd.setup_app_from_commandline(argv=[
'--loglevel=INFO', '--',
+ 'result.backend=redis://backend.example.com',
'broker.url=amqp://broker.example.com',
'.prefetch_multiplier=100'])
+ assert cmd.app.conf.result_backend == 'redis://backend.example.com'
assert cmd.app.conf.broker_url == 'amqp://broker.example.com'
assert cmd.app.conf.worker_prefetch_multiplier == 100
assert rest == ['--loglevel=INFO']
| celery worker no command line option for setting result backend
With celery 3.1.20 I get :
```
(celery)alexv@asmodehn:~/Projects$ celery worker --help
Usage: celery worker [options]
Start worker instance.
Examples::
celery worker --app=proj -l info
celery worker -A proj -l info -Q hipri,lopri
celery worker -A proj --concurrency=4
celery worker -A proj --concurrency=1000 -P eventlet
celery worker --autoscale=10,0
Options:
-A APP, --app=APP app instance to use (e.g. module.attr_name)
-b BROKER, --broker=BROKER
url to broker. default is 'amqp://guest@localhost//'
--loader=LOADER name of custom loader class to use.
--config=CONFIG Name of the configuration module
--workdir=WORKING_DIRECTORY
Optional directory to change to after detaching.
-C, --no-color
-q, --quiet
-c CONCURRENCY, --concurrency=CONCURRENCY
Number of child processes processing the queue. The
default is the number of CPUs available on your
system.
-P POOL_CLS, --pool=POOL_CLS
Pool implementation: prefork (default), eventlet,
gevent, solo or threads.
--purge, --discard Purges all waiting tasks before the daemon is started.
**WARNING**: This is unrecoverable, and the tasks will
be deleted from the messaging server.
-l LOGLEVEL, --loglevel=LOGLEVEL
Logging level, choose between DEBUG, INFO, WARNING,
ERROR, CRITICAL, or FATAL.
-n HOSTNAME, --hostname=HOSTNAME
Set custom hostname, e.g. 'w1.%h'. Expands: %h
(hostname), %n (name) and %d, (domain).
-B, --beat Also run the celery beat periodic task scheduler.
Please note that there must only be one instance of
this service.
-s SCHEDULE_FILENAME, --schedule=SCHEDULE_FILENAME
Path to the schedule database if running with the -B
option. Defaults to celerybeat-schedule. The extension
".db" may be appended to the filename. Apply
optimization profile. Supported: default, fair
--scheduler=SCHEDULER_CLS
Scheduler class to use. Default is
celery.beat.PersistentScheduler
-S STATE_DB, --statedb=STATE_DB
Path to the state database. The extension '.db' may be
appended to the filename. Default: None
-E, --events Send events that can be captured by monitors like
celery events, celerymon, and others.
--time-limit=TASK_TIME_LIMIT
Enables a hard time limit (in seconds int/float) for
tasks.
--soft-time-limit=TASK_SOFT_TIME_LIMIT
Enables a soft time limit (in seconds int/float) for
tasks.
--maxtasksperchild=MAX_TASKS_PER_CHILD
Maximum number of tasks a pool worker can execute
before it's terminated and replaced by a new worker.
-Q QUEUES, --queues=QUEUES
List of queues to enable for this worker, separated by
comma. By default all configured queues are enabled.
Example: -Q video,image
-X EXCLUDE_QUEUES, --exclude-queues=EXCLUDE_QUEUES
-I INCLUDE, --include=INCLUDE
Comma separated list of additional modules to import.
Example: -I foo.tasks,bar.tasks
--autoscale=AUTOSCALE
Enable autoscaling by providing max_concurrency,
min_concurrency. Example:: --autoscale=10,3 (always
keep 3 processes, but grow to 10 if necessary)
--autoreload Enable autoreloading.
--no-execv Don't do execv after multiprocessing child fork.
--without-gossip Do not subscribe to other workers events.
--without-mingle Do not synchronize with other workers at startup.
--without-heartbeat Do not send event heartbeats.
--heartbeat-interval=HEARTBEAT_INTERVAL
Interval in seconds at which to send worker heartbeat
-O OPTIMIZATION
-D, --detach
-f LOGFILE, --logfile=LOGFILE
Path to log file. If no logfile is specified, stderr
is used.
--pidfile=PIDFILE Optional file used to store the process pid. The
program will not start if this file already exists and
the pid is still alive.
--uid=UID User id, or user name of the user to run as after
detaching.
--gid=GID Group id, or group name of the main group to change to
after detaching.
--umask=UMASK Effective umask (in octal) of the process after
detaching. Inherits the umask of the parent process
by default.
--executable=EXECUTABLE
Executable to use for the detached process.
--version show program's version number and exit
-h, --help show this help message and exit
```
I do not see how I can set the result database.
In the config file I have:
```
BROKER_URL = 'redis://localhost:6379'
CELERY_RESULT_BACKEND = BROKER_URL
```
Then starting the worker I can pass `--broker=redis://another-host:6379`
But I cannot find any way to change the result backend to override the configuration setting...
Am I missing something?
| In addition, it seems to me that the BROKER_URL value from the config module is not used when using the `--config` command line option to start the worker... not sure if it's related or not (or maybe just because I don't really get how the configuration works?)
I have been trying to extend/specialize/wrap celery (https://github.com/asmodehn/celeros), but the way to configure it at runtime is still not clear to me...
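For what it's worth, here is a minimal sketch of overriding these settings at runtime before the worker starts; it assumes the `config` module from the snippets above, and the override URL is just a placeholder:
```python
# minimal sketch, assuming the `config` module shown above; the URL is a placeholder
from celery import Celery
from . import config

my_app = Celery()
my_app.config_from_object(config)                                        # load the defaults
my_app.conf.update(CELERY_RESULT_BACKEND='redis://another-host:6379/1')  # runtime override
```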
OK, I think I've pinned down the issue bothering me (besides the missing --result-backend option)...
I am using celery 3.1.20.
Here is a simplified configuration file to illustrate the issue :
``` python
# config.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
BROKER_URL = CELERY_BROKER_URL
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'
```
To set up your app configuration you can do:
``` python
# app.py
from . import config
from celery import Celery
my_app = Celery()
my_app.config_from_object(config)
```
and launch with : `python -m celery -A app`
XOR
``` python
# app.py
from . import config
from celery import Celery
my_app = Celery()
```
and launch with : `python -m celery -A app --config=config`
BUT NOT A MIX :
In my case, doing something like this:
``` python
# app.py
from . import config_default
from celery import Celery
my_app = Celery()
my_app.config_from_object(config_default)
```
and launch with : `python -m celery -A app --config=config_override`
config_override content is ignored somehow (I noticed with BROKER_URL or CELERY_RESULT_BACKEND since they are very visible settings; celery worker displays them on startup)...
Although I would have expected the values from config_override to replace the values configured from config_default...
Disclaimer: I haven't tested this exact code, but if you cannot reproduce it, let me know and I'll dig more when I get more time to come up with an [SSCCE](http://www.sscce.org/)...
So for now I'll just drop the idea of having a "default"+"override" config strategy, at least in this way...
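A hedged sketch of one way to get that "default"+"override" layering explicitly, without relying on `--config` (both module names below are hypothetical):
```python
# sketch only: apply config_default, then overlay the UPPERCASE settings
# defined in config_override (module names are hypothetical)
from celery import Celery
from . import config_default, config_override

my_app = Celery()
my_app.config_from_object(config_default)
my_app.conf.update({
    key: value
    for key, value in vars(config_override).items()
    if key.isupper()
})
```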
Same problem here: there is no way to dynamically set the backend of the worker.
Eg:
- I start a server where the celery app config is updated afterwards (custom redis port)
- I need the worker to know what the custom redis port is
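A minimal sketch of that scenario, assuming `app` is the Celery application instance and the port only becomes known at startup (the values are made up):
```python
# sketch: build the URLs once the custom port is known (port value is made up)
custom_port = 6380
app.conf.update(
    BROKER_URL='redis://localhost:{0}/0'.format(custom_port),
    CELERY_RESULT_BACKEND='redis://localhost:{0}/1'.format(custom_port),
)
```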
| 2018-02-20T11:52:09 |
celery/celery | 4,565 | celery__celery-4565 | [
"4560",
"4560"
] | a035680a96b5ab1bd323955566b0c18168d49fb4 | diff --git a/celery/app/amqp.py b/celery/app/amqp.py
--- a/celery/app/amqp.py
+++ b/celery/app/amqp.py
@@ -328,7 +328,8 @@ def as_task_v2(self, task_id, name, args=None, kwargs=None,
expires = maybe_make_aware(
now + timedelta(seconds=expires), tz=timezone,
)
- eta = eta and eta.isoformat()
+ if not isinstance(eta, string_t):
+ eta = eta and eta.isoformat()
# If we retry a task `expires` will already be ISO8601-formatted.
if not isinstance(expires, string_t):
expires = expires and expires.isoformat()
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -165,7 +165,6 @@ def test_chord_soft_timeout_recuperation(self, manager):
result = c(delayed_sum.s(pause_time=0)).get()
assert result == 3
- @pytest.mark.xfail()
def test_chain_error_handler_with_eta(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
diff --git a/t/unit/app/test_amqp.py b/t/unit/app/test_amqp.py
--- a/t/unit/app/test_amqp.py
+++ b/t/unit/app/test_amqp.py
@@ -359,6 +359,13 @@ def test_expires_to_datetime(self):
assert m.headers['expires'] == (
now + timedelta(seconds=30)).isoformat()
+ def test_eta_to_datetime(self):
+ eta = datetime.utcnow()
+ m = self.app.amqp.as_task_v2(
+ uuid(), 'foo', eta=eta,
+ )
+ assert m.headers['eta'] == eta.isoformat()
+
def test_callbacks_errbacks_chord(self):
@self.app.task
| AttributeError: 'str' object has no attribute 'isoformat' with ETA
## Checklist
```
$ pip freeze | grep celery
celery==4.1.0
django-celery-results==1.0.1
```
## Steps to reproduce
Create chord, add eta.
```
from datetime import datetime, timedelta
from celery import chain, group
@shared_task
def add(a, b):
return a + b
@shared_task
def sum(inputs):
return sum(inputs)
@shared_task
def err_handler(*args, **kwargs):
print 'an error occurred'
def stuff():
eta = datetime.utcnow() + timedelta(seconds=10)
return chain(
group(
add.s(1, 2),
add.s(3, 4),
),
sum.s()
).on_error(err_handler.s()).apply_async(eta=eta)
```
## Expected behavior
after ETA, chain begins execution and executes to completion
## Actual behavior
Group begins execution after ETA, but callback throws exception:
```
[2018-02-26 16:29:44,350: INFO/ForkPoolWorker-5] Task tasks.add[4ecc6d45-40cc-40bc-ba79-b7efad956383] succeeded in 0.00307657103986s: 3
[2018-02-26 16:29:44,351: INFO/ForkPoolWorker-7] Task celery.chord_unlock[d17aa171-1b48-49b0-bcfa-bf961c65d0ea] retry: Retry in 1s
[2018-02-26 16:29:44,352: INFO/MainProcess] Received task: celery.chord_unlock[d17aa171-1b48-49b0-bcfa-bf961c65d0ea] ETA:[2018-02-27 00:29:45.350604+00:00]
[2018-02-26 16:29:44,353: INFO/ForkPoolWorker-4] Task tasks.add[62706ef5-97e3-444b-9037-33b64dac866d] succeeded in 0.00494244601578s: 7
[2018-02-26 16:29:45,795: ERROR/ForkPoolWorker-2] Chord 'a5329bed-f970-4969-9177-fd0c79af0be9' raised: AttributeError("'str' object has no attribute 'isoformat'",)
Traceback (most recent call last):
File "/home/steve/env/local/lib/python2.7/site-packages/celery/app/builtins.py", line 91, in unlock_chord
callback.delay(ret)
File "/home/steve/env/local/lib/python2.7/site-packages/celery/canvas.py", line 182, in delay
return self.apply_async(partial_args, partial_kwargs)
File "/home/steve/env/local/lib/python2.7/site-packages/celery/canvas.py", line 221, in apply_async
return _apply(args, kwargs, **options)
File "/home/steve/env/local/lib/python2.7/site-packages/celery/app/task.py", line 536, in apply_async
**options
File "/home/steve/env/local/lib/python2.7/site-packages/celery/app/base.py", line 729, in send_task
root_id, parent_id, shadow, chain,
File "/home/steve/env/local/lib/python2.7/site-packages/celery/app/amqp.py", line 333, in as_task_v2
eta = eta and eta.isoformat()
AttributeError: 'str' object has no attribute 'isoformat'
```
Interesting to note, I am using redis as backend, but the exception is thrown in amqp code.
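For context, a small sketch of the failure mode the patch above guards against: a signature that is re-sent (for example by `chord_unlock`) can already carry an ISO8601 string ETA, so `.isoformat()` must only be called on real datetime objects. This mirrors the guard added in the patch, not the exact celery code:
```python
# sketch mirroring the guard in the patch above
from datetime import datetime

eta = datetime.utcnow().isoformat()   # already serialized on the second pass
if not isinstance(eta, str):
    eta = eta and eta.isoformat()     # only real datetime objects get converted
```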
| 2018-02-28T15:34:36 |
|
celery/celery | 4,611 | celery__celery-4611 | [
"4594"
] | 3b5873cff2c89b7a4bd579e1cfd12ac4c5e530a8 | diff --git a/celery/platforms.py b/celery/platforms.py
--- a/celery/platforms.py
+++ b/celery/platforms.py
@@ -791,6 +791,7 @@ def check_privileges(accept_content):
uid=uid, euid=euid, gid=gid, egid=egid,
), file=sys.stderr)
finally:
+ sys.stderr.flush()
os._exit(1)
warnings.warn(RuntimeWarning(ROOT_DISCOURAGED.format(
uid=uid, euid=euid, gid=gid, egid=egid,
diff --git a/celery/utils/threads.py b/celery/utils/threads.py
--- a/celery/utils/threads.py
+++ b/celery/utils/threads.py
@@ -74,6 +74,7 @@ def run(self):
self.on_crash('{0!r} crashed: {1!r}', self.name, exc)
self._set_stopped()
finally:
+ sys.stderr.flush()
os._exit(1) # exiting by normal means won't work
finally:
self._set_stopped()
diff --git a/celery/utils/timer2.py b/celery/utils/timer2.py
--- a/celery/utils/timer2.py
+++ b/celery/utils/timer2.py
@@ -91,6 +91,7 @@ def run(self):
pass
except Exception as exc:
logger.error('Thread Timer crashed: %r', exc, exc_info=True)
+ sys.stderr.flush()
os._exit(1)
def stop(self):
| Please use `sys.exit()` or flush STDERR if failed on `platform.check_privileges`
https://github.com/celery/celery/blob/a3c377474ab1109a26de5169066a4fae0d30524b/celery/platforms.py#L794:24
The way it is now, the message could be lost because `os._exit()` does not flush STDERR.
Why are we using `os._exit()` instead of the recommended `sys.exit()`? I can provide a PR if asked, but please clarify why the `os._exit()` before the patch is written.
| cc: @ask @sbneto
Can you check the history of the file to see if there is a comment in the commit introducing this function?
The commit that introduced this code says `worker/beat: --uid + --gid now works even without --detach`. I could not find anything related to the use of `os._exit()` specifically. Also, this was the first and only version of the code written about 4 years ago. Not sure where else to look for clues on why this is as it is.
This was by design. See [here](https://stackoverflow.com/a/9591397/920374) and if you notice in the [documentation on sys.exit](https://docs.python.org/3.5/library/sys.html#sys.exit), it raises an Exception which could potentially be caught somewhere in the code. On the other hand, [os._exit](https://docs.python.org/3.5/library/os.html#os._exit) causes an immediate shutdown, I believe using a system call.
Alright 👍. I could advocate for sys.exit(), but not strongly.
However, `sys.stderr` is not being flushed, and this is a real problem. When running on Docker I can only see the thing reboot in a loop, with no sign of why, because the error message is never flushed.
On the dev machine it works. On Docker it cycles, because I forgot that it runs as `root` by default. I had to exec into the container to discover why, by accident.
Could I provide a PR flushing `sys.stderr`?
https://docs.python.org/3.5/library/exceptions.html#SystemExit:
> **exception SystemExit**
> This exception is raised by the sys.exit() function. It inherits from BaseException instead of Exception so that it is not accidentally caught by code that catches Exception.
>...
@alanjds sorry to hear about the issues. I don't see any reason why not to flush output streams before exiting, so please feel free to open a PR.
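For reference, a hedged sketch of the change being discussed: flush the stream before the hard exit. The helper name below is made up; the actual patch simply adds the flush inline before each `os._exit(1)`:
```python
import os
import sys

def hard_exit(status=1):
    # make sure the buffered error message reaches stderr/logs before the
    # immediate, unrecoverable shutdown that os._exit() performs
    sys.stderr.flush()
    os._exit(status)
```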
I had the same issue (celery exiting with error code 1 without printing anything), and it was quite painful to figure out what was happening (basically had to isolate the bug by adding dozens of print() statements in celery to figure out where it was dying, because I had no access to a debugger in that environment). | 2018-03-21T11:02:35 |
|
celery/celery | 4,617 | celery__celery-4617 | [
"4576"
] | 80ffa61fd83fed228a127086a2c13b4bc61fd1d8 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -24,7 +24,7 @@
from celery._state import current_app
from celery.five import python_2_unicode_compatible
from celery.local import try_import
-from celery.result import GroupResult
+from celery.result import GroupResult, allow_join_result
from celery.utils import abstract
from celery.utils.functional import _regen
from celery.utils.functional import chunks as _chunks
@@ -554,7 +554,8 @@ def apply_async(self, args=(), kwargs={}, **options):
# python is best at unpacking kwargs, so .run is here to do that.
app = self.app
if app.conf.task_always_eager:
- return self.apply(args, kwargs, **options)
+ with allow_join_result():
+ return self.apply(args, kwargs, **options)
return self.run(args, kwargs, app=app, **(
dict(self.options, **options) if options else self.options))
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -23,6 +23,13 @@ def add(x, y):
return x + y
+@shared_task
+def chain_add(x, y):
+ (
+ add.s(x, x) | add.s(y)
+ ).apply_async()
+
+
@shared_task
def delayed_sum(numbers, pause_time=1):
"""Sum the iterable of numbers."""
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -59,6 +59,17 @@ def test_chain_inside_group_receives_arguments(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [14, 14]
+ @flaky
+ def test_eager_chain_inside_task(self, manager):
+ from .tasks import chain_add
+
+ prev = chain_add.app.conf.task_always_eager
+ chain_add.app.conf.task_always_eager = True
+
+ chain_add.apply_async(args=(4, 8), throw=True).get()
+
+ chain_add.app.conf.task_always_eager = prev
+
@flaky
def test_group_chord_group_chain(self, manager):
from celery.five import bytes_if_py2
diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -411,6 +411,26 @@ def test_always_eager(self):
self.app.conf.task_always_eager = True
assert ~(self.add.s(4, 4) | self.add.s(8)) == 16
+ def test_chain_always_eager(self):
+ self.app.conf.task_always_eager = True
+ from celery import _state
+ from celery import result
+
+ fixture_task_join_will_block = _state.task_join_will_block
+ try:
+ _state.task_join_will_block = _state.orig_task_join_will_block
+ result.task_join_will_block = _state.orig_task_join_will_block
+
+ @self.app.task(shared=False)
+ def chain_add():
+ return (self.add.s(4, 4) | self.add.s(8)).apply_async()
+
+ r = chain_add.apply_async(throw=True).get()
+ assert r.get() == 16
+ finally:
+ _state.task_join_will_block = fixture_task_join_will_block
+ result.task_join_will_block = fixture_task_join_will_block
+
def test_apply(self):
x = chain(self.add.s(4, 4), self.add.s(8), self.add.s(10))
res = x.apply()
| Eager Application Synchronous Subtask Guard Blocks Canvas
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
```
celery -A minimal_eager_chain report
software -> celery:4.2.0 (latentcall) kombu:4.1.0 py:2.7.13
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled
```
Minimal script to reproduce: https://gist.github.com/npilon/9f3c8469a615081fc8454359945eebd7
## Expected behavior
- No error occurs
## Actual behavior
- We trip the synchronous subtask alarm:
```
>>> test.delay().get()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/npilon/Documents/celery/celery/result.py", line 949, in get
raise self.result
RuntimeError: Never call result.get() within a task!
See http://docs.celeryq.org/en/latest/userguide/tasks.html#task-synchronous-subtasks
```
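For anyone hitting this, a hedged sketch of the idea behind the fix: with `task_always_eager` enabled the chain is applied inside the calling task, which trips the synchronous-subtask guard; the patch above wraps the eager apply in `allow_join_result()` inside `celery.canvas`. The same escape hatch can be used from user code (the `test` task here is the one from the gist):
```python
# sketch of the workaround / the behaviour the patch wraps around eager apply
from celery.result import allow_join_result

with allow_join_result():
    result = test.delay().get()
```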
| 2018-03-22T17:14:08 |
|
celery/celery | 4,690 | celery__celery-4690 | [
"4643"
] | 14c94dadc46686c95aedeb328d341b655b28ecd2 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -569,7 +569,7 @@ def run(self, args=(), kwargs={}, group_id=None, chord=None,
if args and not self.immutable else self.args)
tasks, results = self.prepare_steps(
- args, self.tasks, root_id, parent_id, link_error, app,
+ args, kwargs, self.tasks, root_id, parent_id, link_error, app,
task_id, group_id, chord,
)
@@ -589,12 +589,12 @@ def freeze(self, _id=None, group_id=None, chord=None,
# pylint: disable=redefined-outer-name
# XXX chord is also a class in outer scope.
_, results = self._frozen = self.prepare_steps(
- self.args, self.tasks, root_id, parent_id, None,
+ self.args, self.kwargs, self.tasks, root_id, parent_id, None,
self.app, _id, group_id, chord, clone=False,
)
return results[0]
- def prepare_steps(self, args, tasks,
+ def prepare_steps(self, args, kwargs, tasks,
root_id=None, parent_id=None, link_error=None, app=None,
last_task_id=None, group_id=None, chord_body=None,
clone=True, from_dict=Signature.from_dict):
@@ -632,7 +632,10 @@ def prepare_steps(self, args, tasks,
# first task gets partial args from chain
if clone:
- task = task.clone(args) if is_first_task else task.clone()
+ if is_first_task:
+ task = task.clone(args, kwargs)
+ else:
+ task = task.clone()
elif is_first_task:
task.args = tuple(args) + tuple(task.args)
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -333,7 +333,7 @@ def test_group_to_chord(self):
self.add.s(30)
)
c._use_link = True
- tasks, results = c.prepare_steps((), c.tasks)
+ tasks, results = c.prepare_steps((), {}, c.tasks)
assert tasks[-1].args[0] == 5
assert isinstance(tasks[-2], chord)
@@ -347,7 +347,7 @@ def test_group_to_chord(self):
c2 = self.add.s(2, 2) | group(self.add.s(i, i) for i in range(10))
c2._use_link = True
- tasks2, _ = c2.prepare_steps((), c2.tasks)
+ tasks2, _ = c2.prepare_steps((), {}, c2.tasks)
assert isinstance(tasks2[0], group)
def test_group_to_chord__protocol_2__or(self):
@@ -372,7 +372,7 @@ def test_group_to_chord__protocol_2(self):
c2 = self.add.s(2, 2) | group(self.add.s(i, i) for i in range(10))
c2._use_link = False
- tasks2, _ = c2.prepare_steps((), c2.tasks)
+ tasks2, _ = c2.prepare_steps((), {}, c2.tasks)
assert isinstance(tasks2[0], group)
def test_apply_options(self):
| chain().apply_async(kwargs=dict(name='demo_name', instance_id='fake_id')) doesn't pass kwargs to first task.
## Checklist
celery version: ``4.1.0 (latentcall)``
python version: ``Python 3.6.0``
OS version: ``Darwin iDocker 17.3.0 Darwin Kernel Version 17.3.0: Thu Nov 9 18:09:22 PST 2017; root:xnu-4570.31.3~1/RELEASE_X86_64 x86_64``
## Steps to reproduce
```python
# Two tasks
@celery.task
def get_host_info(name=None, instance_id=None):
host_info = {'name': name, 'instance_id': instance_id,
'cpu_cores': 4}
task_logger.info('Fetching host info ...')
return host_info
@celery.task
def get_db_info_based_on_host(kwargs):
name = kwargs.get('name')
instance_id = kwargs.get('instance_id')
cpu_cores = kwargs.get('cpu_cores')
db_info = {'name': name, 'instance_id': instance_id,
'cpu_cores': cpu_cores, 'db_version': '5.6'}
task_logger.info('Fetching db info ...')
return db_info
# make a chain of two tasks above, and call it with kwargs
ret = chain(get_host_info.s(), get_db_info_based_on_host.s()).apply_async(kwargs=dict(name='demo_name', instance_id='aaabb'))
ret.get()
# And I got the result as below
#{
# "cpu_cores": 4,
# "instance_id": null,
# "db_version": "5.6",
# "name": null
#}
```
Here is the worker's output.
```
[2018-04-05 23:12:30,534: INFO/MainProcess] Received task: cmdb_worker.tasks.demo_tasks.get_host_info[d3b14906-b538-4670-b00e-1bcabb2e8fd4]
[2018-04-05 23:12:30,540: INFO/ForkPoolWorker-2] cmdb_worker.tasks.demo_tasks.get_host_info[d3b14906-b538-4670-b00e-1bcabb2e8fd4]: Fetching host info ...
[2018-04-05 23:12:30,603: INFO/MainProcess] Received task: cmdb_worker.tasks.demo_tasks.get_db_info_based_on_host[f2007c3f-424f-445d-a0b9-38440bb16e67]
[2018-04-05 23:12:30,606: INFO/ForkPoolWorker-3] cmdb_worker.tasks.demo_tasks.get_db_info[f2007c3f-424f-445d-a0b9-38440bb16e67]: Fetching db info ...
[2018-04-05 23:12:30,609: INFO/ForkPoolWorker-2] Task cmdb_worker.tasks.demo_tasks.get_host_info[d3b14906-b538-4670-b00e-1bcabb2e8fd4] succeeded in 0.06943560403306037s: {'name': None, 'instance_id': None, 'cpu_cores': 4}
[2018-04-05 23:12:30,614: INFO/ForkPoolWorker-3] Task cmdb_worker.tasks.demo_tasks.get_db_info_based_on_host[f2007c3f-424f-445d-a0b9-38440bb16e67] succeeded in 0.0084876399487257s: {'name': None, 'instance_id': None, 'cpu_cores': 4, 'db_version': '5.6'}
```
## Expected behavior
When called apply_async with kwargs, chain should pass it to the first task in the chain.
It works when passing kwargs to task.s() directly. But it doesn't seem to be a good way. (Any suggestions ?)
```python
ret = chain(get_host_info.s(dict(name='demo_name', instance_id='aaabb')), get_db_info_based_on_host.s()).apply_async()
```
## Actual behavior
The first task got empty kwargs even though kwargs was passed to chain.apply_async().
Is there a better way to pass kwargs to a task in a chain?
| try celery 4.2rc2
@auvipy Thank you very much. Tried with 4.2rc2 but still not working.
> It works when passing kwargs to task.s() directly. But it doesn't seem to be a good way. (Any suggestions ?)
According to the [documentation](http://docs.celeryproject.org/en/latest/reference/celery.html#celery.chain), that is exactly the correct way.
> When called apply_async with kwargs, chain should pass it to the first task in the chain.
Maybe. Right now it passes only args.
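To summarize, a hedged sketch of both forms, using the task names from the report: the explicit signature kwargs work today, and with the patch above `apply_async(kwargs=...)` is forwarded to the first task as well.
```python
# task definitions assumed from the report above
from celery import chain

# works already: bind the kwargs into the first signature
chain(get_host_info.s(name='demo_name', instance_id='aaabb'),
      get_db_info_based_on_host.s()).apply_async()

# with the fix above, kwargs given to apply_async() reach the first task too
chain(get_host_info.s(),
      get_db_info_based_on_host.s()).apply_async(
          kwargs=dict(name='demo_name', instance_id='aaabb'))
```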
| 2018-04-29T09:59:00 |
celery/celery | 4,696 | celery__celery-4696 | [
"4695"
] | 8c753d7febd3be4e56cb0fda78b0704130a85299 | diff --git a/celery/app/backends.py b/celery/app/backends.py
--- a/celery/app/backends.py
+++ b/celery/app/backends.py
@@ -21,6 +21,7 @@
'rpc': 'celery.backends.rpc.RPCBackend',
'cache': 'celery.backends.cache:CacheBackend',
'redis': 'celery.backends.redis:RedisBackend',
+ 'rediss': 'celery.backends.redis:RedisBackend',
'sentinel': 'celery.backends.redis:SentinelBackend',
'mongodb': 'celery.backends.mongodb:MongoBackend',
'db': 'celery.backends.database:DatabaseBackend',
diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -3,6 +3,7 @@
from __future__ import absolute_import, unicode_literals
from functools import partial
+from ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED
from kombu.utils.functional import retry_over_time
from kombu.utils.objects import cached_property
@@ -20,6 +21,12 @@
from . import async, base
+try:
+ from urllib.parse import unquote
+except ImportError:
+ # Python 2
+ from urlparse import unquote
+
try:
import redis
from kombu.transport.redis import get_redis_error_classes
@@ -44,6 +51,23 @@
sentinel in order to use the Redis result store backend.
"""
+W_REDIS_SSL_CERT_OPTIONAL = """
+Setting ssl_cert_reqs=CERT_OPTIONAL when connecting to redis means that \
+celery might not valdate the identity of the redis broker when connecting. \
+This leaves you vulnerable to man in the middle attacks.
+"""
+
+W_REDIS_SSL_CERT_NONE = """
+Setting ssl_cert_reqs=CERT_NONE when connecting to redis means that celery \
+will not valdate the identity of the redis broker when connecting. This \
+leaves you vulnerable to man in the middle attacks.
+"""
+
+E_REDIS_SSL_CERT_REQS_MISSING = """
+A rediss:// URL must have parameter ssl_cert_reqs be CERT_REQUIRED, \
+CERT_OPTIONAL, or CERT_NONE
+"""
+
E_LOST = 'Connection to Redis lost: Retry (%s/%s) %s.'
logger = get_logger(__name__)
@@ -197,6 +221,26 @@ def _params_from_url(self, url, defaults):
else:
connparams['db'] = path
+ if scheme == 'rediss':
+ connparams['connection_class'] = redis.SSLConnection
+ # The following parameters, if present in the URL, are encoded. We
+ # must add the decoded values to connparams.
+ for ssl_setting in ['ssl_ca_certs', 'ssl_certfile', 'ssl_keyfile']:
+ ssl_val = query.pop(ssl_setting, None)
+ if ssl_val:
+ connparams[ssl_setting] = unquote(ssl_val)
+ ssl_cert_reqs = query.pop('ssl_cert_reqs', 'MISSING')
+ if ssl_cert_reqs == 'CERT_REQUIRED':
+ connparams['ssl_cert_reqs'] = CERT_REQUIRED
+ elif ssl_cert_reqs == 'CERT_OPTIONAL':
+ logger.warn(W_REDIS_SSL_CERT_OPTIONAL)
+ connparams['ssl_cert_reqs'] = CERT_OPTIONAL
+ elif ssl_cert_reqs == 'CERT_NONE':
+ logger.warn(W_REDIS_SSL_CERT_NONE)
+ connparams['ssl_cert_reqs'] = CERT_NONE
+ else:
+ raise ValueError(E_REDIS_SSL_CERT_REQS_MISSING)
+
# db may be string and start with / like in kombu.
db = connparams.get('db') or 0
db = db.strip('/') if isinstance(db, string_t) else db
| diff --git a/t/unit/backends/test_redis.py b/t/unit/backends/test_redis.py
--- a/t/unit/backends/test_redis.py
+++ b/t/unit/backends/test_redis.py
@@ -270,6 +270,74 @@ def test_backend_ssl(self):
from redis.connection import SSLConnection
assert x.connparams['connection_class'] is SSLConnection
+ @skip.unless_module('redis')
+ def test_backend_ssl_url(self):
+ self.app.conf.redis_socket_timeout = 30.0
+ self.app.conf.redis_socket_connect_timeout = 100.0
+ x = self.Backend(
+ 'rediss://:[email protected]:123//1?ssl_cert_reqs=CERT_REQUIRED',
+ app=self.app,
+ )
+ assert x.connparams
+ assert x.connparams['host'] == 'vandelay.com'
+ assert x.connparams['db'] == 1
+ assert x.connparams['port'] == 123
+ assert x.connparams['password'] == 'bosco'
+ assert x.connparams['socket_timeout'] == 30.0
+ assert x.connparams['socket_connect_timeout'] == 100.0
+ assert x.connparams['ssl_cert_reqs'] == ssl.CERT_REQUIRED
+
+ from redis.connection import SSLConnection
+ assert x.connparams['connection_class'] is SSLConnection
+
+ @skip.unless_module('redis')
+ def test_backend_ssl_url_options(self):
+ x = self.Backend(
+ (
+ 'rediss://:[email protected]:123//1?ssl_cert_reqs=CERT_NONE'
+ '&ssl_ca_certs=%2Fvar%2Fssl%2Fmyca.pem'
+ '&ssl_certfile=%2Fvar%2Fssl%2Fredis-server-cert.pem'
+ '&ssl_keyfile=%2Fvar%2Fssl%2Fprivate%2Fworker-key.pem'
+ ),
+ app=self.app,
+ )
+ assert x.connparams
+ assert x.connparams['host'] == 'vandelay.com'
+ assert x.connparams['db'] == 1
+ assert x.connparams['port'] == 123
+ assert x.connparams['password'] == 'bosco'
+ assert x.connparams['ssl_cert_reqs'] == ssl.CERT_NONE
+ assert x.connparams['ssl_ca_certs'] == '/var/ssl/myca.pem'
+ assert x.connparams['ssl_certfile'] == '/var/ssl/redis-server-cert.pem'
+ assert x.connparams['ssl_keyfile'] == '/var/ssl/private/worker-key.pem'
+
+ @skip.unless_module('redis')
+ def test_backend_ssl_url_cert_none(self):
+ x = self.Backend(
+ 'rediss://:[email protected]:123//1?ssl_cert_reqs=CERT_OPTIONAL',
+ app=self.app,
+ )
+ assert x.connparams
+ assert x.connparams['host'] == 'vandelay.com'
+ assert x.connparams['db'] == 1
+ assert x.connparams['port'] == 123
+ assert x.connparams['ssl_cert_reqs'] == ssl.CERT_OPTIONAL
+
+ from redis.connection import SSLConnection
+ assert x.connparams['connection_class'] is SSLConnection
+
+ @skip.unless_module('redis')
+ @pytest.mark.parametrize("uri", [
+ 'rediss://:[email protected]:123//1?ssl_cert_reqs=CERT_KITTY_CATS',
+ 'rediss://:[email protected]:123//1'
+ ])
+ def test_backend_ssl_url_invalid(self, uri):
+ with pytest.raises(ValueError):
+ self.Backend(
+ uri,
+ app=self.app,
+ )
+
def test_compat_propertie(self):
x = self.Backend(
'redis://:[email protected]:123//1', app=self.app,
| Add Redis + TLS connections to result_backend setting
Component issue of #2833. Users should be able to configure a connection to Redis over TLS using a url with protocol [`rediss://`](https://www.iana.org/assignments/uri-schemes/prov/rediss):
`result_backend = 'rediss://...`
| 2018-05-01T04:06:45 |
|
celery/celery | 4,719 | celery__celery-4719 | [
"4699"
] | ea5a5bf3db1aa3092d22e07fe683935f5717d8ec | diff --git a/celery/utils/dispatch/signal.py b/celery/utils/dispatch/signal.py
--- a/celery/utils/dispatch/signal.py
+++ b/celery/utils/dispatch/signal.py
@@ -37,6 +37,28 @@ def _make_id(target): # pragma: no cover
return id(target)
+def _boundmethod_safe_weakref(obj):
+ """Get weakref constructor appropriate for `obj`. `obj` may be a bound method.
+
+ Bound method objects must be special-cased because they're usually garbage
+ collected immediately, even if the instance they're bound to persists.
+
+ Returns:
+ a (weakref constructor, main object) tuple. `weakref constructor` is
+ either :class:`weakref.ref` or :class:`weakref.WeakMethod`. `main
+ object` is the instance that `obj` is bound to if it is a bound method;
+ otherwise `main object` is simply `obj.
+ """
+ try:
+ obj.__func__
+ obj.__self__
+ # Bound method
+ return WeakMethod, obj.__self__
+ except AttributeError:
+ # Not a bound method
+ return weakref.ref, obj
+
+
def _make_lookup_key(receiver, sender, dispatch_uid):
if dispatch_uid:
return (dispatch_uid, _make_id(sender))
@@ -183,8 +205,7 @@ def _connect_signal(self, receiver, sender, weak, dispatch_uid):
lookup_key = _make_lookup_key(receiver, sender, dispatch_uid)
if weak:
- ref = weakref.ref
- receiver_object = receiver
+ ref, receiver_object = _boundmethod_safe_weakref(receiver)
if PY3:
receiver = ref(receiver)
weakref.finalize(receiver_object, self._remove_receiver)
| diff --git a/t/unit/utils/test_dispatcher.py b/t/unit/utils/test_dispatcher.py
--- a/t/unit/utils/test_dispatcher.py
+++ b/t/unit/utils/test_dispatcher.py
@@ -173,3 +173,14 @@ def test_retry_with_dispatch_uid(self):
assert a_signal.receivers[0][0][0] == uid
a_signal.disconnect(receiver_1_arg, sender=self, dispatch_uid=uid)
self._testIsClean(a_signal)
+
+ def test_boundmethod(self):
+ a = Callable()
+ a_signal.connect(a.a, sender=self)
+ expected = [(a.a, 'test')]
+ garbage_collect()
+ result = a_signal.send(sender=self, val='test')
+ assert result == expected
+ del a, result, expected
+ garbage_collect()
+ self._testIsClean(a_signal)
| RC3: django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
## Steps to reproduce
Django: 1.8.19
django-celery-beat: 1.1.1
django-celery-results: 1.0.1
While attempting to see if 4.2 fixes: https://github.com/celery/django-celery-beat/issues/7 (doesn't seem to)
I installed 4.2 RC1, RC2 and then RC3. When attempting to start RC3 in my test env,
`celery -A proj worker -B -E -l info`
it fails to start with the `django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.` error.
## Expected behavior
RC2 starts.
## Actual behavior
`Traceback (most recent call last):
File "/proj.env/bin/celery", line 11, in <module>
sys.exit(main())
File "/proj.env/lib/python2.7/site-packages/celery/__main__.py", line 16, in main
_main()
File "/proj.env/lib/python2.7/site-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
File "/proj.env/lib/python2.7/site-packages/celery/bin/celery.py", line 484, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/proj.env/lib/python2.7/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/proj.env/lib/python2.7/site-packages/celery/bin/celery.py", line 476, in handle_argv
return self.execute(command, argv)
File "/proj.env/lib/python2.7/site-packages/celery/bin/celery.py", line 408, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/proj.env/lib/python2.7/site-packages/celery/bin/worker.py", line 223, in run_from_argv
return self(*args, **options)
File "/proj.env/lib/python2.7/site-packages/celery/bin/base.py", line 238, in __call__
ret = self.run(*args, **kwargs)
File "/proj.env/lib/python2.7/site-packages/celery/bin/worker.py", line 257, in run
**kwargs)
File "/proj.env/lib/python2.7/site-packages/celery/worker/worker.py", line 96, in __init__
self.app.loader.init_worker()
File "/proj.env/lib/python2.7/site-packages/celery/loaders/base.py", line 114, in init_worker
self.import_default_modules()
File "/proj.env/lib/python2.7/site-packages/celery/loaders/base.py", line 108, in import_default_modules
raise response
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.`
| Same exception for env:
Python: 3.6.5
Django: 2.0.5
Without any celery-beat | celery-results.
RC2 works fine.
Same problem. Config is similar to friendka.
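For anyone debugging this, a hedged sketch of the underlying gotcha the patch above addresses: a plain `weakref.ref` to a bound method dies immediately, so signal receivers that are bound methods (such as the Django fixup's handlers) can silently disappear, while `weakref.WeakMethod` keeps them resolvable for as long as their instance lives. The class and method names below are illustrative only, not the real fixup code:
```python
import weakref

class Fixup:
    def on_worker_init(self):
        print('this is where django.setup() would run')

fixup = Fixup()
dead = weakref.ref(fixup.on_worker_init)
print(dead())        # None: the temporary bound-method object is already gone
alive = weakref.WeakMethod(fixup.on_worker_init)
print(alive())       # still resolvable while `fixup` is alive
```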
| 2018-05-08T06:42:20 |
celery/celery | 4,730 | celery__celery-4730 | [
"4498"
] | d178dbbe4f906c8ceab019cacbbc245722df2481 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -779,10 +779,9 @@ class chain(_chain):
def __new__(cls, *tasks, **kwargs):
# This forces `chain(X, Y, Z)` to work the same way as `X | Y | Z`
if not kwargs and tasks:
- if len(tasks) == 1 and is_list(tasks[0]):
- # ensure chain(generator_expression) works.
- tasks = tasks[0]
- return reduce(operator.or_, tasks)
+ if len(tasks) != 1 or is_list(tasks[0]):
+ tasks = tasks[0] if len(tasks) == 1 else tasks
+ return reduce(operator.or_, tasks)
return super(chain, cls).__new__(cls, *tasks, **kwargs)
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -24,6 +24,11 @@ def test_simple_chain(self, manager):
c = add.s(4, 4) | add.s(8) | add.s(16)
assert c().get(timeout=TIMEOUT) == 32
+ @flaky
+ def test_single_chain(self, manager):
+ c = chain(add.s(3, 4))()
+ assert c.get(timeout=TIMEOUT) == 7
+
@flaky
def test_complex_chain(self, manager):
c = (
diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -441,6 +441,11 @@ def test_apply(self):
assert res.parent.parent.get() == 8
assert res.parent.parent.parent is None
+ def test_single_expresion(self):
+ x = chain(self.add.s(1, 2)).apply()
+ assert x.get() == 3
+ assert x.parent is None
+
def test_empty_chain_returns_none(self):
assert chain(app=self.app)() is None
assert chain(app=self.app).apply_async() is None
| Chain with one task doesn't run
## Checklist
```
(celery) ➜ myapp celery -A myapp report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:3.6.1
billiard:3.5.0.3 redis:2.10.6
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://localhost:6379/0
ABSOLUTE_URL_OVERRIDES: {
}
ADMINS: []
ALLOWED_HOSTS: []
APPEND_SLASH: True
AUTHENTICATION_BACKENDS: ['django.contrib.auth.backends.ModelBackend']
AUTH_PASSWORD_VALIDATORS: '********'
AUTH_USER_MODEL: 'auth.User'
BASE_DIR: '/Users/admin/Projects/myapp'
CACHES: {
'default': {'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'}}
CACHE_MIDDLEWARE_ALIAS: 'default'
CACHE_MIDDLEWARE_KEY_PREFIX: '********'
CACHE_MIDDLEWARE_SECONDS: 600
CELERY_BROKER_URL: 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND: 'redis://localhost:6379/0'
CSRF_COOKIE_AGE: 31449600
CSRF_COOKIE_DOMAIN: None
CSRF_COOKIE_HTTPONLY: False
CSRF_COOKIE_NAME: 'csrftoken'
CSRF_COOKIE_PATH: '/'
CSRF_COOKIE_SECURE: False
CSRF_FAILURE_VIEW: 'django.views.csrf.csrf_failure'
CSRF_HEADER_NAME: 'HTTP_X_CSRFTOKEN'
CSRF_TRUSTED_ORIGINS: []
CSRF_USE_SESSIONS: False
DATABASES: {
'default': { 'ENGINE': 'django.db.backends.sqlite3',
'NAME': '/Users/admin/Projects/myapp/db.sqlite3'}}
DATABASE_ROUTERS: '********'
DATA_UPLOAD_MAX_MEMORY_SIZE: 2621440
DATA_UPLOAD_MAX_NUMBER_FIELDS: 1000
DATETIME_FORMAT: 'N j, Y, P'
DATETIME_INPUT_FORMATS: ['%Y-%m-%d %H:%M:%S',
'%Y-%m-%d %H:%M:%S.%f',
'%Y-%m-%d %H:%M',
'%Y-%m-%d',
'%m/%d/%Y %H:%M:%S',
'%m/%d/%Y %H:%M:%S.%f',
'%m/%d/%Y %H:%M',
'%m/%d/%Y',
'%m/%d/%y %H:%M:%S',
'%m/%d/%y %H:%M:%S.%f',
'%m/%d/%y %H:%M',
'%m/%d/%y']
DATE_FORMAT: 'N j, Y'
DATE_INPUT_FORMATS: ['%Y-%m-%d',
'%m/%d/%Y',
'%m/%d/%y',
'%b %d %Y',
'%b %d, %Y',
'%d %b %Y',
'%d %b, %Y',
'%B %d %Y',
'%B %d, %Y',
'%d %B %Y',
'%d %B, %Y']
DEBUG: True
DEBUG_PROPAGATE_EXCEPTIONS: False
DECIMAL_SEPARATOR: '.'
DEFAULT_CHARSET: 'utf-8'
DEFAULT_CONTENT_TYPE: 'text/html'
DEFAULT_EXCEPTION_REPORTER_FILTER: 'django.views.debug.SafeExceptionReporterFilter'
DEFAULT_FILE_STORAGE: 'django.core.files.storage.FileSystemStorage'
DEFAULT_FROM_EMAIL: 'webmaster@localhost'
DEFAULT_INDEX_TABLESPACE: ''
DEFAULT_TABLESPACE: ''
DISALLOWED_USER_AGENTS: []
EMAIL_BACKEND: 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST: 'localhost'
EMAIL_HOST_PASSWORD: '********'
EMAIL_HOST_USER: ''
EMAIL_PORT: 25
EMAIL_SSL_CERTFILE: None
EMAIL_SSL_KEYFILE: '********'
EMAIL_SUBJECT_PREFIX: '[Django] '
EMAIL_TIMEOUT: None
EMAIL_USE_LOCALTIME: False
EMAIL_USE_SSL: False
EMAIL_USE_TLS: False
FILE_CHARSET: 'utf-8'
FILE_UPLOAD_DIRECTORY_PERMISSIONS: None
FILE_UPLOAD_HANDLERS: ['django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler']
FILE_UPLOAD_MAX_MEMORY_SIZE: 2621440
FILE_UPLOAD_PERMISSIONS: None
FILE_UPLOAD_TEMP_DIR: None
FIRST_DAY_OF_WEEK: 0
FIXTURE_DIRS: []
FORCE_SCRIPT_NAME: None
FORMAT_MODULE_PATH: None
FORM_RENDERER: 'django.forms.renderers.DjangoTemplates'
IGNORABLE_404_URLS: []
INSTALLED_APPS: ['django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'cel']
INTERNAL_IPS: []
LANGUAGES: [...]
LANGUAGES_BIDI: ['he', 'ar', 'fa', 'ur']
LANGUAGE_CODE: 'en-us'
LANGUAGE_COOKIE_AGE: None
LANGUAGE_COOKIE_DOMAIN: None
LANGUAGE_COOKIE_NAME: 'django_language'
LANGUAGE_COOKIE_PATH: '/'
LOCALE_PATHS: []
LOGGING: {
}
LOGGING_CONFIG: 'logging.config.dictConfig'
LOGIN_REDIRECT_URL: '/accounts/profile/'
LOGIN_URL: '/accounts/login/'
LOGOUT_REDIRECT_URL: None
MANAGERS: []
MEDIA_ROOT: ''
MEDIA_URL: ''
MESSAGE_STORAGE: 'django.contrib.messages.storage.fallback.FallbackStorage'
MIDDLEWARE: ['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
MIDDLEWARE_CLASSES: ['django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware']
MIGRATION_MODULES: {
}
MONTH_DAY_FORMAT: 'F j'
NUMBER_GROUPING: 0
PASSWORD_HASHERS: '********'
PASSWORD_RESET_TIMEOUT_DAYS: '********'
PREPEND_WWW: False
ROOT_URLCONF: 'myapp.urls'
SECRET_KEY: '********'
SECURE_BROWSER_XSS_FILTER: False
SECURE_CONTENT_TYPE_NOSNIFF: False
SECURE_HSTS_INCLUDE_SUBDOMAINS: False
SECURE_HSTS_PRELOAD: False
SECURE_HSTS_SECONDS: 0
SECURE_PROXY_SSL_HEADER: None
SECURE_REDIRECT_EXEMPT: []
SECURE_SSL_HOST: None
SECURE_SSL_REDIRECT: False
SERVER_EMAIL: 'root@localhost'
SESSION_CACHE_ALIAS: 'default'
SESSION_COOKIE_AGE: 1209600
SESSION_COOKIE_DOMAIN: None
SESSION_COOKIE_HTTPONLY: True
SESSION_COOKIE_NAME: 'sessionid'
SESSION_COOKIE_PATH: '/'
SESSION_COOKIE_SECURE: False
SESSION_ENGINE: 'django.contrib.sessions.backends.db'
SESSION_EXPIRE_AT_BROWSER_CLOSE: False
SESSION_FILE_PATH: None
SESSION_SAVE_EVERY_REQUEST: False
SESSION_SERIALIZER: 'django.contrib.sessions.serializers.JSONSerializer'
SETTINGS_MODULE: 'myapp.settings'
SHORT_DATETIME_FORMAT: 'm/d/Y P'
SHORT_DATE_FORMAT: 'm/d/Y'
SIGNING_BACKEND: 'django.core.signing.TimestampSigner'
SILENCED_SYSTEM_CHECKS: []
STATICFILES_DIRS: []
STATICFILES_FINDERS: ['django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder']
STATICFILES_STORAGE: 'django.contrib.staticfiles.storage.StaticFilesStorage'
STATIC_ROOT: None
STATIC_URL: '/static/'
TEMPLATES: [{'APP_DIRS': True,
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'OPTIONS': {'context_processors': ['django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages']}}]
TEST_NON_SERIALIZED_APPS: []
TEST_RUNNER: 'django.test.runner.DiscoverRunner'
THOUSAND_SEPARATOR: ','
TIME_FORMAT: 'P'
TIME_INPUT_FORMATS: ['%H:%M:%S', '%H:%M:%S.%f', '%H:%M']
TIME_ZONE: 'UTC'
USE_ETAGS: False
USE_I18N: True
USE_L10N: True
USE_THOUSAND_SEPARATOR: False
USE_TZ: True
USE_X_FORWARDED_HOST: False
USE_X_FORWARDED_PORT: False
WSGI_APPLICATION: 'myapp.wsgi.application'
X_FRAME_OPTIONS: 'SAMEORIGIN'
YEAR_MONTH_FORMAT: 'F Y'
is_overridden: <bound method Settings.is_overridden of <Settings "myapp.settings">>
```
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery. (with the master
## Steps to reproduce
I created a github repo for: https://github.com/xbogdan/myapp
Basically if you do ```chain(AddTask().si(1, 2))()``` doesn't work but if you run
``` chain(AddTask().si(1, 2), AddTask().si(1, 2))()``` it works
Same thing for ```.s()```
## Expected behavior
Should work with one task as well ```chain(AddTask().si(1, 2))()```
On the master branch it doesn't work at all. You get this:
```
(celery) ➜ myapp git:(master) ./manage.py test_cmd
Traceback (most recent call last):
File "./manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/Users/admin/Projects/myapp/cel/management/commands/test_cmd.py", line 10, in handle
c = chain(AddTask().si(1, 2), AddTask().si(1, 2))()
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/celery/canvas.py", line 533, in __call__
return self.apply_async(args, kwargs)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/celery/canvas.py", line 559, in apply_async
dict(self.options, **options) if options else self.options))
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/celery/canvas.py", line 586, in run
first_task.apply_async(**options)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/celery/canvas.py", line 221, in apply_async
return _apply(args, kwargs, **options)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/celery/app/task.py", line 532, in apply_async
shadow = shadow or self.shadow_name(args, kwargs, options)
TypeError: shadow_name() missing 1 required positional argument: 'options'
```
## Actual behavior
```
(celery) ➜ myapp git:(master) ./manage.py test_cmd
Traceback (most recent call last):
File "./manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/Users/admin/Projects/myapp/cel/management/commands/test_cmd.py", line 10, in handle
c = chain(AddTask().si(1, 2))()
File "/Users/admin/Envs/celery/lib/python3.6/site-packages/celery/canvas.py", line 178, in __call__
return self.type(*args, **kwargs)
TypeError: object() takes no parameters
```
| @xbogdan - `chain` reduces to ` X | Y | Z` if there are no kwargs, which means that `chain(AddTask().si(1,2))` will return that signature object, not a chain. Notice that you should be able to do `chain(AddTask().si(1,2)).delay()`
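A hedged sketch of that distinction, using a plain `add` task as a stand-in for the tasks in the report:
```python
# sketch: a one-element chain without kwargs currently collapses to the bare
# signature; per the comment above, calling it asynchronously should still work
from celery import chain

sig = chain(add.s(1, 2))   # effectively add.s(1, 2), not a chain instance
res = sig.delay()          # the TypeError in the report comes from calling sig() directly
```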
Hi I think it's not just chains, here's a simpler test case:
```
import celery
@celery.task
def foo():
print("hello, world")
f = foo.apply_async()
```
Output:
```
TypeError Traceback (most recent call last)
<ipython-input-4-686caed5db4b> in <module>()
----> 1 f = foo.apply_async()
/usr/local/lib/python3.5/dist-packages/celery/app/task.py in apply_async(self, args, kwargs, task_id, producer, link, link_error, shadow, **options)
530 args = args if isinstance(args, tuple) else tuple(args or ())
531 args = (self.__self__,) + args
--> 532 shadow = shadow or self.shadow_name(args, kwargs, options)
533
534 preopts = self._get_exec_options()
TypeError: shadow_name() missing 1 required positional argument: 'options'
```
@davmlaw can you try binding the task, just to check if this gives us more insight:
```
import celery
@celery.task(bind=True)
def foo(self):
print("hello, world")
f = foo.apply_async()
``` | 2018-05-12T16:12:23 |
celery/celery | 4,736 | celery__celery-4736 | [
"4735"
] | 28dbb6bd58551e782f17fe81e995176da5951638 | diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -29,6 +29,7 @@
try:
import redis
+ import redis.connection
from kombu.transport.redis import get_redis_error_classes
except ImportError: # pragma: no cover
redis = None # noqa
@@ -249,6 +250,12 @@ def _params_from_url(self, url, defaults):
db = db.strip('/') if isinstance(db, string_t) else db
connparams['db'] = int(db)
+ for key, value in query.items():
+ if key in redis.connection.URL_QUERY_ARGUMENT_PARSERS:
+ query[key] = redis.connection.URL_QUERY_ARGUMENT_PARSERS[key](
+ value
+ )
+
# Query parameters override other parameters
connparams.update(query)
return connparams
| diff --git a/t/unit/backends/test_redis.py b/t/unit/backends/test_redis.py
--- a/t/unit/backends/test_redis.py
+++ b/t/unit/backends/test_redis.py
@@ -234,6 +234,20 @@ def test_url(self):
assert x.connparams['socket_timeout'] == 30.0
assert x.connparams['socket_connect_timeout'] == 100.0
+ def test_timeouts_in_url_coerced(self):
+ x = self.Backend(
+ ('redis://:[email protected]:123//1?'
+ 'socket_timeout=30&socket_connect_timeout=100'),
+ app=self.app,
+ )
+ assert x.connparams
+ assert x.connparams['host'] == 'vandelay.com'
+ assert x.connparams['db'] == 1
+ assert x.connparams['port'] == 123
+ assert x.connparams['password'] == 'bosco'
+ assert x.connparams['socket_timeout'] == 30
+ assert x.connparams['socket_connect_timeout'] == 100
+
def test_socket_url(self):
self.app.conf.redis_socket_timeout = 30.0
self.app.conf.redis_socket_connect_timeout = 100.0
| Redis connection timeouts are not coerced correctly
Found using celery 4.1.0.
## Steps to reproduce
Configure with redis as a backend with timeouts set in the url:
```
result_backend = 'redis://127.0.0.1:6379?socket_timeout=30&socket_connect_timeout=5'
```
## Expected behavior
Timeouts are applied and celery works normally.
## Actual behavior
The socket library raises errors saying that the timeout needs to be a float.
Pull request incoming.
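For context, a hedged sketch of the coercion the patch above adds: the query-string values arrive as strings, and redis-py already ships the right parsers for them.
```python
# sketch mirroring the fix: run string query values through redis-py's parsers
import redis.connection

query = {'socket_timeout': '30', 'socket_connect_timeout': '5'}
for key, value in query.items():
    parser = redis.connection.URL_QUERY_ARGUMENT_PARSERS.get(key)
    if parser:
        query[key] = parser(value)   # now numeric, as the socket layer expects
```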
| 2018-05-15T20:15:33 |
|
celery/celery | 4,744 | celery__celery-4744 | [
"4668"
] | 1423fabe5954aab87db2a3b29db651782ae1dec2 | diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -6,7 +6,7 @@
from kombu.exceptions import EncodeError
from kombu.utils.objects import cached_property
-from kombu.utils.url import maybe_sanitize_url
+from kombu.utils.url import maybe_sanitize_url, urlparse
from celery import states
from celery.exceptions import ImproperlyConfigured
@@ -75,8 +75,7 @@ def __init__(self, app=None, **kwargs):
# update conf with mongo uri data, only if uri was given
if self.url:
- if self.url == 'mongodb://':
- self.url += 'localhost'
+ self.url = self._ensure_mongodb_uri_compliance(self.url)
uri_data = pymongo.uri_parser.parse_uri(self.url)
# build the hosts list to create a mongo connection
@@ -120,6 +119,17 @@ def __init__(self, app=None, **kwargs):
self.options.update(config.pop('options', {}))
self.options.update(config)
+ @staticmethod
+ def _ensure_mongodb_uri_compliance(url):
+ parsed_url = urlparse(url)
+ if not parsed_url.scheme.startswith('mongodb'):
+ url = 'mongodb+{}'.format(url)
+
+ if url == 'mongodb://':
+ url += 'localhost'
+
+ return url
+
def _prepare_client_options(self):
if pymongo.version_tuple >= (3,):
return {'maxPoolSize': self.max_pool_size}
| diff --git a/t/unit/backends/test_mongodb.py b/t/unit/backends/test_mongodb.py
--- a/t/unit/backends/test_mongodb.py
+++ b/t/unit/backends/test_mongodb.py
@@ -121,6 +121,62 @@ def test_init_with_settings(self):
mb = MongoBackend(app=self.app, url='mongodb://')
+ @patch('dns.resolver.query')
+ def test_init_mongodb_dns_seedlist(self, dns_resolver_query):
+ from dns.rdtypes.IN.SRV import SRV
+ from dns.rdtypes.ANY.TXT import TXT
+ from dns.name import Name
+
+ self.app.conf.mongodb_backend_settings = None
+
+ def mock_resolver(_, record_type):
+ if record_type == 'SRV':
+ return [
+ SRV(0, 0, 0, 0, 27017, Name(labels=hostname))
+ for hostname in [
+ b'mongo1.example.com'.split(b'.'),
+ b'mongo2.example.com'.split(b'.'),
+ b'mongo3.example.com'.split(b'.')
+ ]
+ ]
+ elif record_type == 'TXT':
+ return [TXT(0, 0, [b'replicaSet=rs0'])]
+
+ dns_resolver_query.side_effect = mock_resolver
+
+ # uri with user, password, database name, replica set,
+ # DNS seedlist format
+ uri = ('srv://'
+ 'celeryuser:celerypassword@'
+ 'dns-seedlist-host.example.com/'
+ 'celerydatabase')
+
+ mb = MongoBackend(app=self.app, url=uri)
+ assert mb.mongo_host == [
+ 'mongo1.example.com:27017',
+ 'mongo2.example.com:27017',
+ 'mongo3.example.com:27017',
+ ]
+ assert mb.options == dict(
+ mb._prepare_client_options(),
+ replicaset='rs0',
+ ssl=True
+ )
+ assert mb.user == 'celeryuser'
+ assert mb.password == 'celerypassword'
+ assert mb.database_name == 'celerydatabase'
+
+ def test_ensure_mongodb_uri_compliance(self):
+ mb = MongoBackend(app=self.app, url=None)
+ compliant_uri = mb._ensure_mongodb_uri_compliance
+
+ assert compliant_uri('mongodb://') == 'mongodb://localhost'
+
+ assert compliant_uri('mongodb+something://host') == \
+ 'mongodb+something://host'
+
+ assert compliant_uri('something://host') == 'mongodb+something://host'
+
@pytest.mark.usefixtures('depends_on_current_app')
def test_reduce(self):
x = MongoBackend(app=self.app)
| MongoDB backend does not support mongodb+srv:// URL's
## Checklist
https://github.com/celery/celery/blob/master/celery/backends/mongodb.py#L143-L146
## Steps to reproduce
Set the MongoDB URL to one like:
```mongodb+srv://mongo.private.corp.example.com/celery?ssl=false```
## Expected behavior
This works.
## Actual behavior
This fails because the URL parsing only matches `mongodb://`, not `mongodb+srv://`.
| Could you please elaborate on the proposed URL format? Some references would be nice as well.
@georgepsarakis it's documented here: https://docs.mongodb.com/manual/reference/connection-string/#dns-seedlist-connection-format | 2018-05-19T14:52:01 |
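For illustration, a hedged example of how a DNS-seedlist backend URL could be configured with this change; the credentials and host are the placeholders used in the tests above, and per the patch a non-`mongodb` scheme gets `mongodb+` prepended internally:
```python
# placeholder credentials/host; rewritten internally to mongodb+srv://...
result_backend = (
    'srv://celeryuser:celerypassword@'
    'dns-seedlist-host.example.com/celerydatabase'
)
```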
celery/celery | 4,779 | celery__celery-4779 | [
"4412"
] | 38673412f3ea2781cb96166255fedfeddecb66d8 | diff --git a/celery/app/base.py b/celery/app/base.py
--- a/celery/app/base.py
+++ b/celery/app/base.py
@@ -170,7 +170,7 @@ class Celery(object):
fixups (List[str]): List of fix-up plug-ins (e.g., see
:mod:`celery.fixups.django`).
config_source (Union[str, type]): Take configuration from a class,
- or object. Attributes may include any setings described in
+ or object. Attributes may include any settings described in
the documentation.
"""
| Request on_timeout should ignore soft time limit exception
When Request.on_timeout receive a soft timeout from billiard, it does the same as if it was receiving a hard time limit exception. This is ran by the controller.
But the task may catch this exception and eg. return (this is what soft timeout are for).
This cause:
1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task
2. the task status to be passed to failure and to success on the same manner
3. if the task is participating to a chord, the chord result counter (at least with redis) is incremented twice (instead of once), making the chord to return prematurely and eventually loose tasks…
1, 2 and 3 can leads of course to strange race conditions…
## Steps to reproduce (Illustration)
with the program in test_timeout.py:
```python
import time
import celery
app = celery.Celery('test_timeout')
app.conf.update(
result_backend="redis://localhost/0",
broker_url="amqp://celery:celery@localhost:5672/host",
)
@app.task(soft_time_limit=1)
def test():
try:
time.sleep(2)
except Exception:
return 1
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())
result.get()
```
start a worker and the program:
```
$ celery -A test_timeout worker -l WARNING
$ python3 test_timeout.py
```
## Expected behavior
add method is called with `[1, 1]` as argument and test_timeout.py return normally
## Actual behavior
The test_timeout.py fails, with
```
celery.backends.base.ChordError: Callback error: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",
```
On the worker side, the **on_error is called but the add method as well !**
```
[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]
[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]
[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in <listcomp>
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 243, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)
[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding
[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]
```
Of course, I chose on purpose to call test.s() twice, to show that the count in the chord keeps going. In fact:
- the chord result is incremented twice by the soft time limit error
- the chord result is incremented twice again by the correct return of the `test` task
## Conclusion
Request.on_timeout should not process soft time limit exceptions.
Here is a quick monkey patch (the correction in celery itself is trivial):
```python
def patch_celery_request_on_timeout():
from celery.worker import request
orig = request.Request.on_timeout
def patched_on_timeout(self, soft, timeout):
if not soft:
orig(self, soft, timeout)
request.Request.on_timeout = patched_on_timeout
patch_celery_request_on_timeout()
```
## version info
software -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://10.0.3.253/0
| @ask I think this is quite a big problem (with a trivial fix).
It requires attention though, as it introduces a new behaviour (but the previous behaviour is not well documented and, in my opinion, the new behaviour is the one that was expected).
This change in behaviour is what kept my team from upgrading to celery 4. Indeed, the chord callback was often not called at all.
I don't know if it is related, but I modified your code sample and it resulted in some `Exception raised outside body` errors and multiple other errors if you try running `python test_timeout.py` multiple times.
Here is my script:
```python
import time
import celery
app = celery.Celery(
'test_timeout',
broker='amqp://localhost',
backend='redis://localhost')
@app.task(soft_time_limit=1)
def test(nb_seconds):
try:
time.sleep(nb_seconds)
return nb_seconds
except:
print("### error handled")
return nb_seconds
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([
test.s(i).set(link_error=on_error.s()) for i in range(0, 2)
])(add.s())
result.get()
```
NB: if you change the range to range(2, 4), for instance, the `Exception raised outside body` error does not seem to happen. It seems this particular issue happens when the `SoftTimeLimitExceeded` is raised exactly during the `return`.
Could you please send a PR with your proposed fix/workaround against the master branch?
Hi @auvipy, I won't have the time until January. I'll also need help with how to write the tests (that's the reason why I didn't propose a PR).
OK. Don't be afraid of sending logical changes just because you don't know how to write tests. We will certainly try to help you.
thanks a lot! | 2018-05-29T13:12:44 |
|
celery/celery | 4,836 | celery__celery-4836 | [
"4835"
] | 7d9300b3b94399eafb5e40a08a0cdc8b05f896aa | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -21,6 +21,7 @@
from kombu.utils.encoding import bytes_to_str, ensure_bytes, from_utf8
from kombu.utils.url import maybe_sanitize_url
+import celery.exceptions
from celery import current_app, group, maybe_signature, states
from celery._state import get_current_task
from celery.exceptions import (ChordError, ImproperlyConfigured,
@@ -249,7 +250,11 @@ def exception_to_python(self, exc):
else:
exc_module = from_utf8(exc_module)
exc_type = from_utf8(exc['exc_type'])
- cls = getattr(sys.modules[exc_module], exc_type)
+ try:
+ cls = getattr(sys.modules[exc_module], exc_type)
+ except KeyError:
+ cls = create_exception_cls(exc_type,
+ celery.exceptions.__name__)
exc_msg = exc['exc_message']
exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
if self.serializer in EXCEPTION_ABLE_CODECS:
| Result: Deserializing exceptions for unknown classes causes KeyError
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
1. Write a task that throws an exception from a custom module.
2. Run the task.
3. Get the task `result.state` from an interpreter that does not have the custom module installed.
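A minimal sketch of these steps, assuming a hypothetical `mypkg.errors` module that is importable where the worker runs but not where the result is read (all names, URLs and the task body are illustrative, not taken from the original report):
```python
# worker side -- mypkg is importable here
from celery import Celery
from mypkg.errors import CustomError        # any exception class from a custom module

app = Celery('repro', broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/1')
app.conf.update(result_serializer='json', accept_content=['json'])

@app.task
def fail():
    # the backend stores {'exc_type': 'CustomError', 'exc_module': 'mypkg.errors', ...}
    raise CustomError('boom')
```
```python
# reading side -- mypkg is NOT importable here (e.g. flower, or a bare client)
from celery import Celery
from celery.result import AsyncResult

client = Celery('repro', backend='redis://localhost:6379/1')
task_id = '...'                              # id of the failed task above
AsyncResult(task_id, app=client).state       # KeyError: 'mypkg.errors' on affected versions
```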
## Expected behavior
Task result should be returned.
## Actual behavior
KeyError on deserializing the Exception.
My original traceback is from flower:
```
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: Traceback (most recent call last):
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/tornado/web.py", line 1541, in _execute
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: result = method(*self.path_args, **self.path_kwargs)
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/tornado/web.py", line 2949, in wrapper
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: return method(self, *args, **kwargs)
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/flower/api/tasks.py", line 314, in get
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: response = {'task-id': taskid, 'state': result.state}
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/result.py", line 471, in state
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: return self._get_task_meta()['status']
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/result.py", line 410, in _get_task_meta
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: return self._maybe_set_cache(self.backend.get_task_meta(self.id))
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 359, in get_task_meta
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: meta = self._get_task_meta_for(task_id)
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 674, in _get_task_meta_for
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: return self.decode_result(meta)
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 278, in decode_result
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: return self.meta_from_decoded(self.decode(payload))
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 274, in meta_from_decoded
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: meta['result'] = self.exception_to_python(meta['result'])
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 252, in exception_to_python
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: cls = getattr(sys.modules[exc_module], exc_type)
Jun 21 01:23:24 netdocker1-eastus2 daemon INFO 94e57cb12059[92630]: KeyError: 'net_devices2.exceptions'
```
```
celery -A net_task report
software -> celery:4.2.0rc2 (windowlicker) kombu:4.1.0 py:3.6.3
billiard:3.5.0.3 redis:2.10.6
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:://_task.extendedredisbackend%2Bredis//:@localhost:6379/1
ENV_FILE_PATH: '/etc/celery/conf.d/celery'
REQUIRED_VARS: ['CELERY_BROKER_HOST',
'CELERY_RESULT_BACKEND_HOST']
RFC5424Formatter: <class 'syslog_rfc5424_formatter.RFC5424Formatter'>
accept_content: ['json']
after_setup_logger: <Signal: after_setup_logger providing_args={'logfile', 'logger', 'format', 'colorize', 'loglevel'}>
after_setup_task_logger: <Signal: after_setup_task_logger providing_args={'logfile', 'logger', 'format', 'colorize', 'loglevel'}>
broker_url: 'redis://localhost:6379/0'
dump: ('/usr/bin/python -c "import os, json;print '
'json.dumps(dict(os.environ))"')
env: {
##redacted
}
pipe: <subprocess.Popen object at 0x7f1bd1dd8ac8>
result_backend: '://_task.extendedredisbackend%2Bredis//:@localhost:6379/1'
result_compression: 'gzip'
result_serializer: 'json'
setup_logging: <function setup_logging at 0x7f1bcff64400>
source: 'source /etc/celery/conf.d/celery'
sp: <module 'subprocess' from '/usr/lib/python3.6/subprocess.py'>
sys: <module 'sys' (built-in)>
task_always_eager: False
task_default_queue: 'default'
task_send_sent_event: True
task_serializer: 'json'
urllib: <module 'urllib' from '/usr/lib/python3.6/urllib/__init__.py'>
urlsafe_password: '********'
worker_log_color: False
worker_redirect_stdouts: False
worker_send_task_events: True
```
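For reference, the patch at the top of this record handles the missing module by falling back to a generated exception class instead of raising `KeyError` (sketch of the relevant lines):
```python
# celery/backends/base.py, exception_to_python (as in the patch above)
try:
    cls = getattr(sys.modules[exc_module], exc_type)
except KeyError:
    # the module is not importable on this side: substitute a dynamically
    # created class registered under the celery.exceptions module name
    cls = create_exception_cls(exc_type, celery.exceptions.__name__)
```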
| Looks like this issue was introduced here:
https://github.com/celery/celery/commit/bd347f7565f3a72c8cfb686ea0bfe38cfa76e09b | 2018-06-21T19:00:31 |
|
celery/celery | 4,864 | celery__celery-4864 | [
"4860"
] | fa0e35b5687fd5ad2b6927b019c364bf5f148f4d | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -33,6 +33,7 @@
from celery.utils.functional import LRUCache, arity_greater
from celery.utils.log import get_logger
from celery.utils.serialization import (create_exception_cls,
+ ensure_serializable,
get_pickleable_exception,
get_pickled_exception)
@@ -236,7 +237,7 @@ def prepare_exception(self, exc, serializer=None):
if serializer in EXCEPTION_ABLE_CODECS:
return get_pickleable_exception(exc)
return {'exc_type': type(exc).__name__,
- 'exc_message': exc.args,
+ 'exc_message': ensure_serializable(exc.args, self.encode),
'exc_module': type(exc).__module__}
def exception_to_python(self, exc):
diff --git a/celery/utils/serialization.py b/celery/utils/serialization.py
--- a/celery/utils/serialization.py
+++ b/celery/utils/serialization.py
@@ -56,6 +56,8 @@ def find_pickleable_exception(exc, loads=pickle.loads,
Arguments:
exc (BaseException): An exception instance.
+ loads: decoder to use.
+ dumps: encoder to use
Returns:
Exception: Nearest pickleable parent exception class
@@ -84,6 +86,26 @@ def create_exception_cls(name, module, parent=None):
return subclass_exception(name, parent, module)
+def ensure_serializable(items, encoder):
+ """Ensure items will serialize.
+
+ For a given list of arbitrary objects, return the object
+ or a string representation, safe for serialization.
+
+ Arguments:
+ items (Iterable[Any]): Objects to serialize.
+ encoder (Callable): Callable function to serialize with.
+ """
+ safe_exc_args = []
+ for arg in items:
+ try:
+ encoder(arg)
+ safe_exc_args.append(arg)
+ except Exception: # pylint: disable=broad-except
+ safe_exc_args.append(safe_repr(arg))
+ return tuple(safe_exc_args)
+
+
@python_2_unicode_compatible
class UnpickleableExceptionWrapper(Exception):
"""Wraps unpickleable exceptions.
@@ -116,13 +138,7 @@ class UnpickleableExceptionWrapper(Exception):
exc_args = None
def __init__(self, exc_module, exc_cls_name, exc_args, text=None):
- safe_exc_args = []
- for arg in exc_args:
- try:
- pickle.dumps(arg)
- safe_exc_args.append(arg)
- except Exception: # pylint: disable=broad-except
- safe_exc_args.append(safe_repr(arg))
+ safe_exc_args = ensure_serializable(exc_args, pickle.dumps)
self.exc_module = exc_module
self.exc_cls_name = exc_cls_name
self.exc_args = safe_exc_args
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -145,6 +145,17 @@ def test_unpickleable(self):
y = self.b.exception_to_python(x)
assert isinstance(y, KeyError)
+ def test_json_exception_arguments(self):
+ self.b.serializer = 'json'
+ x = self.b.prepare_exception(Exception(object))
+ assert x == {
+ 'exc_message': serialization.ensure_serializable(
+ (object,), self.b.encode),
+ 'exc_type': Exception.__name__,
+ 'exc_module': Exception.__module__}
+ y = self.b.exception_to_python(x)
+ assert isinstance(y, Exception)
+
def test_impossible(self):
self.b.serializer = 'pickle'
x = self.b.prepare_exception(Impossible())
diff --git a/t/unit/utils/test_serialization.py b/t/unit/utils/test_serialization.py
--- a/t/unit/utils/test_serialization.py
+++ b/t/unit/utils/test_serialization.py
@@ -1,14 +1,17 @@
from __future__ import absolute_import, unicode_literals
+import json
+import pickle
import sys
from datetime import date, datetime, time, timedelta
import pytest
import pytz
-from case import Mock, mock
+from case import Mock, mock, skip
from kombu import Queue
from celery.utils.serialization import (UnpickleableExceptionWrapper,
+ ensure_serializable,
get_pickleable_etype, jsonify)
@@ -25,6 +28,23 @@ def test_no_cpickle(self):
sys.modules['celery.utils.serialization'] = prev
+class test_ensure_serializable:
+
+ @skip.unless_python3()
+ def test_json_py3(self):
+ assert (1, "<class 'object'>") == \
+ ensure_serializable([1, object], encoder=json.dumps)
+
+ @skip.if_python3()
+ def test_json_py2(self):
+ assert (1, "<type 'object'>") == \
+ ensure_serializable([1, object], encoder=json.dumps)
+
+ def test_pickle(self):
+ assert (1, object) == \
+ ensure_serializable((1, object), encoder=pickle.dumps)
+
+
class test_UnpickleExceptionWrapper:
def test_init(self):
| Exception Serialization error? Tasks raising exceptions stuck in "PENDING" state
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
Still working on a standalone repro, but what we see in production is this traceback from a task making requests module POSTs:
```
Client caught exception in _post(). Unable to retry.
Traceback (most recent call last):
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
(self._dns_host, self.port), sel
f.timeout, **extra_kw)
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/util/connection.py", line 79, in create_connection
raise err
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/util/connection.py", line 69, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the
above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/va
r/lib/celery/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 849, in _validate_conn
conn.connect()
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/connection.py", line 314, in connect
conn = self._new_conn()
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
self, "F
ailed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7f695118a4a8>: Failed to establish a new connection: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/celery/venv/lib
/python3.6/site-packages/requests/adapters.py", line 445, in send
timeout=timeout
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/var/lib/celery/venv/lib/python3.6/site-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url,
error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='sjc1.solvedirect.com', port=443): Max retries exceeded with url: /ws/rest/oauth/token?grant_type=client_credentials (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f695118a4a8>: Failed to establish a new connection: [Errno 110] Connection timed out
',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/celery/venv/lib/python3.6/site-packages/cisco_tac/ciscotac.py", line 67, in _make_post_request
result = requests.request('POST', url, json=payload, headers=header, verify=True, auth=auth)
File "/var/lib/celery/venv/lib/python3.6
/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/var/lib/celery/venv/lib/python3.6/site-packages/requests/sessions.py", line 512, in request
resp = self.send(prep, **send_kwargs)
File "/var/lib/celery/venv/lib/python3.6/site-packages/requests/sessions.py", line 622, in send
r = adap
ter.send(request, **kwargs)
File "/var/lib/celery/venv/lib/python3.6/site-packages/requests/adapters.py", line 513, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='sjc1.solvedirect.com', port=443): Max retries exceeded with url: /ws/rest/oauth/token?grant_type=client_credentials (Caused by NewConnecti
onError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f695118a4a8>: Failed to establish a new connection: [Errno 110] Connection timed out',))
```
## Expected behavior
The exception should fail the task, and bubble up in the result.traceback.
## Actual behavior
No event is fired, and no tombstone record is recorded. The task is stuck in the 'PENDING' state when queried via the Python library, and in the 'STARTED' state in flower (since it got a started event).
| (net-task) johnar@netdev1-westus2:~/scripts-tools/project/NetTask$ celery -A net_task report
software -> celery:4.2.0rc2 (windowlicker) kombu:4.1.0 py:3.6.3
billiard:3.5.0.3 redis:2.10.6
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:://_task.extendedredisbackend%2Bredis//:@localhost:6379/1
ENV_FILE_PATH: '/etc/celery/conf.d/celery'
REQUIRED_VARS: ['CELERY_BROKER_HOST',
'CELERY_RESULT_BACKEND_HOST']
RFC5424TaskFormatter: <class 'net_task.log.RFC5424TaskFormatter'>
accept_content: ['json']
after_setup_logger: <Signal: after_setup_logger providing_args={'format', 'loglevel', 'logfile', 'colorize', 'logger'}>
after_setup_task_logger: <Signal: after_setup_task_logger providing_args={'format', 'loglevel', 'logfile', 'colorize', 'logger'}>
broker_url: 'redis://localhost:6379/0'
dump: ('/usr/bin/python -c "import os, json;print '
'json.dumps(dict(os.environ))"')
...
pipe: <subprocess.Popen object at 0x7fe118351ba8>
result_backend: '://_task.extendedredisbackend%2Bredis//:@localhost:6379/1'
result_compression: 'gzip'
result_serializer: 'json'
setup_logging: <function setup_logging at 0x7fe1151438c8>
source: 'source /etc/celery/conf.d/celery'
sp: <module 'subprocess' from '/usr/lib/python3.6/subprocess.py'>
sys: <module 'sys' (built-in)>
task_always_eager: False
task_default_queue: 'default'
task_send_sent_event: True
task_serializer: 'json'
urllib: <module 'urllib' from '/usr/lib/python3.6/urllib/__init__.py'>
urlsafe_password: '********'
worker_hijack_root_logger: True
worker_log_color: False
worker_redirect_stdouts: False
worker_send_task_events: True
This looks like an exception serialization error. Trying to repro the problem, we caught this nasty traceback on the worker:
https://pastebin.com/F4m8gzie
Possibly related -- kombu catches all connection errors defined by the provider (redis in my case):
https://github.com/celery/kombu/issues/802
It seems like tasks should not be able to bubble up exceptions which will break kombu....
Actually, the original issue may be different from those kombu issues... the original exception seems to get stuck in the worker, not make it out to kombu.
Looking into the worker's serialization prechecks -- they seem to test whether the exception is pickleable, but not whether it's JSON-able. | 2018-06-29T00:24:33 |
celery/celery | 4,870 | celery__celery-4870 | [
"4449"
] | aa12474a5fcfb4ff3a155ccb8ac6d3f1b019a301 | diff --git a/celery/backends/couchbase.py b/celery/backends/couchbase.py
--- a/celery/backends/couchbase.py
+++ b/celery/backends/couchbase.py
@@ -19,6 +19,7 @@
from couchbase import Couchbase
from couchbase.connection import Connection
from couchbase.exceptions import NotFoundError
+ from couchbase import FMT_AUTO
except ImportError:
Couchbase = Connection = NotFoundError = None # noqa
@@ -106,7 +107,7 @@ def get(self, key):
return None
def set(self, key, value):
- self.connection.set(key, value, ttl=self.expires)
+ self.connection.set(key, value, ttl=self.expires, format=FMT_AUTO)
def mget(self, keys):
return [self.get(key) for key in keys]
| Unable to save pickled objects with couchbase as result backend
Hi, it seems that when I attempt to process groups of chords, the couchbase result backend consistently fails to unlock the chord when reading from the db:
`celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()`
This behavior does not occur with the redis result backend; I can switch between them and see that the unlock error only occurs with couchbase.
## Steps to reproduce
Attempt to process a chord with couchbase backend using pickle serialization.
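A minimal sketch of such a setup, under a configuration like the one in the report (bucket name, credentials and task bodies are placeholders):
```python
from celery import Celery, chord

app = Celery('repro', broker='pyamqp://guest@localhost//',
             backend='couchbase://user:pass@localhost:8091/tasks')
app.conf.update(task_serializer='pickle',
                result_serializer='pickle',
                accept_content=['pickle'])

@app.task
def part(x):
    return x

@app.task
def collect(results):
    return sum(results)

# the header results are written to couchbase as pickled payloads, which the
# backend then fails to read back while the chord_unlock task polls for them
chord([part.s(1), part.s(2)])(collect.s())
```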
## Expected behavior
Chords process correctly, and resulting data is fed to the next task
## Actual behavior
Celery is unable to unlock the chord from the result backend
## Celery project info:
```
celery -A ipaassteprunner report
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.10
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:couchbase://isadmin:**@localhost:8091/tasks
task_serializer: 'pickle'
result_serializer: 'pickle'
dbconfig: <ipaascommon.ipaas_config.DatabaseConfig object at 0x10fbbfe10>
db_pass: u'********'
IpaasConfig: <class 'ipaascommon.ipaas_config.IpaasConfig'>
imports:
('ipaassteprunner.tasks',)
worker_redirect_stdouts: False
DatabaseConfig: u'********'
db_port: '8091'
ipaas_constants: <module 'ipaascommon.ipaas_constants' from '/Library/Python/2.7/site-packages/ipaascommon/ipaas_constants.pyc'>
enable_utc: True
db_user: 'isadmin'
db_host: 'localhost'
result_backend: u'couchbase://isadmin:********@localhost:8091/tasks'
result_expires: 3600
iconfig: <ipaascommon.ipaas_config.IpaasConfig object at 0x10fbbfd90>
broker_url: u'amqp://guest:********@localhost:5672//'
task_bucket: 'tasks'
accept_content: ['pickle']
```
### Additional Debug output
```
[2017-12-13 15:39:57,860: INFO/MainProcess] Received task: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] ETA:[2017-12-13 20:39:58.853535+00:00]
[2017-12-13 15:39:57,861: DEBUG/MainProcess] basic.qos: prefetch_count->27
[2017-12-13 15:39:58,859: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x10b410b90> (args:('celery.chord_unlock', 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', {'origin': 'gen53678@silo2460', 'lang': 'py', 'task': 'celery.chord_unlock', 'group': None, 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', u'delivery_info': {u'priority': None, u'redelivered': False, u'routing_key': u'celery', u'exchange': u''}, 'expires': None, u'correlation_id': 'e3139ae5-a67d-4f0c-8c54-73b1e19433d2', 'retries': 311, 'timelimit': [None, None], 'argsrepr': "('90c64bef-21ba-42f9-be75-fdd724375a7a', {'chord_size': 2, 'task': 'ipaassteprunner.tasks.transfer_data', 'subtask_type': None, 'kwargs': {}, 'args': (), 'options': {'chord_size': None, 'chain': [...], 'task_id': '9c6b5e1c-2089-4db7-9590-117aeaf782c7', 'root_id': '0acd3e0d-7532-445c-8916-b5fc8a6395ab', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', 'reply_to': '0a58093c-6fdd-3458-9a34-7d5e094ac6a8'}, 'immutable': False})", 'eta': '2017-12-13T20:39:58.853535+00:00', 'parent_id': 'c27c9565-19a6-4683-8180-60f0c25007e9', u'reply_to':... kwargs:{})
[2017-12-13 15:40:00,061: DEBUG/MainProcess] basic.qos: prefetch_count->26
[2017-12-13 15:40:00,065: DEBUG/MainProcess] Task accepted: celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] pid:53679
[2017-12-13 15:40:00,076: INFO/ForkPoolWorker-6] Task celery.chord_unlock[e3139ae5-a67d-4f0c-8c54-73b1e19433d2] retry: Retry in 1s: ValueFormatError()
```
### Stack trace from chord unlocking failure
```python
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 374, in trace_task
R = retval = fun(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/trace.py", line 629, in __protected_call__
return self.run(*args, **kwargs)
File "/Library/Python/2.7/site-packages/celery/app/builtins.py", line 75, in unlock_chord
raise self.retry(countdown=interval, max_retries=max_retries)
File "/Library/Python/2.7/site-packages/celery/app/task.py", line 689, in retry
raise ret
Retry: Retry in 1s
```
| Some more info after digging into the library a little bit: The exception message that occurs (for some reason it's not written out to console) looks like this:
`<Couldn't decode as UTF-8, Results=1, inner_cause='utf8' codec can't decode byte 0x80 in position 1: invalid start byte, C Source=(src/convert.c,65)>`
Any thoughts as to a possible workaround, or root cause?
Finally had some time to go back and figure out what was wrong here (as opposed to just using Redis). For anyone else experiencing trouble with couchbase as a result backend:
It seems the "cant decode byte" error comes about when trying to use the pickle serialization only, as the pickle serialized strings do not conform to the data structure that couchbase is expecting.
Once I switched to json serialization, the error went away, but of course I encountered the issue of my complex python objects not being json serializable.
As a workaround to enable the use of pickled objects in the couchbase result_backend, I wrote a custom encoder that first pickles my objects to strings, then encodes the resulting string in json, decoding works in the opposite way:
```python
import json, pickle
from kombu.serialization import register
def dumps(obj):
obj_serialized = pickle.dumps(obj)
return json.dumps(obj_serialized)
def loads(obj):
obj_pickled = json.loads(obj)
obj_deserialized = pickle.loads(obj_pickled)
return obj_deserialized
register('cb_pickle', dumps, loads, content_type='application/x-cbpickle',
content_encoding='utf-8')
result_serializer = 'cb_pickle'
accept_content = ['cb_pickle']
```
Using those methods for encoding/decoding, couchbase is able to store all my complex python objects and is a functional result backend.
@dfresh613 nice work in locating the root cause! Do you think that `base64` encoding would be equivalent? I think it may be faster, and is widely used to avoid encoding issues.
I wonder what is the best approach here. I don't think that it is possible to use a different serializer just for this backend specifically, at least not in a straightforward way.
We could change the Pickle serializer functions (see [here](https://github.com/celery/kombu/blob/a600ab87d9c32d23396f1171486541ce0b6d937d/kombu/serialization.py#L340) and [here](https://github.com/celery/kombu/blob/a600ab87d9c32d23396f1171486541ce0b6d937d/kombu/serialization.py#L349)), but it would have to be done in a backwards compatible manner, probably something like:
```python
import pickle
import base64
def dumps(obj):
return base64.b64encode(pickle.dumps(obj))
def loads(obj):
try:
obj = base64.b64decode(obj)
except TypeError:
pass
return pickle.loads(obj)
```
I found a benchmark [here](https://github.com/non/benchmarks-1) stating that base64 is indeed faster compared to JSON (builtin).
Thanks @georgepsarakis for the suggestion. I took your advice and am now base64 encoding the results, and everything is still working.
I'd be more than happy to change the serializer functions if that's what should be done. Though, since this doesn't seem to be an issue with many other result backends, do we really want to do the base64 encoding by default, adding potential overhead to every result encoding?
What about a separate celery built-in serialization type like "safe_pickle" that users can choose to implement if they need for situations like this?
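For illustration, registering such a serializer from user code could look roughly like this (`safe_pickle` is only the hypothetical name floated above, not an existing celery serializer):
```python
import base64
import pickle

from kombu.serialization import register

def dumps(obj):
    # pickle first, then base64, so the payload survives backends that expect text
    return base64.b64encode(pickle.dumps(obj))

def loads(data):
    return pickle.loads(base64.b64decode(data))

register('safe_pickle', dumps, loads,
         content_type='application/x-safe-pickle',
         content_encoding='binary')

# and in the celery config:
# result_serializer = 'safe_pickle'
# accept_content = ['safe_pickle']
```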
Read through a bit more of the couchbase documentation, and it seems like this may best be fixed in the couchbase backend code itself, instead of a serializer change.
It seems couchbase allows you to specify the formatting with the fmt kwarg. Its default formatting seems to be json, which explains why I was encountering the problem (although celery was encoding in pickle, couchbase was still trying to save it as a json string by default).
[Here](http://docs.couchbase.com/sdk-api/couchbase-python-client-1.2.3/api/couchbase.html#couchbase.connection.Connection.default_format) is the link to the couchbase Connection API which talks about the default formatting.
[Here](https://github.com/celery/celery/blob/master/celery/backends/couchbase.py#L106) is where celery is currently saving the result backend with no formatting specified (defaults to json)
The other formatting options available to use (including pickle) are available [Here](http://docs.couchbase.com/sdk-api/couchbase-python-client-1.2.3/api/couchbase.html#format-info)
I think the best fix here is to ~~change the celery backend code such that it assesses the current result serializer being used, and changes the fmt kwarg when running Connection.set() so that it matches the actual serialization celery is using.~~
~~Just use the FMT_AUTO format, and let couchbase identify the formatting.~~
Just use FMT_UTF8, since celery is already serializing the results as pickle/json, and specifying any other format will just make couchbase re-serialize again, and run an additional json.dumps/pickle.dumps
What do you think?
@dfresh613 that seems like a far better solution for the time being, nice finding. There might be value in changing the serializer in a future major release.
FYI - I encountered some issues using the FMT_UTF8 format in couchbase; it was still having trouble storing pickled Python objects.
The format I found that consistently works for both json and pickle serialization is FMT_AUTO.
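That is essentially what the patch at the top of this record does; the whole change boils down to one line in the backend's `set()` (sketch):
```python
from couchbase import FMT_AUTO

def set(self, key, value):
    # let the couchbase client pick the storage format instead of defaulting to
    # JSON, so pickled results round-trip without a ValueFormatError
    self.connection.set(key, value, ttl=self.expires, format=FMT_AUTO)
```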
| 2018-06-30T21:43:51 |
|
celery/celery | 4,880 | celery__celery-4880 | [
"4020"
] | 47ca2b462f22a8d48ed8d80c2f9bf8b9dc4a4de6 | diff --git a/celery/schedules.py b/celery/schedules.py
--- a/celery/schedules.py
+++ b/celery/schedules.py
@@ -361,7 +361,7 @@ class crontab(BaseSchedule):
- A (list of) integers from 1-31 that represents the days of the
month that execution should occur.
- A string representing a Crontab pattern. This may get pretty
- advanced, such as ``day_of_month='2-30/3'`` (for every even
+ advanced, such as ``day_of_month='2-30/2'`` (for every even
numbered day) or ``day_of_month='1-7,15-21'`` (for the first and
third weeks of the month).
| The celery document has a mistake about crontab
## about crontab
The documentation at this link has a simple mistake in a crontab example:
http://docs.celeryproject.org/en/master/userguide/periodic-tasks.html
**crontab(0, 0, day_of_month='2-30/3')**
Execute on every even numbered day.
should be
**crontab(0, 0, day_of_month='2-30/2')**
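For example, a beat schedule entry using the corrected pattern would look roughly like this (the task name is a placeholder and `app` is the Celery application):
```python
from celery.schedules import crontab

app.conf.beat_schedule = {
    'every-even-numbered-day': {
        'task': 'proj.tasks.cleanup',                       # placeholder task
        'schedule': crontab(0, 0, day_of_month='2-30/2'),   # midnight on even days
    },
}
```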
| Please send a fix. | 2018-07-04T18:52:29 |
|
celery/celery | 4,892 | celery__celery-4892 | [
"4638"
] | 68e5268044d0fcc4867b29273df347acacf04c92 | diff --git a/celery/platforms.py b/celery/platforms.py
--- a/celery/platforms.py
+++ b/celery/platforms.py
@@ -202,6 +202,10 @@ def remove_if_stale(self):
print('Stale pidfile exists - Removing it.', file=sys.stderr)
self.remove()
return True
+ except SystemError as exc:
+ print('Stale pidfile exists - Removing it.', file=sys.stderr)
+ self.remove()
+ return True
return False
def write_pid(self):
| celery beat: OSError: [WinError 87]
Suddenly I started getting this error when I run celery beat
```
celery beat -A proj
```
Funny thing is that celery worker still runs but beat doesn't.
I'm running version 4.1 (latest version) btw.
```
celery beat v4.1.0 (latentcall) is starting.
OSError: [WinError 87] The parameter is incorrect
...
SystemError: <class 'OSError'> returned a result with an error set
```
| We cannot reproduce this with the information you provided.
If you'd like, provide more information and a reproducible test case and I'll try to help.
In the meantime, this issue is closed.
This error comes from celery/platforms.py:199 (remove_if_stale):
```
try:
os.kill(pid, 0)
except OSError as exc: # os.error as exc:
if exc.errno == errno.ESRCH:
print('Stale pidfile exists - Removing it.', file=sys.stderr)
self.remove()
return True
```
It can be fixed by catching SystemError and wiping the pidfile, e.g:
```
except SystemError as exc:
print('Stale pidfile exists - Removing it.', file=sys.stderr)
self.remove()
return True
```
But I'm not entirely sure how this error was triggered originally, as I cannot seem to reproduce it now.
Can you send a PR? | 2018-07-07T10:42:47 |
|
celery/celery | 4,908 | celery__celery-4908 | [
"4906"
] | 97fd3acac6515a9b783c73d9ab5575644a79449c | diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -116,8 +116,9 @@ def __init__(self, message, on_ack=noop,
self.parent_id = headers.get('parent_id')
if 'shadow' in headers:
self.name = headers['shadow'] or self.name
- if 'timelimit' in headers:
- self.time_limits = headers['timelimit']
+ timelimit = headers.get('timelimit', None)
+ if timelimit:
+ self.time_limits = timelimit
self.argsrepr = headers.get('argsrepr', '')
self.kwargsrepr = headers.get('kwargsrepr', '')
self.on_ack = on_ack
| diff --git a/t/unit/worker/test_request.py b/t/unit/worker/test_request.py
--- a/t/unit/worker/test_request.py
+++ b/t/unit/worker/test_request.py
@@ -1008,6 +1008,30 @@ def test_execute_using_pool(self):
weakref_ref.assert_called_with(self.pool.apply_async())
assert job._apply_result is weakref_ref()
+ def test_execute_using_pool_with_none_timelimit_header(self):
+ from celery.app.trace import trace_task_ret as trace
+ weakref_ref = Mock(name='weakref.ref')
+ job = self.zRequest(id=uuid(),
+ revoked_tasks=set(),
+ ref=weakref_ref,
+ headers={'timelimit': None})
+ job.execute_using_pool(self.pool)
+ self.pool.apply_async.assert_called_with(
+ trace,
+ args=(job.type, job.id, job.request_dict, job.body,
+ job.content_type, job.content_encoding),
+ accept_callback=job.on_accepted,
+ timeout_callback=job.on_timeout,
+ callback=job.on_success,
+ error_callback=job.on_failure,
+ soft_timeout=self.task.soft_time_limit,
+ timeout=self.task.time_limit,
+ correlation_id=job.id,
+ )
+ assert job._apply_result
+ weakref_ref.assert_called_with(self.pool.apply_async())
+ assert job._apply_result is weakref_ref()
+
def test_execute_using_pool__defaults_of_hybrid_to_proto2(self):
weakref_ref = Mock(name='weakref.ref')
headers = strategy.hybrid_to_proto2('', {'id': uuid(),
| Request.time_limits is None
I am getting an error in the stack trace below. It looks like this line is referencing the wrong field name: https://github.com/celery/celery/blob/master/celery/worker/request.py#L520. I don't think time_limits exists for this Request. The only place in celery codebase that I see using `create_request_cls` is here https://github.com/celery/celery/blob/47ca2b462f22a8d48ed8d80c2f9bf8b9dc4a4de6/celery/worker/strategy.py#L130. It seems to be creating the Request object from task.Request base. According to the docs http://docs.celeryproject.org/en/latest/userguide/tasks.html#task-request task.Request has a field `timelimit`, not `time_limits`, hence the TypeError. worker.Request, on the other hand, does have `time_limits` http://docs.celeryproject.org/en/latest/reference/celery.worker.request.html.
The workaround I did to get my code working is basically to check if the variable is None here https://github.com/celery/celery/blob/master/celery/worker/request.py#L520 :
```
if not self.time_limits:
time_limit = default_time_limit
soft_time_limit = default_soft_time_limit
else:
time_limit, soft_time_limit = self.time_limits
```
I am not sure whether I did something stupid or simply don't understand how the code is structured. Please point me to why it's not an error, if this is by design.
Stack trace:
```
[2018-07-16 06:09:46,229: CRITICAL/MainProcess] Unrecoverable error: TypeError("'NoneType' object is not iterable",)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/celery/worker/worker.py", line 205, in start
self.blueprint.start(self)
File "/usr/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/lib/python2.7/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 322, in start
blueprint.start(self)
File "/usr/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 598, in start
c.loop(*c.loop_args())
File "/usr/lib/python2.7/site-packages/celery/worker/loops.py", line 118, in synloop
qos.update()
File "/usr/lib/python2.7/site-packages/kombu/common.py", line 417, in update
return self.set(self.value)
File "/usr/lib/python2.7/site-packages/kombu/common.py", line 410, in set
self.callback(prefetch_count=new_value)
File "/usr/lib/python2.7/site-packages/celery/worker/consumer/tasks.py", line 47, in set_prefetch_count
apply_global=qos_global,
File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 558, in qos
apply_global)
File "/usr/lib/python2.7/site-packages/amqp/channel.py", line 1812, in basic_qos
wait=spec.Basic.QosOk,
File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 59, in send_method
return self.wait(wait, returns_tuple=returns_tuple)
File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 79, in wait
self.connection.drain_events(timeout=timeout)
File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 491, in drain_events
while not self.blocking_read(timeout):
File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 497, in blocking_read
return self.on_inbound_frame(frame)
File "/usr/lib/python2.7/site-packages/amqp/method_framing.py", line 77, in on_frame
callback(channel, msg.frame_method, msg.frame_args, msg)
File "/usr/lib/python2.7/site-packages/amqp/connection.py", line 501, in on_inbound_method
method_sig, payload, content,
File "/usr/lib/python2.7/site-packages/amqp/abstract_channel.py", line 128, in dispatch_method
listener(*args)
File "/usr/lib/python2.7/site-packages/amqp/channel.py", line 1597, in _on_basic_deliver
fun(msg)
File "/usr/lib/python2.7/site-packages/kombu/messaging.py", line 624, in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message)
File "/usr/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 572, in on_task_received
callbacks,
File "/usr/lib/python2.7/site-packages/celery/worker/strategy.py", line 200, in task_message_handler
handle(req)
File "/usr/lib/python2.7/site-packages/celery/worker/worker.py", line 228, in _process_task
req.execute_using_pool(self.pool)
File "/usr/lib/python2.7/site-packages/celery/worker/request.py", line 520, in execute_using_pool
time_limit, soft_time_limit = self.time_limits
TypeError: 'NoneType' object is not iterable
```
## Checklist
celery v 4.2.0
## Steps to reproduce
It's part of a much larger project, so I don't have a self-contained example yet.
## Expected behavior
return a tuple of (None, None)
## Actual behavior
TypeError
| This is indeed a bug and your fix should be correct.
I'm trying to figure out how to write a test for this case so it won't regress.
While the fix may be correct, the real issue is that someone is passing a `timelimit` header which is equal to `None` somewhere.
A fix should be made in the `Request` class constructor. | 2018-07-17T10:41:49 |
celery/celery | 4,952 | celery__celery-4952 | [
"4951"
] | a29a0fe562fbf5d6b88294cea4030a6f12e8dd15 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -701,11 +701,11 @@ def prepare_steps(self, args, kwargs, tasks,
return tasks, results
def apply(self, args=(), kwargs={}, **options):
- last, fargs = None, args
+ last, (fargs, fkwargs) = None, (args, kwargs)
for task in self.tasks:
- res = task.clone(fargs).apply(
+ res = task.clone(fargs, fkwargs).apply(
last and (last.get(),), **dict(self.options, **options))
- res.parent, last, fargs = last, res, None
+ res.parent, last, (fargs, fkwargs) = last, res, (None, None)
return last
@property
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -441,6 +441,11 @@ def test_apply(self):
assert res.parent.parent.get() == 8
assert res.parent.parent.parent is None
+ def test_kwargs_apply(self):
+ x = chain(self.add.s(), self.add.s(8), self.add.s(10))
+ res = x.apply(kwargs={'x': 1, 'y': 1}).get()
+ assert res == 20
+
def test_single_expresion(self):
x = chain(self.add.s(1, 2)).apply()
assert x.get() == 3
| chain.apply ignores kwargs
On master@a29a0fe56 I have added the following test to test_canvas.py in the test_chain class:
```python
def test_kwargs_apply(self):
x = chain(self.add.s(), self.add.s(8), self.add.s(10))
res = x.apply(kwargs={'x': 1, 'y': 1}).get()
assert res == 20
```
It fails and demonstrates the observed behavior also mentioned earlier in https://github.com/celery/celery/issues/2695.
Pytest output for the failure:
```
―――――――――――――――――――――――――――――――――――――――――――――――――― test_chain.test_kwargs_apply ―――――――――――――――――――――――――――――――――――――――――――――――――――
self = <t.unit.tasks.test_canvas.test_chain instance at 0x7f2725208560>
def test_kwargs_apply(self):
x = chain(self.add.s(), self.add.s(8), self.add.s(10))
> res = x.apply(kwargs={'x': 1, 'y': 1}).get()
t/unit/tasks/test_canvas.py:446:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
celery/canvas.py:707: in apply
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <EagerResult: 7b72e2aa-0455-45ae-9df9-429ecd1faf6a>, timeout = None, propagate = True, disable_sync_subtasks = True
kwargs = {}
def get(self, timeout=None, propagate=True,
disable_sync_subtasks=True, **kwargs):
if disable_sync_subtasks:
assert_will_not_block()
if self.successful():
return self.result
elif self.state in states.PROPAGATE_STATES:
if propagate:
> raise self.result
E TypeError: add() takes exactly 2 arguments (0 given)
celery/result.py:995: TypeError
------------------------------------------------------ Captured stderr call -------------------------------------------------------
[2018-08-03 14:23:40,008: ERROR/MainProcess] Task t.unit.tasks.test_canvas.add[7b72e2aa-0455-45ae-9df9-429ecd1faf6a] raised unexpected: TypeError('add() takes exactly 2 arguments (0 given)',)
Traceback (most recent call last):
File "/home/developer/celery/app/trace.py", line 382, in trace_task
R = retval = fun(*args, **kwargs)
TypeError: add() takes exactly 2 arguments (0 given)
```
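For reference, the patch at the top of this record fixes this by threading the initial kwargs into the first task of the chain (sketch of the patched `chain.apply`):
```python
# celery/canvas.py, chain.apply (as in the patch above)
last, (fargs, fkwargs) = None, (args, kwargs)
for task in self.tasks:
    res = task.clone(fargs, fkwargs).apply(
        last and (last.get(),), **dict(self.options, **options))
    res.parent, last, (fargs, fkwargs) = last, res, (None, None)
return last
```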
| 2018-08-03T14:45:06 |
|
celery/celery | 4,979 | celery__celery-4979 | [
"4873",
"4873"
] | 0e86862a4a0f8e4c06c2896c75086bb6bc61956a | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -1226,8 +1226,9 @@ def apply_async(self, args=(), kwargs={}, task_id=None,
tasks = (self.tasks.clone() if isinstance(self.tasks, group)
else group(self.tasks, app=app))
if app.conf.task_always_eager:
- return self.apply(args, kwargs,
- body=body, task_id=task_id, **options)
+ with allow_join_result():
+ return self.apply(args, kwargs,
+ body=body, task_id=task_id, **options)
# chord([A, B, ...], C)
return self.run(tasks, body, args, task_id=task_id, **options)
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -3,7 +3,7 @@
from time import sleep
-from celery import chain, group, shared_task
+from celery import chain, chord, group, shared_task
from celery.exceptions import SoftTimeLimitExceeded
from celery.utils.log import get_task_logger
@@ -42,6 +42,11 @@ def chain_add(x, y):
).apply_async()
+@shared_task
+def chord_add(x, y):
+ chord(add.s(x, x), add.s(y)).apply_async()
+
+
@shared_task
def delayed_sum(numbers, pause_time=1):
"""Sum the iterable of numbers."""
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -366,6 +366,17 @@ def test_add_chord_to_chord(self, manager):
res = c()
assert res.get() == [0, 5 + 6 + 7]
+ @flaky
+ def test_eager_chord_inside_task(self, manager):
+ from .tasks import chord_add
+
+ prev = chord_add.app.conf.task_always_eager
+ chord_add.app.conf.task_always_eager = True
+
+ chord_add.apply_async(args=(4, 8), throw=True).get()
+
+ chord_add.app.conf.task_always_eager = prev
+
@flaky
def test_group_chain(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -747,6 +747,29 @@ def test_freeze_tasks_is_not_group(self):
x.tasks = [self.add.s(2, 2)]
x.freeze()
+ def test_chain_always_eager(self):
+ self.app.conf.task_always_eager = True
+ from celery import _state
+ from celery import result
+
+ fixture_task_join_will_block = _state.task_join_will_block
+ try:
+ _state.task_join_will_block = _state.orig_task_join_will_block
+ result.task_join_will_block = _state.orig_task_join_will_block
+
+ @self.app.task(shared=False)
+ def finalize(*args):
+ pass
+
+ @self.app.task(shared=False)
+ def chord_add():
+ return chord([self.add.s(4, 4)], finalize.s()).apply_async()
+
+ chord_add.apply_async(throw=True).get()
+ finally:
+ _state.task_join_will_block = fixture_task_join_will_block
+ result.task_join_will_block = fixture_task_join_will_block
+
class test_maybe_signature(CanvasCase):
| Synchronous subtask guard blocks chord() when CELERY_TASK_ALWAYS_EAGER=True
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
```
$ celery -A minimal_eager_chord report
software -> celery:4.2.0 (windowlicker) kombu:4.2.1 py:3.5.3
billiard:3.5.0.3 py-amqp:2.2.2
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled
```
Script to reproduce:
https://gist.github.com/jcushman/ea8adcb37dffdf5c3fdf756d02366ab2
## Expected behavior
No error occurs
## Actual behavior
```
In [1]: test.delay().get()
...
~/minimal_eager_chord.py in test()
27 def test():
28 chord(
---> 29 a.s(), b.s()
30 ).apply_async()
31
...
~/.pyenv/versions/3.5.3/envs/capstone/src/celery/celery/result.py in assert_will_not_block()
39 def assert_will_not_block():
40 if task_join_will_block():
---> 41 raise RuntimeError(E_WOULDBLOCK)
```
## Comments
This is a similar issue to #4576: `assert_will_not_block()` is triggered when using canvas functions with `CELERY_TASK_ALWAYS_EAGER=True`, even when they are used with apply_async().
The patch for that issue fixed `chain()` but not `chord()`, and the problem may exist for other canvas functions as well. I'm not sure whether it makes sense to add more calls to `with allow_join_result():` in `celery/canvas.py` (as in the patch for #4576), or if `assert_will_not_block()` could be modified to avoid false positives from `CELERY_TASK_ALWAYS_EAGER=True` in general.
The tricky bit of a general solution would be that the guard should still detect actual unsafe subtasks even if `CELERY_TASK_ALWAYS_EAGER=True`, since that's a natural way to run tests.
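For reference, the fix in the patch at the top of this record takes the first option and wraps the eager path of `chord.apply_async()` in the same guard-lifting context manager (sketch of the relevant lines):
```python
# celery/canvas.py, chord.apply_async (allow_join_result comes from celery.result)
if app.conf.task_always_eager:
    with allow_join_result():
        return self.apply(args, kwargs,
                          body=body, task_id=task_id, **options)
```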
| Same issue here: our tests are using `CELERY_TASK_ALWAYS_EAGER=True` and tests using `chords()` fail with the same stacktrace
fixes are welcome
| 2018-08-15T10:16:14 |
celery/celery | 5,074 | celery__celery-5074 | [
"3586"
] | bbacdfeb39a67bc05e571bddc01865f95efbbfcf | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -9,6 +9,7 @@
from __future__ import absolute_import, unicode_literals
import datetime
+import inspect
import sys
import time
from collections import namedtuple
@@ -34,7 +35,6 @@
from celery.utils.functional import LRUCache, arity_greater
from celery.utils.log import get_logger
from celery.utils.serialization import (create_exception_cls,
- ensure_serializable,
get_pickleable_exception,
get_pickled_exception)
@@ -236,9 +236,14 @@ def prepare_exception(self, exc, serializer=None):
serializer = self.serializer if serializer is None else serializer
if serializer in EXCEPTION_ABLE_CODECS:
return get_pickleable_exception(exc)
+ # retrieve exception original module
+ exc_module = inspect.getmodule(type(exc))
+ if exc_module:
+ exc_module = exc_module.__name__
+
return {'exc_type': type(exc).__name__,
- 'exc_message': ensure_serializable(exc.args, self.encode),
- 'exc_module': type(exc).__module__}
+ 'exc_args': exc.args,
+ 'exc_module': exc_module}
def exception_to_python(self, exc):
"""Convert serialized exception to Python exception."""
diff --git a/celery/utils/serialization.py b/celery/utils/serialization.py
--- a/celery/utils/serialization.py
+++ b/celery/utils/serialization.py
@@ -8,11 +8,11 @@
from base64 import b64decode as base64decode
from base64 import b64encode as base64encode
from functools import partial
+from importlib import import_module
from inspect import getmro
from itertools import takewhile
from kombu.utils.encoding import bytes_to_str, str_to_bytes
-
from celery.five import (bytes_if_py2, items, python_2_unicode_compatible,
reraise, string_t)
@@ -81,6 +81,14 @@ def itermro(cls, stop):
def create_exception_cls(name, module, parent=None):
"""Dynamically create an exception class."""
+ try:
+ mod = import_module(module)
+ exc_cls = getattr(mod, name, None)
+ if exc_cls and isinstance(exc_cls, type(BaseException)):
+ return exc_cls
+ except ImportError:
+ pass
+ # we could not find the exception, fallback and create a type.
if not parent:
parent = Exception
return subclass_exception(name, parent, module)
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -225,6 +225,10 @@ def _delete_group(self, group_id):
self._data.pop(group_id, None)
+class CustomTestError(Exception):
+ pass
+
+
class test_BaseBackend_dict:
def setup(self):
@@ -245,13 +249,26 @@ def test_delete_group(self):
self.b.delete_group('can-delete')
assert 'can-delete' not in self.b._data
- def test_prepare_exception_json(self):
- x = DictBackend(self.app, serializer='json')
- e = x.prepare_exception(KeyError('foo'))
- assert 'exc_type' in e
+ @pytest.mark.parametrize(("serializer"), (("pickle", "json")))
+ def test_prepare_builtin_exception(self, serializer):
+ x = DictBackend(self.app, serializer=serializer)
+ e = x.prepare_exception(ValueError('foo'))
+ if not isinstance(e, BaseException):
+ # not using pickle
+ assert 'exc_type' in e
+ e = x.exception_to_python(e)
+ assert e.__class__ is ValueError
+ assert e.args == ("foo", )
+
+ @pytest.mark.parametrize(("serializer"), (("pickle", "json")))
+ def test_prepare_custom_exception(self, serializer):
+ x = DictBackend(self.app, serializer=serializer)
+ e = x.prepare_exception(CustomTestError('foo'))
+ if not isinstance(e, BaseException):
+ assert 'exc_type' in e
e = x.exception_to_python(e)
- assert e.__class__.__name__ == 'KeyError'
- assert str(e).strip('u') == "'foo'"
+ assert e.__class__ is CustomTestError
+ assert e.args == ("foo", )
def test_save_group(self):
b = BaseBackend(self.app)
| Celery does not respect exception types when using a serializer other than pickle.
## Checklist
```
~ : celery -A analystick report
software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.5.2
billiard:3.5.0.2 redis:2.10.5
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://127.0.0.1:6379/1
```
## Steps to reproduce
(See example code below)
## Expected behavior
**When using a result serializer other than pickle**, the exception type should be the same as the type of the raised exception.
## Actual behavior
Celery does not respect the exception type but creates _a new type_ instead.
The main problem is that instead of using the actual type of the exception, celery will [reconstruct a type](https://github.com/celery/celery/blob/0f87321df385c5f3dca717ec2a4a9c0d25f88054/celery/utils/serialization.py#L43-L45) on the fly, but without respecting the original exception module.
For example, using the `yaml` result serializer (I believe it will be the same for `json`):
* if a task raises a `ValueError`, the caller will receive a `celery.backends.base.ValueError`
* if a task raises a `custom.module.CustomError`, the caller will receive a `celery.backends.base.CustomError`
This leads to wrong behaviour when raising an exception from a task and trying to catch it from the caller.
### Minimal reproducible test
As an example, I've set up a minimal reproducible test using a redis backend:
celery config (I can provide a full config if needed):
```python
CELERY_TASK_SERIALIZER = 'yaml'
CELERY_RESULT_SERIALIZER='yaml'
```
Tasks :
```python
# module myapp.tasks
from myapp import celery_app
@celery_app.task
def raises_valueerror():
raise ValueError('Builtin exception')
class CustomError(Exception):
pass
@celery_app.task
def raises_customerror():
raise CustomError('Custom exception', {'a':1})
```
Unittest :
```python
from myapp import tasks
from myapp.tasks import CustomError
def test_builtin_exception():
t = tasks.raises_valueerror.s()
r = t.apply_async()
exc = None
try:
r.get(propagate=True)
except Exception as e:
exc = e
# with celery bug, the actual class of exc will be `celery.backends.base.ValueError` instead of builtin ValueError
assert isinstance(exc, ValueError), "Actual class %s" % (exc.__class__)
def test_custom_exception():
t = tasks.raises_customerror.s()
r = t.apply_async()
exc = None
try:
r.get(propagate=True)
except Exception as e:
exc = e
# with celery bug, the actual class of exc will be `celery.backends.base.CustomError` instead of builtin CustomError
assert isinstance(exc, CustomError), "1/2 Actual class is %s" % (exc.__class__)
assert isinstance(exc, tasks.CustomError), "2/2 Actual class is %s" % (exc.__class__)
```
Theses tests will fail with the following errors :
```
# ...
AssertionError: Actual class <class 'celery.backends.base.ValueError'>
# ...
AssertionError: 1/2 Actual class is <class 'celery.backends.base.CustomError'>
```
Another side effect of this problem is that code like the snippet below won't work if a subtask raises a `ValueError`, as the propagated exception won't be of the builtin type `ValueError` but `celery.backends.base.ValueError`:
```python
try:
    r.get(propagate=True)
except ValueError as e:
    # do something
    pass
```
The same problem also applies to any custom exception.
While I'm not sure about the possible side effects, [I have a fix for this](https://github.com/jcsaaddupuy/celery/commit/8d4e613e24f6561fdaafd4e6ede582ceac882804) and I will gladly create a PR for this problem, as it seems pretty critical.
What do you think?
| This is actually deliberate, as none of json, yaml or msgpack are able to reconstruct exceptions.
The alternative is to load exceptions of any type, which opens up similar security issues to using pickle.
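Roughly, that alternative would mean rebuilding the class from attacker-controllable metadata, something like this simplified sketch (not Celery's actual code):
```python
from importlib import import_module


def rebuild_exception(exc_module, exc_type, exc_args):
    # Importing an arbitrary module and instantiating an arbitrary callable
    # by name is what makes this as sensitive as unpickling.
    cls = getattr(import_module(exc_module), exc_type)
    return cls(*exc_args)
```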
If this is deliberate, perhaps this issue should be closed? Or are there any plans to try to make it work, despite the security implications?
Is this situation documented? I was not able to find anything regarding this behaviour in the documentation.
@vladcalin @estan please check https://github.com/celery/celery/pull/3592 . As you can see this is still on-going. Any feedback is appreciated!
Hey there, will this issue be fixed one day? Should we stop using custom exceptions? Is there any workaround for now? | 2018-09-26T05:28:12
celery/celery | 5,085 | celery__celery-5085 | [
"3586"
] | 9e457c0394689acdeb7f856488d3f2a9d0f4723b | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -9,7 +9,6 @@
from __future__ import absolute_import, unicode_literals
import datetime
-import inspect
import sys
import time
from collections import namedtuple
@@ -35,6 +34,7 @@
from celery.utils.functional import LRUCache, arity_greater
from celery.utils.log import get_logger
from celery.utils.serialization import (create_exception_cls,
+ ensure_serializable,
get_pickleable_exception,
get_pickled_exception)
@@ -236,14 +236,9 @@ def prepare_exception(self, exc, serializer=None):
serializer = self.serializer if serializer is None else serializer
if serializer in EXCEPTION_ABLE_CODECS:
return get_pickleable_exception(exc)
- # retrieve exception original module
- exc_module = inspect.getmodule(type(exc))
- if exc_module:
- exc_module = exc_module.__name__
-
return {'exc_type': type(exc).__name__,
- 'exc_args': exc.args,
- 'exc_module': exc_module}
+ 'exc_message': ensure_serializable(exc.args, self.encode),
+ 'exc_module': type(exc).__module__}
def exception_to_python(self, exc):
"""Convert serialized exception to Python exception."""
diff --git a/celery/utils/serialization.py b/celery/utils/serialization.py
--- a/celery/utils/serialization.py
+++ b/celery/utils/serialization.py
@@ -8,11 +8,11 @@
from base64 import b64decode as base64decode
from base64 import b64encode as base64encode
from functools import partial
-from importlib import import_module
from inspect import getmro
from itertools import takewhile
from kombu.utils.encoding import bytes_to_str, str_to_bytes
+
from celery.five import (bytes_if_py2, items, python_2_unicode_compatible,
reraise, string_t)
@@ -81,14 +81,6 @@ def itermro(cls, stop):
def create_exception_cls(name, module, parent=None):
"""Dynamically create an exception class."""
- try:
- mod = import_module(module)
- exc_cls = getattr(mod, name, None)
- if exc_cls and isinstance(exc_cls, type(BaseException)):
- return exc_cls
- except ImportError:
- pass
- # we could not find the exception, fallback and create a type.
if not parent:
parent = Exception
return subclass_exception(name, parent, module)
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -225,10 +225,6 @@ def _delete_group(self, group_id):
self._data.pop(group_id, None)
-class CustomTestError(Exception):
- pass
-
-
class test_BaseBackend_dict:
def setup(self):
@@ -249,26 +245,13 @@ def test_delete_group(self):
self.b.delete_group('can-delete')
assert 'can-delete' not in self.b._data
- @pytest.mark.parametrize(("serializer"), (("pickle", "json")))
- def test_prepare_builtin_exception(self, serializer):
- x = DictBackend(self.app, serializer=serializer)
- e = x.prepare_exception(ValueError('foo'))
- if not isinstance(e, BaseException):
- # not using pickle
- assert 'exc_type' in e
- e = x.exception_to_python(e)
- assert e.__class__ is ValueError
- assert e.args == ("foo", )
-
- @pytest.mark.parametrize(("serializer"), (("pickle", "json")))
- def test_prepare_custom_exception(self, serializer):
- x = DictBackend(self.app, serializer=serializer)
- e = x.prepare_exception(CustomTestError('foo'))
- if not isinstance(e, BaseException):
- assert 'exc_type' in e
+ def test_prepare_exception_json(self):
+ x = DictBackend(self.app, serializer='json')
+ e = x.prepare_exception(KeyError('foo'))
+ assert 'exc_type' in e
e = x.exception_to_python(e)
- assert e.__class__ is CustomTestError
- assert e.args == ("foo", )
+ assert e.__class__.__name__ == 'KeyError'
+ assert str(e).strip('u') == "'foo'"
def test_save_group(self):
b = BaseBackend(self.app)
| Celery does not respect exception types when using a serializer different than pickle.
## Checklist
```
~ : celery -A analystick report
software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.5.2
billiard:3.5.0.2 redis:2.10.5
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://127.0.0.1:6379/1
```
## Steps to reproduce
(See example code below)
## Expected behavior
**When using a result serializer different than pickle**, the exception type received by the caller should be the same as the type of the raised exception.
## Actual behavior
Celery does not respect the exception type but creates _a new type_ instead.
The main problem is that instead of using the actual type of the exception, Celery will [reconstruct a type](https://github.com/celery/celery/blob/0f87321df385c5f3dca717ec2a4a9c0d25f88054/celery/utils/serialization.py#L43-L45) on the fly, but without respecting the original exception module.
For example, using the `yaml` result serializer (I believe it will be the same for `json`):
* if a task raises a `ValueError`, the caller will receive a `celery.backends.base.ValueError`
* if a task raises a `custom.module.CustomError`, the caller will receive a `celery.backends.base.CustomError`
This results in wrong behaviour when raising an exception from a task and trying to catch it from the caller.
### Minimal reproducible test
As an example, I've set up a minimal reproducible test, using a Redis backend:
celery config (I can provide a full config if needed):
```python
CELERY_TASK_SERIALIZER = 'yaml'
CELERY_RESULT_SERIALIZER = 'yaml'
```
Tasks :
```python
# module myapp.tasks
from myapp import celery_app


@celery_app.task
def raises_valueerror():
    raise ValueError('Builtin exception')


class CustomError(Exception):
    pass


@celery_app.task
def raises_customerror():
    raise CustomError('Custom exception', {'a': 1})
```
Unittest :
```python
from myapp import tasks
from myapp.tasks import CustomError


def test_builtin_exception():
    t = tasks.raises_valueerror.s()
    r = t.apply_async()
    exc = None
    try:
        r.get(propagate=True)
    except Exception as e:
        exc = e
    # with the celery bug, the actual class of exc will be
    # `celery.backends.base.ValueError` instead of the builtin ValueError
    assert isinstance(exc, ValueError), "Actual class %s" % (exc.__class__)


def test_custom_exception():
    t = tasks.raises_customerror.s()
    r = t.apply_async()
    exc = None
    try:
        r.get(propagate=True)
    except Exception as e:
        exc = e
    # with the celery bug, the actual class of exc will be
    # `celery.backends.base.CustomError` instead of myapp.tasks.CustomError
    assert isinstance(exc, CustomError), "1/2 Actual class is %s" % (exc.__class__)
    assert isinstance(exc, tasks.CustomError), "2/2 Actual class is %s" % (exc.__class__)
```
These tests will fail with the following errors:
```
# ...
AssertionError: Actual class <class 'celery.backends.base.ValueError'>
# ...
AssertionError: 1/2 Actual class is <class 'celery.backends.base.CustomError'>
```
Another side effect of this problem is that code like the one below won't work if a subtask raises a `ValueError`, as the propagated exception won't be of the builtin type `ValueError` but `celery.backends.base.ValueError`:
```python
try:
    r.get(propagate=True)
except ValueError as e:
    # do something
    pass
```
The same problem also applies to any custom exception.
While I'm not sure about the possible side effects, [I have a fix for this](https://github.com/jcsaaddupuy/celery/commit/8d4e613e24f6561fdaafd4e6ede582ceac882804) and I will gladly create a PR for this problem, as it seems pretty critical.
What do you think?
| This is actually deliberate, as none of json, yaml or msgpack are able to reconstruct exceptions.
The alternative is to load exceptions of any type, which opens up similar security issues to using pickle.
If this is deliberate, perhaps this issue should be closed? Or are there any plans to try to make it work, despite the security implications?
Is this situation documented? I was not able to find anything regarding this behaviour in the documentation.
@vladcalin @estan please check https://github.com/celery/celery/pull/3592 . As you can see this is still on-going. Any feedback is appreciated!
Hey there, will this issue be fixed one day? Should we stop using custom exceptions? Is there any workaround for now?
please check this PR https://github.com/celery/celery/pull/5074
Thanks @auvipy. Looking at the other PR. | 2018-09-29T15:01:56 |
celery/celery | 5,095 | celery__celery-5095 | [
"5046"
] | ac65826cdf984e4728329f02c2fda048722f4605 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -673,6 +673,8 @@ def _store_result(self, task_id, result, state,
if request and getattr(request, 'group', None):
meta['group_id'] = request.group
+ if request and getattr(request, 'parent_id', None):
+ meta['parent_id'] = request.parent_id
if self.app.conf.find_value_for_key('extended', 'result'):
if request:
diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -181,6 +181,8 @@ def _store_result(self, task_id, result, state,
self.current_task_children(request),
),
}
+ if request and getattr(request, 'parent_id', None):
+ meta['parent_id'] = request.parent_id
try:
self.collection.save(meta)
diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -128,8 +128,10 @@ def as_tuple(self):
return (self.id, parent and parent.as_tuple()), None
def forget(self):
- """Forget about (and possibly remove the result of) this task."""
+ """Forget the result of this task and its parents."""
self._cache = None
+ if self.parent:
+ self.parent.forget()
self.backend.forget(self.id)
def revoke(self, connection=None, terminate=False, signal=None,
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -423,6 +423,18 @@ def test_get_store_delete_result(self):
self.b.forget(tid)
assert self.b.get_state(tid) == states.PENDING
+ def test_store_result_parent_id(self):
+ tid = uuid()
+ pid = uuid()
+ state = 'SUCCESS'
+ result = 10
+ request = Context(parent_id=pid)
+ self.b.store_result(
+ tid, state=state, result=result, request=request,
+ )
+ stored_meta = self.b.decode(self.b.get(self.b.get_key_for_task(tid)))
+ assert stored_meta['parent_id'] == request.parent_id
+
def test_store_result_group_id(self):
tid = uuid()
state = 'SUCCESS'
diff --git a/t/unit/backends/test_mongodb.py b/t/unit/backends/test_mongodb.py
--- a/t/unit/backends/test_mongodb.py
+++ b/t/unit/backends/test_mongodb.py
@@ -221,6 +221,33 @@ def test_store_result(self, mock_get_database):
self.backend._store_result(
sentinel.task_id, sentinel.result, sentinel.status)
+ @patch('celery.backends.mongodb.MongoBackend._get_database')
+ def test_store_result_with_request(self, mock_get_database):
+ self.backend.taskmeta_collection = MONGODB_COLLECTION
+
+ mock_database = MagicMock(spec=['__getitem__', '__setitem__'])
+ mock_collection = Mock()
+ mock_request = MagicMock(spec=['parent_id'])
+
+ mock_get_database.return_value = mock_database
+ mock_database.__getitem__.return_value = mock_collection
+ mock_request.parent_id = sentinel.parent_id
+
+ ret_val = self.backend._store_result(
+ sentinel.task_id, sentinel.result, sentinel.status,
+ request=mock_request)
+
+ mock_get_database.assert_called_once_with()
+ mock_database.__getitem__.assert_called_once_with(MONGODB_COLLECTION)
+ parameters = mock_collection.save.call_args[0][0]
+ assert parameters['parent_id'] == sentinel.parent_id
+ assert sentinel.result == ret_val
+
+ mock_collection.save.side_effect = InvalidDocument()
+ with pytest.raises(EncodeError):
+ self.backend._store_result(
+ sentinel.task_id, sentinel.result, sentinel.status)
+
@patch('celery.backends.mongodb.MongoBackend._get_database')
def test_get_task_meta_for(self, mock_get_database):
self.backend.taskmeta_collection = MONGODB_COLLECTION
@@ -322,7 +349,8 @@ def test_delete_group(self, mock_get_database):
{'_id': sentinel.taskset_id})
@patch('celery.backends.mongodb.MongoBackend._get_database')
- def test_forget(self, mock_get_database):
+ def test__forget(self, mock_get_database):
+ # note: here tested _forget method, not forget method
self.backend.taskmeta_collection = MONGODB_COLLECTION
mock_database = MagicMock(spec=['__getitem__', '__setitem__'])
diff --git a/t/unit/tasks/test_result.py b/t/unit/tasks/test_result.py
--- a/t/unit/tasks/test_result.py
+++ b/t/unit/tasks/test_result.py
@@ -85,6 +85,16 @@ def mytask():
pass
self.mytask = mytask
+ def test_forget(self):
+ first = Mock()
+ second = self.app.AsyncResult(self.task1['id'], parent=first)
+ third = self.app.AsyncResult(self.task2['id'], parent=second)
+ last = self.app.AsyncResult(self.task3['id'], parent=third)
+ last.forget()
+ first.forget.assert_called_once()
+ assert last.result is None
+ assert second.result is None
+
def test_ignored_getter(self):
result = self.app.AsyncResult(uuid())
assert result.ignored is False
| Forgetting a chain leaves task meta in MongoDB
## Checklist
* [x] I have included the output of ``celery -A proj report`` in the issue.
```
# celery -A celerytest report
software -> celery:4.2.1 (windowlicker) kombu:4.2.1 py:3.6.6
billiard:3.5.0.4 py-amqp:2.3.2
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:mongodb://mongodb:27017/celery_results
broker_url: 'amqp://guest:********@rabbitmq:5672//'
result_backend: 'mongodb://mongodb:27017/celery_results'
```
* [x] I have verified that the issue exists against the `master` branch of Celery.
Yes, installed celery with `pip install --upgrade git+https://github.com/celery/celery`.
Reported version was 4.2.0 (older?) but with the same behavior.
## Steps to reproduce
```
from celery import Celery, chain, group

app = Celery(
    'tasks',
    broker='pyamqp://guest@rabbitmq//',
    backend='mongodb://mongodb:27017/celery_results',
)


@app.task(name='celerytest.add')  # , ignore_results=True)
def add(x, y):
    return x + y


if __name__ == '__main__':
    c = chain(
        add.s(1, 2),
        add.s(3),
    )
    res = c.delay()
    print(res)
    print(res.get())
    res.forget()
```
## Expected behavior
MongoDB should stay empty, as it does when `c = chain(...)` is replaced with `c = group(...)` in the example code.
## Actual behavior
One document remains in the database:
```
{
"_id" : "6ed8544d-3731-465f-bc82-bce40635d69b",
"status" : "SUCCESS",
"result" : "3",
"date_done" : ISODate("2018-09-11T15:32:34.934Z"),
"traceback" : "null",
"children" : "[[[\"a0563d8f-1be7-437c-9f2b-e8fbb7a0b3bc\", null], null]]"
}
```
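A possible workaround until this changes, assuming the chain `c` from the example above, is to forget every result in the chain explicitly; this is a sketch, not an officially documented pattern:
```python
res = c.delay()
res.get()

# Walk the parent links of the chain and forget each result in turn.
node = res
while node is not None:
    node.forget()
    node = node.parent
```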
| It seems the Redis results backend has the same behavior.
If it is a bug, it is a bug not only for MongoDB. | 2018-10-06T18:34:12 |
celery/celery | 5,114 | celery__celery-5114 | [
"5113"
] | 21baef53c39bc1909fd6eee9a2a20e6ce851e88c | diff --git a/celery/beat.py b/celery/beat.py
--- a/celery/beat.py
+++ b/celery/beat.py
@@ -10,6 +10,7 @@
import sys
import time
import traceback
+from calendar import timegm
from collections import namedtuple
from functools import total_ordering
from threading import Event, Thread
@@ -26,7 +27,7 @@
from .schedules import crontab, maybe_schedule
from .utils.imports import load_extension_class_names, symbol_by_name
from .utils.log import get_logger, iter_open_logger_fds
-from .utils.time import humanize_seconds
+from .utils.time import humanize_seconds, maybe_make_aware
__all__ = (
'SchedulingError', 'ScheduleEntry', 'Scheduler',
@@ -253,12 +254,13 @@ def adjust(self, n, drift=-0.010):
def is_due(self, entry):
return entry.is_due()
- def _when(self, entry, next_time_to_run, mktime=time.mktime):
+ def _when(self, entry, next_time_to_run, mktime=timegm):
+ """Return a utc timestamp, make sure heapq in currect order."""
adjust = self.adjust
- as_now = entry.default_now()
+ as_now = maybe_make_aware(entry.default_now())
- return (mktime(as_now.timetuple()) +
+ return (mktime(as_now.utctimetuple()) +
as_now.microsecond / 1e6 +
(adjust(next_time_to_run) or 0))
| diff --git a/t/unit/app/test_beat.py b/t/unit/app/test_beat.py
--- a/t/unit/app/test_beat.py
+++ b/t/unit/app/test_beat.py
@@ -1,6 +1,7 @@
from __future__ import absolute_import, unicode_literals
import errno
+import pytz
from datetime import datetime, timedelta
from pickle import dumps, loads
@@ -143,11 +144,12 @@ def is_due(self, *args, **kwargs):
class mocked_schedule(schedule):
- def __init__(self, is_due, next_run_at):
+ def __init__(self, is_due, next_run_at, nowfun=datetime.utcnow):
self._is_due = is_due
self._next_run_at = next_run_at
self.run_every = timedelta(seconds=1)
- self.nowfun = datetime.utcnow
+ self.nowfun = nowfun
+ self.default_now = self.nowfun
def is_due(self, last_run_at):
return self._is_due, self._next_run_at
@@ -371,6 +373,22 @@ def test_merge_inplace(self):
assert 'baz' in a.schedule
assert a.schedule['bar'].schedule._next_run_at == 40
+ def test_when(self):
+ now_time_utc = datetime(2000, 10, 10, 10, 10, 10, 10, tzinfo=pytz.utc)
+ now_time_casey = now_time_utc.astimezone(
+ pytz.timezone('Antarctica/Casey')
+ )
+ scheduler = mScheduler(app=self.app)
+ result_utc = scheduler._when(
+ mocked_schedule(True, 10, lambda: now_time_utc),
+ 10
+ )
+ result_casey = scheduler._when(
+ mocked_schedule(True, 10, lambda: now_time_casey),
+ 10
+ )
+ assert result_utc == result_casey
+
@patch('celery.beat.Scheduler._when', return_value=1)
def test_populate_heap(self, _when):
scheduler = mScheduler(app=self.app)
| Why does beat.Scheduler._when return this???
## Steps to reproduce
**https://github.com/celery/celery/blob/master/celery/beat.py#L261**
```python
return (mktime(as_now.timetuple()) +
        as_now.microsecond / 1e6 +
        (adjust(next_time_to_run) or 0))
```
What about the as_now timezone???
## Expected behavior
like this?
```python
return (as_now.timestamp() + (adjust(next_time_to_run) or 0))
```
But `timestamp()` is not supported on Python 2.7?
```python
return (mktime(as_now.utctimetuple()) +
        as_now.microsecond / 1e6 +
        (adjust(next_time_to_run) or 0))
```
Or, tell me why it is written like this.
This problem was found because django-celery-beat's TzAwareCrontab does not work correctly,
because its schedule.now() returns a time with a timezone.
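A minimal illustration of why the timezone matters here, mirroring the unit test added in this PR (assumes `pytz` is installed):
```python
import time
from calendar import timegm
from datetime import datetime

import pytz

utc_now = datetime(2000, 10, 10, 10, 10, 10, tzinfo=pytz.utc)
casey_now = utc_now.astimezone(pytz.timezone('Antarctica/Casey'))  # same instant

# mktime() interprets the naive timetuple as *local* time, so the two views
# of the same instant usually give different timestamps:
time.mktime(utc_now.timetuple()) == time.mktime(casey_now.timetuple())  # usually False
# timegm() over utctimetuple() is timezone-safe and always agrees:
timegm(utc_now.utctimetuple()) == timegm(casey_now.utctimetuple())      # True
```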
## Actual behavior
| Python 2.7 doesn't have `timestamp()`. | 2018-10-15T13:15:24
celery/celery | 5,141 | celery__celery-5141 | [
"5140"
] | 2a33ba326645bd217a7286929f409ef171cfd8bf | diff --git a/celery/app/trace.py b/celery/app/trace.py
--- a/celery/app/trace.py
+++ b/celery/app/trace.py
@@ -394,7 +394,7 @@ def trace_task(uuid, args, kwargs, request=None):
task_request, exc, uuid, RETRY, call_errbacks=False)
except Exception as exc:
I, R, state, retval = on_error(task_request, exc, uuid)
- except BaseException as exc:
+ except BaseException:
raise
else:
try:
diff --git a/celery/bin/amqp.py b/celery/bin/amqp.py
--- a/celery/bin/amqp.py
+++ b/celery/bin/amqp.py
@@ -280,7 +280,7 @@ def onecmd(self, line):
self.counter = next(self.inc_counter)
try:
self.respond(self.dispatch(cmd, arg))
- except (AttributeError, KeyError) as exc:
+ except (AttributeError, KeyError):
self.default(line)
except Exception as exc: # pylint: disable=broad-except
self.say(exc)
diff --git a/celery/platforms.py b/celery/platforms.py
--- a/celery/platforms.py
+++ b/celery/platforms.py
@@ -188,7 +188,7 @@ def remove_if_stale(self):
"""
try:
pid = self.read_pid()
- except ValueError as exc:
+ except ValueError:
print('Broken pidfile found - Removing it.', file=sys.stderr)
self.remove()
return True
@@ -203,7 +203,7 @@ def remove_if_stale(self):
print('Stale pidfile exists - Removing it.', file=sys.stderr)
self.remove()
return True
- except SystemError as exc:
+ except SystemError:
print('Stale pidfile exists - Removing it.', file=sys.stderr)
self.remove()
return True
| flake8 check fails due to update
## Steps to reproduce
Simply run the flake8 linter:
`tox -e flake8`
## Expected behavior
The flake8 linter must pass.
## Actual behavior
The flake8 linter fails with the errors `W504 line break after binary operator` and `F841 local variable 'exc' is assigned to but never used`:
```tox -e flake8
flake8 create: /home/henrylf/python/celery/.tox/flake8
flake8 installdeps: -r/home/henrylf/python/celery/requirements/default.txt, -r/home/henrylf/python/celery/requirements/test.txt, -r/home/henrylf/python/celery/requirements/pkgutils.txt
flake8 develop-inst: /home/henrylf/python/celery
flake8 installed: amqp==2.3.2,atomicwrites==1.2.1,attrs==18.2.0,billiard==3.5.0.4,bumpversion==0.5.3,case==1.5.3,-e git+https://github.com/othalla/celery.git@2a33ba326645bd217a7286929f409ef171cf
d8bf#egg=celery,cyanide==1.3.0,filelock==3.0.9,flake8==3.6.0,flakeplus==1.1.0,kombu==4.2.1,linecache2==1.0.0,mccabe==0.6.1,mock==2.0.0,more-itertools==4.3.0,nose==1.3.7,pbr==5.1.0,pluggy==0.8.0,
py==1.7.0,pycodestyle==2.4.0,pydocstyle==1.1.1,pyflakes==2.0.0,pytest==3.8.2,python-vagrant==0.5.15,pytz==2018.5,six==1.11.0,sphinx2rst==1.1.0,toml==0.10.0,tox==3.5.2,traceback2==1.4.0,Unipath==
1.1,unittest2==1.1.0,vine==1.1.4,virtualenv==16.0.0
flake8 run-test-pre: PYTHONHASHSEED='3688869430'
flake8 runtests: commands[0] | flake8 -j 2 /home/henrylf/python/celery/celery /home/henrylf/python/celery/t
/home/henrylf/python/celery/celery/result.py:830:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/result.py:930:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/result.py:931:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:216:30: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:217:30: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:263:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:264:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:295:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:334:16: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:335:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:336:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/beat.py:568:30: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:277:25: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:484:25: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:487:25: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:556:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:557:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:562:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:563:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:564:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:565:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:566:13: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:576:30: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:586:32: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:636:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/schedules.py:637:17: W504 line break after binary operator
/home/henrylf/python/celery/celery/platforms.py:191:9: F841 local variable 'exc' is assigned to but never used
/home/henrylf/python/celery/celery/platforms.py:206:9: F841 local variable 'exc' is assigned to but never used
/home/henrylf/python/celery/celery/bin/amqp.py:283:9: F841 local variable 'exc' is assigned to but never used
/home/henrylf/python/celery/celery/app/trace.py:397:17: F841 local variable 'exc' is assigned to but never used
...
```
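To make the two error classes concrete, here is a small illustration (variable names are hypothetical, not taken from the Celery code base):
```python
first_value, second_value = 1, 2

# W504 flags a line break *after* a binary operator, like this:
total = (first_value +
         second_value)
# One way to avoid it is to break *before* the operator instead
# (which W503 flags, so projects typically ignore one of the two):
total = (first_value
         + second_value)

# F841 flags a name that is bound but never used:
try:
    int('not a number')
except ValueError as exc:  # F841 -- either use 'exc' or drop the binding
    total = 0
```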
| 2018-10-24T09:25:39 |
||
celery/celery | 5,154 | celery__celery-5154 | [
"5153"
] | 611e63ccc4b06addd41a634903a37b420a5765aa | diff --git a/celery/app/base.py b/celery/app/base.py
--- a/celery/app/base.py
+++ b/celery/app/base.py
@@ -151,8 +151,9 @@ class Celery(object):
Keyword Arguments:
broker (str): URL of the default broker used.
- backend (Union[str, type]): The result store backend class,
- or the name of the backend class to use.
+ backend (Union[str, Type[celery.backends.base.Backend]]):
+ The result store backend class, or the name of the backend
+ class to use.
Default is the value of the :setting:`result_backend` setting.
autofinalize (bool): If set to False a :exc:`RuntimeError`
@@ -161,15 +162,17 @@ class Celery(object):
set_as_current (bool): Make this the global current app.
include (List[str]): List of modules every worker should import.
- amqp (Union[str, type]): AMQP object or class name.
- events (Union[str, type]): Events object or class name.
- log (Union[str, type]): Log object or class name.
- control (Union[str, type]): Control object or class name.
- tasks (Union[str, type]): A task registry, or the name of
+ amqp (Union[str, Type[AMQP]]): AMQP object or class name.
+ events (Union[str, Type[celery.app.events.Events]]): Events object or
+ class name.
+ log (Union[str, Type[Logging]]): Log object or class name.
+ control (Union[str, Type[celery.app.control.Control]]): Control object
+ or class name.
+ tasks (Union[str, Type[TaskRegistry]]): A task registry, or the name of
a registry class.
fixups (List[str]): List of fix-up plug-ins (e.g., see
:mod:`celery.fixups.django`).
- config_source (Union[str, type]): Take configuration from a class,
+ config_source (Union[str, class]): Take configuration from a class,
or object. Attributes may include any settings described in
the documentation.
"""
diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -110,7 +110,8 @@ class Signature(dict):
:ref:`guide-canvas` for the complete guide.
Arguments:
- task (Task, str): Either a task class/instance, or the name of a task.
+ task (Union[Type[celery.app.task.Task], str]): Either a task
+ class/instance, or the name of a task.
args (Tuple): Positional arguments to apply.
kwargs (Dict): Keyword arguments to apply.
options (Dict): Additional options to :meth:`Task.apply_async`.
| Building the documentation throws a variety of warnings
## Checklist
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
On a fresh Ubuntu 18.04 system, clone the repo and follow the steps in CONTRIBUTING.rst to build the documentation files.
## Expected behavior
The docs should build without warnings as stipulated in CONTRIBUTING.rst
## Actual behavior
Warnings are thrown for a number of things, including:
- missing Python packages and system packages
- missing and mislabelled toctree entries
- ambiguous typehints in docstrings (see the sketch after this list)
- a link intended to show the `--port` option for `flower` instead links to the options list for the base `celery` command, which has no `--port` option if `flower` is not installed.
- there are also a few warnings which cannot be fixed with a PR on this project because they originate in kombu and sphinx_celery. I'm working on issues and PRs for those warnings also.
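For illustration, the typehint fix is the kind of change shown in the patch above; the function below is a hypothetical example, not Celery code:
```python
def configure(backend):
    """Hypothetical docstring showing the unambiguous annotation style.

    Arguments:
        backend (Union[str, Type[celery.backends.base.Backend]]): The result
            store backend class, or the name of the backend class to use,
            rather than the ambiguous ``Union[str, type]``.
    """
```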
I recognize that this seems really nit-picky; I do think that some of these issues result in the docs being slightly less useful than they would otherwise be. Certainly as a whole they make it feel much more intimidating to try to contribute to the docs, which seems worth avoiding as well.
| 2018-10-30T01:59:31 |
||
celery/celery | 5,168 | celery__celery-5168 | [
"5161"
] | 40fd143ac1c48146f180a79b9ab87badeb68bc41 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -9,7 +9,7 @@
import itertools
import operator
-from collections import MutableSequence, deque
+from collections import deque
from copy import deepcopy
from functools import partial as _partial
from functools import reduce
@@ -32,6 +32,12 @@
from celery.utils.objects import getitem_property
from celery.utils.text import remove_repeating_from_task, truncate
+try:
+ from collections.abc import MutableSequence
+except ImportError:
+ # TODO: Remove this when we drop Python 2.7 support
+ from collections import MutableSequence
+
__all__ = (
'Signature', 'chain', 'xmap', 'xstarmap', 'chunks',
'group', 'chord', 'signature', 'maybe_signature',
| DeprecationWarning: Using or importing the ABCs from 'collections' is deprecated
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
Run anything that uses `celery` under Python 3.7, verified against `celery==4.2.1`
## Expected behavior
No `DeprecationWarning` should be seen
## Actual behavior
```
.../lib/python3.7/site-packages/celery/canvas.py:12
.../lib/python3.7/site-packages/celery/canvas.py:12: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import MutableSequence, deque
```
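The usual compatibility shim for this warning, and essentially what the patch above does, looks like this:
```python
try:
    from collections.abc import MutableSequence  # Python 3.3+
except ImportError:  # Python 2.7: the ABCs still live in collections
    from collections import MutableSequence
```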
| Note that the `collections.abc` module does not exist on Python 2.7. | 2018-11-12T11:01:01 |
|
celery/celery | 5,232 | celery__celery-5232 | [
"4377",
"4377"
] | 064a86308c9db25ece08771314678d272ab2dbd1 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -170,6 +170,13 @@ def _call_task_errbacks(self, request, exc, traceback):
for errback in request.errbacks:
errback = self.app.signature(errback)
if (
+ # Celery tasks type created with the @task decorator have the
+ # __header__ property, but Celery task created from Task
+ # class do not have this property.
+ # That's why we have to check if this property exists before
+ # checking is it partial function.
+ hasattr(errback.type, '__header__') and
+
# workaround to support tasks with bind=True executed as
# link errors. Otherwise retries can't be used
not isinstance(errback.type.__header__, partial) and
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -8,7 +8,7 @@
from case import ANY, Mock, call, patch, skip
from celery import chord, group, states, uuid
-from celery.app.task import Context
+from celery.app.task import Context, Task
from celery.backends.base import (BaseBackend, DisabledBackend,
KeyValueStoreBackend, _nulldict)
from celery.exceptions import ChordError, TimeoutError
@@ -383,6 +383,23 @@ def test_mark_as_failure__errback(self):
b.mark_as_failure('id', exc, request=request)
assert self.errback.last_result == 5
+ @patch('celery.backends.base.group')
+ def test_class_based_task_can_be_used_as_error_callback(self, mock_group):
+ b = BaseBackend(app=self.app)
+ b._store_result = Mock()
+
+ class TaskBasedClass(Task):
+ def run(self):
+ pass
+
+ TaskBasedClass = self.app.register_task(TaskBasedClass())
+
+ request = Mock(name='request')
+ request.errbacks = [TaskBasedClass.subtask(args=[], immutable=True)]
+ exc = KeyError()
+ b.mark_as_failure('id', exc, request=request)
+ mock_group.assert_called_once_with(request.errbacks, app=self.app)
+
def test_mark_as_failure__chord(self):
b = BaseBackend(app=self.app)
b._store_result = Mock()
| Using a class-based task as errback results in AttributeError '__header__'
## Checklist
- [X] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
[celery-report.txt](https://github.com/celery/celery/files/1456808/celery-report.txt)
- [ ] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
Create a tasks file:
```
from celery import Celery, Task

celery = Celery('tasks', broker='redis://localhost:6379/6', backend='redis://localhost:6379/6')


@celery.task
def ErrTask1(req):
    print('failure')


class CustomTask(Task):
    def run(self, var):
        print('running %s' % var)
        if var == "err":
            raise Exception("err")


CustomTask = celery.register_task(CustomTask())


class ErrTask2(Task):
    def run(self):
        print('failure')


ErrTask2 = celery.register_task(ErrTask2())
```
Start celery with it:
`celery worker -A tasks:celery -l info`
Start a Python shell and run CustomTask with an error callback:
```
from tasks import CustomTask,ErrTask1,ErrTask2
a = CustomTask.apply_async(['err'],link_error=ErrTask1.s())
a.result
b = CustomTask.apply_async(['err'],link_error=ErrTask2.s())
b.result
```
## Expected behavior
When running with ErrTask1 as the error callback (the one with the decorator) we get the expected result:
```
>>> a = CustomTask.apply_async(['err'],link_error=ErrTask1.s())
>>> a.result
Exception(u'err',)
```
## Actual behavior
When running with ErrTask2 as the error callback (the one that is defined class-based) we get the AttributeError:
```
>>> b = CustomTask.apply_async(['err'],link_error=ErrTask2.s())
>>> b.result
AttributeError(u"'ErrTask2' object has no attribute '__header__'",)
```
It seems the changed behaviour of the error callback in celery 4 has something to do with it. The class-based tasks miss some attributes, which doesn't seem to be an issue for normal execution but only for the error callback. Probably there is a difference in initialization between `celery.register_task()` and the decorator.
A workaround is to define the error callbacks with the task decorator.
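For reference, a decorator-based error callback that avoids the problem, reusing the `celery` app and `CustomTask` from the example above, looks like this:
```python
@celery.task
def err_task(req):
    print('failure')


b = CustomTask.apply_async(['err'], link_error=err_task.s())
```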
| @AlexHill @georgepsarakis could you check
I see that this issue #3723 is related too.
Edit: Somehow related #4022 too.
Hi,
is there any resolution to this issue? We are facing it on 4.1.0 (and 4.1.1). I see that the task is closed but couldn't find any code related to it.
@livenson this should be on master 4.2rc versions as you can see the milestone 4.2 tag
https://github.com/celery/celery/pull/4545
@auvipy thanks for reply, but I still don't get it –– #4545 adds additional handling but still expects __header__ to be there, which is not happening for class-based Tasks, at least I could only find a single place where it is set in Celery - https://github.com/celery/celery/blob/2636251a1249634258c1910c6400bf2ccf28b8ed/celery/app/base.py#L445 . I.e. only in function decorator.
Reopening. Could you please dig a bit more and let us know? Could you try master?
For a fix, you can use this solution: https://github.com/opennode/waldur-core/blob/develop/waldur_core/core/tasks.py#L31
Anyone up for a PR with @maximprokopenko's proposed solution?
PR could be made if you think it makes sense -- basically, our problem in Waldur was that we are still using "legacy" base Task class with self-registration. So the question is if it's a correct workaround to apply -- or perhaps it should rather be fixed in self-registration of legacy class. @auvipy , any feelings about it?
Which version of celery are you using? Is your code open sourced?
4.1.0 (tried with 4.1.1 -- but it's the same for that part of the code).
Yes. @maximprokopenko actually linked to our project.
@thedrow @georgepsarakis what are your thoughts on this, guys?
@ask Could you please validate proposed fix?
@auvipy @livenson @stevenwbe just tested the given example with 4.2.0rc3+ and it seems to work, as it has the expected behavior, both the error callback is executed on the worker and the result property contains `Exception('err')` . Can you please cross-check again?
The issue is back in 4.2.1.
@sbj-ss If you claim we have a regression, we'll need you to provide a test case in order for us to verify it.
@thedrow you can use the test case from the 1st post.
@thedrow @sbj-ss ,
It is indeed correct that it occurs again. Output of the above testcase:
(venv2)$ python
Python 2.7.5 (default, May 31 2018, 09:41:32)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from tasks import CustomTask,ErrTask1,ErrTask2
>>> a = CustomTask.apply_async(['err'],link_error=ErrTask1.s())
>>> a.result
Exception(u'err',)
>>> b = CustomTask.apply_async(['err'],link_error=ErrTask2.s())
>>> b.result
AttributeError(u"'ErrTask2' object has no attribute '__header__'",)
>>>
(venv2) $ pip freeze | grep celery
celery==4.2.1
celery-redis-sentinel==0.3.0
| 2018-12-13T20:12:06 |
celery/celery | 5,297 | celery__celery-5297 | [
"5265"
] | c1d0bfea9ad98477cbc1def99157fe5109555500 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -1045,7 +1045,13 @@ def link(self, sig):
return self.tasks[0].link(sig)
def link_error(self, sig):
- sig = sig.clone().set(immutable=True)
+ try:
+ sig = sig.clone().set(immutable=True)
+ except AttributeError:
+ # See issue #5265. I don't use isinstance because current tests
+ # pass a Mock object as argument.
+ sig['immutable'] = True
+ sig = Signature.from_dict(sig)
return self.tasks[0].link_error(sig)
def _prepared(self, tasks, partial_args, group_id, root_id, app,
| Complex canvas might raise AttributeError: 'dict' object has no attribute 'clone'
WARNING: I'm still trying to collect all the necessary details and, if possible, provide a reproducible example. For the time, the error is happening constantly in one server, but doesn't happen in another one with the same source code.
I'm getting the following:
```
Traceback (most recent call last):
File "eggs/celery-4.2.1-py2.7.egg/celery/app/trace.py", line 439, in trace_task
parent_id=uuid, root_id=root_id,
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 1232, in apply_async
return self.run(tasks, body, args, task_id=task_id, **options)
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 1277, in run
header_result = header(*partial_args, task_id=group_id, **options)
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 953, in __call__
return self.apply_async(partial_args, **options)
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 978, in apply_async
args=args, kwargs=kwargs, **options))
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 1054, in _apply_tasks
**options)
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 557, in apply_async
dict(self.options, **options) if options else self.options))
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 573, in run
task_id, group_id, chord,
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 683, in prepare_steps
task.link_error(errback)
File "eggs/celery-4.2.1-py2.7.egg/celery/canvas.py", line 1016, in link_error
sig = sig.clone().set(immutable=True)
AttributeError: 'dict' object has no attribute 'clone'
```
This happens when trying to execute a complex canvas. The canvas itself is dynamically created. We have a single task that dispatches the actual calls, so all jobs are instances of this task with different arguments. The task is: https://github.com/merchise-autrement/odoo/blob/merchise-10.0/odoo/jobs.py#L746
The canvas is constructed in: https://github.com/merchise/xopgi.base/blob/master/xopgi/xopgi_cdr/cdr_agent.py#L181
Basically, we have a layered graph of events that depend on several evidences, which in turn depend on variables. We compute each variable in a job. Each variable job is linked to a group of jobs to update the dependent evidences. Each evidence job is linked to a group of event jobs. Each job has an on_error callback to signal that the whole cycle had some errors.
I'm trying to detect if the issue comes from a particular workflow of jobs, or some other cause.
I'm using celery 4.2.1, but I tested with master.
| This went away after updating the redis server. The server showing the error was using redis 3. After upgrading to redis 4 the issue went away.
I could reproduce the error in my box using redis 3.
@thedrow Should we reopen this, or just leave this issue as a historical note? I don't really know how many people will be affected. Redis 5 was released some weeks ago. Maybe most people are using Redis 4.
Oops! I have just been hit by the same error with Redis 5. So, I'm guessing I was just lucky when I upgraded the server in the failing box. I'm reopening, till I know more.
Update: The server (with redis 4) is now again raising the AttributeError.
The only workaround I have found that makes this issue go away in my case is to change the `link_error` method from:
```python
def link_error(self, sig):
    sig = sig.clone().set(immutable=True)
    return self.tasks[0].link_error(sig)
```
to:
```python
def link_error(self, sig):
    if not isinstance(sig, Signature):
        sig = Signature.from_dict(sig)
    sig = sig.clone().set(immutable=True)
    return self.tasks[0].link_error(sig)
```
No explanation or reproducible scenario yet. But now that I'm "out of the woods" with my server running, I can dedicate some time to try and find the root cause of this issue.
The fix itself makes sense.
Though a more preferable solution would be:
```python
def link_error(self, sig):
    if not isinstance(sig, Signature):
        sig['immutable'] = True
        sig = Signature.from_dict(sig)
    else:
        sig = sig.clone().set(immutable=True)
    return self.tasks[0].link_error(sig)
```
I'd still like to see a test case so that we can include it in our test suite.
I noticed I had `task_ignore_result` set to True, but changing to False didn't help.
@thedrow I'm trying to create the failing scenario. But it could take some time. This is the first time I'm working with canvases; and my case is rather complicated, so producing a simple test case may take some time.
I'm trying to use mypy to add some annotations, but I see that some annotations are already there and are invalid:
```
$ mypy -2 -p celery --ignore-missing-imports
celery/backends/mongodb.py:31: error: Name 'InvalidDocument' already defined (possibly by an import)
celery/beat.py:658: error: Name '_Process' already defined on line 656
celery/contrib/pytest.py:18: error: Type signature has too few arguments
celery/contrib/pytest.py:46: error: Type signature has too few arguments
celery/contrib/pytest.py:66: error: Type signature has too few arguments
celery/contrib/pytest.py:163: error: Type signature has too few arguments
```
What's your stand on making mypy part of the CI pipeline?
It can but we'll need to invest a lot of work in making it pass.
Let's create a different issue about it. | 2019-01-21T20:32:04 |
|
celery/celery | 5,345 | celery__celery-5345 | [
"4105"
] | 76d10453ab9267c45b12d7c60b5ee0e4113b4369 | diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -2,6 +2,8 @@
"""Redis result store backend."""
from __future__ import absolute_import, unicode_literals
+import time
+
from functools import partial
from ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED
@@ -117,9 +119,12 @@ def stop(self):
self._pubsub.close()
def drain_events(self, timeout=None):
- message = self._pubsub.get_message(timeout=timeout)
- if message and message['type'] == 'message':
- self.on_state_change(self._decode_result(message['data']), message)
+ if self._pubsub:
+ message = self._pubsub.get_message(timeout=timeout)
+ if message and message['type'] == 'message':
+ self.on_state_change(self._decode_result(message['data']), message)
+ elif timeout:
+ time.sleep(timeout)
def consume_from(self, task_id):
if self._pubsub is None:
| diff --git a/t/unit/backends/test_redis.py b/t/unit/backends/test_redis.py
--- a/t/unit/backends/test_redis.py
+++ b/t/unit/backends/test_redis.py
@@ -189,6 +189,11 @@ def test_on_state_change(self, parent_method, cancel_for):
parent_method.assert_called_once_with(meta, message)
cancel_for.assert_not_called()
+ def test_drain_events_before_start(self):
+ consumer = self.get_consumer()
+ # drain_events shouldn't crash when called before start
+ consumer.drain_events(0.001)
+
class test_RedisBackend:
def get_backend(self):
diff --git a/t/unit/backends/test_rpc.py b/t/unit/backends/test_rpc.py
--- a/t/unit/backends/test_rpc.py
+++ b/t/unit/backends/test_rpc.py
@@ -8,6 +8,19 @@
from celery.backends.rpc import RPCBackend
+class test_RPCResultConsumer:
+ def get_backend(self):
+ return RPCBackend(app=self.app)
+
+ def get_consumer(self):
+ return self.get_backend().result_consumer
+
+ def test_drain_events_before_start(self):
+ consumer = self.get_consumer()
+ # drain_events shouldn't crash when called before start
+ consumer.drain_events(0.001)
+
+
class test_RPCBackend:
def setup(self):
| ResultConsumer greenlet race condition
When I tried to get a result from a task inside a greenlet, I got an exception:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gevent/greenlet.py", line 534, in run
result = self._run(*self.args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/async.py", line 83, in run
self.result_consumer.drain_events(timeout=1)
File "/usr/local/lib/python2.7/dist-packages/celery/backends/redis.py", line 69, in drain_events
m = self._pubsub.get_message(timeout=timeout)
AttributeError: 'NoneType' object has no attribute 'get_message'
<Greenlet at 0x7efd9d8ba550: <bound method geventDrainer.run of <celery.backends.async.geventDrainer object at 0x7efd9d99f550>>> failed with AttributeError
```
Celery version:
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.12
billiard:3.5.0.2 redis:2.10.5
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis
```
After some recon, I've found that the error occurs when the `threading` module is monkey-patched by `gevent.monkey`. It seems that the `_pubsub` attribute of `celery.backends.async.ResultConsumer` is initialized a bit too late. The bug is triggered by a race between the `greenletDrainer.start()` and `ResultConsumer.start()` initializers.
```python
def add_pending_result(self, result, weak=False, start_drainer=True):
    if start_drainer:
        # Spawns greenlet and starts to drain events
        self.result_consumer.drainer.start()
    try:
        self._maybe_resolve_from_buffer(result)
    except Empty:
        # Initializes pubsub needed by drainer
        self._add_pending_result(result.id, result, weak=weak)
    return result
```
```python
class greenletDrainer(Drainer):
    # ...
    def run(self):
        self._started.set()
        while not self._stopped.is_set():
            try:
                self.result_consumer.drain_events(timeout=1)
            except socket.timeout:
                pass
        self._shutdown.set()

    # self.result_consumer.drainer.start()
    def start(self):
        if not self._started.is_set():
            self._g = self.spawn(self.run)
            # Switches immediately to self.run
            self._started.wait()
```
Issue related to #3452.
A filtered Python trace and a way to reproduce the bug can be found [in this Gist](https://gist.github.com/psrok1/8dc27d3cdf367573183fc3f1e5524293).
| Bypass:
```python
# Manually initialize consumer
celery_app.backend.result_consumer.start("")


def fetch(uuid):
    res = AsyncResult(uuid, app=celery_app).get()
    print res
```
Can you send a patch with a test to fix the issue? | 2019-02-17T10:47:59
celery/celery | 5,348 | celery__celery-5348 | [
"5349"
] | 0736cff9d908c0519e07babe4de9c399c87cb32b | diff --git a/celery/app/builtins.py b/celery/app/builtins.py
--- a/celery/app/builtins.py
+++ b/celery/app/builtins.py
@@ -78,7 +78,10 @@ def unlock_chord(self, group_id, callback, interval=None,
callback = maybe_signature(callback, app=app)
try:
with allow_join_result():
- ret = j(timeout=3.0, propagate=True)
+ ret = j(
+ timeout=app.conf.result_chord_join_timeout,
+ propagate=True,
+ )
except Exception as exc: # pylint: disable=broad-except
try:
culprit = next(deps._failed_join_report())
diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -210,6 +210,7 @@ def __repr__(self):
extended=Option(False, type='bool'),
serializer=Option('json'),
backend_transport_options=Option({}, type='dict'),
+ chord_join_timeout=Option(3.0, type='float'),
),
elasticsearch=Namespace(
__old__=old_ns('celery_elasticsearch'),
| diff --git a/t/unit/tasks/test_chord.py b/t/unit/tasks/test_chord.py
--- a/t/unit/tasks/test_chord.py
+++ b/t/unit/tasks/test_chord.py
@@ -177,6 +177,28 @@ class NeverReady(TSR):
def test_is_in_registry(self):
assert 'celery.chord_unlock' in self.app.tasks
+ def _test_unlock_join_timeout(self, timeout):
+ class MockJoinResult(TSR):
+ is_ready = True
+ value = [(None,)]
+ join = Mock(return_value=value)
+ join_native = join
+
+ self.app.conf.result_chord_join_timeout = timeout
+ with self._chord_context(MockJoinResult):
+ MockJoinResult.join.assert_called_with(
+ timeout=timeout,
+ propagate=True,
+ )
+
+ def test_unlock_join_timeout_default(self):
+ self._test_unlock_join_timeout(
+ timeout=self.app.conf.result_chord_join_timeout,
+ )
+
+ def test_unlock_join_timeout_custom(self):
+ self._test_unlock_join_timeout(timeout=5.0)
+
class test_chord(ChordCase):
| Allow GroupResult.join timeout to be configurable in celery.chord_unlock
# Checklist
- [x] I have checked the issues list for similar or identical enhancement to an existing feature.
- [x] I have checked the commit log to find out if the same enhancement was already implemented in master.
# Brief Summary
Pull request: #5348
Previously the timeout passed down to `GroupResult.join` in `celery.chord_unlock` was hardcoded to 3.0 seconds. This introduces the new configuration option `result_chord_join_timeout` which allows users to configure the timeout. The default value remains as 3.0.
This change will solve the issue of unwanted timeouts caused when there's a moderate latency between the Celery workers and the configured backend or the result set to join is relatively large (5000+ task results to join).
# Design
## Architectural Considerations
None
## Proposed Behavior
See summary.
## Proposed UI/UX
The introduced configuration can be applied as follows:
```python
app.conf.result_chord_join_timeout = 3.0 # default timeout
app.conf.result_chord_join_timeout = 10.0 # 10 second timeout
app.conf.result_chord_join_timeout = None # No timeout
```
## Diagrams
N/A
## Alternatives
None
| 2019-02-20T09:22:08 |
|
celery/celery | 5,355 | celery__celery-5355 | [
"5347"
] | 210ad35e8c8a0ed26abbec45126700eebec1d0ef | diff --git a/celery/fixups/django.py b/celery/fixups/django.py
--- a/celery/fixups/django.py
+++ b/celery/fixups/django.py
@@ -57,8 +57,10 @@ def __init__(self, app):
self._worker_fixup = None
def install(self):
- # Need to add project directory to path
- sys.path.append(os.getcwd())
+ # Need to add project directory to path.
+ # The project directory has precedence over system modules,
+ # so we prepend it to the path.
+ sys.path.prepend(os.getcwd())
self._settings = symbol_by_name('django.conf:settings')
self.app.loader.now = self.now
| diff --git a/t/unit/fixups/test_django.py b/t/unit/fixups/test_django.py
--- a/t/unit/fixups/test_django.py
+++ b/t/unit/fixups/test_django.py
@@ -91,7 +91,7 @@ def test_install(self, patching):
f.install()
self.sigs.worker_init.connect.assert_called_with(f.on_worker_init)
assert self.app.loader.now == f.now
- self.p.append.assert_called_with('/opt/vandelay')
+ self.p.prepend.assert_called_with('/opt/vandelay')
def test_now(self):
with self.fixup_context(self.app) as (f, _, _):
| Django fixup appends to PYTHONPATH instead of prepending
Hi,
## Environment & Settings
**Celery version**: 4.2.1 (windowlicker)
# Steps to Reproduce
We are using Celery + Django in [dissemin](https://github.com/dissemin/dissemin/). We have a Django app named "statistics", a name that conflicts with a Python 3 standard-library module. This should be fine in principle, as long as `PYTHONPATH` is set so that local modules take precedence over system ones.
When running the Celery CLI, however, the system-wide module apparently takes precedence over the local ones.
I traced this issue back to [this `sys.path` tweak](https://github.com/celery/celery/blob/072dab85261599234341cc714b0d6f0caca20f00/celery/fixups/django.py#L60-L61), which **appends** the local path instead of prepending it.
I may have missed something, but is there a reason why it is important to append rather than prepend it in this context?
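(For concreteness, here is a minimal sketch of the prepend variant being asked about; it is illustrative only, and uses `insert(0, ...)` since Python lists have no `prepend` method:)

```python
import os
import sys

# Put the project directory at the front of sys.path so that local modules
# (e.g. the "statistics" app) shadow same-named standard-library modules.
sys.path.insert(0, os.getcwd())
```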
# Expected Behavior
Celery should load the local module as expected.
# Actual Behavior
```
# When going through celery CLI
sys.path == ['/Users/lverney/.local/share/virtualenvs/dissemin3/bin', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python36.zip', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/site-packages', '/Users/lverney/.local/share/virtualenvs/dissemin3/src/django-allauth', '/Users/lverney/tmp/dissemin']
# Without celery
sys.path == ['', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python36.zip', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/site-packages', '/Users/lverney/.local/share/virtualenvs/dissemin3/src/django-allauth']
```
We can see that the current path is appended, not prepended. Therefore, "system" modules take precedence over the ones from the local project.
# Workaround
For people experiencing this issue, `PYTHONPATH=$(pwd) celery …` is a workaround.
Best,
/cc @wetneb who first noticed this issue.
| could you come up with a possible fix?
@auvipy fixing is easy: use `prepend` instead of `append`. What I am more concerned about is the sort of testing you would expect for that kind of bug; it looks a bit hard to test to me. Happy to provide a PR otherwise. | 2019-02-22T12:43:59 |
celery/celery | 5,356 | celery__celery-5356 | [
"5355"
] | 128433770aa4524de2bf1eead3a2309708d7b51c | diff --git a/celery/fixups/django.py b/celery/fixups/django.py
--- a/celery/fixups/django.py
+++ b/celery/fixups/django.py
@@ -60,7 +60,7 @@ def install(self):
# Need to add project directory to path.
# The project directory has precedence over system modules,
# so we prepend it to the path.
- sys.path.prepend(os.getcwd())
+ sys.path.insert(0, os.getcwd())
self._settings = symbol_by_name('django.conf:settings')
self.app.loader.now = self.now
| diff --git a/t/unit/fixups/test_django.py b/t/unit/fixups/test_django.py
--- a/t/unit/fixups/test_django.py
+++ b/t/unit/fixups/test_django.py
@@ -91,7 +91,7 @@ def test_install(self, patching):
f.install()
self.sigs.worker_init.connect.assert_called_with(f.on_worker_init)
assert self.app.loader.now == f.now
- self.p.prepend.assert_called_with('/opt/vandelay')
+ self.p.insert.assert_called_with(0, '/opt/vandelay')
def test_now(self):
with self.fixup_context(self.app) as (f, _, _):
| Prepend to sys.path in the Django fixup instead of appending.
This makes sure that project modules have precedence over system ones.
Closes #5347.
## Description
This follows @Phyks's suggestion of a fix for #5347, by prepending instead of appending to the system path, to ensure that the project modules are not hidden by system-wide ones.
| # [Codecov](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=h1) Report
> Merging [#5355](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=desc) into [master](https://codecov.io/gh/celery/celery/commit/210ad35e8c8a0ed26abbec45126700eebec1d0ef?src=pr&el=desc) will **not change** coverage.
> The diff coverage is `100%`.
[](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5355 +/- ##
=======================================
Coverage 83.37% 83.37%
=======================================
Files 144 144
Lines 16450 16450
Branches 2047 2047
=======================================
Hits 13716 13716
Misses 2527 2527
Partials 207 207
```
| [Impacted Files](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [celery/fixups/django.py](https://codecov.io/gh/celery/celery/pull/5355/diff?src=pr&el=tree#diff-Y2VsZXJ5L2ZpeHVwcy9kamFuZ28ucHk=) | `93.07% <100%> (ø)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=footer). Last update [210ad35...d235558](https://codecov.io/gh/celery/celery/pull/5355?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
Thank you a million @auvipy!
welcome 2 million :) @wetneb
I don't think this is correct, Python lists don't have a `prepend` method. Was this tested?
Usually to add something as the first element you would use:
```python
sys.path.insert(0, os.getcwd())
```
This was reported by a user on IRC:
```
AttributeError: 'list' object has no attribute 'prepend'
```
@auvipy I'd suggest we back this out.
Oops, sorry about that! That's a pretty epic fail on my part. | 2019-02-22T15:51:11 |
celery/celery | 5,373 | celery__celery-5373 | [
"4899"
] | 4c633f02c2240d6bfe661d532fb0734053243606 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -200,9 +200,15 @@ def _call_task_errbacks(self, request, exc, traceback):
# need to do so if the errback only takes a single task_id arg.
task_id = request.id
root_id = request.root_id or task_id
- group(old_signature, app=self.app).apply_async(
- (task_id,), parent_id=task_id, root_id=root_id
- )
+ g = group(old_signature, app=self.app)
+ if self.app.conf.task_always_eager or request.delivery_info.get('is_eager', False):
+ g.apply(
+ (task_id,), parent_id=task_id, root_id=root_id
+ )
+ else:
+ g.apply_async(
+ (task_id,), parent_id=task_id, root_id=root_id
+ )
def mark_as_revoked(self, task_id, reason='',
request=None, store_result=True, state=states.REVOKED):
diff --git a/t/integration/conftest.py b/t/integration/conftest.py
--- a/t/integration/conftest.py
+++ b/t/integration/conftest.py
@@ -1,7 +1,6 @@
from __future__ import absolute_import, unicode_literals
import os
-from functools import wraps
import pytest
@@ -18,25 +17,11 @@
__all__ = (
'celery_app',
'celery_session_worker',
- 'flaky',
'get_active_redis_channels',
'get_redis_connection',
)
-def flaky(fun):
- @wraps(fun)
- def _inner(*args, **kwargs):
- for i in reversed(range(3)):
- try:
- return fun(*args, **kwargs)
- except Exception:
- if not i:
- raise
- _inner.__wrapped__ = fun
- return _inner
-
-
def get_redis_connection():
from redis import StrictRedis
return StrictRedis(host=os.environ.get('REDIS_HOST'))
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -14,6 +14,7 @@
@shared_task
def identity(x):
+ """Return the argument."""
return x
@@ -106,6 +107,12 @@ def print_unicode(log_message='hå它 valmuefrø', print_message='hiöäüß'
print(print_message)
+@shared_task
+def return_exception(e):
+ """Return a tuple containing the exception message and sentinel value."""
+ return e, True
+
+
@shared_task
def sleeping(i, **_):
"""Task sleeping for ``i`` seconds, and returning nothing."""
@@ -125,23 +132,22 @@ def collect_ids(self, res, i):
are :task:`ids`: returns a tuple of::
(previous_result, (root_id, parent_id, i))
-
"""
return res, (self.request.root_id, self.request.parent_id, i)
@shared_task(bind=True, expires=60.0, max_retries=1)
-def retry_once(self):
+def retry_once(self, *args, expires=60.0, max_retries=1, countdown=0.1):
"""Task that fails and is retried. Returns the number of retries."""
if self.request.retries:
return self.request.retries
- raise self.retry(countdown=0.1)
+ raise self.retry(countdown=countdown,
+ max_retries=max_retries)
@shared_task
def redis_echo(message):
- """Task that appends the message to a redis list"""
-
+ """Task that appends the message to a redis list."""
redis_connection = get_redis_connection()
redis_connection.rpush('redis-echo', message)
@@ -192,12 +198,24 @@ def build_chain_inside_task(self):
class ExpectedException(Exception):
- pass
+ """Sentinel exception for tests."""
+
+ def __eq__(self, other):
+ return (
+ other is not None and
+ isinstance(other, ExpectedException) and
+ self.args == other.args
+ )
+
+ def __hash__(self):
+ return hash(self.args)
@shared_task
def fail(*args):
- raise ExpectedException('Task expected to fail')
+ """Task that simply raises ExpectedException."""
+ args = ("Task expected to fail",) + args
+ raise ExpectedException(*args)
@shared_task
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -4,33 +4,89 @@
import pytest
-from celery import chain, chord, group
+from celery import chain, chord, group, signature
from celery.exceptions import TimeoutError
from celery.result import AsyncResult, GroupResult, ResultSet
-from .conftest import flaky, get_active_redis_channels, get_redis_connection
-from .tasks import (add, add_chord_to_chord, add_replaced, add_to_all,
- add_to_all_to_chord, build_chain_inside_task, chord_error,
- collect_ids, delayed_sum, delayed_sum_with_soft_guard,
- fail, identity, ids, print_unicode, raise_error,
- redis_echo, second_order_replace1, tsum, return_priority)
+from .conftest import get_active_redis_channels, get_redis_connection
+from .tasks import (ExpectedException, add, add_chord_to_chord, add_replaced,
+ add_to_all, add_to_all_to_chord, build_chain_inside_task,
+ chord_error, collect_ids, delayed_sum,
+ delayed_sum_with_soft_guard, fail, identity, ids,
+ print_unicode, raise_error, redis_echo, retry_once,
+ return_exception, return_priority, second_order_replace1,
+ tsum)
TIMEOUT = 120
+class test_link_error:
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
+ def test_link_error_eager(self):
+ exception = ExpectedException("Task expected to fail", "test")
+ result = fail.apply(args=("test", ), link_error=return_exception.s())
+ actual = result.get(timeout=TIMEOUT, propagate=False)
+ assert actual == exception
+
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
+ def test_link_error(self):
+ exception = ExpectedException("Task expected to fail", "test")
+ result = fail.apply(args=("test", ), link_error=return_exception.s())
+ actual = result.get(timeout=TIMEOUT, propagate=False)
+ assert actual == exception
+
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
+ def test_link_error_callback_error_callback_retries_eager(self):
+ exception = ExpectedException("Task expected to fail", "test")
+ result = fail.apply(
+ args=("test", ),
+ link_error=retry_once.s(countdown=None)
+ )
+ assert result.get(timeout=TIMEOUT, propagate=False) == exception
+
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
+ def test_link_error_callback_retries(self):
+ exception = ExpectedException("Task expected to fail", "test")
+ result = fail.apply_async(
+ args=("test", ),
+ link_error=retry_once.s(countdown=None)
+ )
+ assert result.get(timeout=TIMEOUT, propagate=False) == exception
+
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
+ def test_link_error_using_signature_eager(self):
+ fail = signature('t.integration.tasks.fail', args=("test", ))
+ retrun_exception = signature('t.integration.tasks.return_exception')
+
+ fail.link_error(retrun_exception)
+
+ exception = ExpectedException("Task expected to fail", "test")
+ assert (fail.apply().get(timeout=TIMEOUT, propagate=False), True) == (exception, True)
+
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
+ def test_link_error_using_signature(self):
+ fail = signature('t.integration.tasks.fail', args=("test", ))
+ retrun_exception = signature('t.integration.tasks.return_exception')
+
+ fail.link_error(retrun_exception)
+
+ exception = ExpectedException("Task expected to fail", "test")
+ assert (fail.delay().get(timeout=TIMEOUT, propagate=False), True) == (exception, True)
+
+
class test_chain:
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_simple_chain(self, manager):
c = add.s(4, 4) | add.s(8) | add.s(16)
assert c().get(timeout=TIMEOUT) == 32
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_single_chain(self, manager):
c = chain(add.s(3, 4))()
assert c.get(timeout=TIMEOUT) == 7
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_complex_chain(self, manager):
c = (
add.s(2, 2) | (
@@ -41,7 +97,7 @@ def test_complex_chain(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [64, 65, 66, 67]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_group_results_in_chain(self, manager):
# This adds in an explicit test for the special case added in commit
# 1e3fcaa969de6ad32b52a3ed8e74281e5e5360e6
@@ -73,7 +129,7 @@ def test_chain_on_error(self, manager):
with pytest.raises(ExpectedException):
res.parent.get(propagate=True)
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_inside_group_receives_arguments(self, manager):
c = (
add.s(5, 6) |
@@ -82,7 +138,7 @@ def test_chain_inside_group_receives_arguments(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [14, 14]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_eager_chain_inside_task(self, manager):
from .tasks import chain_add
@@ -93,7 +149,7 @@ def test_eager_chain_inside_task(self, manager):
chain_add.app.conf.task_always_eager = prev
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_group_chord_group_chain(self, manager):
from celery.five import bytes_if_py2
@@ -120,7 +176,7 @@ def test_group_chord_group_chain(self, manager):
assert set(redis_messages[4:]) == after_items
redis_connection.delete('redis-echo')
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_group_result_not_has_cache(self, manager):
t1 = identity.si(1)
t2 = identity.si(2)
@@ -130,7 +186,7 @@ def test_group_result_not_has_cache(self, manager):
result = task.delay()
assert result.get(timeout=TIMEOUT) == [1, 2, [3, 4]]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_second_order_replace(self, manager):
from celery.five import bytes_if_py2
@@ -150,7 +206,7 @@ def test_second_order_replace(self, manager):
expected_messages = [b'In A', b'In B', b'In/Out C', b'Out B', b'Out A']
assert redis_messages == expected_messages
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_parent_ids(self, manager, num=10):
assert_ping(manager)
@@ -218,7 +274,7 @@ def test_chain_error_handler_with_eta(self, manager):
result = c.get()
assert result == 10
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_groupresult_serialization(self, manager):
"""Test GroupResult is correctly serialized
to save in the result backend"""
@@ -232,7 +288,7 @@ def test_groupresult_serialization(self, manager):
assert len(result) == 2
assert isinstance(result[0][1], list)
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_task_a_group_and_a_chord(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -247,7 +303,7 @@ def test_chain_of_task_a_group_and_a_chord(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == 8
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_chords_as_groups_chained_to_a_task_with_two_tasks(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -264,7 +320,7 @@ def test_chain_of_chords_as_groups_chained_to_a_task_with_two_tasks(self, manage
res = c()
assert res.get(timeout=TIMEOUT) == 12
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_chords_with_two_tasks(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -280,7 +336,7 @@ def test_chain_of_chords_with_two_tasks(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == 12
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_a_chord_and_a_group_with_two_tasks(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -296,7 +352,7 @@ def test_chain_of_a_chord_and_a_group_with_two_tasks(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [6, 6]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_a_chord_and_a_task_and_a_group(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -311,7 +367,7 @@ def test_chain_of_a_chord_and_a_task_and_a_group(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [6, 6]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_a_chord_and_two_tasks_and_a_group(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -327,7 +383,7 @@ def test_chain_of_a_chord_and_two_tasks_and_a_group(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [7, 7]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_of_a_chord_and_three_tasks_and_a_group(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -347,14 +403,14 @@ def test_chain_of_a_chord_and_three_tasks_and_a_group(self, manager):
class test_result_set:
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_result_set(self, manager):
assert_ping(manager)
rs = ResultSet([add.delay(1, 1), add.delay(2, 2)])
assert rs.get(timeout=TIMEOUT) == [2, 4]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_result_set_error(self, manager):
assert_ping(manager)
@@ -366,7 +422,7 @@ def test_result_set_error(self, manager):
class test_group:
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_ready_with_exception(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -376,7 +432,7 @@ def test_ready_with_exception(self, manager):
while not result.ready():
pass
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_empty_group_result(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -388,7 +444,7 @@ def test_empty_group_result(self, manager):
task = GroupResult.restore(result.id)
assert task.results == []
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_parent_ids(self, manager):
assert_ping(manager)
@@ -408,7 +464,7 @@ def test_parent_ids(self, manager):
assert parent_id == expected_parent_id
assert value == i + 2
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_nested_group(self, manager):
assert_ping(manager)
@@ -426,7 +482,7 @@ def test_nested_group(self, manager):
assert res.get(timeout=TIMEOUT) == [11, 101, 1001, 2001]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_large_group(self, manager):
assert_ping(manager)
@@ -451,8 +507,7 @@ def assert_ping(manager):
class test_chord:
-
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_redis_subscribed_channels_leak(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -493,7 +548,7 @@ def test_redis_subscribed_channels_leak(self, manager):
assert channels_after_count == initial_channels_count
assert set(channels_after) == set(initial_channels)
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_replaced_nested_chord(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -513,7 +568,7 @@ def test_replaced_nested_chord(self, manager):
res1 = c1()
assert res1.get(timeout=TIMEOUT) == [29, 38]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_add_to_chord(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -522,7 +577,7 @@ def test_add_to_chord(self, manager):
res = c()
assert res.get() == [0, 5, 6, 7]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_add_chord_to_chord(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -531,7 +586,7 @@ def test_add_chord_to_chord(self, manager):
res = c()
assert res.get() == [0, 5 + 6 + 7]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_eager_chord_inside_task(self, manager):
from .tasks import chord_add
@@ -542,7 +597,7 @@ def test_eager_chord_inside_task(self, manager):
chord_add.app.conf.task_always_eager = prev
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_group_chain(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -554,7 +609,7 @@ def test_group_chain(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [12, 13, 14, 15]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_nested_group_chain(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -580,7 +635,7 @@ def test_nested_group_chain(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == 11
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_single_task_header(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -609,7 +664,7 @@ def test_empty_header_chord(self, manager):
res2 = c2()
assert res2.get(timeout=TIMEOUT) == []
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_nested_chord(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -643,7 +698,7 @@ def test_nested_chord(self, manager):
res = c()
assert [[[[3, 3], 4], 5], 6] == res.get(timeout=TIMEOUT)
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_parent_ids(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -658,7 +713,7 @@ def test_parent_ids(self, manager):
)
self.assert_parentids_chord(g(), expected_root_id)
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_parent_ids__OR(self, manager):
if not manager.app.conf.result_backend.startswith('redis'):
raise pytest.skip('Requires redis result backend.')
@@ -762,7 +817,7 @@ def test_chord_on_error(self, manager):
assert len([cr for cr in chord_results if cr[2] != states.SUCCESS]
) == 1
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_parallel_chords(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -776,7 +831,7 @@ def test_parallel_chords(self, manager):
assert r.get(timeout=TIMEOUT) == [10, 10]
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chord_in_chords_with_chains(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -807,7 +862,7 @@ def test_chord_in_chords_with_chains(self, manager):
assert r.get(timeout=TIMEOUT) == 4
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_chord_chain_chord(self, manager):
# test for #2573
try:
@@ -833,7 +888,7 @@ def test_chain_chord_chain_chord(self, manager):
res = c.delay()
assert res.get(timeout=TIMEOUT) == 7
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_large_header(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -844,7 +899,7 @@ def test_large_header(self, manager):
res = c.delay()
assert res.get(timeout=TIMEOUT) == 499500
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_chain_to_a_chord_with_large_header(self, manager):
try:
manager.app.backend.ensure_chords_allowed()
@@ -855,12 +910,12 @@ def test_chain_to_a_chord_with_large_header(self, manager):
res = c.delay()
assert res.get(timeout=TIMEOUT) == 1000
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_priority(self, manager):
c = chain(return_priority.signature(priority=3))()
assert c.get(timeout=TIMEOUT) == "Priority: 3"
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_priority_chain(self, manager):
c = return_priority.signature(priority=3) | return_priority.signature(priority=5)
assert c().get(timeout=TIMEOUT) == "Priority: 5"
diff --git a/t/integration/test_tasks.py b/t/integration/test_tasks.py
--- a/t/integration/test_tasks.py
+++ b/t/integration/test_tasks.py
@@ -4,24 +4,24 @@
from celery import group
-from .conftest import flaky, get_active_redis_channels
+from .conftest import get_active_redis_channels
from .tasks import add, add_ignore_result, print_unicode, retry_once, sleeping
class test_tasks:
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_task_accepted(self, manager, sleep=1):
r1 = sleeping.delay(sleep)
sleeping.delay(sleep)
manager.assert_accepted([r1.id])
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_task_retried(self):
res = retry_once.delay()
assert res.get(timeout=10) == 1 # retried once
- @flaky
+ @pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_unicode_task(self, manager):
manager.join(
group(print_unicode.s() for _ in range(5))(),
diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -383,10 +383,27 @@ def test_mark_as_done__chord(self):
b.mark_as_done('id', 10, request=request)
b.on_chord_part_return.assert_called_with(request, states.SUCCESS, 10)
+ def test_mark_as_failure__bound_errback_eager(self):
+ b = BaseBackend(app=self.app)
+ b._store_result = Mock()
+ request = Mock(name='request')
+ request.delivery_info = {
+ 'is_eager': True
+ }
+ request.errbacks = [
+ self.bound_errback.subtask(args=[1], immutable=True)]
+ exc = KeyError()
+ group = self.patching('celery.backends.base.group')
+ b.mark_as_failure('id', exc, request=request)
+ group.assert_called_with(request.errbacks, app=self.app)
+ group.return_value.apply.assert_called_with(
+ (request.id, ), parent_id=request.id, root_id=request.root_id)
+
def test_mark_as_failure__bound_errback(self):
b = BaseBackend(app=self.app)
b._store_result = Mock()
request = Mock(name='request')
+ request.delivery_info = {}
request.errbacks = [
self.bound_errback.subtask(args=[1], immutable=True)]
exc = KeyError()
| link_error tasks not run eagerly when using apply()
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
Tested using the included docker setup
```
$ celery -A tasks report
No handlers could be found for logger "vagrant"
software -> celery:4.2.0 (windowlicker) kombu:4.2.1 py:2.7.14
billiard:3.5.0.4 py-amqp:2.3.2
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:redis://redis/
broker_url: u'amqp://guest:********@rabbit:5672//'
result_backend: u'redis://redis/'
```
## Steps to reproduce
Add `tasks.py`:
```python
# tasks.py
@app.task
def on_failure():
print 'from on_failure'
@app.task
def error():
raise ValueError('oh no!')
```
Run the tasks:
```python
>>> from tasks import error, on_failure
>>> error.apply(link_error=on_failure.si()).get()
```
## Expected behavior
Both `error` and `on_failure` to be executed eagerly.
## Actual behavior
`error` is executed eagerly and `on_failure` is executed on a worker.
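For reference, the fix in the patch above routes the error callbacks through `group(...).apply(...)` when the originating request is eager, instead of always publishing them to a queue. A simplified excerpt of that branching:

```python
# Simplified excerpt of the change in celery/backends/base.py (see the diff above).
g = group(old_signature, app=self.app)
if self.app.conf.task_always_eager or request.delivery_info.get('is_eager', False):
    g.apply((task_id,), parent_id=task_id, root_id=root_id)        # run errbacks eagerly too
else:
    g.apply_async((task_id,), parent_id=task_id, root_id=root_id)
```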
| please send a PR if you have a fix in mind and there isn't already an open PR for this issue. | 2019-03-08T08:44:39 |
celery/celery | 5,382 | celery__celery-5382 | [
"5377"
] | 5f579acf62b11fdca70604c5d7b7350b7f6db951 | diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -249,6 +249,7 @@ def __repr__(self):
task=Namespace(
__old__=OLD_NS,
acks_late=Option(False, type='bool'),
+ acks_on_failure_or_timeout=Option(True, type='bool'),
always_eager=Option(False, type='bool'),
annotations=Option(type='any'),
compression=Option(type='string', old={'celery_message_compression'}),
diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -256,11 +256,12 @@ class Task(object):
#: fails or times out.
#:
#: Configuring this setting only applies to tasks that are
- #: acknowledged **after** they have been executed.
+ #: acknowledged **after** they have been executed and only if
+ #: :setting:`task_acks_late` is enabled.
#:
#: The application default can be overridden with the
#: :setting:`task_acks_on_failure_or_timeout` setting.
- acks_on_failure_or_timeout = True
+ acks_on_failure_or_timeout = None
#: Even if :attr:`acks_late` is enabled, the worker will
#: acknowledge tasks when the worker process executing them abruptly
| diff --git a/t/unit/worker/test_request.py b/t/unit/worker/test_request.py
--- a/t/unit/worker/test_request.py
+++ b/t/unit/worker/test_request.py
@@ -616,9 +616,34 @@ def test_on_failure_acks_late(self):
except KeyError:
exc_info = ExceptionInfo()
job.on_failure(exc_info)
- assert job.acknowledged
+ assert job.acknowledged
+
+ def test_on_failure_acks_on_failure_or_timeout_disabled_for_task(self):
+ job = self.xRequest()
+ job.time_start = 1
+ self.mytask.acks_late = True
+ self.mytask.acks_on_failure_or_timeout = False
+ try:
+ raise KeyError('foo')
+ except KeyError:
+ exc_info = ExceptionInfo()
+ job.on_failure(exc_info)
+ assert job.acknowledged is False
+
+ def test_on_failure_acks_on_failure_or_timeout_enabled_for_task(self):
+ job = self.xRequest()
+ job.time_start = 1
+ self.mytask.acks_late = True
+ self.mytask.acks_on_failure_or_timeout = True
+ try:
+ raise KeyError('foo')
+ except KeyError:
+ exc_info = ExceptionInfo()
+ job.on_failure(exc_info)
+ assert job.acknowledged is True
- def test_on_failure_acks_on_failure_or_timeout(self):
+ def test_on_failure_acks_on_failure_or_timeout_disabled(self):
+ self.app.conf.acks_on_failure_or_timeout = False
job = self.xRequest()
job.time_start = 1
self.mytask.acks_late = True
@@ -628,7 +653,20 @@ def test_on_failure_acks_on_failure_or_timeout(self):
except KeyError:
exc_info = ExceptionInfo()
job.on_failure(exc_info)
- assert job.acknowledged is False
+ assert job.acknowledged is False
+ self.app.conf.acks_on_failure_or_timeout = True
+
+ def test_on_failure_acks_on_failure_or_timeout_enabled(self):
+ self.app.conf.acks_on_failure_or_timeout = True
+ job = self.xRequest()
+ job.time_start = 1
+ self.mytask.acks_late = True
+ try:
+ raise KeyError('foo')
+ except KeyError:
+ exc_info = ExceptionInfo()
+ job.on_failure(exc_info)
+ assert job.acknowledged is True
def test_from_message_invalid_kwargs(self):
m = self.TaskMessage(self.mytask.name, args=(), kwargs='foo')
| Add documentation to the task_acks_on_failure_or_timeout setting
# Description
#4970 did not include the proper documentation for the setting.
# Suggestions
We should make sure its behaviour is documented before the 4.3 GA release.
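For reference, a minimal usage sketch of the setting that needs documenting (per the patch above, it only takes effect when late acknowledgement is enabled):

```python
app.conf.task_acks_late = True
app.conf.task_acks_on_failure_or_timeout = False   # keep failed/timed-out tasks unacknowledged
```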
| @thedrow I submitted a PR based on the gaps I inferred from the source code that uses the `task_acks_on_failure_or_timeout` setting. I assume the docs are built from the `celery.app.task` module, so I applied my changes there.
The configuration documentation is at https://github.com/celery/celery/blob/master/docs/userguide/configuration.rst.
I'll copy what you added there.
Oh, it seems this was not introduced as a setting either.
I'm going to do so now. | 2019-03-12T14:32:08 |
celery/celery | 5,399 | celery__celery-5399 | [
"4022"
] | 90ca47c02196ba0610f2b4abf972cc245fcc6b45 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -25,7 +25,7 @@
from celery import current_app, group, maybe_signature, states
from celery._state import get_current_task
from celery.exceptions import (ChordError, ImproperlyConfigured,
- TaskRevokedError, TimeoutError)
+ NotRegistered, TaskRevokedError, TimeoutError)
from celery.five import PY3, items
from celery.result import (GroupResult, ResultBase, allow_join_result,
result_from_tuple)
@@ -168,22 +168,33 @@ def _call_task_errbacks(self, request, exc, traceback):
old_signature = []
for errback in request.errbacks:
errback = self.app.signature(errback)
- if (
- # Celery tasks type created with the @task decorator have
- # the __header__ property, but Celery task created from
- # Task class do not have this property.
- # That's why we have to check if this property exists
- # before checking is it partial function.
- hasattr(errback.type, '__header__') and
-
- # workaround to support tasks with bind=True executed as
- # link errors. Otherwise retries can't be used
- not isinstance(errback.type.__header__, partial) and
- arity_greater(errback.type.__header__, 1)
- ):
- errback(request, exc, traceback)
- else:
+ if not errback._app:
+ # Ensure all signatures have an application
+ errback._app = self.app
+ try:
+ if (
+ # Celery tasks type created with the @task decorator have
+ # the __header__ property, but Celery task created from
+ # Task class do not have this property.
+ # That's why we have to check if this property exists
+ # before checking is it partial function.
+ hasattr(errback.type, '__header__') and
+
+ # workaround to support tasks with bind=True executed as
+ # link errors. Otherwise retries can't be used
+ not isinstance(errback.type.__header__, partial) and
+ arity_greater(errback.type.__header__, 1)
+ ):
+ errback(request, exc, traceback)
+ else:
+ old_signature.append(errback)
+ except NotRegistered:
+ # Task may not be present in this worker.
+ # We simply send it forward for another worker to consume.
+ # If the task is not registered there, the worker will raise
+ # NotRegistered.
old_signature.append(errback)
+
if old_signature:
# Previously errback was called as a task so we still
# need to do so if the errback only takes a single task_id arg.
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -8,7 +8,7 @@
from case import ANY, Mock, call, patch, skip
from kombu.serialization import prepare_accept_content
-from celery import chord, group, states, uuid
+from celery import chord, group, signature, states, uuid
from celery.app.task import Context, Task
from celery.backends.base import (BaseBackend, DisabledBackend,
KeyValueStoreBackend, _nulldict)
@@ -399,6 +399,18 @@ def run(self):
b.mark_as_failure('id', exc, request=request)
mock_group.assert_called_once_with(request.errbacks, app=self.app)
+ @patch('celery.backends.base.group')
+ def test_unregistered_task_can_be_used_as_error_callback(self, mock_group):
+ b = BaseBackend(app=self.app)
+ b._store_result = Mock()
+
+ request = Mock(name='request')
+ request.errbacks = [signature('doesnotexist',
+ immutable=True)]
+ exc = KeyError()
+ b.mark_as_failure('id', exc, request=request)
+ mock_group.assert_called_once_with(request.errbacks, app=self.app)
+
def test_mark_as_failure__chord(self):
b = BaseBackend(app=self.app)
b._store_result = Mock()
| link_error fails if errback task is written in an external codebase, raises NotRegistered error
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Steps to reproduce
### Setup
I have three systems:
- TaskProducer: simply schedules tasks. It happens to be a Django app; however, I don't believe that is related to this issue. This system manages the routing for the tasks and queues.
- ExportWorker: handles the `export` queue. This system is unaware of the routing for any tasks or queues.
- MessageWorker: handles the `msg` queue. This system is unaware of the routing for any tasks or queues.
I'm attempting to keep these systems decoupled. They share no code.
ExportWorker has a single task:
```python
@app.task(name='export.hello', bind=True)
def hello(self, name='world'):
if name == 'homer':
raise Exception("NO HOMERS ALLOWED!")
return 'hello {}'.format(name)
```
MessageWorker has two tasks:
```python
@app.task(name='msg.success', bind=True)
def email_success(self, msg, email_address):
return 'Sending email: {}'.format(msg)
@app.task(name='msg.err', bind=True)
def email_err(self, context, exception, traceback):
print("Handled error: {}".format(exception))
return 'Something went wrong!'
```
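(Illustration only; this call is not part of the original report. Since the systems share no code, the producer has to reference the errback by name, along these lines:)

```python
# Hypothetical producer-side scheduling; the task names match the workers above.
sig = app.signature('export.hello', kwargs={'name': 'homer'})
sig.apply_async(link_error=app.signature('msg.err'))
```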
#### Settings
<details>
<summary><code>taskProducer$ celery -A tasks report</code> (brief, only Celery pertinent settings)</summary>
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:3.5.2
billiard:3.5.0.2 sqs:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:sqs results:disabled
CELERY_BROKER_TRANSPORT = 'sqs'
CELERY_BROKER_TRANSPORT_OPTIONS = {
'region': 'us-west-2',
'queue_name_prefix': 'platform-staging-',
}
CELERY_TASK_ROUTES = {
'export.*': {'queue': 'export'},
'import.*': {'queue': 'import'},
'msg.*': {'queue': 'msg'},
}
CELERY_RESULT_QUEUE = 'result.fifo'
```
</details>
<details>
<summary><code>taskProducer$ celery -A tasks report</code> (long)</summary>
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:3.5.2
billiard:3.5.0.2 sqs:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:sqs results:disabled
CACHES: {
'default': { 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'default'},
'jsonattrs': { 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'jsonattrs'}}
SETTINGS_MODULE: 'config.settings.dev_debug'
PASSWORD_HASHERS: '********'
OSM_ATTRIBUTION: <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e052e8>
USE_TZ: True
MEDIA_ROOT: '/vagrant/cadasta/core/media'
FORMAT_MODULE_PATH: None
SIGNING_BACKEND: 'django.core.signing.TimestampSigner'
ICON_URL: 'https://s3-us-west-2.amazonaws.com/cadasta-resources/icons/{}.png'
CSRF_COOKIE_HTTPONLY: False
DATETIME_INPUT_FORMATS: ['%Y-%m-%d %H:%M:%S',
'%Y-%m-%d %H:%M:%S.%f',
'%Y-%m-%d %H:%M',
'%Y-%m-%d',
'%m/%d/%Y %H:%M:%S',
'%m/%d/%Y %H:%M:%S.%f',
'%m/%d/%Y %H:%M',
'%m/%d/%Y',
'%m/%d/%y %H:%M:%S',
'%m/%d/%y %H:%M:%S.%f',
'%m/%d/%y %H:%M',
'%m/%d/%y']
ICON_LOOKUPS: {
'application/gpx+xml': 'gpx',
'application/msexcel': 'xls',
'application/msword': 'doc',
'application/pdf': 'pdf',
'application/vnd.ms-excel': 'xls',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': 'xlsx',
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': 'docx',
'application/xml': 'xml',
'audio/1d-interleaved-parityfec': 'audio',
'audio/32kadpcm': 'audio',
'audio/3gpp': 'audio',
'audio/3gpp2': 'audio',
'audio/ATRAC-ADVANCED-LOSSESS': 'audio',
'audio/ATRAC-X': 'audio',
'audio/ATRAC3': 'audio',
'audio/BV16': 'audio',
'audio/BV32': 'audio',
'audio/CN': 'audio',
'audio/DAT12': 'audio',
'audio/DV': 'audio',
'audio/DV14': 'audio',
'audio/EVRC': 'audio',
'audio/EVRC-QCP': 'audio',
'audio/EVRC0': 'audio',
'audio/EVRC1': 'audio',
'audio/EVRCB': 'audio',
'audio/EVRCB0': 'audio',
'audio/EVRCB1': 'audio',
'audio/EVRCNW': 'audio',
'audio/EVRCNW0': 'audio',
'audio/EVRCNW1': 'audio',
'audio/EVRCWB': 'audio',
'audio/EVRCWB0': 'audio',
'audio/EVRCWB1': 'audio',
'audio/EVS': 'audio',
'audio/G711-0': 'audio',
'audio/G719': 'audio',
'audio/G722': 'audio',
'audio/G7221': 'audio',
'audio/G723': 'audio',
'audio/G726-16': 'audio',
'audio/G726-24': 'audio',
'audio/G726-32': 'audio',
'audio/G726-40': 'audio',
'audio/G728': 'audio',
'audio/G729': 'audio',
'audio/G7291': 'audio',
'audio/G729D': 'audio',
'audio/G729E': 'audio',
'audio/GSM': 'audio',
'audio/GSM-EFR': 'audio',
'audio/GSM-HR-08': 'audio',
'audio/L16': 'audio',
'audio/L20': 'audio',
'audio/L24': 'audio',
'audio/L8': 'audio',
'audio/LPC': 'audio',
'audio/MP4A-LATM': 'audio',
'audio/MPA': 'audio',
'audio/MPA2': 'audio',
'audio/PCMA': 'audio',
'audio/PCMA-WB': 'audio',
'audio/PCMU': 'audio',
'audio/PCMU-WB': 'audio',
'audio/QCELP': 'audio',
'audio/RED': 'audio',
'audio/SMV': 'audio',
'audio/SMV-QCP': 'audio',
'audio/SMV0': 'audio',
'audio/UEMCLIP': 'audio',
'audio/VDVI': 'audio',
'audio/VMR-WB': 'audio',
'audio/aac': 'audio',
'audio/aacp': 'audio',
'audio/ac3': 'audio',
'audio/amr': 'audio',
'audio/amr-wb': 'audio',
'audio/amr-wb+': 'audio',
'audio/aptx': 'audio',
'audio/asc': 'audio',
'audio/basic': 'audio',
'audio/clearmode': 'audio',
'audio/dls': 'dls',
'audio/dsr-es201108': 'audio',
'audio/dsr-es202050': 'audio',
'audio/dsr-es202211': 'audio',
'audio/dsr-es202212': 'audio',
'audio/eac3': 'audio',
'audio/encaprtp': 'audio',
'audio/example': 'audio',
'audio/fwdred': 'audio',
'audio/iLBC': 'audio',
'audio/ip-mr_v2.5': 'audio',
'audio/m4a': 'audio',
'audio/midi': 'audio',
'audio/mobile-xmf': 'audio',
'audio/mp3': 'mp3',
'audio/mp4': 'mp4',
'audio/mpa-robust': 'audio',
'audio/mpa-robust3': 'audio',
'audio/mpeg': 'mp3',
'audio/mpeg1': 'audio',
'audio/mpeg3': 'mp3',
'audio/mpeg4-generic': 'mp4',
'audio/ogg': 'audio',
'audio/opus': 'audio',
'audio/parityfec': 'audio',
'audio/raptorfec': 'audio',
'audio/rtp-enc-aescm128': 'audio',
'audio/rtp-midi': 'audio',
'audio/rtploopback': 'audio',
'audio/rtx': 'audio',
'audio/sp-midi': 'audio',
'audio/speex': 'audio',
'audio/t140c': 'audio',
'audio/t38': 'audio',
'audio/telephone-event': 'audio',
'audio/tone': 'audio',
'audio/ulpfec': 'audio',
'audio/vorbis': 'audio',
'audio/vorbis-config': 'audio',
'audio/wav': 'audio',
'audio/wave': 'audio',
'audio/x-flac': 'audio',
'audio/x-midi': 'audio',
'audio/x-mpeg-3': 'mp3',
'audio/x-wav': 'audio',
'image/gif': 'gif',
'image/jpeg': 'jpg',
'image/png': 'png',
'image/tif': 'tiff',
'image/tiff': 'tiff',
'text/csv': 'csv',
'text/plain': 'csv',
'text/xml': 'xml',
'video/mp4': 'mp4',
'video/mpeg': 'mp3',
'video/x-mpeg': 'mp3'}
AUTH_PASSWORD_VALIDATORS: '********'
CACHE_MIDDLEWARE_SECONDS: 600
ACCOUNT_LOGOUT_REDIRECT_URL: '/account/login/'
STATICFILES_STORAGE: 'django.contrib.staticfiles.storage.StaticFilesStorage'
DEVSERVER_TRUNCATE_SQL: True
SESSION_FILE_PATH: None
DEBUG: True
LANGUAGE_COOKIE_NAME: 'django_language'
PREPEND_WWW: False
DEFAULT_INDEX_TABLESPACE: ''
ES_HOST: 'localhost'
DEBUG_PROPAGATE_EXCEPTIONS: False
LANGUAGES_BIDI: ['he', 'ar', 'fa', 'ur']
FILE_UPLOAD_HANDLERS: ['django.core.files.uploadhandler.TemporaryFileUploadHandler']
CSRF_COOKIE_DOMAIN: None
SESSION_COOKIE_PATH: '/'
CSRF_FAILURE_VIEW: 'django.views.csrf.csrf_failure'
CSRF_COOKIE_AGE: 31449600
ES_SCHEME: 'http'
STATICFILES_DIRS: []
FILE_UPLOAD_TEMP_DIR: None
SESSION_COOKIE_HTTPONLY: True
DIGITALGLOBE_TILESET_URL_FORMAT: 'https://{{s}}.tiles.mapbox.com/v4/digitalglobe.{}/{{z}}/{{x}}/{{y}}.png?access_toke'
SITE_ID: 1
X_FRAME_OPTIONS: 'SAMEORIGIN'
NUMBER_GROUPING: 0
CELERY_BROKER_TRANSPORT: 'sqs'
EMAIL_TIMEOUT: None
ACCOUNT_EMAIL_CONFIRMATION_ANONYMOUS_REDIRECT_URL: '/account/login/'
SESSION_COOKIE_DOMAIN: None
EMAIL_SUBJECT_PREFIX: '[Django] '
EMAIL_HOST: 'localhost'
ES_MAX_RESULTS: 10000
BASE_TEMPLATE_DIR: '/vagrant/cadasta/templates'
LEAFLET_CONFIG: {
'PLUGINS': { 'draw': {'js': '/static/leaflet/draw/leaflet.draw.js'},
'groupedlayercontrol': { 'css': '/static/css/leaflet.groupedlayercontrol.min.css',
'js': '/static/js/leaflet.groupedlayercontrol.min.js'}},
'RESET_VIEW': False,
'TILES': [ ( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e44748>,
'http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png',
{ 'attribution': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e052e8>,
'maxZoom': 19}),
( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f940>,
'https://{s}.tiles.mapbox.com/v4/digitalglobe.n6ngnadl/{z}/{x}/{y}.png?access_token=pk.eyJ1IjoiZ,
{ 'attribution': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e05358>,
'maxZoom': 22}),
( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f978>,
'https://{s}.tiles.mapbox.com/v4/digitalglobe.nal0g75k/{z}/{x}/{y}.png?access_token=pk.eyJ1IjoiZ,
{ 'attribution': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e05358>,
'maxZoom': 22}),
( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f9e8>,
'https://{s}.tiles.mapbox.com/v4/digitalglobe.n6nhclo2/{z}/{x}/{y}.png?access_token=pk.eyJ1IjoiZ,
{ 'attribution': ( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e052e8,
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e05358,
'maxZoom': 22}),
( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1fa58>,
'https://{s}.tiles.mapbox.com/v4/digitalglobe.nal0mpda/{z}/{x}/{y}.png?access_token=pk.eyJ1IjoiZ,
{ 'attribution': ( <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e052e8,
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e05358,
'maxZoom': 22})]}
is_overridden: <bound method Settings.is_overridden of <Settings "config.settings.dev_debug">>
MIDDLEWARE_CLASSES:
('debug_toolbar.middleware.DebugToolbarMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
'audit_log.middleware.UserLoggingMiddleware',
'simple_history.middleware.HistoryRequestMiddleware')
USE_THOUSAND_SEPARATOR: False
EMAIL_USE_TLS: False
LOGGING: {
'disable_existing_loggers': False,
'formatters': { 'simple': { 'format': '%(asctime)s %(levelname)s '
'%(message)s'}},
'handlers': { 'file': { 'class': 'logging.FileHandler',
'filename': '/var/log/django/debug.log',
'formatter': 'simple',
'level': 'DEBUG'}},
'loggers': { 'django': { 'handlers': ['file'],
'level': 'DEBUG',
'propagate': True},
'xform.submissions': { 'handlers': ['file'],
'level': 'DEBUG'}},
'version': 1}
DATA_UPLOAD_MAX_MEMORY_SIZE: 2621440
AUTH_USER_MODEL: 'accounts.User'
SESSION_SAVE_EVERY_REQUEST: False
IGNORABLE_404_URLS: []
STATICFILES_FINDERS:
('django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
'sass_processor.finders.CssFinder')
DATABASES: {
'default': { 'ENGINE': 'django.contrib.gis.db.backends.postgis',
'HOST': 'localhost',
'NAME': 'cadasta',
'PASSWORD': '********',
'USER': 'cadasta'}}
CELERY_BROKER_TRANSPORT_OPTIONS: {
'queue_name_prefix': 'platform-staging-', 'region': 'us-west-2'}
DEVSERVER_MODULES:
('devserver.modules.sql.SQLSummaryModule',
'devserver.modules.profile.ProfileSummaryModule')
DECIMAL_SEPARATOR: '.'
SESSION_ENGINE: 'django.contrib.sessions.backends.db'
ALLOWED_HOSTS: ['*']
FILE_UPLOAD_PERMISSIONS: None
SESSION_EXPIRE_AT_BROWSER_CLOSE: False
FIXTURE_DIRS: []
TIME_FORMAT: 'P'
SASS_PROCESSOR_INCLUDE_DIRS:
('/vagrant/cadasta/core/node_modules',)
DEBUG_TOOLBAR_CONFIG: {
'SHOW_TOOLBAR_CALLBACK': <function always at 0x7f8467e45400>}
REST_FRAMEWORK: {
'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework.authentication.TokenAuthentication',
'rest_framework.authentication.BasicAuthentication',
'rest_framework.authentication.SessionAuthentication'),
'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated',),
'DEFAULT_VERSION': 'v1',
'DEFAULT_VERSIONING_CLASS': 'rest_framework.versioning.NamespaceVersioning',
'EXCEPTION_HANDLER': 'core.views.api.exception_handler'}
CADASTA_INVALID_ENTITY_NAMES: ['add', 'new']
THOUSAND_SEPARATOR: ','
ACCOUNT_EMAIL_CONFIRMATION_EXPIRE_DAYS: 2
DATE_INPUT_FORMATS: ['%Y-%m-%d',
'%m/%d/%Y',
'%m/%d/%y',
'%b %d %Y',
'%b %d, %Y',
'%d %b %Y',
'%d %b, %Y',
'%B %d %Y',
'%B %d, %Y',
'%d %B %Y',
'%d %B, %Y']
SECURE_PROXY_SSL_HEADER: None
FILE_CHARSET: 'utf-8'
SECURE_HSTS_INCLUDE_SUBDOMAINS: False
DEFAULT_CHARSET: 'utf-8'
MESSAGE_STORAGE: 'django.contrib.messages.storage.fallback.FallbackStorage'
FIRST_DAY_OF_WEEK: 0
CSRF_COOKIE_PATH: '/'
FILE_UPLOAD_DIRECTORY_PERMISSIONS: None
CELERY_RESULT_QUEUE: 'result.fifo'
FILE_UPLOAD_MAX_MEMORY_SIZE: 2621440
DEFAULT_FROM_EMAIL: '[email protected]'
SESSION_SERIALIZER: 'django.contrib.sessions.serializers.JSONSerializer'
LOGGING_CONFIG: 'logging.config.dictConfig'
USE_L10N: True
LANGUAGE_COOKIE_DOMAIN: None
CSRF_HEADER_NAME: 'HTTP_X_CSRFTOKEN'
EMAIL_USE_SSL: False
TEMPLATES: [{'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['/vagrant/cadasta/templates',
'/vagrant/cadasta/templates/allauth'],
'OPTIONS': {'context_processors': ['django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages'],
'loaders': ['django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader']}}]
ADMINS: []
SECURE_SSL_REDIRECT: False
LANGUAGE_CODE: 'en-us'
SECURE_REDIRECT_EXEMPT: []
EMAIL_SSL_CERTFILE: None
WSGI_APPLICATION: 'config.wsgi.application'
LANGUAGE_COOKIE_PATH: '/'
DEFAULT_TABLESPACE: ''
CORS_ORIGIN_ALLOW_ALL: False
EMAIL_HOST_PASSWORD: '********'
SHORT_DATE_FORMAT: 'm/d/Y'
LOGIN_REDIRECT_URL: '/dashboard/'
DEFAULT_CONTENT_TYPE: 'text/html'
DATE_FORMAT: 'N j, Y'
EMAIL_HOST_USER: ''
CSRF_COOKIE_NAME: 'csrftoken'
EMAIL_BACKEND: 'django.core.mail.backends.console.EmailBackend'
PASSWORD_RESET_TIMEOUT_DAYS: '********'
CSRF_TRUSTED_ORIGINS: []
BASE_DIR: '/vagrant/cadasta/config'
CACHE_MIDDLEWARE_KEY_PREFIX: '********'
FORM_LANGS: {
'af': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16908>,
'ar': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16898>,
'az': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e169e8>,
'be': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16a20>,
'bg': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16a58>,
'bn': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16a90>,
'br': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16b00>,
'bs': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16b70>,
'ca': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16be0>,
'cs': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16c50>,
'cy': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16cc0>,
'da': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16d30>,
'de': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16da0>,
'el': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16e10>,
'en': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16e80>,
'eo': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16ef0>,
'es': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16f60>,
'et': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e16fd0>,
'eu': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c080>,
'fa': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c0f0>,
'fi': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c160>,
'fr': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c1d0>,
'fy': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c240>,
'ga': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c2b0>,
'gd': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c320>,
'gl': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c390>,
'he': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c400>,
'hi': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c470>,
'hr': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c4e0>,
'hu': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c550>,
'ia': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c5c0>,
'id': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c630>,
'io': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c6a0>,
'is': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c710>,
'it': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c780>,
'ja': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c7f0>,
'ka': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c860>,
'kar': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f8d0>,
'kk': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c8d0>,
'km': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c940>,
'kn': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1c9b0>,
'ko': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1ca20>,
'lb': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1ca90>,
'lt': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cb00>,
'lv': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cb70>,
'mk': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cbe0>,
'ml': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cc50>,
'mn': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1ccc0>,
'mr': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cd30>,
'my': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cda0>,
'nb': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1ce10>,
'ne': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1ce80>,
'nl': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cef0>,
'nn': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cf60>,
'os': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1cfd0>,
'pa': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f080>,
'pl': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f0f0>,
'pt': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f160>,
'ro': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f1d0>,
'ru': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f240>,
'sk': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f2b0>,
'sl': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f320>,
'sq': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f390>,
'sr': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f400>,
'sv': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f470>,
'sw': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f4e0>,
'ta': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f550>,
'te': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f5c0>,
'th': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f630>,
'tr': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f6a0>,
'tt': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f710>,
'uk': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f780>,
'ur': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f7f0>,
'vi': <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1f860>}
DATETIME_FORMAT: 'N j, Y, P'
TIME_INPUT_FORMATS: ['%H:%M:%S', '%H:%M:%S.%f', '%H:%M']
LANGUAGE_COOKIE_AGE: None
ABSOLUTE_URL_OVERRIDES: {
}
INTERNAL_IPS:
('0.0.0.0',)
SERVER_EMAIL: 'root@localhost'
MIME_LOOKUPS: {
'gpx': 'application/gpx+xml'}
SITE_NAME: 'Cadasta'
DATA_UPLOAD_MAX_NUMBER_FIELDS: 1000
ACCOUNT_ADAPTER: 'accounts.adapter.DefaultAccountAdapter'
SILENCED_SYSTEM_CHECKS: []
DATABASE_ROUTERS: '********'
LOGOUT_URL: '/account/logout/'
CSRF_COOKIE_SECURE: False
SECURE_BROWSER_XSS_FILTER: False
AUTHENTICATION_BACKENDS: ['core.backends.Auth',
'django.contrib.auth.backends.ModelBackend',
'accounts.backends.AuthenticationBackend']
SECURE_SSL_HOST: None
DEFAULT_FILE_STORAGE: 'buckets.test.storage.FakeS3Storage'
EMAIL_PORT: 1025
USE_X_FORWARDED_PORT: False
INSTALLED_APPS:
('debug_toolbar',
'django_extensions',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'django.contrib.gis',
'corsheaders',
'core',
'geography',
'accounts',
'organization',
'spatial',
'questionnaires',
'resources',
'buckets',
'party',
'xforms',
'search',
'tasks',
'crispy_forms',
'parsley',
'widget_tweaks',
'django_countries',
'leaflet',
'rest_framework',
'rest_framework_gis',
'rest_framework.authtoken',
'rest_framework_docs',
'djoser',
'tutelary',
'allauth',
'allauth.account',
'allauth.socialaccount',
'sass_processor',
'simple_history',
'jsonattrs')
LOGIN_URL: '/account/login/'
SECURE_CONTENT_TYPE_NOSNIFF: False
SHORT_DATETIME_FORMAT: 'm/d/Y P'
USE_I18N: True
SECURE_HSTS_SECONDS: 0
ACCOUNT_LOGIN_ATTEMPTS_TIMEOUT: 86400
YEAR_MONTH_FORMAT: 'F Y'
APPEND_SLASH: True
MIGRATION_MODULES: {
}
ES_PORT: '8000'
TEST_RUNNER: 'django.test.runner.DiscoverRunner'
LOCALE_PATHS: ['/vagrant/cadasta/config/locale']
MANAGERS: []
TIME_ZONE: 'UTC'
DEBUG_TOOLBAR_PANELS:
('debug_toolbar.panels.version.VersionDebugPanel',
'debug_toolbar.panels.timer.TimerDebugPanel',
'debug_toolbar.panels.headers.HeaderDebugPanel',
'debug_toolbar.panels.request_vars.RequestVarsDebugPanel',
'debug_toolbar.panels.template.TemplateDebugPanel',
'debug_toolbar.panels.sql.SQLDebugPanel',
'debug_toolbar.panels.signals.SignalDebugPanel')
ACCOUNT_FORMS: {
'profile': 'accounts.forms.ProfileForm',
'signup': 'accounts.forms.RegisterForm'}
SESSION_COOKIE_NAME: 'sessionid'
MONTH_DAY_FORMAT: 'F j'
SESSION_COOKIE_AGE: 1209600
DIGITALGLOBE_ATTRIBUTION: <django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e05358>
DJOSER: {
'ACTIVATION_URL': 'account/activate/{uid}/{token}',
'DOMAIN': 'localhost:8000',
'PASSWORD_RESET_CONFIRM_RETYPE': '********',
'PASSWORD_RESET_CONFIRM_URL': '********',
'SERIALIZERS': {'set_password_retype': '********'},
'SET_PASSWORD_RETYPE': '********',
'SITE_NAME': 'Cadasta'}
DISALLOWED_USER_AGENTS: []
LANGUAGES: [('en',
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1fac8>),
('fr',
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1fb38>),
('es',
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1fba8>),
('id',
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1fc18>),
('pt',
<django.utils.functional.lazy.<locals>.__proxy__ object at 0x7f8467e1fc88>)]
STATIC_URL: '/static/'
ATTRIBUTE_GROUPS: {
'location_attributes': { 'app_label': 'spatial',
'label': 'Location',
'model': 'spatialunit'},
'location_relationship_attributes': { 'app_label': 'spatial',
'label': 'Spatial relationship',
'model': 'spatialrelationship'},
'party_attributes': { 'app_label': 'party',
'label': 'Party',
'model': 'party'},
'party_relationship_attributes': { 'app_label': 'party',
'label': 'Party relationship',
'model': 'partyrelationship'},
'tenure_relationship_attributes': { 'app_label': 'party',
'label': 'Tenure Relationship',
'model': 'tenurerelationship'}}
SESSION_COOKIE_SECURE: False
ROOT_URLCONF: 'config.urls.dev'
TEST_NON_SERIALIZED_APPS: []
JSONATTRS_SCHEMA_SELECTORS: {
'party.party': ( 'project.organization.pk',
'project.pk',
'project.current_questionnaire',
'type'),
'party.partyrelationship': ( 'project.organization.pk',
'project.pk',
'project.current_questionnaire'),
'party.tenurerelationship': ( 'project.organization.pk',
'project.pk',
'project.current_questionnaire'),
'spatial.spatialrelationship': ( 'project.organization.pk',
'project.pk',
'project.current_questionnaire'),
'spatial.spatialunit': ( 'project.organization.pk',
'project.pk',
'project.current_questionnaire')}
SASS_PROCESSOR_ROOT: '/vagrant/cadasta/core/static'
SESSION_CACHE_ALIAS: 'default'
SECRET_KEY: '********'
FORCE_SCRIPT_NAME: None
CACHE_MIDDLEWARE_ALIAS: 'default'
ACCOUNT_LOGOUT_ON_GET: True
MIDDLEWARE: None
ACCOUNT_CONFIRM_EMAIL_ON_GET: True
USE_ETAGS: False
CELERY_TASK_ROUTES: {
'export.*': {'queue': 'export'},
'import.*': {'queue': 'import'},
'msg.*': {'queue': 'msg'}}
MEDIA_URL: '/media/'
IMPORTERS: {
'csv': 'organization.importers.csv.CSVImporter',
'xls': 'organization.importers.xls.XLSImporter'}
DEVSERVER_AUTO_PROFILE: False
DEFAULT_EXCEPTION_REPORTER_FILTER: 'django.views.debug.SafeExceptionReporterFilter'
STATIC_ROOT: None
LOGOUT_REDIRECT_URL: None
USE_X_FORWARDED_HOST: False
ACCOUNT_AUTHENTICATION_METHOD: 'username_email'
EMAIL_SSL_KEYFILE: '********'
```
</details>
<details>
<summary><code>exportWorker$ celery -A app report</code></summary>
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:3.5.2
billiard:3.5.0.2 sqs:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:sqs results:rpc:///
broker_transport: 'sqs'
worker_prefetch_multiplier: 0
broker_transport_options: {
'queue_name_prefix': 'platform-staging-', 'region': 'us-west-2'}
task_track_started: True
result_backend: 'rpc:///'
os: <module 'os' from '/vagrant/test_worker/env/lib/python3.5/os.py'>
imports:
('app.tasks',)
```
</details>
<details>
<summary><code>messageWorker$ celery -A app report</code></summary>
```
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:3.5.2
billiard:3.5.0.2 sqs:N/A
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:sqs results:rpc:///
imports:
('app.tasks',)
broker_transport: 'sqs'
task_track_started: True
result_backend: 'rpc:///'
os: <module 'os' from '/vagrant/test_worker/env/lib/python3.5/os.py'>
worker_prefetch_multiplier: 0
broker_transport_options: {
'queue_name_prefix': 'platform-staging-', 'region': 'us-west-2'}
```
</details>
### Causing the error
I spin up ExportWorker with `celery -A app worker -Q export -l INFO` and MessageWorker with `celery -A app worker -Q msg -l INFO` and then schedule the following task from the TaskProducer:
```python
# TaskProducer:
from celery import Signature
Signature(
'export.hello', args=['homer'],
link_error=Signature('msg.err', queue='msg')
).apply_async()
```
## Expected behavior
The `export.hello` task fails on ExportWorker, scheduling the `msg.err` task that is then executed on MessageWorker.
## Actual behavior
The ExportWorker is unable to schedule the follow-up errback task defined in `link_error`, raising a `NotRegistered` exception:
```python
# TaskProducer:
from celery import Signature
Signature(
'export.hello', args=['homer'],
link_error=Signature('msg.err', queue='msg')
).apply_async()
# ExportWorker:
[2017-05-09 00:14:53,458: INFO/MainProcess] Received task: export.hello[ad4ef3ea-06e8-4980-8d9c-91ae68c2305a]
[2017-05-09 00:14:53,506: INFO/PoolWorker-1] Resetting dropped connection: us-west-2.queue.amazonaws.com
[2017-05-09 00:14:53,517: INFO/PoolWorker-1] Starting new HTTPS connection (9): us-west-2.queue.amazonaws.com
[2017-05-09 00:14:53,918: WARNING/PoolWorker-1] /vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py:542: RuntimeWarning: Exception raised outside body: Task of kind 'msg.err' never registered, please make sure it's imported.:
Traceback (most recent call last):
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 622, in __protected_call__
return self.run(*args, **kwargs)
File "/vagrant/test_worker/app/tasks.py", line 9, in hello
raise Exception("NO HOMERS ALLOWED!")
Exception: NO HOMERS ALLOWED!
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/vagrant/test_worker/env/src/kombu/kombu/utils/objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'type'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 381, in trace_task
I, R, state, retval = on_error(task_request, exc, uuid)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 323, in on_error
task, request, eager=eager, call_errbacks=call_errbacks,
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 157, in handle_error_state
call_errbacks=call_errbacks)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 202, in handle_failure
call_errbacks=call_errbacks,
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/backends/base.py", line 168, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/backends/base.py", line 174, in _call_task_errbacks
if arity_greater(errback.type.__header__, 1):
File "/vagrant/test_worker/env/src/kombu/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/canvas.py", line 490, in type
return self._type or self.app.tasks[self['task']]
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/registry.py", line 19, in __missing__
raise self.NotRegistered(key)
celery.exceptions.NotRegistered: 'msg.err'
exc, exc_info.traceback)))
[2017-05-09 00:14:53,996: INFO/MainProcess] Resetting dropped connection: us-west-2.queue.amazonaws.com
[2017-05-09 00:14:53,999: INFO/MainProcess] Starting new HTTPS connection (3): us-west-2.queue.amazonaws.com
[2017-05-09 00:14:54,237: ERROR/MainProcess] Pool callback raised exception: Task of kind 'msg.err' never registered, please make sure it's imported.
Traceback (most recent call last):
File "/vagrant/test_worker/env/src/kombu/kombu/utils/objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'type'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/vagrant/test_worker/env/lib/python3.5/site-packages/billiard/pool.py", line 1748, in safe_apply_callback
fun(*args, **kwargs)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/worker/request.py", line 366, in on_failure
self.id, exc, request=self, store_result=self.store_errors,
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/backends/base.py", line 168, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/backends/base.py", line 174, in _call_task_errbacks
if arity_greater(errback.type.__header__, 1):
File "/vagrant/test_worker/env/src/kombu/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/canvas.py", line 490, in type
return self._type or self.app.tasks[self['task']]
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/registry.py", line 19, in __missing__
raise self.NotRegistered(key)
celery.exceptions.NotRegistered: 'msg.err'
# (MessageWorker has no output)
```
Since neither of the workers has any awareness of task routing, I manually set the `queue` for any task that is scheduled on the worker. This technique works for standard `link` operations:
```python
# TaskProducer:
from celery import Signature
Signature(
'export.hello', args=['world'],
link=Signature(
'msg.success', kwargs={'email_address': '[email protected]'}, queue='msg'
)
).apply_async()
# ExportWorker:
[2017-05-09 00:08:29,290: INFO/MainProcess] Received task: export.hello[a08db60b-7c59-478e-9293-c0a716629b11]
[2017-05-09 00:08:30,747: INFO/PoolWorker-1] Resetting dropped connection: us-west-2.queue.amazonaws.com
[2017-05-09 00:08:30,751: INFO/PoolWorker-1] Starting new HTTPS connection (7): us-west-2.queue.amazonaws.com
[2017-05-09 00:08:32,469: INFO/PoolWorker-1] Task export.hello[a08db60b-7c59-478e-9293-c0a716629b11] succeeded in 1.723272997973254s: 'hello world'
# MessageWorker:
[2017-05-09 00:08:32,448: INFO/MainProcess] Received task: msg.success[b728d3c7-34cf-49ea-9cfe-5d374c3c1d0e]
[2017-05-09 00:08:32,497: INFO/PoolWorker-1] Resetting dropped connection: us-west-2.queue.amazonaws.com
[2017-05-09 00:08:32,504: INFO/PoolWorker-1] Starting new HTTPS connection (5): us-west-2.queue.amazonaws.com
[2017-05-09 00:08:32,738: INFO/PoolWorker-1] Task msg.success[b728d3c7-34cf-49ea-9cfe-5d374c3c1d0e] succeeded in 0.24422147497534752s: 'Sending email: hello world'
```
The above code successfully runs `export.hello` on ExportWorker and then passes the results to the `msg.success` task which is run on MessageWorker. For the record, `chain()` works as well.
Finally, if the errback is included with the callback as a list in the `link` argument, it appears that the errback isn't even attempted:
```python
# On TaskProducer
Signature(
'export.hello', args=['homer'],
link=[
Signature(
'msg.success', kwargs={'email_address': '[email protected]'}, queue='msg'),
Signature(
'msg.err', queue='msg')
]
).apply_async()
# On ExportWorker
[2017-05-09 00:02:26,406: ERROR/PoolWorker-1] Task export.hello[571fd654-909c-4aa4-b6df-50967d172d9f] raised unexpected: Exception('NO HOMERS ALLOWED!',)
Traceback (most recent call last):
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 622, in __protected_call__
return self.run(*args, **kwargs)
File "/vagrant/test_worker/app/tasks.py", line 9, in hello
raise Exception("NO HOMERS ALLOWED!")
Exception: NO HOMERS ALLOWED!
# (MessageWorker has no output)
```
### Ideas
If I move the `msg.err` task to ExportWorker, it actually catches and handles the error:
```python
@app.task(name='export.hello', bind=True)
def hello(self, name='world'):
    if name == 'homer':
        raise Exception("NO HOMERS ALLOWED!")
    return 'hello {}'.format(name)


@app.task(name='msg.err', bind=True)
def email_err(self, context, exception, traceback):
    print("Handled error: {}".format(exception))
    return 'Something went wrong!'
```
```python
# TaskProducer:
from celery import Signature
Signature(
'export.hello', args=['homer'],
link_error=Signature('msg.err', queue='msg')
).apply_async()
# ExportWorker:
[2017-05-09 00:31:49,666: INFO/MainProcess] Received task: export.hello[14ca7a6e-6e7a-4b96-b0aa-42f4e4919f53]
[2017-05-09 00:31:49,738: INFO/PoolWorker-1] Found credentials in environment variables.
[2017-05-09 00:31:50,031: INFO/PoolWorker-1] Starting new HTTPS connection (1): us-west-2.queue.amazonaws.com
[2017-05-09 00:31:50,429: WARNING/PoolWorker-1] Handled error: NO HOMERS ALLOWED!
[2017-05-09 00:31:50,430: ERROR/PoolWorker-1] Task export.hello[14ca7a6e-6e7a-4b96-b0aa-42f4e4919f53] raised unexpected: Exception('NO HOMERS ALLOWED!',)
Traceback (most recent call last):
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 367, in trace_task
R = retval = fun(*args, **kwargs)
File "/vagrant/test_worker/env/lib/python3.5/site-packages/celery/app/trace.py", line 622, in __protected_call__
return self.run(*args, **kwargs)
File "/vagrant/test_worker/app/tasks.py", line 9, in hello
raise Exception("NO HOMERS ALLOWED!")
Exception: NO HOMERS ALLOWED!
# (MessageWorker has no output)
```
This is despite the fact that ExportWorker should only be reading off of the `export` queue. **This leads me to believe that the issue may be that the `link_error` logic does not respect the `queue` argument.** However, I haven't dug into the code enough to verify this.
This is possibly related to #3350.
| Reading through the [code](https://github.com/celery/celery/blob/e812c5780b4006516116f059ab498e1f043bdd50/celery/backends/base.py#L174-L175), I see that the `errback` is designed to be run inline if it has a function signature that accepts more than one arg. Naturally, the system is unable to examine the function signature of a task if that task is not in its codebase, which is where the problem lies.
I'd be curious to learn _why_ Celery was designed to handle errors synchronously. Doesn't this violate the advice for [pursuing granularity](http://celery.readthedocs.io/en/latest/userguide/tasks.html#granularity) when writing tasks? Additionally, it causes errors such as above. Or maybe I'm missing something? If I'm correct about above, how would the maintainers feel about a PR to schedule the `errback` as a task if the above error occurs? Unfortunately, it will be impossible to distinguish between old-style signatures that only accept a single task id arg and new-style signatures that accept the request, exception, and traceback. If there's a work-around for this that would allow us to support both (or if we could ditch support for old-style signatures), please let me know. Maybe a user should be able to include this detail as an `option` in the errback signature?
As an aside, the documentation around this topic seems a bit hard to follow. It is correctly documented, however it's done so in the [Chains documentation](http://docs.celeryproject.org/en/latest/userguide/canvas.html?highlight=link_error#chains):
> The worker won’t actually call the errback as a task, but will instead call the errback function directly so that the raw request, exception and traceback objects can be passed to it.
However, in the [Linking (callbacks/errbacks) documentation](http://docs.celeryproject.org/en/latest/userguide/calling.html?highlight=link_error#linking-callbacks-errbacks) it's described differently:
> You can also cause a callback to be applied if task raises an exception (errback), but this behaves differently from a regular callback in that it will be passed the id of the parent task, not the result. This is because it may not always be possible to serialize the exception raised, and so this way the error callback requires a result backend to be enabled, and the task must retrieve the result of the task instead.
Is there a difference between the errback mentioned in the Chains documentation and the errback mentioned in the Linking documentation?
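To make the dispatch question above concrete, here is a minimal sketch of the idea under discussion. It is not Celery's actual implementation; `errback` is assumed to be a `celery.canvas.Signature`, and `task_id`, `request`, `exc` and `tb` are assumed to come from the failed task.
```python
from celery.exceptions import NotRegistered


def dispatch_errback(errback, task_id, request, exc, tb):
    try:
        # Resolving the task class requires the errback to be registered in
        # this worker's registry, which is exactly what fails across codebases.
        _ = errback.type
    except NotRegistered:
        # Old-style fallback: send the errback to its own queue with just
        # the parent task id instead of calling it inline.
        errback.apply_async((task_id,))
    else:
        # New-style errback: call it inline with the raw request context.
        errback(request, exc, tb)
```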
A note that `link_error` appears to work correctly if the error occurs within a `chord`:
```python
from celery.canvas import chord, signature
from .celery import app


@app.task
def export_foo():
    # do some things
    pass


@app.task
def export_bar():
    # do some things
    pass


@app.task
def create_zip():
    # do some things
    pass


def export():
    _chain = chord([
        export_foo.s(),
        export_bar.s(),
    ])
    callback = create_zip.s().set(link_error=signature('msg.email_err'))
    _chain(callback)
```
If the exception occurs in `export_foo` or `export_bar`, the `msg.email_err` task is properly scheduled and run on the remote worker. However if the exception occurs in the `create_zip` task, the `link_error` is attempted to be run inline and the `celery.exceptions.NotRegistered` will be raised.
@alukach I am facing a similar problem with link_error being called synchronously. Were you able to find a solution or patch?
@alukach Taking inspiration from your case where chord callbacks work: link_error (errback) works when the errback is a chain.
app1.py
```
@app.task(name='export.hello', bind=True, queue="export")
def hello(self, name='world'):
    if name == 'homer':
        raise Exception("NO HOMERS ALLOWED!")
    return 'hello {}'.format(name)
```
app2.py
```
@app.task(name='msg.success', bind=True)
def email_success(self, msg, email_address):
    return 'Sending email: {}'.format(msg)


@app.task(name='msg.err', bind=True)
def email_err(self):
    print("Handled error")
    return 'Something went wrong!'
```
main.py
```
from celery import signature
import app2
_sign = signature(
'export.hello', args=['homer'],
link_error=(app2.email_err.si()|app2.email_err.si()),
app=app2.app,
queue="export"
)
_sign.apply_async()
```
@karan718 Interesting, so your example then runs `email_err` twice, correct? Good find though, I think there's clearly room for a patch to be made, I'd be interested to hear the input of some maintainers on this issue before submitting a patch (/ping @thedrow, @georgepsarakis, @auvipy ?)
@alukach you could replace one of the calls to email_err with a dummy task. It's not pretty.
@karan718 yeah, not the prettiest but working is definitely better than broken. Thanks for sharing.
I agree. A patch is worthwhile in this case.
I am facing this issue with celery 4.1.
I have 2 celery workers attached to one broker, but without sharing the code.
current_app.send_task('gsclient.run_task', kwargs={'task_id': task.id}, queue=queue_name,
link=complete_task.subtask(kwargs={'task_id': task_id}),
link_error=complete_task.subtask(kwargs={'task_id': task_id}),
)
When the task finishes without error, `link` is called and executed. In case of an error, `link_error` is not called, and I see the stack trace from the first post of this thread.
In my case, the workaround will be to wrap the task to handle all exceptions and return an error code instead.
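A rough sketch of that wrap-the-task idea; `app` and `do_the_work` are placeholders, not the actual project code:
```python
@app.task(bind=True)
def run_task(self, task_id):
    try:
        return do_the_work(task_id)  # hypothetical task body
    except Exception as exc:
        # Swallow the failure so the regular `link` callback still fires,
        # and report the error through the return value instead.
        return {'ok': False, 'error': str(exc)}
```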
This might have been addressed by commit https://github.com/celery/celery/commit/bc366c887f194a3ff1f99671d9742a525a066339 : "*Added support to bind=True task as link errors*"
We have run into the same issue, so I don't think this should be closed yet.
1. Worker API: Celery 4.2.1
2. Worker Search: Celery 4.2.1
The API triggers a task on the `search` queue with `link` and `link_error`:
```python
eventbus.send_task(
task_name,
(event_name, data,),
queue=queue_name,
link=signature("ahoy.eventbus.tasks.eventbus_success", args=(event_name, data, receiver),
queue="eventbus-coreapi"),
link_error=signature("ahoy.eventbus.tasks.eventbus_failure", args=(event_name, data, receiver),
queue="eventbus-coreapi"),
**kwargs
)
```
After the Search Worker processes the task, `eventbus_success` is triggered correctly on the `api` queue. However, `eventbus_failure` is executed on the `search` worker, even though the queue is specified.
Exception from `search` worker:
```
File "/usr/local/lib/python3.5/dist-packages/billiard/pool.py", line 1747, in safe_apply_callback
fun(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/celery/worker/request.py", line 367, in on_failure
self.id, exc, request=self, store_result=self.store_errors,
File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 162, in mark_as_failure
self._call_task_errbacks(request, exc, traceback)
File "/usr/local/lib/python3.5/dist-packages/celery/backends/base.py", line 171, in _call_task_errbacks
not isinstance(errback.type.__header__, partial) and
File "/usr/local/lib/python3.5/dist-packages/kombu/utils/objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "/usr/local/lib/python3.5/dist-packages/celery/canvas.py", line 471, in type
return self._type or self.app.tasks[self['task']]
File "/usr/local/lib/python3.5/dist-packages/celery/app/registry.py", line 21, in __missing__
raise self.NotRegistered(key)
celery.exceptions.NotRegistered: 'ahoy.eventbus.tasks.eventbus_failure'
```
This issue is still present.
Two apps, different queues, one worker per queue.
When calling an external task with a local err_handler via
`s = celery.signature('external_app.tasks.task', options={'link_error': err_handler})`
the external_app worker complains that err_handler is NotRegistered (maybe err_handler goes to the wrong queue?).
But this works fine:
```
s = celery.signature('external_app.tasks.task')
s = s.link_error(err_handler)
```
So a possible workaround is to use link_error() when the errback is in a different codebase than the calling task.
**UPDATE**:
I hadn't noticed that link_error() returns its argument, so in the second case err_handler is called directly. Not a solution.
Can you please try master?
Yes, I am on celery==4.3.0rc1 | 2019-03-19T16:44:07 |
celery/celery | 5,423 | celery__celery-5423 | [
"2700"
] | 52d63439fd9de6ee613de844306345c0584dff62 | diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -2,6 +2,7 @@
"""Task implementation: request context and the task base class."""
from __future__ import absolute_import, unicode_literals
+import signal
import sys
from billiard.einfo import ExceptionInfo
@@ -20,6 +21,7 @@
from celery.utils import abstract
from celery.utils.functional import mattrgetter, maybe_list
from celery.utils.imports import instantiate
+from celery.utils.log import get_logger
from celery.utils.nodenames import gethostname
from celery.utils.serialization import raise_with_context
@@ -388,6 +390,10 @@ def add_around(cls, attr, around):
setattr(cls, attr, meth)
def __call__(self, *args, **kwargs):
+ logger = get_logger(__name__)
+ handle_sigterm = lambda signum, frame: \
+ logger.info('SIGTERM received, waiting till the task finished')
+ signal.signal(signal.SIGTERM, handle_sigterm)
_task_stack.push(self)
self.push_request(args=args, kwargs=kwargs)
try:
| SIGTERM does not do a warm shutdown.
According to the documentation here: http://celery.readthedocs.org/en/latest/userguide/workers.html#process-signals
I should be able to send a SIGTERM to my running worker process and have it finish the task it is currently working on before it shuts down. However, when I send SIGTERM, the process exits immediately with the following traceback.
```
Traceback (most recent call last):
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/billiard/pool.py", line 1171, in mark_as_worker_lost
human_status(exitcode)),
WorkerLostError: Worker exited prematurely: signal 15 (SIGTERM).
```
I'm using Celery 3.1.18 with Django 1.4; is there something special that has to be done to integrate the signal handlers with Django?
Also posted here on the mailing list: https://groups.google.com/forum/#!topic/celery-users/t8g5KvIvQZ8
| Closing this, as we don't have the resources to complete this task.
This may be fixed in master; let's see if it comes back after the 4.0 release.
@ask any suggestions on where to start with trying to fix this? I'm happy to try to dig into it if you can provide me a starting point.
I'm not sure, are you running on Heroku perhaps? TERM does not propagate to child processes, so I'm unsure why this would happen.
We are running in ec2 and in vagrant, the signal is sent from supervisord to the celery process.
Are you using stopasgroup or killasgroup in your supervisor config? If so, you may not want to do that.
We are doing that. What's the reason why that would be causing issues?
Supervisor will send the SIGTERM down to all the forks (with stopasgroup) which I doubt would handle the shutdown that way. I have tried in the past (unsuccessfully) to catch the SIGTERM for gracefully exiting long operations, however I haven't had time to dig into why I had trouble doing so.
If the worker itself only gets the SIGTERM, it will wait for the forks to complete their tasks before exiting, I think.
If you want the worker to drop currently executing tasks you should send SIGQUIT to parent, not send TERM to child processes.
@ask Thanks :-) I'll play with that to see the behavior of it.
As for this issue, I think it is solved really. Having supervisor set to stopasgroup will cause the described problem and @feanil says they are doing that. Best, I think, would be just to set stopasgroup back to false and be done, as the SIGTERMed worker will wait on all tasks to complete (which is what he was looking for).
Thanks! Then I guess we can close again until further information :)
I'm looking into this problem with @feanil, and I am mystified about the right way to get the behavior we want.
What we want: for "supervisor restart" to restart celery workers, but let them finish their current task.
We found that the default supervisor signal (TERM) will restart the workers, but will cause them to abort their current task.
Changing the signal to INT makes the worker restart if it is idle, but if busy, will finish its task and then restart.
Does this match your understanding of how celery workers behave?
What we are trying to debug is a situation where workers (even idle workers) get stuck in a supervisor STOPPING state. We don't know why it is. Switching back to TERM for the stop signal seems to fix the problem, but aborts in-progress tasks. Is there a way to get what we need?
I guess I would need to know more about your configuration first, but sending TERM to the master/parent process initiates a warm shutdown sequence. It closes the connection to the broker, waits for all currently executing tasks to finish and then terminates.
Currently executing tasks means any task in the 'active' state; tasks merely reserved by the worker are left unacknowledged so that the broker resends them as soon as the worker closes the broker connection.
I am definitely seeing celery tasks aborting when supervisor sends TERM. I don't know how to figure out why that is different than your description.
@nedbat When you have stopasgroup enabled, this will send a TERM signal to the worker plus all worker forks causing the forks to abort. You want to turn off stopasgroup (and update supervisor) and the TERM signal will only get sent to the worker.
Apologies if I am not understanding the issue properly. I use supervisor with celery very often, though. If you want to post a gist to your supervisor config I'll be happy to look at it.
The TERM signal being sent to the process _group_ would certainly explain what you're seeing!
@ask @mverrilli thanks for walking me through it. Removing stopasgroup, and using the default signal of TERM seems good. :)
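For reference, a supervisor program section along those lines might look like the following. The program name, command path and timeout value are placeholders rather than anything from this thread; the option names are standard supervisord settings.
```ini
[program:celeryworker]
command=/path/to/venv/bin/celery -A proj worker -l info
; warm shutdown: only the parent receives TERM and it waits for active tasks
stopsignal=TERM
stopasgroup=false
; still clean up the whole process group if a hard kill is ever needed
killasgroup=true
; give long-running tasks time to finish before supervisor escalates
stopwaitsecs=600
```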
I am finding celery multi is exiting immediately on SIGTERM rather than waiting for tasks to finish, is that case applicable to this issue? Also I'm sending SIGTERM to all processes as reported by `ps aux`, should we perhaps only send to the root process or something?
@carn1x you should only be sending the SIGTERM signal to the parent process, not the child threads.
Something like this would probably work for you -- it's been working for me.
```
# find parent celery process id
CELERY_PID=$(ps aux --sort=start_time | grep 'celery worker' | grep 'bin/celery' | head -1 | awk '{print $2}')
# warm shutdown builder via SIGTERM signal to parent celery process
kill -TERM $CELERY_PID
```
I use this in a bash script before doing some cleanup maint. I actually wait to confirm the thread stopped before continuing:
```
while kill -0 $CELERY_PID 2> /dev/null; do
#make sure you're not waiting forever
done
```
If you read the comments above by @ask, he clarified that the process group shouldn't get killed, just the parent. I can confirm the code above works great for me on production systems. I hope this helps.
BTW, that above assumes you only have one worker running -- with a set of threads. If you have multiple workers, you'll have to adjust that to parse out the correct parent process ID. The main point is that you need to kill the parent (the first celery worker thread).
thanks @rocksfrow, working great :)
@ask current example [supervisord/celeryd.conf](https://github.com/celery/celery/blob/master/extra/supervisord/celeryd.conf) uses `killasgroup=true`. Maybe the default should be false?
Expected behavior of acks_late and a SIGTERM
Going off ask's reply above..
So let's assume you have a worker working on a 10 second task with acks_late=True. Halfway through, the parent process gets a SIGTERM.
Based on what Ask said, and what I have experienced.. what happens is
1) The parent immediately cuts the connection to the broker. This triggers the late ack to trip, and the task that is in progress gets reassigned to another worker.
2) The parent waits for the child to finish executing his current task, completing the task once
3) Another worker is re-given this task (due to 1 and late_ack), and this task is duplicated a second time.
Does this sound correct? Is there any clean way to both use late_ack AND do a graceful SIGTERM for a shutdown?
Great comment @brianwawok, warm shutdown with acks_late looks broken. The task being immediately reassigned does not meet any definition of warm shutdown. Although this should not cause correctness issues because acks_late tasks are supposed to be idempotent.
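Since an acks_late task can be redelivered during this kind of shutdown, the usual advice is to keep the task body idempotent. A hypothetical sketch (every name below is made up):
```python
@app.task(acks_late=True, bind=True)
def process_order(self, order_id):
    if already_processed(order_id):   # hypothetical de-duplication check
        return 'skipped'
    handle_order(order_id)            # hypothetical side-effecting work
    mark_processed(order_id)          # record completion for the check above
```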
I have a small workaround to fix it for now
```python
import signal

from celery import Celery, Task
from celery.utils.log import get_task_logger


class MyBaseTask(Task):
    logger = get_task_logger('my_logger')

    def __call__(self, *args, **kwargs):
        signal.signal(
            signal.SIGTERM,
            lambda signum, frame: self.logger.info(
                'SIGTERM received, wait till the task finished'))
        return super().__call__(*args, **kwargs)


app = Celery('my_app')
app.Task = MyBaseTask
```
For me it worked
could you please add a pr with your some proposed fix? @namevic | 2019-04-02T05:26:54 |
|
celery/celery | 5,462 | celery__celery-5462 | [
"5411"
] | f2cab7715cceafcae1343fdcdc65704e0a2c751f | diff --git a/celery/utils/time.py b/celery/utils/time.py
--- a/celery/utils/time.py
+++ b/celery/utils/time.py
@@ -207,8 +207,8 @@ def remaining(start, ends_in, now=None, relative=False):
~datetime.timedelta: Remaining time.
"""
now = now or datetime.utcnow()
- if now.utcoffset() != start.utcoffset():
- # Timezone has changed, or DST started/ended
+ if str(start.tzinfo) == str(now.tzinfo) and now.utcoffset() != start.utcoffset():
+ # DST started/ended
start = start.replace(tzinfo=now.tzinfo)
end_date = start + ends_in
if relative:
| diff --git a/t/unit/utils/test_time.py b/t/unit/utils/test_time.py
--- a/t/unit/utils/test_time.py
+++ b/t/unit/utils/test_time.py
@@ -107,9 +107,53 @@ def test_maybe_timedelta(arg, expected):
assert maybe_timedelta(arg) == expected
-def test_remaining_relative():
+def test_remaining():
+ # Relative
remaining(datetime.utcnow(), timedelta(hours=1), relative=True)
+ """
+ The upcoming cases check whether the next run is calculated correctly
+ """
+ eastern_tz = pytz.timezone("US/Eastern")
+ tokyo_tz = pytz.timezone("Asia/Tokyo")
+
+ # Case 1: `start` in UTC and `now` in other timezone
+ start = datetime.now(pytz.utc)
+ now = datetime.now(eastern_tz)
+ delta = timedelta(hours=1)
+ assert str(start.tzinfo) == str(pytz.utc)
+ assert str(now.tzinfo) == str(eastern_tz)
+ rem_secs = remaining(start, delta, now).total_seconds()
+ # assert remaining time is approximately equal to delta
+ assert rem_secs == pytest.approx(delta.total_seconds(), abs=1)
+
+ # Case 2: `start` and `now` in different timezones (other than UTC)
+ start = datetime.now(eastern_tz)
+ now = datetime.now(tokyo_tz)
+ delta = timedelta(hours=1)
+ assert str(start.tzinfo) == str(eastern_tz)
+ assert str(now.tzinfo) == str(tokyo_tz)
+ rem_secs = remaining(start, delta, now).total_seconds()
+ assert rem_secs == pytest.approx(delta.total_seconds(), abs=1)
+
+ """
+ Case 3: DST check
+ Suppose start (which is last_run_time) is in EST while next_run is in EDT, then
+ check whether the `next_run` is actually the time specified in the start (i.e. there is not an hour diff due to DST).
+ In 2019, DST starts on March 10
+ """
+ start = eastern_tz.localize(datetime(month=3, day=9, year=2019, hour=10, minute=0)) # EST
+ now = eastern_tz.localize(datetime(day=11, month=3, year=2019, hour=1, minute=0)) # EDT
+ delta = ffwd(hour=10, year=2019, microsecond=0, minute=0, second=0, day=11, weeks=0, month=3)
+ # `next_actual_time` is the next time to run (derived from delta)
+ next_actual_time = eastern_tz.localize(datetime(day=11, month=3, year=2019, hour=10, minute=0)) # EDT
+ assert start.tzname() == "EST"
+ assert now.tzname() == "EDT"
+ assert next_actual_time.tzname() == "EDT"
+ rem_time = remaining(start, delta, now)
+ next_run = now + rem_time
+ assert next_run == next_actual_time
+
class test_timezone:
| Minor bug in celery utils time.py file
There is a function named remaining in the celery.utils.time.py file. In that function, I think the statement start = start.replace(tzinfo=now.tzinfo) (line 211) has to be replaced with start = start.astimezone(now.tzinfo).
I believe the intent of the statement (`start = start.replace(tzinfo=now.tzinfo)`) is to convert the `start` variable's date and time to the `now` variable's timezone. But the `replace` call in that statement merely swaps the timezone of the `start` variable without changing its date and time values. I think `astimezone` is the method required for this purpose (it converts the date and time of the `start` variable to the given timezone argument).
So, I think `start = start.replace(tzinfo=now.tzinfo)` must be changed to `start = start.astimezone(now.tzinfo)`.
Please correct me incase I am wrong.
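To illustrate the difference with plain `datetime` and `pytz` (the date and zone below are chosen arbitrarily):
```python
from datetime import datetime

import pytz

eastern = pytz.timezone('US/Eastern')
start = datetime(2019, 3, 9, 15, 0, tzinfo=pytz.utc)  # 15:00 UTC

print(start.replace(tzinfo=eastern))   # still 15:00 on the clock, only the tzinfo is swapped
print(start.astimezone(eastern))       # 10:00 in US/Eastern, i.e. the same instant converted
```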
Module versions:
Python 3.5.2
celery==4.2.1
django-celery-beat==1.4.0
| I think that we should be calling [`make_aware()`](https://github.com/celery/celery/blob/master/celery/utils/time.py#L285) instead of doing a replace.
Care to issue a PR?
I would like to... But was something changed in the `5.0-devel` branch? I don't find that `start = start.replace(tzinfo=now.tzinfo)` line itself. So, I would like to know whether the issue has already been taken care of. Otherwise, I will issue a PR. Please let me know.
Also, `make_aware` just makes a `datetime` aware from being naive I guess. But from the code, the statement `start = start.replace(tzinfo=now.tzinfo)` is called when the `utcoffset` of the `start` and `now` variables are different. So, doesn't that mean the `replace` function was trying to normalize them to one timezone before finding the remaining time (the purpose of `remaining` function)? And I think `astimezone` is suitable for this rather than `make_aware`. Please correct me if I am wrong.
`make_aware` also eventually calls `astimezone`.
So, I think the bug can be resolved by just removing the following lines https://github.com/celery/celery/blob/59547c5ccb78d2f30d53cb7fedf9db147f2153c8/celery/utils/time.py#L210-L212
The reason is that even if both the `start` and `now` variables are in different timezones, the line https://github.com/celery/celery/blob/59547c5ccb78d2f30d53cb7fedf9db147f2153c8/celery/utils/time.py#L216 (subtraction of two datetimes) will take care of the timezone difference and return the time difference accordingly.
So, I hope there is no need for a PR (for this minor change).
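A quick interactive check of that claim:
```python
from datetime import datetime

import pytz

a = datetime.now(pytz.timezone('Asia/Tokyo'))
b = datetime.now(pytz.timezone('US/Eastern'))
print(a - b)  # roughly 0:00:00: subtracting aware datetimes normalizes both to UTC
```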
I'm quite busy with Celery 5.
If you can prove that's the case with a unit test and a contribution, go ahead and do so.
Hey, I just found that in `5.0-devel` branch, the above mentioned lines have already been removed.
https://github.com/celery/celery/blob/7ee75fa9882545bea799db97a40cc7879d35e726/celery/utils/time.py#L180-L207
So, the issue might have already been taken care of. In that case, this issue can be closed.
If the unit-test contribution is still needed, then please let me know.
I think we still need this on 4.3 and I don't think there was a test added to that.
Hey, I can do that. Can you please tell me how to add a test and open a PR (I know about issuing a PR but I'm not sure about adding unit tests)? Sorry, but I don't have much knowledge about unit tests.
Just need some clarification on the specification about scheduling:
Suppose user in America/Los_Angeles timezone schedules a task to run like this (he schedules during summer): 10:00 AM, 11th day of the month, once in 6 months.
So, what is the expected scheduling?
1. When the task runs in summer, it will run on 10:00 AM PDT time and during winter, it will run at 10:00 AM PST time i.e. it will run at 10:00 AM time irrespective of PDT or PST.
OR
2. It will run on 10:00 AM PDT time during summer and 9:00 AM PST (equivalent of 10:00 AM PDT) during winter. | 2019-04-15T06:33:04 |
celery/celery | 5,486 | celery__celery-5486 | [
"5439"
] | 3d387704d4a18db5aea758e9a26c1ee4d71659df | diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -92,7 +92,6 @@ class Context(object):
errbacks = None
timelimit = None
origin = None
- task_name = None
_children = None # see property
_protected = 0
@@ -129,7 +128,6 @@ def as_execution_options(self):
'retries': self.retries,
'reply_to': self.reply_to,
'origin': self.origin,
- 'task_name': self.task_name
}
@property
diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -699,7 +699,7 @@ def _store_result(self, task_id, result, state,
if self.app.conf.find_value_for_key('extended', 'result'):
if request:
request_meta = {
- 'name': getattr(request, 'task_name', None),
+ 'name': getattr(request, 'task', None),
'args': getattr(request, 'args', None),
'kwargs': getattr(request, 'kwargs', None),
'worker': getattr(request, 'hostname', None),
| diff --git a/t/unit/tasks/test_result.py b/t/unit/tasks/test_result.py
--- a/t/unit/tasks/test_result.py
+++ b/t/unit/tasks/test_result.py
@@ -408,7 +408,7 @@ def test_get_request_meta(self):
x = self.app.AsyncResult('1')
request = Context(
- task_name='foo',
+ task='foo',
children=None,
args=['one', 'two'],
kwargs={'kwarg1': 'three'},
| [Celery 4.3] when result_extended=True task name is always None
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
```
</p>
</details>
# Expected Behavior
When `result_extended` is set to `True`, we expect to get the task name using the `name` attribute.
# Actual Behavior
The `name` is always `None`. After debugging, the `Context` object that is being passed to `BaseKeyValueStoreBackend.store_result` contains the task name under the attribute `task`, not the `task_name` attribute that `BaseKeyValueStoreBackend._store_result` is trying to read from the request object:
```python
if self.app.conf.find_value_for_key('extended', 'result'):
    if request:
        request_meta = {
            'name': getattr(request, 'task_name', None),
            'args': getattr(request, 'args', None),
            'kwargs': getattr(request, 'kwargs', None),
            'worker': getattr(request, 'hostname', None),
            'retries': getattr(request, 'retries', None),
            'queue': request.delivery_info.get('routing_key')
            if hasattr(request, 'delivery_info') and
            request.delivery_info else None
        }
        meta.update(request_meta)
```
Also the unit test `test_result.test_AsyncResult.test_get_request_meta` is using a `Context` object with an explicit `task_name` set.
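A minimal way to observe this from the client side, assuming `result_extended = True`, a real result backend, and an illustrative `add` task:
```python
result = add.delay(2, 2)
result.get()
print(result.args, result.kwargs)  # populated as expected
print(result.name)                 # None: the backend looks up `task_name`,
                                   # but the Context only carries `task`
```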
| Hi @johnarnold, I'm pinging you since you contributed the original PR.
It seems like `task_name` is not properly initialized. Do you have an idea why?
@thedrow Ouch. I swear I tested for task name and had it working with flower. I'll take a look.
Found: https://github.com/celery/celery/blob/master/celery/app/task.py#L95
Hi,
I'm facing the same issue. I'm not familiarized with the source code but I wanted to share what I found while trying to understand how everything works.
The message sent to the worker has the task name under the key `'task'`. See:
- https://github.com/celery/celery/blob/master/celery/app/amqp.py#L353 (protocol v2)
- https://github.com/celery/celery/blob/master/celery/app/amqp.py#L433 (protocol v1)
In fact, the consumer fails otherwise. See:
- https://github.com/celery/celery/blob/master/celery/worker/consumer/consumer.py#L546 (protocol v2)
- https://github.com/celery/celery/blob/master/celery/worker/consumer/consumer.py#L555 (protocol v1)
Using that same message format, a `Request` object is created. See:
- https://github.com/celery/celery/blob/b2668607c909c61becd151905b4525190c19ff4a/celery/worker/strategy.py#L149
The `Context` class is initialized using that request dict. See:
- https://github.com/celery/celery/blob/b2668607c909c61becd151905b4525190c19ff4a/celery/worker/request.py#L523
- https://github.com/celery/celery/blob/4d4fb3bf04ebf1642428dccdd578f2f244aa158f/celery/app/trace.py#L365
Since the request dict has the task name under the key `'task'` (not `'task_name'`), the `Context` object has the task name in the attribute `'task'` as well. See:
- https://github.com/celery/celery/blob/master/celery/app/task.py#L100
- https://github.com/celery/celery/blob/master/celery/app/task.py#L103
It seems that using `task` instead of `task_name` in `BaseKeyValueStoreBackend._store_result` should fix this. Of course, the attribute in the `Context` class should be changed as well:
- https://github.com/celery/celery/blob/master/celery/app/task.py#L95
- https://github.com/celery/celery/blob/master/celery/app/task.py#L132
I hope it helps!
@svidela thank you for the explanation, that's what I was saying, the request object has a `task` attribute and not a `task_name`, the tests pass because a `Context` object is explicitly set with `task_name`. | 2019-04-25T14:58:35 |
celery/celery | 5,499 | celery__celery-5499 | [
"4457"
] | a7f92282d6fa64b03df8d87517f37e3fe3023d93 | diff --git a/celery/concurrency/asynpool.py b/celery/concurrency/asynpool.py
--- a/celery/concurrency/asynpool.py
+++ b/celery/concurrency/asynpool.py
@@ -482,8 +482,16 @@ def register_with_event_loop(self, hub):
[self._track_child_process(w, hub) for w in self._pool]
# Handle_result_event is called whenever one of the
# result queues are readable.
- [hub.add_reader(fd, self.handle_result_event, fd)
- for fd in self._fileno_to_outq]
+ stale_fds = []
+ for fd in self._fileno_to_outq:
+ try:
+ hub.add_reader(fd, self.handle_result_event, fd)
+ except OSError:
+ logger.info("Encountered OSError while trying "
+ "to access fd %s ", fd, exc_info=True)
+ stale_fds.append(fd) # take note of stale fd
+ for fd in stale_fds: # Remove now defunct file descriptors
+ self._fileno_to_outq.pop(fd, None)
# Timers include calling maintain_pool at a regular interval
# to be certain processes are restarted.
@@ -1057,7 +1065,7 @@ def create_process_queues(self):
return inq, outq, synq
def on_process_alive(self, pid):
- """Called when reciving the :const:`WORKER_UP` message.
+ """Called when receiving the :const:`WORKER_UP` message.
Marks the process as ready to receive work.
"""
| diff --git a/t/unit/worker/test_worker.py b/t/unit/worker/test_worker.py
--- a/t/unit/worker/test_worker.py
+++ b/t/unit/worker/test_worker.py
@@ -10,8 +10,9 @@
import pytest
from amqp import ChannelError
-from case import Mock, patch, skip
+from case import Mock, mock, patch, skip
from kombu import Connection
+from kombu.asynchronous import get_event_loop
from kombu.common import QoS, ignore_errors
from kombu.transport.base import Message
from kombu.transport.memory import Transport
@@ -29,7 +30,7 @@
from celery.utils.nodenames import worker_direct
from celery.utils.serialization import pickle
from celery.utils.timer2 import Timer
-from celery.worker import components, consumer, state
+from celery.worker import autoscale, components, consumer, state
from celery.worker import worker as worker_module
from celery.worker.consumer import Consumer
from celery.worker.pidbox import gPidbox
@@ -791,6 +792,55 @@ def test_with_autoscaler(self):
)
assert worker.autoscaler
+ @pytest.mark.nothreads_not_lingering
+ @mock.sleepdeprived(module=autoscale)
+ def test_with_autoscaler_file_descriptor_safety(self):
+ # Given: a test celery worker instance with auto scaling
+ worker = self.create_worker(
+ autoscale=[10, 5], use_eventloop=True,
+ timer_cls='celery.utils.timer2.Timer',
+ threads=False,
+ )
+ # Given: This test requires a QoS defined on the worker consumer
+ worker.consumer.qos = qos = QoS(lambda prefetch_count: prefetch_count, 2)
+ qos.update()
+
+ # Given: We have started the worker pool
+ worker.pool.start()
+
+ # Then: the worker pool is the same as the autoscaler pool
+ auto_scaler = worker.autoscaler
+ assert worker.pool == auto_scaler.pool
+
+ # Given: Utilize kombu to get the global hub state
+ hub = get_event_loop()
+ # Given: Initial call the Async Pool to register events works fine
+ worker.pool.register_with_event_loop(hub)
+
+ # Create some mock queue message and read from them
+ _keep = [Mock(name='req{0}'.format(i)) for i in range(20)]
+ [state.task_reserved(m) for m in _keep]
+ auto_scaler.body()
+
+ # Simulate a file descriptor from the list is closed by the OS
+ # auto_scaler.force_scale_down(5)
+ # This actually works -- it releases the semaphore properly
+ # Same with calling .terminate() on the process directly
+ for fd, proc in worker.pool._pool._fileno_to_outq.items():
+ # however opening this fd as a file and closing it will do it
+ queue_worker_socket = open(str(fd), "w")
+ queue_worker_socket.close()
+ break # Only need to do this once
+
+ # When: Calling again to register with event loop ...
+ worker.pool.register_with_event_loop(hub)
+
+ # Then: test did not raise "OSError: [Errno 9] Bad file descriptor!"
+
+ # Finally: Clean up so the threads before/after fixture passes
+ worker.terminate()
+ worker.pool.terminate()
+
def test_dont_stop_or_terminate(self):
worker = self.app.WorkController(concurrency=1, loglevel=0)
worker.stop()
| Connection to broker lost. Trying to re-establish the connection: OSError: [Errno 9] Bad file descriptor
## Checklist
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Software
celery==4.1.0
kombu==4.1.0
amqp==2.2.2
Python 3.6.1
broker: rabbitmq 3.6.14
result backend: redis
## Steps to reproduce
1. celery -A proj worker -Q Q1 --autoscale=10,1 -Ofair --without-gossip --without-mingle --heartbeat-interval=60 -n Q1
2. celery lost connection to broker
3. after restarting affected worker the connection is successfully re-established and the worker starts processing tasks
## Expected behavior
celery should re-establish connection to broker
## Actual behavior
celery tries to re-establish connection to broker but fails with this error message (which is repeated every second) until manually restarted:
```
[user] celery.worker.consumer.consumer WARNING 2017-12-18 00:38:27,078 consumer:
Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 320, in start
blueprint.start(self)
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 596, in start
c.loop(*c.loop_args())
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/worker/loops.py", line 47, in asynloop
obj.controller.register_with_event_loop(hub)
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/worker/worker.py", line 217, in register_with_event_loop
description='hub.register',
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/bootsteps.py", line 151, in send_all
fun(parent, *args)
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/worker/components.py", line 178, in register_with_event_loop
w.pool.register_with_event_loop(hub)
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/concurrency/prefork.py", line 134, in register_with_event_loop
return reg(loop)
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/concurrency/asynpool.py", line 476, in register_with_event_loop
for fd in self._fileno_to_outq]
File "/home/user/.envs/user/lib/python3.6/site-packages/celery/concurrency/asynpool.py", line 476, in <listcomp>
for fd in self._fileno_to_outq]
File "/home/user/.envs/user/lib/python3.6/site-packages/kombu/async/hub.py", line 207, in add_reader
return self.add(fds, callback, READ | ERR, args)
File "/home/user/.envs/user/lib/python3.6/site-packages/kombu/async/hub.py", line 158, in add
self.poller.register(fd, flags)
File "/home/user/.envs/user/lib/python3.6/site-packages/kombu/utils/eventio.py", line 67, in register
self._epoll.register(fd, events)
OSError: [Errno 9] Bad file descriptor
```
| 2019-05-03T04:02:52 |
|
celery/celery | 5,500 | celery__celery-5500 | [
"5057"
] | a7f92282d6fa64b03df8d87517f37e3fe3023d93 | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -278,7 +278,10 @@ def exception_to_python(self, exc):
cls = create_exception_cls(exc_type,
celery.exceptions.__name__)
exc_msg = exc['exc_message']
- exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
+ try:
+ exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
+ except Exception as err: # noqa
+ exc = Exception('{}({})'.format(cls, exc_msg))
if self.serializer in EXCEPTION_ABLE_CODECS:
exc = get_pickled_exception(exc)
return exc
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -8,6 +8,7 @@
from case import ANY, Mock, call, patch, skip
from kombu.serialization import prepare_accept_content
+import celery
from celery import chord, group, signature, states, uuid
from celery.app.task import Context, Task
from celery.backends.base import (BaseBackend, DisabledBackend,
@@ -29,6 +30,12 @@ def __init__(self, *args, **kwargs):
self.args = args
+class paramexception(Exception):
+
+ def __init__(self, param):
+ self.param = param
+
+
if sys.version_info[0] == 3 or getattr(sys, 'pypy_version_info', None):
Oldstyle = None
else:
@@ -456,6 +463,17 @@ def test_exception_to_python_when_attribute_exception(self):
result_exc = b.exception_to_python(test_exception)
assert str(result_exc) == 'Raise Custom Message'
+ def test_exception_to_python_when_type_error(self):
+ b = BaseBackend(app=self.app)
+ celery.TestParamException = paramexception
+ test_exception = {'exc_type': 'TestParamException',
+ 'exc_module': 'celery',
+ 'exc_message': []}
+
+ result_exc = b.exception_to_python(test_exception)
+ del celery.TestParamException
+ assert str(result_exc) == "<class 't.unit.backends.test_base.paramexception'>([])"
+
def test_wait_for__on_interval(self):
self.patching('time.sleep')
b = BaseBackend(app=self.app)
| SQLAlchemy's ProgrammingError exception causes a traceback when AsyncResult.ready() is called
## Checklist
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [ ] I have verified that the issue exists against the `master` branch of Celery.
Checked with 4.2.1
## Steps to reproduce
Simple reproducer (please note I used dummy values but I hit this in production with real world values):
**tasks.py**
```python
from celery import Celery
from sqlalchemy.exc import ProgrammingError
celery = Celery(...)
@celery.task(name='fail')
def fail(param):
raise ProgrammingError(Exception(), 'params', Exception('exception message'))
```
**test.py**
```python
import time
from celery import Celery
import sqlalchemy # this import must be here, otherwise it behaves differently and works
celery = Celery(...)
if __name__ == '__main__':
time.sleep(5)
for dom in ('example.com', 'example2.com'):
result = celery.send_task('fail', args=(dom,))
time.sleep(10)
if result.ready() and result.successful(): # .ready() fails
print(result.get())
```
**Result**
```
celery_test_tasks_1 | [2018-09-18 13:16:45,295: ERROR/ForkPoolWorker-1] Task fail[610284d4-e7dc-47a0-9ab0-efcec8609599] raised unexpected: ProgrammingError('(builtins.Exception) exception message',)
celery_test_tasks_1 | Traceback (most recent call last):
celery_test_tasks_1 | File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 382, in trace_task
celery_test_tasks_1 | R = retval = fun(*args, **kwargs)
celery_test_tasks_1 | File "/usr/local/lib/python3.6/site-packages/celery/app/trace.py", line 641, in __protected_call__
celery_test_tasks_1 | return self.run(*args, **kwargs)
celery_test_tasks_1 | File "/home/user/tasks.py", line 18, in fail
celery_test_tasks_1 | raise ProgrammingError(Exception(), 'params', Exception('exception message'))
celery_test_tasks_1 | sqlalchemy.exc.ProgrammingError: (builtins.Exception) exception message [SQL: Exception()] [parameters: 'params'] (Background on this error at: http://sqlalche.me/e/f405)
```
```
celery_run_test_1 | Traceback (most recent call last):
celery_run_test_1 | File "test.py", line 22, in <module>
celery_run_test_1 | if result.ready():
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 311, in ready
celery_run_test_1 | return self.state in self.backend.READY_STATES
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 471, in state
celery_run_test_1 | return self._get_task_meta()['status']
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 410, in _get_task_meta
celery_run_test_1 | return self._maybe_set_cache(self.backend.get_task_meta(self.id))
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 365, in get_task_meta
celery_run_test_1 | meta = self._get_task_meta_for(task_id)
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 680, in _get_task_meta_for
celery_run_test_1 | return self.decode_result(meta)
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 284, in decode_result
celery_run_test_1 | return self.meta_from_decoded(self.decode(payload))
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 280, in meta_from_decoded
celery_run_test_1 | meta['result'] = self.exception_to_python(meta['result'])
celery_run_test_1 | File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 260, in exception_to_python
celery_run_test_1 | exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
celery_run_test_1 | TypeError: __init__() missing 2 required positional arguments: 'params' and 'orig'
```
## Expected behavior
Do not fail when the received exception cannot be reconstructed. Use a dummy exception.
## Actual behavior
Fails because repr() of sqlalchemy's `ProgrammingError` cannot be used to reconstruct exception.
```python
e = ProgrammingError(Exception(), 'params', Exception('exception message'))
>>> repr(e)
"ProgrammingError('(builtins.Exception) exception message',)"
>>> ProgrammingError('(builtins.Exception) exception message',)
TypeError: __init__() missing 2 required positional arguments: 'params' and 'orig'
```
| Was there ever a fix for this? I am running into this problem.
I'm hitting this too. Is there a workaround we can use while we wait for a fix?
@rclecha @oskarsinger can you try the following:
```python
return_value = result.get(propagate=False)
if result.failed():
...
```
So I wonder if https://github.com/celery/celery/commit/33713dbf69cbd05b59a55077c137d256d652524b is fixing this? #4860
Nah, that doesn't do the trick.
So to summarize:
- [Some SQLAlchemy exceptions](https://github.com/zzzeek/sqlalchemy/blob/dc48ac54893491c5ecd64a868883a22769376e9a/lib/sqlalchemy/exc.py#L462-L478) can't be instantiated again by simply using the `args` attribute but require the original exception that it wraps and some other arguments. In fact the args attribute is set with a human-readable representation of the passed arguments.
- When Celery is configured to use the JSON serializer the exceptions are [prepared to be stored as JSON](https://github.com/celery/celery/blob/e257646136e6fae73186d7385317f4e20cd36130/celery/backends/base.py#L244-L251) which means using the [`args` attribute](https://github.com/zzzeek/sqlalchemy/blob/dc48ac54893491c5ecd64a868883a22769376e9a/lib/sqlalchemy/exc.py#L471-L472) of the exception instance, which is a human readable representation of the exception but doesn't contain the arguments needed for deserialization.
- When a task is throwing an SQLAlchemy exception, it's successfully serialized for storage but misses the [required arguments for deserialization](https://github.com/celery/celery/blob/e257646136e6fae73186d7385317f4e20cd36130/celery/backends/base.py#L269-L270).
I'm mostly writing this down so other people can find this information as well, since I'm not sure how this has worked before. The fact is, the exception behavior has been like this in SQLAlchemy [for a while now](https://github.com/zzzeek/sqlalchemy/commit/2d8b5bb4f36e5624f25b170391fe42d3bfbeb623), so my guess is that either a) this never worked with the JSON serializer or b) it has been subtly introduced recently in Celery. Can anyone sanity check that?
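For anyone who wants to reproduce just the failing round trip locally, here is a minimal sketch based on the behaviour reported above (it only needs SQLAlchemy installed; the printed values are the ones shown in the original report):
```python
from sqlalchemy.exc import ProgrammingError

exc = ProgrammingError(Exception(), 'params', Exception('exception message'))

# Only the human-readable args survive; this is essentially what gets stored
# when the JSON serializer is used:
print(exc.args)   # ('(builtins.Exception) exception message',)

# Rebuilding the exception from those args fails, because __init__ also
# requires `params` and `orig`:
try:
    ProgrammingError(*exc.args)
except TypeError as err:
    print(err)    # __init__() missing 2 required positional arguments: 'params' and 'orig'
```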
Yeah, still the case. This is kind of a shame I can’t keep track of failed tasks natively.
This issue also affects me:
```
Traceback (most recent call last):
File "/opt/app/.venv/lib/python3.6/site-packages/celery/app/builtins.py", line 69, in unlock_chord
ready = deps.ready()
File "/opt/app/.venv/lib/python3.6/site-packages/celery/result.py", line 619, in ready
return all(result.ready() for result in self.results)
File "/opt/app/.venv/lib/python3.6/site-packages/celery/result.py", line 619, in <genexpr>
return all(result.ready() for result in self.results)
File "/opt/app/.venv/lib/python3.6/site-packages/celery/result.py", line 313, in ready
return self.state in self.backend.READY_STATES
File "/opt/app/.venv/lib/python3.6/site-packages/celery/result.py", line 473, in state
return self._get_task_meta()['status']
File "/opt/app/.venv/lib/python3.6/site-packages/celery/result.py", line 412, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "/opt/app/.venv/lib/python3.6/site-packages/celery/backends/base.py", line 386, in get_task_meta
meta = self._get_task_meta_for(task_id)
File "/opt/app/.venv/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 53, in _inner
return fun(*args, **kwargs)
File "/opt/app/.venv/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 130, in _get_task_meta_for
return self.meta_from_decoded(task.to_dict())
File "/opt/app/.venv/lib/python3.6/site-packages/celery/backends/base.py", line 301, in meta_from_decoded
meta['result'] = self.exception_to_python(meta['result'])
File "/opt/app/.venv/lib/python3.6/site-packages/celery/backends/base.py", line 281, in exception_to_python
exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
TypeError: __init__() missing 2 required positional arguments: 'params' and 'orig'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/app/.venv/lib/python3.6/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/opt/app/.venv/lib/python3.6/site-packages/celery/app/trace.py", line 648, in __protected_call__
return self.run(*args, **kwargs)
File "/opt/app/.venv/lib/python3.6/site-packages/celery/app/builtins.py", line 72, in unlock_chord
exc=exc, countdown=interval, max_retries=max_retries,
File "/opt/app/.venv/lib/python3.6/site-packages/celery/app/task.py", line 725, in retry
raise ret
celery.exceptions.Retry: Retry in 1s: TypeError("__init__() missing 2 required positional arguments: 'params' and 'orig'",)
```
We're also hitting this.
Although not the best solution, would anyone perhaps like to try the following patch:
```
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -278,7 +278,10 @@ class Backend(object):
cls = create_exception_cls(exc_type,
celery.exceptions.__name__)
exc_msg = exc['exc_message']
- exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
+ try:
+ exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
+ except Exception: # noqa
+ exc = Exception(repr(exc))
if self.serializer in EXCEPTION_ABLE_CODECS:
exc = get_pickled_exception(exc)
return exc
```
@georgepsarakis The patch is working for now. Thanks
@georgepsarakis That seems to work quite well, any plans to submit this as pull request?
JFYI: I'm experiencing same issue with filelock module [Timeout exception](https://github.com/benediktschmitt/py-filelock/blob/71a5e02664a34c48792f4716f0761dc3bd23a4c4/filelock.py#L85-L101).
@nijel thanks for the feedback, glad it works! Contributions are welcome, so if anyone who has already applied the patch and found it works for their deployment can open a PR with the patch and a simple test case, please feel free to mention me so I can review as soon as possible. | 2019-05-03T07:09:35 |
celery/celery | 5,527 | celery__celery-5527 | [
"4454"
] | e7ae4290ef044de4ead45314d8fe2b190e497322 | diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -271,7 +271,10 @@ def _get_database(self):
conn = self._get_connection()
db = conn[self.database_name]
if self.user and self.password:
- if not db.authenticate(self.user, self.password):
+ source = self.options.get('authsource',
+ self.database_name or 'admin'
+ )
+ if not db.authenticate(self.user, self.password, source=source):
raise ImproperlyConfigured(
'Invalid MongoDB username or password.')
return db
| diff --git a/t/unit/backends/test_mongodb.py b/t/unit/backends/test_mongodb.py
--- a/t/unit/backends/test_mongodb.py
+++ b/t/unit/backends/test_mongodb.py
@@ -235,7 +235,7 @@ def test_get_database_no_existing(self, mock_get_connection):
assert database is mock_database
assert self.backend.__dict__['database'] is mock_database
mock_database.authenticate.assert_called_once_with(
- MONGODB_USER, MONGODB_PASSWORD)
+ MONGODB_USER, MONGODB_PASSWORD, source=self.backend.database_name)
@patch('celery.backends.mongodb.MongoBackend._get_connection')
def test_get_database_no_existing_no_auth(self, mock_get_connection):
| Celery does not consider authSource on mongodb backend URLs
Version: Celery 4.0.2 (from looking at the changes since then it seems there is no change addressing this issue here: https://github.com/celery/celery/commits/master/celery/backends/mongodb.py )
(Edit) Confirmed with the following versions as well:
amqp==2.2.2
billiard==3.5.0.3
celery==4.1.0
kombu==4.1.0
pymongo==3.6.0
Celery Report
software -> celery:4.0.2 (latentcall) kombu:4.0.2 py:2.7.8
billiard:3.5.0.3 py-amqp:2.2.1
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
## Steps to reproduce
Give Celery a Backend URL pointing to a MongoDB instance with authentication and username/password (user/pwd set on the Admin DB by default) in the format:
mongodb://user:pass@your-server/your_db?authSource=admin
(Please see http://api.mongodb.com/python/current/examples/authentication.html#default-database-and-authsource and http://api.mongodb.com/python/current/api/pymongo/mongo_client.html?highlight=authsource )
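For concreteness, a minimal app configured this way looks roughly like the following (hostnames, credentials and the broker URL are placeholders):
```python
from celery import Celery

app = Celery(
    'proj',
    broker='amqp://guest@localhost//',
    backend='mongodb://user:pass@your-server/your_db?authSource=admin',
)
```
The expectation is that the `authSource=admin` query option is honoured when authenticating, exactly as it is when the same URL is passed directly to `MongoClient`.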
## Expected behavior
Celery authenticates the user in the admin database (this is the same as passing --authenticationDatabase to the mongo client or the same url to MongoClient)
## Actual behavior
Celery tries to authenticate the user on the your_db database (failing to authenticate)
## Workaround (not recommended)
Change the db on the URL to /admin (this db shouldn't be used to store arbitrary data normally)
| Workaround:
create a user specific to the database you are using in Mongo.
In the mongo shell:
1. `use dbname`
2. `db.createUser({ user: "user", pwd: "passwd", roles: ["readWrite", "dbAdmin"] })`
3. set the backend url to `mongodb://user:[email protected]:27017/dbname`
Hello,
This limitation doesn't allow using MongoDB Atlas as a result_backend, which is a huge limitation.
Could you please add the authSource option and update the mongodb.py wrapper with the latest functionality of pymongo?
Thank you!
PRs are welcome.
I don't use the MongoDB backend myself...
There's an existing PR but it requires tests.
https://github.com/celery/celery/pull/4581/files
Thank you for the info
ok I'll test PR and I'll evaluate if I can help somehow! | 2019-05-16T03:38:36 |
celery/celery | 5,565 | celery__celery-5565 | [
"5564"
] | ebcc62207c14eac95ea01b6d0859fab8c32da7eb | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -279,7 +279,10 @@ def exception_to_python(self, exc):
celery.exceptions.__name__)
exc_msg = exc['exc_message']
try:
- exc = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
+ if isinstance(exc_msg, tuple):
+ exc = cls(*exc_msg)
+ else:
+ exc = cls(exc_msg)
except Exception as err: # noqa
exc = Exception('{}({})'.format(cls, exc_msg))
if self.serializer in EXCEPTION_ABLE_CODECS:
| Exception message as a string is still being unpacked
Here: https://github.com/celery/celery/blob/master/celery/backends/base.py#L282
Following code:
```
exc_msg = 'SHOULDBETOGETHER'
print(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
```
results in:
```
S H O U L D B E T O G E T H E R
```
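A slightly longer sketch of the same problem, together with the explicit branching used in the fix above (`cls` is just a stand-in exception class here):
```python
cls = RuntimeError
exc_msg = 'SHOULDBETOGETHER'

# Buggy form: the conditional expression is evaluated first and its result
# is star-unpacked, so a plain string is split into single characters.
broken = cls(*exc_msg if isinstance(exc_msg, tuple) else exc_msg)
print(broken.args)   # ('S', 'H', 'O', 'U', 'L', 'D', ...)

# Fixed form: only tuples are unpacked; strings are passed through whole.
if isinstance(exc_msg, tuple):
    fixed = cls(*exc_msg)
else:
    fixed = cls(exc_msg)
print(fixed.args)    # ('SHOULDBETOGETHER',)
```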
| 2019-06-04T16:57:15 |
||
celery/celery | 5,587 | celery__celery-5587 | [
"3810"
] | b3904189bc1290bce1f52318d5cd934dfe74ccf4 | diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -205,7 +205,7 @@ def get(self, timeout=None, propagate=True, interval=0.5,
assert_will_not_block()
_on_interval = promise()
if follow_parents and propagate and self.parent:
- on_interval = promise(self._maybe_reraise_parent_error, weak=True)
+ _on_interval = promise(self._maybe_reraise_parent_error, weak=True)
self._maybe_reraise_parent_error()
if on_interval:
_on_interval.then(on_interval)
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -55,6 +55,27 @@ def test_group_results_in_chain(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [4, 5]
+
+ def test_chain_on_error(self, manager):
+ from celery import states
+ from .tasks import ExpectedException
+ import time
+
+ if not manager.app.conf.result_backend.startswith('redis'):
+ raise pytest.skip('Requires redis result backend.')
+
+ # Run the chord and wait for the error callback to finish.
+ c1 = chain(
+ add.s(1, 2), fail.s(), add.s(3, 4),
+ )
+ res = c1()
+
+ with pytest.raises(ExpectedException):
+ res.get(propagate=True)
+
+ with pytest.raises(ExpectedException):
+ res.parent.get(propagate=True)
+
@flaky
def test_chain_inside_group_receives_arguments(self, manager):
c = (
@@ -562,17 +583,13 @@ def test_chord_on_error(self, manager):
chord_error.s()),
)
res = c1()
- try:
+ with pytest.raises(ExpectedException):
res.wait(propagate=False)
- except ExpectedException:
- pass
# Got to wait for children to populate.
while not res.children:
time.sleep(0.1)
- try:
+ with pytest.raises(ExpectedException):
res.children[0].children[0].wait(propagate=False)
- except ExpectedException:
- pass
# Extract the results of the successful tasks from the chord.
#
| Group of Chains Exception/Error Propagation
## Checklist
- I'm running Celery 4.0.2
## Steps to reproduce
tasks.py
```
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

@shared_task
def raising_exception():
    raise RuntimeError("BLAH")
```
test.py
```
from celery import group
from my_app.tasks import add, raising_exception

tasks = group(add.si(1, 2), raising_exception.si() | add.si(1, 1))
result = tasks.apply_async()
try:
    result.get()
except Exception as ex:
    raise ex
```
Run the following:
celery worker -A grizzly_project -c 8 &
python manage.py runserver
then in another terminal, run:
python test.py
## Expected behavior
When the "raising_exception" task fails, result.get() should propagate one task back in the chain from the last task (for the second part of the group) to receive the exception: " RuntimeError: BLAH" thus allowing me to catch the Exception similar to below:
```
Traceback (most recent call last):
File "test.py", line 15, in <module>
raise ex
celery.backends.base.RuntimeError: BLAH
```
This is the expected behaviour, according to the back propagation of errors in a chain that was implemented in the closing of Issue [1014](https://github.com/celery/celery/issues/1014).
## Actual behavior
Rather, when the result.get() statement executes, the program hangs.
When I change the group statement from the above code to this such that the failing task is the last to execute:
```
tasks = group(add.si(1,2), add.si(1,1) | raising_exception.si() )
```
...the result.get() doesn't hang but rather the Exception is caught, as expected
Also, if i simply make it just a chain (rather than a group of chains) with the failing task first:
```
tasks = raising_exception.si() | add.si(1,1)
```
...result.get() doesn't hang but accurately catches the Exception (granted I wait a second or two before executing result.get())
| I would just like to throw my hat into the ring here because I'm encountering the same issue. So long as there are no errors, everything works as expected, but with errors, indefinite hang.
I'm experiencing this on Celery 4.2.1 w/ Redis as a broker.
Does anyone have a workaround for this? Without this, there is no way to do exception handling in canvas and we have to catch all exceptions and propagate them in the results instead.
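A sketch of that catch-and-propagate-in-the-result fallback (task names follow the reproducer above; this is a workaround, not a fix):
```python
from celery import shared_task

@shared_task
def raising_exception():
    try:
        raise RuntimeError("BLAH")
    except RuntimeError as exc:
        # Return the error in the payload instead of raising, so the
        # group result resolves and the caller can inspect it.
        return {"ok": False, "error": repr(exc)}
```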
I'm having the same problem given the test provided above using the following setup (master):
```
software -> celery:4.2.0 (windowlicker) kombu:4.2.2-post1 py:3.6.7
billiard:3.5.0.5 py-amqp:2.4.0
platform -> system:Linux arch:64bit, ELF
kernel version:4.9.125-linuxkit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:redis://redis/
broker_url: 'amqp://guest:********@rabbit:5672//'
result_backend: 'redis://redis/'
```
Same here with celery 4.3.0, I would like to add that this is really a critical issue.
I don't understand how this is possible, given that it should have already been fixed here: https://github.com/celery/celery/commit/e043b82827874ed1c904e47b8dc964729b40442a
could you try that patch locally?
What patch? that commit is already on master | 2019-06-13T12:33:27 |
celery/celery | 5,613 | celery__celery-5613 | [
"5574"
] | c09e79e007b48d1ec55f0b032018106b45896713 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -392,10 +392,6 @@ def __or__(self, other):
# These could be implemented in each individual class,
# I'm sure, but for now we have this.
if isinstance(self, group):
- if isinstance(other, group):
- # group() | group() -> single group
- return group(
- itertools.chain(self.tasks, other.tasks), app=self.app)
# group() | task -> chord
return chord(self, body=other, app=self._app)
elif isinstance(other, group):
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -324,6 +324,12 @@ def test_handles_dicts(self):
assert isinstance(task, Signature)
assert task.app is self.app
+ def test_groups_in_chain_to_chord(self):
+ g1 = group([self.add.s(2, 2), self.add.s(4, 4)])
+ g2 = group([self.add.s(3, 3), self.add.s(5, 5)])
+ c = g1 | g2
+ assert isinstance(c, chord)
+
def test_group_to_chord(self):
c = (
self.add.s(5) |
| chain(group1, group2): tasks of group2 run while tasks of group1 are still running.
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [x] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- #5467
- #2573
#### Possible Duplicates
- #1163
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.4.0rc1 (cliffs) kombu:4.6.1 py:3.6.6
billiard:3.6.0.0 py-amqp:2.5.0
platform -> system:Darwin arch:64bit
kernel version:18.5.0 imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://localhost:6379/0
broker_url: 'amqp://guest:********@localhost:5672//'
result_backend: 'redis://localhost:6379/0'
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
# myapp.py
from celery import Celery
app = Celery(
'myapp',
broker='amqp://guest@localhost//',
backend='redis://localhost:6379/0'
)
@app.task
def b(zz):
import time
print('start: {}'.format(zz))
time.sleep(5)
print('end: {}'.format(zz))
return zz
@app.task
def a(zz):
print(zz)
return zz
if __name__ == '__main__':
app.start()
```
and run celery worker with concurrency=3
```shell
celery worker -A myapp -l debug --concurrency=3
```
send tasks:
```python
from celery import chain, group
from myapp import a, b

group1 = group(b.si('group1 job1'), b.si('group1 job2'))
group2 = group(a.si('group2 job1'), a.si('group2 job2'))
res = chain(group1, group2).delay()
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
tasks in group2 do not run until group1 returns
expect output:
```
start: group1 job1
start: group1 job2
end: group1 job1
end: group1 job2
group2 job1
group2 job2
```
# Actual Behavior
they are run in parallel and the output is:
```
start: group1 job1
start: group1 job2
group2 job1
group2 job2
end: group1 job1
end: group1 job2
```
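For what it's worth, with the change to `Signature.__or__` in the patch above, piping the two groups together builds a chord, which gives the expected ordering (a sketch using the tasks defined in `myapp.py`):
```python
from celery import chord, group
from myapp import a, b

g1 = group(b.si('group1 job1'), b.si('group1 job2'))
g2 = group(a.si('group2 job1'), a.si('group2 job2'))

workflow = g1 | g2                  # with the fix this builds a chord
print(isinstance(workflow, chord))  # True (mirrors the new unit test)
res = workflow.delay()              # group2 only starts after group1 finishes
```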
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
| could you try celery from master once?
Yes, I did try it from master.
Actually, I have a question about the source code in `celery.canvas.Signature#__or__`: maybe the comment `# group() | group() -> single group` is not right.
`chain(group1, group2)` is not the same as `group(group1, group2)`. | 2019-06-22T07:08:35 |
celery/celery | 5,631 | celery__celery-5631 | [
"5627"
] | 60ac03b4956307daf3717bfdbccceab693bd9a6e | diff --git a/celery/beat.py b/celery/beat.py
--- a/celery/beat.py
+++ b/celery/beat.py
@@ -384,7 +384,7 @@ def apply_async(self, entry, producer=None, advance=True, **kwargs):
task = self.app.tasks.get(entry.task)
try:
- entry_args = [v() if isinstance(v, BeatLazyFunc) else v for v in entry.args]
+ entry_args = [v() if isinstance(v, BeatLazyFunc) else v for v in (entry.args or [])]
entry_kwargs = {k: v() if isinstance(v, BeatLazyFunc) else v for k, v in entry.kwargs.items()}
if task:
return task.apply_async(entry_args, entry_kwargs,
| diff --git a/t/unit/app/test_beat.py b/t/unit/app/test_beat.py
--- a/t/unit/app/test_beat.py
+++ b/t/unit/app/test_beat.py
@@ -188,6 +188,17 @@ def foo():
scheduler.apply_async(scheduler.Entry(task=foo.name, app=self.app))
foo.apply_async.assert_called()
+ def test_apply_async_with_null_args(self):
+
+ @self.app.task(shared=False)
+ def foo():
+ pass
+ foo.apply_async = Mock(name='foo.apply_async')
+
+ scheduler = mScheduler(app=self.app)
+ scheduler.apply_async(scheduler.Entry(task=foo.name, app=self.app, args=None, kwargs=None))
+ foo.apply_async.assert_called()
+
def test_should_sync(self):
@self.app.task(shared=False)
| Null args in backend_cleanup task from beat.
# Checklist
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [ ] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.4.0rc2 (cliffs)
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
software -> celery:4.4.0rc2 (cliffs) kombu:4.6.3 py:3.7.3
billiard:3.6.0.0 py-amqp:2.5.0
platform -> system:Linux arch:64bit
kernel version:4.18.0-21-generic imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:pyamqp results:redis://redis:6379/0
broker_url: 'amqp://guest:********@rabbitmq:5672//'
result_backend: 'redis://redis:6379/0'
timezone: 'UTC'
result_extended: True
task_acks_late: True
task_routes: {
'celery.*': {'queue': 'api_workers'},
'graphape.tasks.api.*': {'queue': 'api_workers'},
'graphape.tasks.process.*': {'queue': 'process_workers'}}
JOBTASTIC_CACHE: <jobtastic.cache.base.WrappedCache object at 0x7f152c5ddb70>
redbeat_redis_url: 'redis://redis:6379/0'
beat_schedule: {
}
</p>
</details>
# Steps to Reproduce
Don't think anything other than celery 4.4.0RC and just having beat running and publishing a "celery.backend_cleanup"-task.
Task in backend (redis) will have args set to null and this code will later try to enumerate from the null value:
https://github.com/celery/celery/blob/master/celery/beat.py#L387
I guess these lines should be patched to deal with args = null?
Or perhaps args is never allowed to be null and fix should instead be already when task is written to backend?
I'm not sure what celery specification says about null args, please advise.
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
alabaster==0.7.12
amqp==2.5.0
argcomplete==1.9.3
argh==0.26.2
asteval==0.9.14
Babel==2.7.0
billiard==3.6.0.0
bleach==3.1.0
boto3==1.9.179
botocore==1.12.179
bumpversion==0.5.3
cachelib==0.1
celery==4.4.0rc2
celery-redbeat==0.13.0
Cerberus==1.3.1
certifi==2019.6.16
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
coverage==4.5.3
decorator==4.4.0
dnspython==1.16.0
docutils==0.14
dumb-init==1.2.2
durationpy==0.5
Eve==0.9.2
eventlet==0.25.0
Events==0.3
Flask==1.0.3
Flask-SocketIO==4.1.0
future==0.16.0
graphape==0.0.1
greenlet==0.4.15
hjson==3.0.1
idna==2.8
imagesize==1.1.0
itsdangerous==1.1.0
Jinja2==2.10.1
jmespath==0.9.3
jobtastic==2.1.1
kappa==0.6.0
kombu==4.6.3
lambda-packages==0.20.0
livereload==2.6.1
MarkupSafe==1.1.1
mock==3.0.5
monotonic==1.5
networkx==2.3
numpy==1.16.4
packaging==19.0
pathtools==0.1.2
pkginfo==1.5.0.1
placebo==0.9.0
pluginbase==1.0.0
port-for==0.3.1
psutil==5.6.3
Pygments==2.4.2
pymongo==3.8.0
pyparsing==2.4.0
python-dateutil==2.6.1
python-engineio==3.8.1
python-slugify==1.2.4
python-socketio==4.1.0
pytz==2019.1
PyYAML==5.1.1
readme-renderer==24.0
redis==3.2.1
requests==2.22.0
requests-toolbelt==0.9.1
s3transfer==0.2.1
simplejson==3.16.0
six==1.12.0
snowballstemmer==1.2.1
Sphinx==2.0.1
sphinx-autobuild==0.7.1
sphinx-rtd-theme==0.4.3
sphinxcontrib-applehelp==1.0.1
sphinxcontrib-devhelp==1.0.1
sphinxcontrib-htmlhelp==1.0.2
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.2
sphinxcontrib-serializinghtml==1.1.3
tenacity==5.0.4
toml==0.10.0
tornado==6.0.2
tqdm==4.32.1
troposphere==2.4.9
twine==1.13.0
Unidecode==1.1.1
urllib3==1.25.3
uWSGI==2.0.18
vine==1.3.0
watchdog==0.9.0
webencodings==0.5.1
Werkzeug==0.15.4
wsgi-request-logger==0.4.6
zappa==0.48.2
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
- celery-redbeat
</p>
</details>
# Expected Behavior
No exception.
# Actual Behavior
Exception:
```
[2019-07-02 04:00:00,081: ERROR/MainProcess] Message Error: Couldn't apply scheduled task celery.backend_cleanup: 'NoneType' object is not iterable
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 387, in apply_async
entry_args = [v() if isinstance(v, BeatLazyFunc) else v for v in entry.args]
TypeError: 'NoneType' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/redbeat/schedulers.py", line 427, in maybe_due
result = self.apply_async(entry, **kwargs)
File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 400, in apply_async
entry, exc=exc)), sys.exc_info()[2])
File "/usr/local/lib/python3.7/site-packages/vine/five.py", line 194, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.7/site-packages/celery/beat.py", line 387, in apply_async
entry_args = [v() if isinstance(v, BeatLazyFunc) else v for v in entry.args]
celery.beat.SchedulingError: Couldn't apply scheduled task celery.backend_cleanup: 'NoneType' object is not iterable
```
| could you try to patch and test? | 2019-07-03T14:25:28 |
celery/celery | 5,638 | celery__celery-5638 | [
"5496"
] | 8e016e667ae16958043b096e5cb89ac0f7dd7989 | diff --git a/celery/backends/asynchronous.py b/celery/backends/asynchronous.py
--- a/celery/backends/asynchronous.py
+++ b/celery/backends/asynchronous.py
@@ -134,7 +134,9 @@ def iter_native(self, result, no_ack=True, **kwargs):
# into these buckets.
bucket = deque()
for node in results:
- if node._cache:
+ if not hasattr(node, '_cache'):
+ bucket.append(node)
+ elif node._cache:
bucket.append(node)
else:
self._collect_into(node, bucket)
@@ -142,7 +144,10 @@ def iter_native(self, result, no_ack=True, **kwargs):
for _ in self._wait_for_pending(result, no_ack=no_ack, **kwargs):
while bucket:
node = bucket.popleft()
- yield node.id, node._cache
+ if not hasattr(node, '_cache'):
+ yield node.id, node.children
+ else:
+ yield node.id, node._cache
while bucket:
node = bucket.popleft()
yield node.id, node._cache
diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -819,9 +819,14 @@ def join_native(self, timeout=None, propagate=True,
acc = None if callback else [None for _ in range(len(self))]
for task_id, meta in self.iter_native(timeout, interval, no_ack,
on_message, on_interval):
- value = meta['result']
- if propagate and meta['status'] in states.PROPAGATE_STATES:
- raise value
+ if isinstance(meta, list):
+ value = []
+ for children_result in meta:
+ value.append(children_result.get())
+ else:
+ value = meta['result']
+ if propagate and meta['status'] in states.PROPAGATE_STATES:
+ raise value
if callback:
callback(task_id, value)
else:
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -120,6 +120,16 @@ def test_group_chord_group_chain(self, manager):
assert set(redis_messages[4:]) == after_items
redis_connection.delete('redis-echo')
+ @flaky
+ def test_group_result_not_has_cache(self, manager):
+ t1 = identity.si(1)
+ t2 = identity.si(2)
+ gt = group([identity.si(3), identity.si(4)])
+ ct = chain(identity.si(5), gt)
+ task = group(t1, t2, ct)
+ result = task.delay()
+ assert result.get(timeout=TIMEOUT) == [1, 2, [3, 4]]
+
@flaky
def test_second_order_replace(self, manager):
from celery.five import bytes_if_py2
| Nested group(chain(group)) fails
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [ ] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [ ] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [x] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- Subsequent groups within a chain fail #3585
- Consecutive groups in chain fails #4848
- task/group chains fails in some scenarios #5467
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.3.0
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:
software -> celery:4.3.0 (rhubarb) kombu:4.5.0 py:3.5.3
billiard:3.6.0.0 redis:3.2.1
platform -> system:Linux arch:64bit
kernel version:4.19.32-1-MANJARO imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://redis:6379/1
broker_url: redis://redis:6379/1
result_backend: redis://redis:6379/1
</b></summary>
<p>
```
```
</p>
</details>
# Steps to Reproduce
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
from celery import group, chain
from tasks import task as t
# failing sequence
task = group([ t.si(),t.si(), chain( t.si(), group([ t.si(), t.si()]))])
# working sequence
task = group([ t.si(),t.si(), chain( t.si(), group([ t.si(), t.si()]), t.si())])
async_result = task.apply_async()
result = async_result.get()
```
</p>
</details>
# Expected Behavior
Calling `get` return group result.
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
All task finish success but calling `.get()` fail with traceback:
```
Traceback (most recent call last):
File "/my_app.py", line 111, in add_to_queue
result = async_result.get()
File "/root/.cache/pypoetry/virtualenvs/my_app/lib/python3.7/site-packages/celery/result.py", line 697, in get
on_interval=on_interval,
File "/root/.cache/pypoetry/virtualenvs/my_app/lib/python3.7/site-packages/celery/result.py", line 815, in join_native
on_message, on_interval):
File "/root/.cache/pypoetry/virtualenvs/my_app/lib/python3.7/site-packages/celery/backends/asynchronous.py", line 137, in iter_native
if node._cache:
AttributeError: 'GroupResult' object has no attribute '_cache'
```
| This can be reproduced on master and I used RabbitMQ as broker.
Because a nested `group(signature, chain(group))` gets results like `GroupResult(AsyncResult, GroupResult)`, but the nested `GroupResult` does not have the attribute `_cache`.
The problem is, you cannot simply change the statement to `if hasattr(node, '_cache') and node._cache`, because that will introduce a new issue.
please provide a sample test case
@p-eli provide a test case that can be used.
First, the task file: `myapp.py`
```
from celery import Celery
app = Celery(
'myapp',
broker='amqp://guest@localhost//',
backend='redis://localhost:6379/0'
)
@app.task
def t():
print('t-t-t')
if __name__ == '__main__':
app.start()
```
then send tasks by:
```
from celery import group, chain
from myapp import t
# failing sequence
task = group([ t.si(),t.si(), chain( t.si(), group([ t.si(), t.si()]))])
# working sequence
task = group([ t.si(),t.si(), chain( t.si(), group([ t.si(), t.si()]), t.si())])
async_result = task.apply_async()
result = async_result.get()
```
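With the patch above applied, the nested `GroupResult` children are collected and resolved via `.get()` instead of the missing `_cache`, so the failing sequence now resolves (a sketch; the exact values depend on what `t()` returns, which is `None` here):
```python
from celery import chain, group
from myapp import t

task = group([t.si(), t.si(), chain(t.si(), group([t.si(), t.si()]))])
async_result = task.apply_async()

# Previously this raised AttributeError: 'GroupResult' object has no
# attribute '_cache'; with the patch it returns the nested results.
print(async_result.get())   # e.g. [None, None, [None, None]] since t() returns None
```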
@tothegump is this fixed? | 2019-07-09T01:54:28 |
celery/celery | 5,661 | celery__celery-5661 | [
"5436",
"5436"
] | 4e4d308db88e60afeec97479a5a133671c671fce | diff --git a/celery/backends/base.py b/celery/backends/base.py
--- a/celery/backends/base.py
+++ b/celery/backends/base.py
@@ -351,6 +351,54 @@ def encode_result(self, result, state):
def is_cached(self, task_id):
return task_id in self._cache
+ def _get_result_meta(self, result,
+ state, traceback, request, format_date=True,
+ encode=False):
+ if state in self.READY_STATES:
+ date_done = datetime.datetime.utcnow()
+ if format_date:
+ date_done = date_done.isoformat()
+ else:
+ date_done = None
+
+ meta = {
+ 'status': state,
+ 'result': result,
+ 'traceback': traceback,
+ 'children': self.current_task_children(request),
+ 'date_done': date_done,
+ }
+
+ if request and getattr(request, 'group', None):
+ meta['group_id'] = request.group
+ if request and getattr(request, 'parent_id', None):
+ meta['parent_id'] = request.parent_id
+
+ if self.app.conf.find_value_for_key('extended', 'result'):
+ if request:
+ request_meta = {
+ 'name': getattr(request, 'task', None),
+ 'args': getattr(request, 'args', None),
+ 'kwargs': getattr(request, 'kwargs', None),
+ 'worker': getattr(request, 'hostname', None),
+ 'retries': getattr(request, 'retries', None),
+ 'queue': request.delivery_info.get('routing_key')
+ if hasattr(request, 'delivery_info') and
+ request.delivery_info else None
+ }
+
+ if encode:
+ # args and kwargs need to be encoded properly before saving
+ encode_needed_fields = {"args", "kwargs"}
+ for field in encode_needed_fields:
+ value = request_meta[field]
+ encoded_value = self.encode(value)
+ request_meta[field] = ensure_bytes(encoded_value)
+
+ meta.update(request_meta)
+
+ return meta
+
def store_result(self, task_id, result, state,
traceback=None, request=None, **kwargs):
"""Update task state and result."""
@@ -703,40 +751,9 @@ def _forget(self, task_id):
def _store_result(self, task_id, result, state,
traceback=None, request=None, **kwargs):
-
- if state in self.READY_STATES:
- date_done = datetime.datetime.utcnow().isoformat()
- else:
- date_done = None
-
- meta = {
- 'status': state,
- 'result': result,
- 'traceback': traceback,
- 'children': self.current_task_children(request),
- 'task_id': bytes_to_str(task_id),
- 'date_done': date_done,
- }
-
- if request and getattr(request, 'group', None):
- meta['group_id'] = request.group
- if request and getattr(request, 'parent_id', None):
- meta['parent_id'] = request.parent_id
-
- if self.app.conf.find_value_for_key('extended', 'result'):
- if request:
- request_meta = {
- 'name': getattr(request, 'task', None),
- 'args': getattr(request, 'args', None),
- 'kwargs': getattr(request, 'kwargs', None),
- 'worker': getattr(request, 'hostname', None),
- 'retries': getattr(request, 'retries', None),
- 'queue': request.delivery_info.get('routing_key')
- if hasattr(request, 'delivery_info') and
- request.delivery_info else None
- }
-
- meta.update(request_meta)
+ meta = self._get_result_meta(result=result, state=state,
+ traceback=traceback, request=request)
+ meta['task_id'] = bytes_to_str(task_id)
self.set(self.get_key_for_task(task_id), self.encode(meta))
return result
diff --git a/celery/backends/database/__init__.py b/celery/backends/database/__init__.py
--- a/celery/backends/database/__init__.py
+++ b/celery/backends/database/__init__.py
@@ -5,7 +5,6 @@
import logging
from contextlib import contextmanager
-from kombu.utils.encoding import ensure_bytes
from vine.utils import wraps
from celery import states
@@ -120,6 +119,7 @@ def _store_result(self, task_id, result, state, traceback=None,
task = task and task[0]
if not task:
task = self.task_cls(task_id)
+ task.task_id = task_id
session.add(task)
session.flush()
@@ -128,24 +128,22 @@ def _store_result(self, task_id, result, state, traceback=None,
def _update_result(self, task, result, state, traceback=None,
request=None):
- task.result = result
- task.status = state
- task.traceback = traceback
- if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task', None)
- task.args = ensure_bytes(
- self.encode(getattr(request, 'args', None))
- )
- task.kwargs = ensure_bytes(
- self.encode(getattr(request, 'kwargs', None))
- )
- task.worker = getattr(request, 'hostname', None)
- task.retries = getattr(request, 'retries', None)
- task.queue = (
- request.delivery_info.get("routing_key")
- if hasattr(request, "delivery_info") and request.delivery_info
- else None
- )
+
+ meta = self._get_result_meta(result=result, state=state,
+ traceback=traceback, request=request,
+ format_date=False, encode=True)
+
+ # Exclude the primary key id and task_id columns
+ # as we should not set it None
+ columns = [column.name for column in self.task_cls.__table__.columns
+ if column.name not in {'id', 'task_id'}]
+
+ # Iterate through the columns name of the table
+ # to set the value from meta.
+ # If the value is not present in meta, set None
+ for column in columns:
+ value = meta.get(column)
+ setattr(task, column, value)
@retry
def _get_task_meta_for(self, task_id):
diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -185,18 +185,10 @@ def decode(self, data):
def _store_result(self, task_id, result, state,
traceback=None, request=None, **kwargs):
"""Store return value and state of an executed task."""
- meta = {
- '_id': task_id,
- 'status': state,
- 'result': self.encode(result),
- 'date_done': datetime.utcnow(),
- 'traceback': self.encode(traceback),
- 'children': self.encode(
- self.current_task_children(request),
- ),
- }
- if request and getattr(request, 'parent_id', None):
- meta['parent_id'] = request.parent_id
+ meta = self._get_result_meta(result=result, state=state,
+ traceback=traceback, request=request)
+ # Add the _id for mongodb
+ meta['_id'] = task_id
try:
self.collection.replace_one({'_id': task_id}, meta, upsert=True)
| diff --git a/t/unit/backends/test_base.py b/t/unit/backends/test_base.py
--- a/t/unit/backends/test_base.py
+++ b/t/unit/backends/test_base.py
@@ -7,6 +7,7 @@
import pytest
from case import ANY, Mock, call, patch, skip
from kombu.serialization import prepare_accept_content
+from kombu.utils.encoding import ensure_bytes
import celery
from celery import chord, group, signature, states, uuid
@@ -104,6 +105,45 @@ def test_accept_precedence(self):
assert list(b4.accept)[0] == 'application/x-yaml'
assert prepare_accept_content(['yaml']) == b4.accept
+ def test_get_result_meta(self):
+ b1 = BaseBackend(self.app)
+ meta = b1._get_result_meta(result={'fizz': 'buzz'},
+ state=states.SUCCESS, traceback=None,
+ request=None)
+ assert meta['status'] == states.SUCCESS
+ assert meta['result'] == {'fizz': 'buzz'}
+ assert meta['traceback'] is None
+
+ self.app.conf.result_extended = True
+ args = ['a', 'b']
+ kwargs = {'foo': 'bar'}
+ task_name = 'mytask'
+
+ b2 = BaseBackend(self.app)
+ request = Context(args=args, kwargs=kwargs,
+ task=task_name,
+ delivery_info={'routing_key': 'celery'})
+ meta = b2._get_result_meta(result={'fizz': 'buzz'},
+ state=states.SUCCESS, traceback=None,
+ request=request, encode=False)
+ assert meta['name'] == task_name
+ assert meta['args'] == args
+ assert meta['kwargs'] == kwargs
+ assert meta['queue'] == 'celery'
+
+ def test_get_result_meta_encoded(self):
+ self.app.conf.result_extended = True
+ b1 = BaseBackend(self.app)
+ args = ['a', 'b']
+ kwargs = {'foo': 'bar'}
+
+ request = Context(args=args, kwargs=kwargs)
+ meta = b1._get_result_meta(result={'fizz': 'buzz'},
+ state=states.SUCCESS, traceback=None,
+ request=request, encode=True)
+ assert meta['args'] == ensure_bytes(b1.encode(args))
+ assert meta['kwargs'] == ensure_bytes(b1.encode(kwargs))
+
class test_BaseBackend_interface:
diff --git a/t/unit/backends/test_database.py b/t/unit/backends/test_database.py
--- a/t/unit/backends/test_database.py
+++ b/t/unit/backends/test_database.py
@@ -246,6 +246,37 @@ def test_store_result(self, result_serializer, args, kwargs):
assert meta['retries'] == 2
assert meta['worker'] == "celery@worker_1"
+ @pytest.mark.parametrize(
+ 'result_serializer, args, kwargs',
+ [
+ ('pickle', (SomeClass(1), SomeClass(2)),
+ {'foo': SomeClass(123)}),
+ ('json', ['a', 'b'], {'foo': 'bar'}),
+ ],
+ ids=['using pickle', 'using json']
+ )
+ def test_get_result_meta(self, result_serializer, args, kwargs):
+ self.app.conf.result_serializer = result_serializer
+ tb = DatabaseBackend(self.uri, app=self.app)
+
+ request = Context(args=args, kwargs=kwargs,
+ task='mytask', retries=2,
+ hostname='celery@worker_1',
+ delivery_info={'routing_key': 'celery'})
+
+ meta = tb._get_result_meta(result={'fizz': 'buzz'},
+ state=states.SUCCESS, traceback=None,
+ request=request, format_date=False,
+ encode=True)
+
+ assert meta['result'] == {'fizz': 'buzz'}
+ assert tb.decode(meta['args']) == args
+ assert tb.decode(meta['kwargs']) == kwargs
+ assert meta['queue'] == 'celery'
+ assert meta['name'] == 'mytask'
+ assert meta['retries'] == 2
+ assert meta['worker'] == "celery@worker_1"
+
@skip.unless_module('sqlalchemy')
class test_SessionManager:
| Celery 4.3 result_extended doesn't push extended meta data to some backends when set to true
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
celery -A celeryacl report
software -> celery:4.3.0 (rhubarb) kombu:4.5.0 py:3.6.8
billiard:3.6.0.0 py-amqp:2.4.2
platform -> system:Linux arch:64bit
kernel version:4.18.0-1013-azure imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:db+mysql+pymysql://celery:**@X.X.X.X:XXXX/celeryresults
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
pip freeze
amqp==2.4.2
asn1crypto==0.24.0
bcrypt==3.1.6
billiard==3.6.0.0
celery==4.3.0
cffi==1.12.2
Click==7.0
cryptography==2.6.1
Flask==1.0.2
Flask-Cors==3.0.7
itsdangerous==1.1.0
Jinja2==2.10
kombu==4.5.0
MarkupSafe==1.1.1
mysql-connector-python==8.0.15
netmiko==2.3.3
paramiko==2.4.2
protobuf==3.7.1
pyasn1==0.4.5
pycparser==2.19
PyMySQL==0.9.3
PyNaCl==1.3.0
pyserial==3.4
pytz==2018.9
PyYAML==5.1
scp==0.13.2
six==1.12.0
SQLAlchemy==1.2.14
textfsm==0.4.1
vine==1.3.0
Werkzeug==0.15.2
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
With "result_extended" configured to true as shown below, I expected to see more meta data in the backend results DB, but I don't. I just see the standard results.
app = Celery('tasks',
backend='db+mysql+pymysql://XXX:[email protected]:XXXX/celeryresults',
broker='amqp://xxxx:[email protected]/celery',
result_extended=True)
# Actual Behavior
<!--
I only see the standard fields in the result DB and not the extended meta data as expected.
-->
| As referenced here: http://docs.celeryproject.org/en/latest/whatsnew-4.3.html#result-backends it doesn't state that any additional configuration is needed but perhaps I am missing something.
I am seeing the same issue with the MongoDB result backend. The MongoDB backend overrides [`_store_result`](https://github.com/celery/celery/blob/a616ae7c02fa137de2250f66d0c8db693e070210/celery/backends/mongodb.py#L181) and does not call the base.py `_store_result` method. The base.py [`_store_result`](https://github.com/celery/celery/blob/a616ae7c02fa137de2250f66d0c8db693e070210/celery/backends/base.py#L699) method is where the code was changed to add the extended results.
Possible solutions include adding the new result_extended code to the `_store_result` method of the MongoDB backend, or having the backend call the super method and store those values in the database. I am not sure why `_store_result` was overridden, since MongoDB can act like a key/value store.
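For orientation, the tests in the patch above exercise a shared `_get_result_meta` helper; the rough sketch below shows that general direction. It is a simplified illustration, not Celery's actual code, and the attribute and class names are only assumptions.
```python
# Simplified sketch of the shared-helper direction exercised by the
# _get_result_meta tests above; illustrative only, not Celery's code.
from types import SimpleNamespace


class SketchBackend:
    def __init__(self, app):
        self.app = app

    def _get_result_meta(self, result, state, traceback, request=None):
        meta = {'result': result, 'status': state, 'traceback': traceback}
        if self.app.conf.result_extended and request is not None:
            meta.update(
                name=getattr(request, 'task', None),
                args=getattr(request, 'args', None),
                kwargs=getattr(request, 'kwargs', None),
                worker=getattr(request, 'hostname', None),
                retries=getattr(request, 'retries', None),
            )
        return meta

    def _store_result(self, task_id, result, state,
                      traceback=None, request=None):
        meta = self._get_result_meta(result, state, traceback, request)
        # backend-specific persistence (MongoDB, SQLAlchemy, ...) goes here
        return meta


app = SimpleNamespace(conf=SimpleNamespace(result_extended=True))
request = SimpleNamespace(task='mytask', args=[1, 2], kwargs={},
                          hostname='celery@worker_1', retries=0)
print(SketchBackend(app)._store_result('id-1', 3, 'SUCCESS', request=request))
```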
@mikolevy Thanks for the accurate analysis. We overlooked those.
This should be a very easy fix.
Any volunteers for a PR?
I don't know how much this is related. I'm using **Redis** as the broker and the result backend and I'm setting `result_extended` to `True`, and everything is working fine except for the task name: when I try to access `task.name` it's always `None`. After debugging, I found that the `Context` object being passed to `BaseKeyValueStoreBackend.store_result` contains the task name under the attribute `task`, not `task_name`, which is what `BaseKeyValueStoreBackend._store_result` is trying to get from the request object:
```python
if self.app.conf.find_value_for_key('extended', 'result'):
if request:
request_meta = {
'name': getattr(request, 'task_name', None),
'args': getattr(request, 'args', None),
'kwargs': getattr(request, 'kwargs', None),
'worker': getattr(request, 'hostname', None),
'retries': getattr(request, 'retries', None),
'queue': request.delivery_info.get('routing_key')
if hasattr(request, 'delivery_info') and
request.delivery_info else None
}
meta.update(request_meta)
```
@mdawar This is a different bug, please submit a new issue about it.
>
>
> @mikolevy Thanks for the accurate analysis. We overlooked those.
>
> This should be a very easy fix.
> Any volunteers for a PR?
@thedrow I am willing to give it a go unless someone who is more familiar with Celery is willing to step up.
Go for it.
@thedrow do I need contributor rights to create the branch with the fix? Also do you have some rules or guidelines on how to name the branch etc...?
Create a fork and submit a pull request from a branch.
We don't care about the branch name since it's in your fork, you can do whatever you want in your fork.
I suggest that you read our CONTRIBUTING document.
I'm not sure if I'm missing something, but this issue also seems to affect:
AMQPBackend: store_result overridden, does not implement result_extended
CassandraBackend: _store_result overridden, does not implement result_extended
DatabaseBackend: _store_result overridden, does not implement result_extended
RPCBackend: store_result overridden, does not implement result_extended
Looks like every backend that doesn't extend KeyValueStoreBackend is at fault?
Perhaps it's worth refactoring so that all the standard message 'construction' (including result_extended) is within Backend.store_result().
Each implementation must implement _store_result() and never, ever override store_result().
Thoughts?
You could simply create a private method which must be called in `store_result()` named `_maybe_extend_result(result)` which will add data as needed.
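A bare-bones sketch of that proposal, with the method name taken from the comment above (everything else is hypothetical): `store_result()` stays in the base class, always calls the extension hook, and subclasses only implement `_store_result()`.
```python
class BaseBackendSketch:
    result_extended = True  # would come from app.conf in the real code

    def store_result(self, task_id, result, state, request=None):
        meta = {'task_id': task_id, 'result': result, 'status': state}
        self._maybe_extend_result(meta, request)
        return self._store_result(task_id, meta)

    def _maybe_extend_result(self, meta, request):
        if self.result_extended and request is not None:
            meta['name'] = getattr(request, 'task', None)
            meta['worker'] = getattr(request, 'hostname', None)

    def _store_result(self, task_id, meta):
        # each concrete backend persists `meta` in its own store
        raise NotImplementedError
```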
You are outside of my capability on this one. If someone else wants to work on a more global solution I would be all for that.
I wrote the original PR, and didn't scope it to backends other than KV (redis for my use case). I don't see any reason it can't be generalized to all backends; however, someone would have to go update those backends that have overridden store_result()
This should not be closed. @auvipy Can you please open it? | 2019-07-27T03:19:54 |
celery/celery | 5,664 | celery__celery-5664 | [
"5617",
"5617"
] | 8f3680c5189f2ef63753a692aaeea3892f067c56 | diff --git a/celery/app/amqp.py b/celery/app/amqp.py
--- a/celery/app/amqp.py
+++ b/celery/app/amqp.py
@@ -253,6 +253,7 @@ def __init__(self, app):
1: self.as_task_v1,
2: self.as_task_v2,
}
+ self.app._conf.bind_to(self._handle_conf_update)
@cached_property
def create_task_message(self):
@@ -611,6 +612,10 @@ def routes(self):
def router(self):
return self.Router()
+ @router.setter
+ def router(self, value):
+ return value
+
@property
def producer_pool(self):
if self._producer_pool is None:
@@ -634,3 +639,9 @@ def _event_dispatcher(self):
# We call Dispatcher.publish with a custom producer
# so don't need the diuspatcher to be enabled.
return self.app.events.Dispatcher(enabled=False)
+
+ def _handle_conf_update(self, *args, **kwargs):
+ if ('task_routes' in kwargs or 'task_routes' in args):
+ self.flush_routes()
+ self.router = self.Router()
+ return
diff --git a/celery/utils/collections.py b/celery/utils/collections.py
--- a/celery/utils/collections.py
+++ b/celery/utils/collections.py
@@ -245,6 +245,7 @@ class ChainMap(MutableMapping):
changes = None
defaults = None
maps = None
+ _observers = []
def __init__(self, *maps, **kwargs):
# type: (*Mapping, **Any) -> None
@@ -335,7 +336,10 @@ def setdefault(self, key, default=None):
def update(self, *args, **kwargs):
# type: (*Any, **Any) -> Any
- return self.changes.update(*args, **kwargs)
+ result = self.changes.update(*args, **kwargs)
+ for callback in self._observers:
+ callback(*args, **kwargs)
+ return result
def __repr__(self):
# type: () -> str
@@ -376,6 +380,9 @@ def _iterate_values(self):
return (self[key] for key in self)
itervalues = _iterate_values
+ def bind_to(self, callback):
+ self._observers.append(callback)
+
if sys.version_info[0] == 3: # pragma: no cover
keys = _iterate_keys
items = _iterate_items
| diff --git a/t/unit/app/test_amqp.py b/t/unit/app/test_amqp.py
--- a/t/unit/app/test_amqp.py
+++ b/t/unit/app/test_amqp.py
@@ -333,6 +333,15 @@ def test_routes(self):
r2 = self.app.amqp.routes
assert r1 is r2
+ def update_conf_runtime_for_tasks_queues(self):
+ self.app.conf.update(task_routes={'task.create_pr': 'queue.qwerty'})
+ self.app.send_task('task.create_pr')
+ router_was = self.app.amqp.router
+ self.app.conf.update(task_routes={'task.create_pr': 'queue.asdfgh'})
+ self.app.send_task('task.create_pr')
+ router = self.app.amqp.router
+ assert router != router_was
+
class test_as_task_v2:
| Updating task_routes config during runtime does not have effect
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [x] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
[root@shiny ~]# celery report
software -> celery:4.4.0rc2 (cliffs) kombu:4.6.3 py:3.6.8
billiard:3.6.0.0 py-amqp:2.5.0
platform -> system:Linux arch:64bit
kernel version:4.19.13-200.fc28.x86_64 imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
[root@shiny ~]# pip3 freeze
amqp==2.5.0
anymarkup==0.7.0
anymarkup-core==0.7.1
billiard==3.6.0.0
celery==4.4.0rc2
configobj==5.0.6
gpg==1.10.0
iniparse==0.4
json5==0.8.4
kombu==4.6.3
pygobject==3.28.3
python-qpid-proton==0.28.0
pytz==2019.1
PyYAML==5.1.1
pyzmq==18.0.1
redis==3.2.1
rpm==4.14.2
six==1.11.0
smartcols==0.3.0
toml==0.10.0
ucho==0.1.0
vine==1.3.0
xmltodict==0.12.0
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
Updating task_routes during runtime is possible and has an effect
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
Updating `task_routes` during runtime does not have an effect - the config is updated, but the `router` in `send_task` seems to be reusing the old configuration.
```python
import celery
c = celery.Celery(broker='redis://localhost:6379/0',
backend='redis://localhost:6379/0')
c.conf.update(task_routes={'task.create_pr': 'queue.betka'})
c.send_task('task.create_pr')
print(c.conf.get('task_routes'))
c.conf.update(task_routes={'task.create_pr': 'queue.ferdinand'})
c.send_task('task.create_pr')
print(c.conf.get('task_routes'))
```
Output:
```
[root@shiny ~]# python3 repr.py
{'task.create_pr': 'queue.betka'}
{'task.create_pr': 'queue.ferdinand'}
```
So the configuration is updated but it seems the routes are still pointing to queue.betka, since both tasks are sent to queue.betka and queue.ferdinand didn't receive anything.
```
betka_1 | [2019-06-24 14:50:41,386: INFO/MainProcess] Received task: task.create_pr[54b28121-28cf-4301-b6f2-185d2e7c50cb]
betka_1 | [2019-06-24 14:50:41,386: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x7fca7f4d6a60> (args:('task.create_pr', '54b28121-28cf-4301-b6f2-185d2e7c50cb', {'lang': 'py', 'task': 'task.create_pr', 'id': '54b28121-28cf-4301-b6f2-185d2e7c50cb', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'retries': 0, 'timelimit': [None, None], 'root_id': '54b28121-28cf-4301-b6f2-185d2e7c50cb', 'parent_id': None, 'argsrepr': '()', 'kwargsrepr': '{}', 'origin': 'gen68@shiny', 'reply_to': 'b7be085a-b1f8-3738-b65f-963a805f2513', 'correlation_id': '54b28121-28cf-4301-b6f2-185d2e7c50cb', 'delivery_info': {'exchange': '', 'routing_key': 'queue.betka', 'priority': 0, 'redelivered': None}}, b'[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]', 'application/json', 'utf-8') kwargs:{})
betka_1 | [2019-06-24 14:50:41,387: INFO/MainProcess] Received task: task.create_pr[3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0]
betka_1 | [2019-06-24 14:50:41,388: DEBUG/MainProcess] Task accepted: task.create_pr[54b28121-28cf-4301-b6f2-185d2e7c50cb] pid:12
betka_1 | [2019-06-24 14:50:41,390: INFO/ForkPoolWorker-1] Task task.create_pr[54b28121-28cf-4301-b6f2-185d2e7c50cb] succeeded in 0.002012896991800517s: 'Maybe later :)'
betka_1 | [2019-06-24 14:50:41,390: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x7fca7f4d6a60> (args:('task.create_pr', '3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0', {'lang': 'py', 'task': 'task.create_pr', 'id': '3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0', 'shadow': None, 'eta': None, 'expires': None, 'group': None, 'retries': 0, 'timelimit': [None, None], 'root_id': '3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0', 'parent_id': None, 'argsrepr': '()', 'kwargsrepr': '{}', 'origin': 'gen68@shiny', 'reply_to': 'b7be085a-b1f8-3738-b65f-963a805f2513', 'correlation_id': '3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0', 'delivery_info': {'exchange': '', 'routing_key': 'queue.betka', 'priority': 0, 'redelivered': None}}, b'[[], {}, {"callbacks": null, "errbacks": null, "chain": null, "chord": null}]', 'application/json', 'utf-8') kwargs:{})
betka_1 | [2019-06-24 14:50:41,391: DEBUG/MainProcess] Task accepted: task.create_pr[3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0] pid:12
betka_1 | [2019-06-24 14:50:41,391: INFO/ForkPoolWorker-1] Task task.create_pr[3bf8b0fb-cb4a-412b-84d4-1a52b794b4e0] succeeded in 0.0006862019945401698s: 'Maybe later :)'
```
Note: I managed to work around it by adding `del c.amqp` right after the update for now
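For illustration, a minimal sketch of that workaround, reusing the `c` app object from the snippet above (deleting the cached `amqp` attribute forces it, and therefore the router, to be rebuilt from the updated configuration):

```python
c.conf.update(task_routes={'task.create_pr': 'queue.ferdinand'})
# Workaround: drop the cached `amqp` attribute so the router is rebuilt
# from the updated configuration the next time a task is sent.
del c.amqp
c.send_task('task.create_pr')  # now routed according to the new task_routes
```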
| Could someone please take a look at this?
Please post support questions to the mailing list.
This is not about support, this is a bug.
verify that on the mailing list first
from your code of conduct:
> Bugs can always be described to the Mailing list, but the best way to report an issue and to ensure a timely response is to use the issue tracker.
Also, I am pretty sure this is a bug in code in this repo:
I think I can see specifically where it is:
When the task is sent, the router from `self.amqp` is used - the router comes from the cached property `self.amqp`, which is not updated when the configuration changes:
https://github.com/celery/celery/blob/5b8fe5f2f3314f2b5d03097533711e6b47b570d4/celery/app/base.py#L717
https://github.com/celery/celery/blob/5b8fe5f2f3314f2b5d03097533711e6b47b570d4/celery/app/base.py#L714
https://github.com/celery/celery/blob/5b8fe5f2f3314f2b5d03097533711e6b47b570d4/celery/app/base.py#L1202
So I suggest deleting the cached `amqp` value (as I did in my workaround fix) when the `task_routes` conf value is updated.
And even if you don't want to allow changes to `task_routes` during runtime, the bug is that the configuration shows something that is no longer true after an update.
I would request that you come up with a fix and tests.
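For readers following along, here is a standalone sketch of the observer idea that the patch above applies (the class names here are toys, not Celery's API): the settings mapping notifies registered callbacks on `update()`, and the callback rebuilds the router whenever `task_routes` changes.

```python
class ObservableSettings(dict):
    """Toy settings mapping that notifies observers on update()."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._observers = []

    def bind_to(self, callback):
        self._observers.append(callback)

    def update(self, *args, **kwargs):
        super().update(*args, **kwargs)
        for callback in self._observers:
            callback(*args, **kwargs)


class ToyAMQP:
    """Rebuilds its router whenever task_routes is reconfigured."""

    def __init__(self, settings):
        self.settings = settings
        self.router = dict(settings.get('task_routes', {}))
        settings.bind_to(self._handle_conf_update)

    def _handle_conf_update(self, *args, **kwargs):
        if 'task_routes' in kwargs:
            self.router = dict(self.settings['task_routes'])


settings = ObservableSettings()
amqp = ToyAMQP(settings)
settings.update(task_routes={'task.create_pr': 'queue.ferdinand'})
assert amqp.router == {'task.create_pr': 'queue.ferdinand'}
```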
| 2019-08-01T11:05:11 |
celery/celery | 5,681 | celery__celery-5681 | [
"5512",
"2573"
] | 88f726f5af2efb7044b00734d00c499f50ea6795 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -599,7 +599,15 @@ def run(self, args=None, kwargs=None, group_id=None, chord=None,
# chain option may already be set, resulting in
# "multiple values for keyword argument 'chain'" error.
# Issue #3379.
- options['chain'] = tasks if not use_link else None
+ chain_ = tasks if not use_link else None
+ if 'chain' not in options:
+ options['chain'] = chain_
+ elif chain_ is not None:
+ # If a chain already exists, we need to extend it with the next
+ # tasks in the chain.
+ # Issue #5354.
+ options['chain'].extend(chain_)
+
first_task.apply_async(**options)
return results[0]
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -68,7 +68,7 @@ def delayed_sum_with_soft_guard(numbers, pause_time=1):
@shared_task
def tsum(nums):
- """Sum an iterable of numbers"""
+ """Sum an iterable of numbers."""
return sum(nums)
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -247,6 +247,103 @@ def test_chain_of_task_a_group_and_a_chord(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == 8
+ @flaky
+ def test_chain_of_chords_as_groups_chained_to_a_task_with_two_tasks(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = add.si(1, 0)
+ c = c | group(add.s(1), add.s(1))
+ c = c | tsum.s()
+ c = c | add.s(1)
+ c = c | group(add.s(1), add.s(1))
+ c = c | tsum.s()
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == 12
+
+ @flaky
+ def test_chain_of_chords_with_two_tasks(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = add.si(1, 0)
+ c = c | group(add.s(1), add.s(1))
+ c = c | tsum.s()
+ c = c | add.s(1)
+ c = c | chord(group(add.s(1), add.s(1)), tsum.s())
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == 12
+
+ @flaky
+ def test_chain_of_a_chord_and_a_group_with_two_tasks(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = add.si(1, 0)
+ c = c | group(add.s(1), add.s(1))
+ c = c | tsum.s()
+ c = c | add.s(1)
+ c = c | group(add.s(1), add.s(1))
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == [6, 6]
+
+ @flaky
+ def test_chain_of_a_chord_and_a_task_and_a_group(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = group(add.s(1, 1), add.s(1, 1))
+ c = c | tsum.s()
+ c = c | add.s(1)
+ c = c | group(add.s(1), add.s(1))
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == [6, 6]
+
+ @flaky
+ def test_chain_of_a_chord_and_two_tasks_and_a_group(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = group(add.s(1, 1), add.s(1, 1))
+ c = c | tsum.s()
+ c = c | add.s(1)
+ c = c | add.s(1)
+ c = c | group(add.s(1), add.s(1))
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == [7, 7]
+
+ @flaky
+ def test_chain_of_a_chord_and_three_tasks_and_a_group(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = group(add.s(1, 1), add.s(1, 1))
+ c = c | tsum.s()
+ c = c | add.s(1)
+ c = c | add.s(1)
+ c = c | add.s(1)
+ c = c | group(add.s(1), add.s(1))
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == [8, 8]
+
class test_result_set:
@@ -338,7 +435,9 @@ def assert_ids(r, expected_value, expected_root_id, expected_parent_id):
def assert_ping(manager):
- ping_val = list(manager.inspect().ping().values())[0]
+ ping_result = manager.inspect().ping()
+ assert ping_result
+ ping_val = list(ping_result.values())[0]
assert ping_val == {"ok": "pong"}
| Chain with groups at start and end and more than 2 tasks in between never complete
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [ ] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- Subsequent groups within a chain fail #3585
- Nested group(chain(group)) fails #5496
- group([chain(group(task(), task()), group(...)), chain(group(...), group(...))]) construct never finishes #2354
- Canvas with group-task-task-group does not work #5354
#### Possible Duplicates
- Nested group-chain-group structure never succeeds #2573
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.3.0 (rhubarb) kombu:4.5.0 py:3.6.5
billiard:3.6.0.0 redis:3.2.1
platform -> system:Linux arch:64bit
kernel version:4.9.0-9-amd64 imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis:///
broker_url: 'redis://localhost:6379//'
result_backend: 'redis:///'
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 3.6
* **Minimal Celery Version**: 4.3
* **Minimal Kombu Version**: 4.5.0
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: Redis
* **Minimal OS and/or Kernel Version**: Debian Stretch
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
amqp==2.4.2
billiard==3.6.0.0
celery==4.3.0
celery-eternal==0.1.1
celery-singleton==0.1.3
certifi==2018.4.16
kombu==4.5.0
pytz==2019.1
redis==3.2.1
vine==1.3.0
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
from time import sleep
from celery import Celery
from celery import group
app = Celery('tasks', broker = 'redis://')
app.conf['result_backend'] = app.conf.broker_url
@app.task
def prod(x, y):
return x*y
@app.task
def subtract(args):
return args[0]-args[1]
@app.task(shared=False)
def identity(args):
"""Identity task (returns its input)."""
return args
x = (
group( prod.s(1, 1), prod.s(2, 1) )
|
identity.s()
|
subtract.s()
|
group( prod.s(5), prod.s(6) )
)
r = x.delay()
sleep(10)
print(r.waiting())
# Another case
x = (
group( prod.s(1, 1), prod.s(2, 1) )
|
subtract.si((3,4))
|
subtract.si((4,3))
|
subtract.si((6,5))
|
group( prod.si(5, 6), prod.si(6, 5) )
)
r = x.delay()
sleep(10)
print(r.waiting())
# However this works
# x = (
# group( prod.s(1, 1), prod.s(2, 1) )
# |
# subtract.si((3,4))
# |
# group( prod.si(5, 6), prod.si(6, 5) )
# )
# r = x.delay()
# print([p.result for p in r.results])
# prints [30, 30]
# Adding more than one component in between 2 groups
# in a chain causes the canvas to wait right before the final group
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
The above code snippet should return a list of 2 values
([5, 6] for the first case, [30, 30] for the second case)
# Actual Behavior
The code never returns; it keeps waiting.
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
Celery worker logs for the first case mentioned
```
[2019-05-10 21:12:42,361: INFO/MainProcess] Connected to redis://localhost:6379//
[2019-05-10 21:12:42,366: INFO/MainProcess] mingle: searching for neighbors
[2019-05-10 21:12:43,380: INFO/MainProcess] mingle: all alone
[2019-05-10 21:12:43,393: INFO/MainProcess] celery@debian ready.
[2019-05-10 21:13:43,557: INFO/MainProcess] Received task: celery_task.prod[e11748a0-dca2-41fb-b799-a2ae7879c9fd]
[2019-05-10 21:13:43,559: INFO/MainProcess] Received task: celery_task.prod[6ec1c88f-cca7-417c-8ff1-eacd9bdef7a1]
[2019-05-10 21:13:43,566: INFO/ForkPoolWorker-2] Task celery_task.prod[6ec1c88f-cca7-417c-8ff1-eacd9bdef7a1] succeeded in 0.0064055299990286585s: 2
[2019-05-10 21:13:43,585: INFO/ForkPoolWorker-1] Task celery_task.prod[e11748a0-dca2-41fb-b799-a2ae7879c9fd] succeeded in 0.02549504399939906s: 1
[2019-05-10 21:13:43,586: INFO/MainProcess] Received task: celery_task.identity[87148533-8da1-4fc6-bbfe-608470cb3c1a]
[2019-05-10 21:13:43,588: INFO/ForkPoolWorker-1] Task celery_task.identity[87148533-8da1-4fc6-bbfe-608470cb3c1a] succeeded in 0.0011925399994652253s: [2, 1]
[2019-05-10 21:13:43,588: INFO/MainProcess] Received task: celery_task.subtract[e907af43-a886-45ea-bfe3-f709c8267e5d]
[2019-05-10 21:13:43,589: INFO/ForkPoolWorker-1] Task celery_task.subtract[e907af43-a886-45ea-bfe3-f709c8267e5d] succeeded in 0.00028916799965372775s: 1
```
Considering the first case mentioned in the code snippet, you can see that the value of the task before the second group, `subtract`, is returned, but the last group (consisting of 2 `prod`s) is never received, even after waiting many minutes. (I put the `sleep(10)` in just as a proxy for some wait.)
Nested group-chain-group structure never succeeds
The structure like
```
workflow = chain(
t.task1.s(),
chord(
[
t.task2.s(),
chain(
t.task3.s(),
chord(
[t.task4.s(), t.task5.s()],
t.task6.s()
)
)
],
t.task7.s()
)
)
```
never succeeds. Task `t.task7` is never called, while other tasks succeed just fine.
Replacing chords with groups did nothing. When I remove the inner chord, the workflow finishes without any problems. The workflow also succeeds if I remove the inner chain.
The exact code that can be used to reproduce this issue as well as log messages are in [this gist](https://gist.github.com/traut/fcf5d76e07dd3ed6e54d)
| and this @tothegump
Ok, let me have a look.
It appears that #4481 and #4690 possibly caused this regression.
@tothegump Any idea why?
I'm looking into this now.
I haven't figured out a way to fix it yet.
I can confirm that this works when run eagerly.
I tested both on the deprecated amqp backend and the redis backend.
The canvas never completes.
I still have no idea why.
@tothegump I need help here.
Any chance we can schedule a debugging session together?
> @tothegump I need help here.
> Any chance we can schedule a debugging session together?
Sure. Would you like to schedule a debugging session together by email?
Yes.
I just saw your email.
I'll respond.
I suspect that this issue is related to #2556 because this configuration works just fine:
```
workflow = chain(
t.task1.s(),
chord(
[
t.task2.s(),
chain(
t.task3.s(),
chord(
[t.task4.s(), t.task5.s()],
t.task6.s()
),
t.task3.s(),
t.task4.s()
)
],
t.task7.s()
)
)
```
Note the 2 added `t.task3` and `t.task4` calls after the inner chord. The workflow will not work without them, or with only one task there. This configuration still hangs:
```
workflow = chain(
t.task1.s(),
chord(
[
t.task2.s(),
chain(
t.task3.s(),
chord(
[t.task4.s(), t.task5.s()],
t.task6.s()
),
t.task3.s()
)
],
t.task7.s()
)
)
```
Just came across the same issue - Confirmed
Hello. Is there any progress on this? I ran into what might be a related issue today. I have something of the form:
```
group1 | group2 | task1 | task2
```
'task2' doesn't get executed even though everything else succeeds.
Interestingly, if `group2` is composed of a single task, then the full pipeline succeeds. Does that give a clue as to what is going on?
Example of failure:
```
from celery import Celery
from celery import chain
from celery import group
app = Celery('bug')
app.config_from_object('debug_celeryconfig')
@app.task
def t(v):
print v
return v
if __name__ == '__main__':
# This works.
group_1 = group([
t.si('group_1_task_1'),
t.si('group_1_task_2'),
])
group_2 = group([
t.si('group_2_task_1'),
])
addtl_task_1 = t.si('addtl_task_1')
addtl_task_2 = t.si('addtl_task_2')
pipeline = group_1 | group_2 | addtl_task_1 | addtl_task_2
assert pipeline.delay().get() == 'addtl_task_2'
# This fails.
group_1 = group([
t.si('group_1_task_1'),
])
group_2 = group([
t.si('group_2_task_1'),
t.si('group_2_task_2'),
])
addtl_task_1 = t.si('addtl_task_1')
addtl_task_2 = t.si('addtl_task_2')
pipeline = group_1 | group_2 | addtl_task_1 | addtl_task_2
assert pipeline.delay().get() == 'addtl_task_2'
# This also fails.
group_1 = group([
t.si('group_1_task_1'),
t.si('group_1_task_2')
])
group_2 = group([
t.si('group_2_task_1'),
t.si('group_2_task_2') # NEW
])
addtl_task_1 = t.si('addtl_task_1')
addtl_task_2 = t.si('addtl_task_2')
pipeline = group_1 | group_2 | addtl_task_1 | addtl_task_2
assert pipeline.delay().get() == 'addtl_task_2'
```
@glebkuznetsov Can you please submit the failing test cases in the form of a PR?
That will certainly help resolve this issue.
@thedrow This issue is blocking for us. I can provide a PR for a failing test case and I can also work on a fix, but I need a bit of guidance. Could you please tell me which kind of PR you expect for the failing test? The following code is a failing case with Redis as broker and backend, but it also fails with amqp as broker and backend. It passes when `CELERY_ALWAYS_EAGER = True`.
```
from celery import Celery
from celery import group
app = Celery(
'tasks',
backend='redis://?new_join=1',
broker='redis://'
)
@app.task
def t(v):
print(v)
return v
if __name__ == '__main__':
# This doesn't work.
group_1 = group([
t.si('group_1_task_1'),
t.si('group_1_task_2'),
])
group_2 = group([
t.si('group_2_task_1'),
])
addtl_task_1 = t.si('addtl_task_1')
pipeline = group_1 | group_2 | addtl_task_1
result = pipeline.delay().get()
assert result == 'addtl_task_1'
```
trying to chain groups will probably not work as you expect them to. See:
https://github.com/celery/celery/issues/1671#issuecomment-39901321
is anyone working on fixing it?
here is my example,
it hangs when trying to do `result.get()` (i.e. `result.join_native()`) while waiting for the results of the inner group (`KeyValueStorage.get_many`), but works when doing `result.join()`
```
job = group(
group(
add.s(16, 16),
add.s(32, 32)
),
(
add.s(2, 2)
|
add.s(4)
)
)
result = job.apply_async()
```
here is a more complicated example; it hangs with either `join` or `join_native`, but in a different routine (`BaseBackend.wait_for`)
```
job = group(
(
group(
add.s(16, 16),
add.s(32, 32)
)
|
add.si(2, 4)
),
(
add.s(2, 2)
|
add.s(4)
)
)
result = job.apply_async()
```
UPD:
this one (with a chord instead of the group and chain) works, but only with `join` instead of `join_native`
```
job = group(
(
chord([
add.s(16, 16),
add.s(32, 32)
], add.si(2, 4))
),
(
add.s(2, 2)
|
add.s(4)
)
)
result = job.apply_async()
```
UPD2: but some nested group workflows are still not working; they can be worked around by turning them into a chord with a dummy task as the final callback
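A hedged sketch of that workaround, reusing the `add` task from the examples above; `noop` is a made-up dummy task that only serves as the chord callback:
```python
from celery import chord, group

@app.task
def noop(*args):
    # dummy callback: its only job is to close the inner chord
    return args

# instead of nesting a bare group, wrap the inner group in a chord
job = group(
    chord([add.s(16, 16), add.s(32, 32)], noop.si()),
    add.s(2, 2) | add.s(4),
)
result = job.apply_async()
```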
@merwan how did you resolve your blocker? Did you find some workaround, or some other similar library?
UPD: for our setup i applied this workaround: https://github.com/verifiedpixel/celery/commit/f42a70e04e46565443a10ff8f27e454707c2c977
I'd be happy to guide people regarding how we're going to fix it.
@actionless Can you demonstrate how this fixes the issues?
@thedrow, to reproduce the problem and workaround you can use the last example i provided (group(chord, chain)) (with redis, obviously).
But it seems to be another issue, unrelated to Redis, with "parsing" complicated task workflows (group-chain-group, and some cases like group(group, group), though those are harder to reproduce) -- it can be worked around by using a chord instead of a group for the nested taskset. Also, sometimes during debugging I had the feeling that `__init__` for `group` was called twice as often as needed, so the id of an executed task was not the same as the one its parent task was waiting for; I think I saw that behavior with my second example.
These examples work for me in master.
Please open a new issue if you can reproduce a problem in that version!
| 2019-08-19T16:25:44 |
celery/celery | 5,682 | celery__celery-5682 | [
"5467",
"3585"
] | 89c4573ac47a1f840ed2d15e2820d0eaed29dc32 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -669,10 +669,20 @@ def prepare_steps(self, args, kwargs, tasks,
# signature instead of a group.
tasks.pop()
results.pop()
- task = chord(
- task, body=prev_task,
- task_id=prev_res.task_id, root_id=root_id, app=app,
- )
+ try:
+ task = chord(
+ task, body=prev_task,
+ task_id=prev_res.task_id, root_id=root_id, app=app,
+ )
+ except AttributeError:
+ # A GroupResult does not have a task_id since it consists
+ # of multiple tasks.
+ # We therefore, have to construct the chord without it.
+ # Issues #5467, #3585.
+ task = chord(
+ task, body=prev_task,
+ root_id=root_id, app=app,
+ )
if is_last_task:
# chain(task_id=id) means task id is set for the last task
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -232,6 +232,21 @@ def test_groupresult_serialization(self, manager):
assert len(result) == 2
assert isinstance(result[0][1], list)
+ @flaky
+ def test_chain_of_task_a_group_and_a_chord(self, manager):
+ try:
+ manager.app.backend.ensure_chords_allowed()
+ except NotImplementedError as e:
+ raise pytest.skip(e.args[0])
+
+ c = add.si(1, 0)
+ c = c | group(add.s(1), add.s(1))
+ c = c | group(tsum.s(), tsum.s())
+ c = c | tsum.s()
+
+ res = c()
+ assert res.get(timeout=TIMEOUT) == 8
+
class test_result_set:
| task/group chains fail in some scenarios
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [x] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [x] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Possible Duplicates
- [Subsequent groups within a chain fail #3585](https://github.com/celery/celery/issues/3585)
- [Consecutive groups in chain fails #4848 ](https://github.com/celery/celery/issues/3585)
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.2.2 (windowlicker)
**Celery report**:
software -> celery:4.2.2 (windowlicker)
kombu:4.3.0 py:2.7.5
billiard:3.5.0.5 redis:3.2.1
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://localhost:6379/0
broker_url: redis://localhost:6379/0
result_backend: redis://localhost:6379/0
</p>
</details>
# Steps to Reproduce
## Minimally Reproducible Test Case
```
from celery import signature, group, chain
from tasks import task
task1 = task.si('task1')
task2 = task.si('task2')
group1 = group(task.si('group1 job1'), task.si('group1 job2'))
group2 = group(task.si('group2 job1'), task.si('group2 job2'))
# working sequences
#res = chain(group1, group2)
#res = chain(group1, task1, group2)
#res = chain(task1, group1, task2, group2)
#res = chain(group1, group2, task1)
# failing sequence
res = chain(task1, group1, group2)
res.delay()
```
# Expected Behavior
Tasks are passed to celery
# Actual Behavior
It looks like celery is making the wrong decision to convert this chain into a chord. There are some workarounds available, but no real fix is available yet.
```
Traceback (most recent call last):
File "debug_run.py", line 19, in <module>
res.delay()
File "/home/ja04913/ocenter/ocenter-venv/lib/python2.7/site-packages/celery/canvas.py", line 179, in delay
return self.apply_async(partial_args, partial_kwargs)
File "/home/ja04913/ocenter/ocenter-venv/lib/python2.7/site-packages/celery/canvas.py", line 557, in apply_async
dict(self.options, **options) if options else self.options))
File "/home/ja04913/ocenter/ocenter-venv/lib/python2.7/site-packages/celery/canvas.py", line 573, in run
task_id, group_id, chord,
File "/home/ja04913/ocenter/ocenter-venv/lib/python2.7/site-packages/celery/canvas.py", line 655, in prepare_steps
task_id=prev_res.task_id, root_id=root_id, app=app,
AttributeError: 'GroupResult' object has no attribute 'task_id'
```
Subsequent groups within a chain fail
```
software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:2.7.10
billiard:3.5.0.2 py-amqp:2.1.1
platform -> system:FreeBSD arch:64bit, ELF imp:PyPy
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:cache+memcached://10.1.1.2:11211/
```
```
amqp==2.1.1
celery==4.0.0
Django==1.10.3
librabbitmq==1.6.1
pylibmc==1.5.1
```
## Steps to reproduce
```python
# celery.py
from __future__ import absolute_import
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
app = Celery('project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks(force=True)
```
```python
from celery import task, group, chain
@task
def add(x, y):
return x + y
# chain( task, group(tasks) )
x = chain( add.si(1, 1), group([add.si(1, 1), add.si(1, 1)]) )
type(x) # celery.canvas._chain
x.apply_async() # works as expected
# chain( task, group(tasks), group(tasks) )
x = chain( add.si(1, 1), group([add.si(1, 1), add.si(1, 1)]), group([add.si(1, 1), add.si(1, 1)]) )
type(x) # celery.canvas._chain
x.apply_async() # fails, traceback below
# chain( task, group(tasks), task, group(tasks) )
x = chain( add.si(1, 1), group([add.si(1, 1), add.si(1, 1)]), add.si(1, 1), group([add.si(1, 1), add.si(1, 1)]) )
type(x) # celery.canvas._chain
x.apply_async() # works as expected
```
## Expected behavior
It seems like a chain containing subsequent groups fails. If there is a task in between, or just a single group, it works as expected. This is different behavior from 3.x and I can't seem to find any obvious differences in the docs. The only thing that looks relevant in the release notes is the fix about `group | group` being flattened into a single group (#2573).
## Actual behavior
```python
AttributeError Traceback (most recent call last)
<ipython-input-30-495e2ee230ea> in <module>()
----> 1 x.apply_async()
/project/site-packages/celery/canvas.pyc in delay(self, *partial_args, **partial_kwargs)
180 def delay(self, *partial_args, **partial_kwargs):
181 """Shortcut to :meth:`apply_async` using star arguments."""
--> 182 return self.apply_async(partial_args, partial_kwargs)
183
184 def apply(self, args=(), kwargs={}, **options):
/project/site-packages/celery/canvas.pyc in apply_async(self, args, kwargs, **options)
565 return self.apply(args, kwargs, **options)
566 return self.run(args, kwargs, app=app, **(
--> 567 dict(self.options, **options) if options else self.options))
568
569 def run(self, args=(), kwargs={}, group_id=None, chord=None,
/project/site-packages/celery/canvas.pyc in run(self, args, kwargs, group_id, chord, task_id, link, link_error, publisher, producer, root_id, parent_id, app, **options)
584 tasks, results = self.prepare_steps(
585 args, self.tasks, root_id, parent_id, link_error, app,
--> 586 task_id, group_id, chord,
587 )
588
/project/site-packages/celery/canvas.pyc in prepare_steps(self, args, tasks, root_id, parent_id, link_error, app, last_task_id, group_id, chord_body, clone, from_dict)
663 task = chord(
664 task, body=prev_task,
--> 665 task_id=prev_res.task_id, root_id=root_id, app=app,
666 )
667
AttributeError: 'GroupResult' object has no attribute 'task_id'
```
| are you using celery 4.3?
I've tested it with 4.3 where it shows the same behavior.
I can confirm I also have this issue on 4.3.
Hey there. Looks like I'm facing the same issue.
Any workaround ?
I'm working with :
```
amqp (2.1.4)
celery (4.0.2)
kombu (4.0.2)
```
Here is my stack trace:
```python
Apr 4 19:31:59: File ".python/3.5-dev/lib/python3.5/site-packages/celery/canvas.py", line 567, in apply_async
Apr 4 19:31:59: dict(self.options, **options) if options else self.options))
Apr 4 19:31:59: File ".python/3.5-dev/lib/python3.5/site-packages/celery/canvas.py", line 586, in run
Apr 4 19:31:59: task_id, group_id, chord,
Apr 4 19:31:59: File ".python/3.5-dev/lib/python3.5/site-packages/celery/canvas.py", line 665, in prepare_steps
Apr 4 19:31:59: task_id=prev_res.task_id, root_id=root_id, app=app,
Apr 4 19:31:59: AttributeError: 'GroupResult' object has no attribute 'task_id'
```
My workflow looks like:
(TaskA | Group B | Group C | Group D | Task E) and all the tasks are immutable.
RabbitMQ is used as broker and Mysql as result backend.
Regards.
It's not that ideal, but you can throw an empty/dummy task in between.
@mheppner
Yes i could do that but it is far from ideal...
I'll wait and hope for a bugfix.
I would be glad to do it but I'm not familiar enough with Celery internals :/
> It's not that ideal, but you can throw an empty/dummy task in between.
It does avoid the error, but it ends up with the wrong execution.
The behavior with multiple groups in a chain seems to be broken in Celery v4.
The following example illustrates it:
```
# myapp.py (run worker with: celery -A myapp worker)
from celery import Celery
app = Celery('tasks', backend='redis://localhost:6379', broker='redis://localhost:6379')
@app.task
def nop(*args):
print(args)
```
```
# myapp_run.py
import celery
import myapp
celery.chain(myapp.nop.si("1"),
celery.group(myapp.nop.si("2-a"), myapp.nop.si("2-b")),
myapp.nop.si("3"),
celery.group(myapp.nop.si("4-a"), myapp.nop.si("4-b")),
myapp.nop.si("5")).delay().get()
```
worker output with Celery v3.1.25 (expected behavior)
```
[2017-05-01 10:44:46,159: WARNING/Worker-2] ('1',)
[2017-05-01 10:44:46,178: WARNING/Worker-1] ('2-a',)
[2017-05-01 10:44:46,188: WARNING/Worker-2] ('2-b',)
[2017-05-01 10:44:46,693: WARNING/Worker-1] ('3',)
[2017-05-01 10:44:46,708: WARNING/Worker-2] ('4-a',)
[2017-05-01 10:44:46,712: WARNING/Worker-1] ('4-b',)
[2017-05-01 10:44:47,217: WARNING/Worker-2] ('5',)
```
worker output with Celery v4.0.2 (unexpected behavior)
```
[2017-05-01 10:42:41,983: WARNING/PoolWorker-2] ('1',)
[2017-05-01 10:42:41,986: WARNING/PoolWorker-1] ('2-a',)
[2017-05-01 10:42:41,988: WARNING/PoolWorker-2] ('2-b',)
[2017-05-01 10:42:41,990: WARNING/PoolWorker-1] ('4-a',)
[2017-05-01 10:42:41,992: WARNING/PoolWorker-1] ('4-b',)
[2017-05-01 10:42:41,995: WARNING/PoolWorker-1] ('4-b',)
[2017-05-01 10:42:41,995: WARNING/PoolWorker-2] ('4-a',)
[2017-05-01 10:42:42,000: WARNING/PoolWorker-1] ('3',)
[2017-05-01 10:42:42,013: WARNING/PoolWorker-1] ('5',)
[2017-05-01 10:42:42,013: WARNING/PoolWorker-2] ('5',)
[2017-05-01 10:42:42,014: WARNING/PoolWorker-2] ('4-a',)
[2017-05-01 10:42:42,016: WARNING/PoolWorker-2] ('4-b',)
[2017-05-01 10:42:42,018: WARNING/PoolWorker-2] ('5',)
```
Tested with:
* OS: Ubuntu 16.04.1
* Python: 3.5.2
* Redis: 3.0.6
@yoichi I think the issue you are describing is fixed in `master`. @xavierp is it possible to give example tasks and a chain that reproduce the issue? Thanks.
@georgepsarakis Thanks for the information. I've confirmed the problem is not reproduced with Celery in master branch (e812c578).
chained groups are still broken; using a "placeholder task" bypasses the error:
AttributeError: 'GroupResult' object has no attribute 'task_id'
I can confirm that this group-following-group error still occurs on 4.1.0. I don't know when 4.1.0 diverged from master, but it was released a couple of months after @georgepsarakis and @yoichi tested master. Strangely, if one of the groups happens to have only one element, then the error does not occur, so there must actually be multiple signatures per group to reliably test for the bug.
+1 - Just encountered it as well.
could you try celery 4.2?
Updated to 4.2.1 and I still receive "AttributeError: 'GroupResult' object has no attribute 'task_id'" for two consecutive groups in a chain.
Doing this in the meantime:
@app.task(name='group_separator')
def group_separator() -> bool:
return True
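A hedged usage sketch (the `task_a` ... `task_d` names are placeholders): put the separator between the two consecutive groups so the `GroupResult.task_id` error is avoided:
```python
from celery import chain, group

workflow = chain(
    group(task_a.si(), task_b.si()),  # first group
    group_separator.si(),             # dummy task between the groups
    group(task_c.si(), task_d.si()),  # second group
)
workflow.delay()
```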
Thanks @denniswalker for this workaround. Group separator worked for me as I commented here: https://github.com/celery/celery/issues/4848#issuecomment-447317343. | 2019-08-19T16:46:53 |
celery/celery | 5,700 | celery__celery-5700 | [
"5106"
] | 54ee4bafdf84becd9d33b5b3ff12800a50c10ddb | diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -496,18 +496,8 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
if isinstance(exc, Retry):
return self.on_retry(exc_info)
- # These are special cases where the process wouldn't've had
- # time to write the result.
- if isinstance(exc, Terminated):
- self._announce_revoked(
- 'terminated', True, string(exc), False)
- send_failed_event = False # already sent revoked event
- elif isinstance(exc, WorkerLostError) or not return_ok:
- self.task.backend.mark_as_failure(
- self.id, exc, request=self._context,
- store_result=self.store_errors,
- )
# (acks_late) acknowledge after result stored.
+ requeue = False
if self.task.acks_late:
reject = (
self.task.reject_on_worker_lost and
@@ -521,6 +511,19 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
elif ack:
self.acknowledge()
+ # These are special cases where the process would not have had time
+ # to write the result.
+ if isinstance(exc, Terminated):
+ self._announce_revoked(
+ 'terminated', True, string(exc), False)
+ send_failed_event = False # already sent revoked event
+ elif not requeue and (isinstance(exc, WorkerLostError) or not return_ok):
+ # only mark as failure if task has not been requeued
+ self.task.backend.mark_as_failure(
+ self.id, exc, request=self._context,
+ store_result=self.store_errors,
+ )
+
if send_failed_event:
self.send_event(
'task-failed',
| diff --git a/t/unit/tasks/test_result.py b/t/unit/tasks/test_result.py
--- a/t/unit/tasks/test_result.py
+++ b/t/unit/tasks/test_result.py
@@ -7,7 +7,7 @@
import pytest
-from case import Mock, call, patch, skip, MagicMock
+from case import Mock, call, patch, skip
from celery import states, uuid
from celery.app.task import Context
from celery.backends.base import SyncBackendMixin
diff --git a/t/unit/worker/test_request.py b/t/unit/worker/test_request.py
--- a/t/unit/worker/test_request.py
+++ b/t/unit/worker/test_request.py
@@ -635,6 +635,26 @@ def get_ei():
job.on_failure(exc_info)
assert self.mytask.backend.get_status(job.id) == states.PENDING
+ def test_on_failure_acks_late_reject_on_worker_lost_enabled(self):
+ try:
+ raise WorkerLostError()
+ except WorkerLostError:
+ exc_info = ExceptionInfo()
+ self.mytask.acks_late = True
+ self.mytask.reject_on_worker_lost = True
+
+ job = self.xRequest()
+ job.delivery_info['redelivered'] = False
+ job.on_failure(exc_info)
+
+ assert self.mytask.backend.get_status(job.id) == states.PENDING
+
+ job = self.xRequest()
+ job.delivery_info['redelivered'] = True
+ job.on_failure(exc_info)
+
+ assert self.mytask.backend.get_status(job.id) == states.FAILURE
+
def test_on_failure_acks_late(self):
job = self.xRequest()
job.time_start = 1
| Retry tasks which failed because of SIGKILL
Hello,
We are currently using celery with Broker=RabbitMQ & Backend=Redis. Our setup runs on Kubernetes with auto-scaling enabled. (Our number of pods scale up / down based on resource utilization). We use the Forked Process mode to run our celery worker.
When the size of our cluster increases, everything works great. However, when the pods turn down, the tasks that were running on the pods get marked as FAILED. The sequence of actions is as follows:
1. Resource consumption decreases
2. Kubernetes realizes that pods that were scaled up now need to be turned down
3. Kubernetes sends `SIGKILL` signal to the pods that should turn down
4. Celery intercepts the signals and turns down all the Forked Processes
5. The tasks that were running on the processes return their execution back to the Main Process
6. The main process marks all the running tasks as FAILED
When the main process is the one that marks the running operations as FAILED, the `retry` code is never executed.
We do want to recover these tasks as they are part of a long running process. Celery should provide a way to recover these tasks.
| The thing here is about acknowledgement.
By default Celery will acknowledge a task right before a worker executes it. If the worker then fails unexpectedly the task is already marked as acknowledged and no retry policy will apply. This might seem a bit counterintuitive but it is the default because of the alternative.
The alternative is to set `acks_late=True`, which will make Celery acknowledge a task *after* it's been run. That means if a worker fails unexpectedly while running a task then that task will not be marked as acknowledged and the retry policy will apply.
Now, the issue with using `acks_late=True` is that you have to make sure your task takes this into consideration and either is able to pick up from where it left off or whatever it does is idempotent. Meaning, running the same code over and over again will always yield the same result.
Because of this caveat Celery chooses to acknowledge a task right before a worker executes it, in case the task has some undesired result if executed multiple times.
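As a hedged illustration of that trade-off, a late-acknowledged task should be written so a redelivery after a crash is harmless (the task and helper names below are made up):
```python
@app.task(acks_late=True)
def import_batch(batch_id):
    # idempotent by design: an upsert keyed on batch_id means a redelivered
    # copy of this task (after a worker crash) repeats the same write instead
    # of duplicating rows
    upsert_rows(batch_id)
```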
Thanks a lot for the reply. I should have clarified: we are setting `acks_late=True` for the tasks. This is the parameter list for our task signature:
```
@cel.task(bind=True, acks_late=True, ignore_result=True, autoretry_for=(Exception,))
```
We have made it such that the task execution is idempotent, so even if there is a retry on the task, we should be OK.
OK, so I looked at the documentation again, and looks like I missed the config setting: `task_reject_on_worker_lost` which is exactly the [feature](http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-reject-on-worker-lost) I was looking for.
While this solves part 1 of the problem, it still doesn't solve the second part. If this particular task happens to be part of a `group` in a `chord`, the error is propagated as a chord error and the chord completely fails. Not sure if there is a way to fix this correctly.
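For reference, a minimal sketch of the settings combination discussed here (new-style lowercase names; the equivalent per-task decorator arguments work too):
```python
app.conf.task_acks_late = True              # acknowledge only after the task ran
app.conf.task_reject_on_worker_lost = True  # requeue the task if the child process is killed
```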
Ah, yes. Forgot you also need `task_reject_on_worker_lost`.
Can you show the stacktrace when the group example happens?
Here is a small program to reproduce the error:
-- server.py
```
import celery
import time
cel = celery.Celery(
'experiments',
backend='redis://localhost:6379',
broker='amqp://localhost:5672'
)
cel.conf.update(
CELERYD_PREFETCH_MULTIPLIER = 1,
CELERY_REJECT_ON_WORKER_LOST=True,
CELERY_TASK_REJECT_ON_WORKER_LOST=True,
)
@cel.task(bind=True, acks_late=True, name='tasks.long_running_task', queue='worker.experiment')
def long_running_task(self, index):
if index >= 0 and index <= 3:
print("Sleeping for 30 secs")
time.sleep(30.0)
@cel.task(bind=True, acks_late=True, name='tasks.callback_task', queue='worker.experiment')
def callback_task(self):
print("Completed the job successfully.")
```
Run the server using the following command:
```
celery -A worker_failure worker -l info --concurrency=4 -Q worker.experiment
```
-- client.py
```
import celery
import worker_failure
if __name__ == '__main__':
res = celery.chord(
worker_failure.long_running_task.s(i) for i in range(4)
)(worker_failure.callback_task.s())
res.get()
```
Run the client using the following command: `python client.py`
Once the tasks are received, kill one of the processes by using the following command:
```
ps -awwwx | grep celery | tail -n 1 | awk '{print $1}' | xargs kill -9
```
This should kill the last process (ideally :) )
On the server side, you'll see the following trace:
```
[2018-10-14 10:01:55,639: ERROR/ForkPoolWorker-1] Chord callback for '2af041bb-809d-4fbe-b6df-2afb457f693b' raised: ChordError(u"Dependency d1cb5de9-7a6b-4442-a52c-5cad85c04c10 raised WorkerLostError(u'Worker exited prematurely: signal 9 (SIGKILL).',)",)
Traceback (most recent call last):
File "/Users/shaunak/venv/lib/python2.7/site-packages/celery/backends/redis.py", line 289, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/Users/shaunak/venv/lib/python2.7/site-packages/celery/backends/redis.py", line 242, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
ChordError: Dependency d1cb5de9-7a6b-4442-a52c-5cad85c04c10 raised WorkerLostError(u'Worker exited prematurely: signal 9 (SIGKILL).',)
```
This is not correct, since I do see the task being successful:
```
[2018-10-14 10:01:24,008: INFO/ForkPoolWorker-5] Task tasks.long_running_task[d1cb5de9-7a6b-4442-a52c-5cad85c04c10] succeeded in 30.007873791s: None
```
I think that the order of handling of WorkerLostError in [request.py](https://github.com/celery/celery/blob/master/celery/worker/request.py#L368-L385) might be incorrect.
The current code looks as such:
```
elif isinstance(exc, WorkerLostError) or not return_ok:
self.task.backend.mark_as_failure(
self.id, exc, request=self._context,
store_result=self.store_errors,
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
reject = (
self.task.reject_on_worker_lost and
isinstance(exc, WorkerLostError)
)
ack = self.task.acks_on_failure_or_timeout
if reject:
requeue = not self.delivery_info.get('redelivered')
self.reject(requeue=requeue)
send_failed_event = False
elif ack:
self.acknowledge()
```
Note that `task` is marked as failed as soon as we are able to determine that the `exc` is a `WorkerLostError`. This internally causes the [`on_chord_return`](https://github.com/celery/celery/blob/master/celery/backends/base.py#L150-L162) function to mark the chord as failed.
There are multiple ways in which we can try to fix the problem:
1. The determination of whether we should mark the task as failed should be done after we determine whether the task is marked as retryable. If it is, the status of the task should be marked with `mark_as_retry`
2. If one still wants to mark the task as failed, adding an additional parameter controlling whether `on_chord_part_return` should be invoked could help.
Note that the `on_chord_return` will also [run the `errbacks`](https://github.com/celery/celery/blob/master/celery/backends/base.py#L161-L162) regardless of whether the task is marked as retryable or not. This also needs to be amended in case the task is marked as retryable in case of worker lost.
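A simplified, hedged sketch of what option 1 could look like inside the worker's `Request.on_failure` (not the real implementation; it only reorders the steps shown in the excerpt above):
```python
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
    exc = exc_info.exception
    requeue = False
    if self.task.acks_late:
        if self.task.reject_on_worker_lost and isinstance(exc, WorkerLostError):
            # decide on requeue/reject first ...
            requeue = not self.delivery_info.get('redelivered')
            self.reject(requeue=requeue)
        else:
            self.acknowledge()
    if not requeue:
        # ... and only record a FAILURE (which also triggers the chord
        # accounting and the error callbacks) when the task will not be retried
        self.task.backend.mark_as_failure(
            self.id, exc, request=self._context,
            store_result=self.store_errors,
        )
```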
@xirdneh any update on when we can expect a fix, and whether it will be backported to previous releases?
I can also submit a patch as a fix, based on my limited understanding of the code.
Very interesting case.
I believe you have a good idea on how to fix this issue. Could you submit that PR you have in mind as well as a test for this?
Let me know if you have any questions submitting the PR or writing the test :)
Thanks for the help 👍
Thanks for the help. I'll get the PR up in a few days. I plan to take the approach of marking the task as `FAILED` to keep the state consistent, but pass a `None` `request` to [`mark_as_failure`](https://github.com/celery/celery/blob/master/celery/backends/base.py#L150)
This will prevent both `on_chord_part_return` and `on_err_callback` from being invoked.
@shaunakgodbole Make sure to mention/relate this issue with the PR, please.
did you get the time for the PR?
Unfortunately did not get time to fix this. I'll take care of it next week. Sorry about the delay.
@shaunakgodbole Any progress on this?
Also, you might be able to mitigate this if you know how long your tasks are running by setting a `terminationGracePeriodSeconds` in your Kubernetes Deployment manifest. | 2019-08-28T15:22:29 |
celery/celery | 5,720 | celery__celery-5720 | [
"5719"
] | 9cac36ff2a916fbeb5fc9fdfbaa0fd14ad448baf | diff --git a/celery/contrib/pytest.py b/celery/contrib/pytest.py
--- a/celery/contrib/pytest.py
+++ b/celery/contrib/pytest.py
@@ -15,6 +15,16 @@
# Well, they're called fixtures....
+def pytest_configure(config):
+ """Register additional pytest configuration."""
+ # add the pytest.mark.celery() marker registration to the pytest.ini [markers] section
+ # this prevents pytest 4.5 and newer from issueing a warning about an unknown marker
+ # and shows helpful marker documentation when running pytest --markers.
+ config.addinivalue_line(
+ "markers", "celery(**overrides): override celery configuration for a test case"
+ )
+
+
@contextmanager
def _create_app(enable_logging=False,
use_trap=False,
| diff --git a/t/unit/contrib/test_pytest.py b/t/unit/contrib/test_pytest.py
new file mode 100644
--- /dev/null
+++ b/t/unit/contrib/test_pytest.py
@@ -0,0 +1,34 @@
+import pytest
+
+try:
+ from pytest import PytestUnknownMarkWarning # noqa: F401
+
+ pytest_marker_warnings = True
+except ImportError:
+ pytest_marker_warnings = False
+
+
+pytest_plugins = ["pytester"]
+
+
[email protected](
+ not pytest_marker_warnings,
+ reason="Older pytest version without marker warnings",
+)
+def test_pytest_celery_marker_registration(testdir):
+ """Verify that using the 'celery' marker does not result in a warning"""
+ testdir.plugins.append("celery")
+ testdir.makepyfile(
+ """
+ import pytest
+ @pytest.mark.celery(foo="bar")
+ def test_noop():
+ pass
+ """
+ )
+
+ result = testdir.runpytest('-q')
+ with pytest.raises(ValueError):
+ result.stdout.fnmatch_lines_random(
+ "*PytestUnknownMarkWarning: Unknown pytest.mark.celery*"
+ )
| Getting error when write a unit test using PyTest for a celery task -> pytest.PytestUnknownMarkWarning: Unknown pytest.mark.celery
I wrote the following bare minimum unit test class for celery
```python
import pytest
@pytest.fixture
def celery_config():
return {
"broker_url": "redis://localhost:6379/0",
"result_backend": "redis://localhost:6379/0"
}
@pytest.mark.celery(result_backend="redis://")
class GetHash:
def test_some(self):
pass
```
I am getting the following error when executing the test
```
test_get_hash.py:12: in <module>
@pytest.mark.celery(result_backend="redis://")
/home/work/.virtualenvs/dev_env/lib/python3.6/site-packages/_pytest/mark/structures.py:324: in __getattr__
PytestUnknownMarkWarning,
E pytest.PytestUnknownMarkWarning: Unknown pytest.mark.celery - is this a typo?
```
These are the items in the `_mark` set `structures.py` file
```python
<class 'set'>: {
'tryfirst',
'skip',
'black',
'filterwarnings',
'parametrize',
'usefixtures',
'skipif',
'xfail',
'no_cover',
'trylast'
}
```
These are the installed python libraries
```
amqp==2.5.1
anyjson==0.3.3
apipkg==1.5
appdirs==1.4.3
atomicwrites==1.3.0
attrs==19.1.0
autoflake==1.3
Babel==2.7.0
bandit==1.6.2
billiard==3.6.1.0
black==19.3b0
celery==4.3.0
Cerberus==1.3.1
certifi==2019.6.16
chardet==3.0.4
checksumdir==1.1.6
Click==7.0
coverage==4.5.3
execnet==1.6.0
Flask==1.0.2
Flask-Cors==3.0.8
flower==0.9.3
gitdb2==2.0.5
GitPython==2.1.13
idna==2.8
importlib-metadata==0.19
isort==4.3.20
itsdangerous==1.1.0
Jinja2==2.10.1
kombu==4.6.4
MarkupSafe==1.1.1
mock==3.0.5
more-itertools==7.0.0
mysql-connector-python==8.0.16
Nuitka==0.6.5
packaging==19.1
pbr==5.4.2
pluggy==0.12.0
protobuf==3.7.1
py==1.8.0
pyflakes==2.1.1
pyparsing==2.4.2
pytest==5.1.1
pytest-black==0.3.7
pytest-cov==2.7.1
pytest-forked==1.0.2
pytest-runner==5.1
pytest-xdist==1.29.0
python-dateutil==2.8.0
python-dotenv==0.10.1
pytz==2019.2
PyYAML==5.1.2
redis==3.3.8
requests==2.22.0
rq==1.1.0
six==1.12.0
smmap2==2.0.5
SQLAlchemy==1.3.3
stevedore==1.30.1
toml==0.10.0
tornado==5.1.1
urllib3==1.25.3
vine==1.3.0
wcwidth==0.1.7
Werkzeug==0.15.2
```
Is the documentation missing an additional package?
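If you need to silence the warning before a fixed `celery.contrib.pytest` plugin release is available, a hedged workaround is to register the marker yourself from a local `conftest.py`:
```python
# conftest.py
def pytest_configure(config):
    # register the marker so pytest >= 4.5 stops warning about it
    config.addinivalue_line(
        "markers",
        "celery(**overrides): override celery configuration for a test case",
    )
```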
| Which version of pytest are you using?
@auvipy pytest==5.1.1, I have listed all the libraries with exact versions
pytest 5.x versions only support the Python 3.x series, so we are staying on the pytest 4.6.x series
pytest 5.x will be used with celery 5.x version onward which will be py3 only
Does that mean it doesn't support python3.6 + celery 4.0.x + pytest 5.0.x? @auvipy If that's the case please update the documentation
it only doesnt support pytest 5.x | 2019-09-09T14:52:43 |
celery/celery | 5,737 | celery__celery-5737 | [
"5736"
] | 08bec60513c6414bd097b1ad5e8101e26dd6224b | diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -769,6 +769,7 @@ def join(self, timeout=None, propagate=True, interval=0.5,
value = result.get(
timeout=remaining, propagate=propagate,
interval=interval, no_ack=no_ack, on_interval=on_interval,
+ disable_sync_subtasks=disable_sync_subtasks,
)
if callback:
callback(result.id, value)
| diff --git a/t/unit/tasks/test_result.py b/t/unit/tasks/test_result.py
--- a/t/unit/tasks/test_result.py
+++ b/t/unit/tasks/test_result.py
@@ -468,6 +468,17 @@ def test_get(self):
b.supports_native_join = True
x.get()
x.join_native.assert_called()
+
+ @patch('celery.result.task_join_will_block')
+ def test_get_sync_subtask_option(self, task_join_will_block):
+ task_join_will_block.return_value = True
+ x = self.app.ResultSet([self.app.AsyncResult(str(t)) for t in [1, 2, 3]])
+ b = x.results[0].backend = Mock()
+ b.supports_native_join = False
+ with pytest.raises(RuntimeError):
+ x.get()
+ with pytest.raises(TimeoutError):
+ x.get(disable_sync_subtasks=False, timeout=0.1)
def test_join_native_with_group_chain_group(self):
"""Test group(chain(group)) case, join_native can be run correctly.
| disable_sync_subtasks setting not being respected while using ResultSet
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- #5330
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:4.3.0
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.3.0 (rhubarb) kombu:4.6.4 py:3.5.6
billiard:3.6.1.0 py-amqp:2.5.1
platform -> system:Linux arch:64bit, ELF
kernel version:4.19.71-1-lts imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 3.5.6
* **Minimal Celery Version**: 4.3.0
* **Minimal Kombu Version**: 4.6.4
* **Minimal Broker Version**: RabbitMQ 3.7.15
* **Minimal Result Backend Version**: 10.4.7-MariaDB
* **Minimal OS and/or Kernel Version**: Linux 4.19.71-1-lts
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
amqp==2.5.1
asn1crypto==0.24.0
Babel==2.7.0
bcrypt==3.1.7
billiard==3.6.1.0
celery==4.3.0
certifi==2019.9.11
cffi==1.12.3
chardet==3.0.4
cryptography==2.7
django-celery-results==1.1.2
flower==0.9.3
idna==2.8
importlib-metadata==0.22
kombu==4.6.4
more-itertools==7.2.0
mysqlclient==1.4.4
paramiko==2.6.0
pycparser==2.19
PyNaCl==1.3.0
pytz==2019.2
requests==2.22.0
requests-toolbelt==0.9.1
six==1.12.0
SQLAlchemy==1.3.8
tornado==5.1.1
urllib3==1.25.3
vine==1.3.0
websockets==7.0
zipp==0.6.0
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
from celery.result import ResultSet

@app.task
def add(x, y):
return x + y
@app.task
def test():
result_set = ResultSet([])
add_tasks = add.starmap((i, i) for i in range(10))
add_result = add_tasks.apply_async()
result_set.add(add_result)
return result_set.get(disable_sync_subtasks=False)
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
The task runs successfully, accepting the risk of a possible deadlock
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
```
Traceback (most recent call last):
File "/home/gsfish/.pyenv/versions/scan_detect/lib/python3.5/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/gsfish/.pyenv/versions/scan_detect/lib/python3.5/site-packages/celery/app/trace.py", line 648, in __protected_call__
return self.run(*args, **kwargs)
File "/home/gsfish/work/netease/project/scan_detect/tasks.py", line 106, in test
return result_set.get(disable_sync_subtasks=False)
File "/home/gsfish/.pyenv/versions/scan_detect/lib/python3.5/site-packages/celery/result.py", line 697, in get
on_interval=on_interval,
File "/home/gsfish/.pyenv/versions/scan_detect/lib/python3.5/site-packages/celery/result.py", line 765, in join
interval=interval, no_ack=no_ack, on_interval=on_interval,
File "/home/gsfish/.pyenv/versions/scan_detect/lib/python3.5/site-packages/celery/result.py", line 205, in get
assert_will_not_block()
File "/home/gsfish/.pyenv/versions/scan_detect/lib/python3.5/site-packages/celery/result.py", line 41, in assert_will_not_block
raise RuntimeError(E_WOULDBLOCK)
RuntimeError: Never call result.get() within a task!
See http://docs.celeryq.org/en/latest/userguide/tasks.html#task-synchronous-subtasks
```
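A possible interim workaround (hedged sketch) is celery's `allow_join_result()` context manager, which disables the same assertion for the enclosed block and carries the same deadlock caveat as `disable_sync_subtasks=False`:
```python
from celery.result import allow_join_result

@app.task
def test():
    result_set = ResultSet([])
    result_set.add(add.starmap((i, i) for i in range(10)).apply_async())
    with allow_join_result():
        return result_set.get()
```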
| 2019-09-17T14:16:39 |
|
celery/celery | 5,752 | celery__celery-5752 | [
"5714"
] | 77dbd379ab632f55199146a4bc37ee924821c039 | diff --git a/celery/backends/database/__init__.py b/celery/backends/database/__init__.py
--- a/celery/backends/database/__init__.py
+++ b/celery/backends/database/__init__.py
@@ -132,7 +132,7 @@ def _update_result(self, task, result, state, traceback=None,
task.status = state
task.traceback = traceback
if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task_name', None)
+ task.name = getattr(request, 'task', None)
task.args = ensure_bytes(
self.encode(getattr(request, 'args', None))
)
| diff --git a/t/unit/backends/test_database.py b/t/unit/backends/test_database.py
--- a/t/unit/backends/test_database.py
+++ b/t/unit/backends/test_database.py
@@ -231,7 +231,7 @@ def test_store_result(self, result_serializer, args, kwargs):
tid = uuid()
request = Context(args=args, kwargs=kwargs,
- task_name='mytask', retries=2,
+ task='mytask', retries=2,
hostname='celery@worker_1',
delivery_info={'routing_key': 'celery'})
| DatabaseBackend._update_result() uses an incorrect property name.
python 3.7
celery 4.4.0rc3
The result has an erroneous NULL value for the name in my backend (MySQL), but it works well when I use Redis as my backend.
After I fix this error in `backends/database/__init__.py` [135] by changing 'task_name' to 'task', I get the correct task_name.
The 'name' in `backends/base.py` [706,717]
```
if self.app.conf.find_value_for_key('extended', 'result'):
if request:
request_meta = {
-> 'name': getattr(request, 'task', None),
'args': getattr(request, 'args', None),
'kwargs': getattr(request, 'kwargs', None),
'worker': getattr(request, 'hostname', None),
'retries': getattr(request, 'retries', None),
'queue': request.delivery_info.get('routing_key')
if hasattr(request, 'delivery_info') and
request.delivery_info else None
}
```
The 'name' in `backends/database/__init__.py` [129,148]
```
def _update_result(self, task, result, state, traceback=None,
request=None):
task.result = result
task.status = state
task.traceback = traceback
if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task_name', None)
+ task.name = getattr(request, 'task', None)
task.args = ensure_bytes(
self.encode(getattr(request, 'args', None))
)
task.kwargs = ensure_bytes(
self.encode(getattr(request, 'kwargs', None))
)
task.worker = getattr(request, 'hostname', None)
task.retries = getattr(request, 'retries', None)
task.queue = (
request.delivery_info.get("routing_key")
if hasattr(request, "delivery_info") and request.delivery_info
else None
)
```
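A hedged end-to-end check with extended results enabled (the task name is illustrative, and the extended result attributes assume celery 4.4's `result_extended` support):
```python
app.conf.result_extended = True   # store name/args/kwargs/worker/queue with the result

@app.task(name='demo.add')
def add(x, y):
    return x + y

res = add.delay(2, 3)
res.get(timeout=10)
# with the database backend the taskmeta row's `name` column should now hold
# 'demo.add' instead of NULL; the same value is exposed on the result object
print(res.name)
```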
| 2019-09-25T16:13:28 |
|
celery/celery | 5,759 | celery__celery-5759 | [
"5597"
] | b268171d5d7b0ebf634956c0559883f28296c21c | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -193,6 +193,8 @@ def apply(self, args=None, kwargs=None, **options):
"""
args = args if args else ()
kwargs = kwargs if kwargs else {}
+ # Extra options set to None are dismissed
+ options = {k: v for k, v in options.items() if v is not None}
# For callbacks: extra args are prepended to the stored args.
args, kwargs, options = self._merge(args, kwargs, options)
return self.type.apply(args, kwargs, **options)
@@ -214,6 +216,8 @@ def apply_async(self, args=None, kwargs=None, route_name=None, **options):
"""
args = args if args else ()
kwargs = kwargs if kwargs else {}
+ # Extra options set to None are dismissed
+ options = {k: v for k, v in options.items() if v is not None}
try:
_apply = self._apply_async
except IndexError: # pragma: no cover
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -203,3 +203,9 @@ def fail(*args):
@shared_task
def chord_error(*args):
return args
+
+
+@shared_task(bind=True)
+def return_priority(self, *_args):
+ return "Priority: %s" % self.request.delivery_info['priority']
+
| diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -13,11 +13,10 @@
add_to_all_to_chord, build_chain_inside_task, chord_error,
collect_ids, delayed_sum, delayed_sum_with_soft_guard,
fail, identity, ids, print_unicode, raise_error,
- redis_echo, second_order_replace1, tsum)
+ redis_echo, second_order_replace1, tsum, return_priority)
TIMEOUT = 120
-
class test_chain:
@flaky
@@ -854,3 +853,13 @@ def test_chain_to_a_chord_with_large_header(self, manager):
c = identity.si(1) | group(identity.s() for _ in range(1000)) | tsum.s()
res = c.delay()
assert res.get(timeout=TIMEOUT) == 1000
+
+ @flaky
+ def test_priority(self, manager):
+ c = chain(return_priority.signature(priority=3))()
+ assert c.get(timeout=TIMEOUT) == "Priority: 3"
+
+ @flaky
+ def test_priority_chain(self, manager):
+ c = return_priority.signature(priority=3) | return_priority.signature(priority=5)
+ assert c().get(timeout=TIMEOUT) == "Priority: 5"
diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -408,6 +408,24 @@ def s(*args, **kwargs):
for task in c.tasks:
assert task.options['link_error'] == [s('error')]
+ def test_apply_options_none(self):
+ class static(Signature):
+
+ def clone(self, *args, **kwargs):
+ return self
+
+ def _apply_async(self, *args, **kwargs):
+ self.args = args
+ self.kwargs = kwargs
+
+ c = static(self.add, (2, 2), type=self.add, app=self.app, priority=5)
+
+ c.apply_async(priority=4)
+ assert c.kwargs['priority'] == 4
+
+ c.apply_async(priority=None)
+ assert c.kwargs['priority'] == 5
+
def test_reverse(self):
x = self.add.s(2, 2) | self.add.s(2)
assert isinstance(signature(x), _chain)
| Chain loses priority
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 3.5.3
* **Minimal Celery Version**: 4.3.0
* **Minimal Kombu Version**: 4.6.1
* **Minimal Broker Version**: 3.6.6-1
* **Minimal OS and/or Kernel Version**: Debian 4.9.168-1+deb9u2
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
from kombu import Queue, Exchange
from celery import Celery
import time
broker = "amqp://admin:admin@localhost:5672/mqfrontend"
app = Celery(
"test celery",
broker=broker
)
app.conf.accept_content = ['json']
app.conf.task_serializer = 'json'
app.conf.result_serializer = 'json'
app.conf.task_ignore_result = True
app.conf.task_routes = {'frontend.*': {'queue': 'first_task', 'routing_key': 'frontend.first_task'}}
app.conf.task_queues = (
Queue('first_task', routing_key='frontend.first_task', queue_arguments={'x-max-priority': 10}),
)
@app.task(name="frontend.first_task", bind=True)
def priority_task(self, arg):
time.sleep(2)
print("PRIORITY: i:%s, p:%s"%(arg, self.request.delivery_info['priority']))
return self.request.delivery_info['priority']
@app.task(name="frontend.second_task", bind=True)
def priorityb_task(self, _, arg):
time.sleep(2)
print("PRIORITYB: i:%s, p:%s"%(arg, self.request.delivery_info['priority']))
return "Test%s"%self.request.delivery_info['priority']
if __name__=='__main__':
import celery
s = celery.chain(
app.signature(
"frontend.first_task",
args=(5,),
priority=5
),
app.signature(
"frontend.second_task",
args=(5,),
priority=5
)
)
s.delay()
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
We expect `first_task` and `second_task` to have a priority of 5 (it works with celery 4.2.1)
Expected output:
```
[tasks]
. frontend.first_task
. frontend.second_task
[2019-06-14 09:24:30,628: INFO/MainProcess] Connected to amqp://admin:**@127.0.0.1:5672/mqfrontend
[2019-06-14 09:24:30,640: INFO/MainProcess] mingle: searching for neighbors
[2019-06-14 09:24:31,673: INFO/MainProcess] mingle: sync with 1 nodes
[2019-06-14 09:24:31,674: INFO/MainProcess] mingle: sync complete
[2019-06-14 09:24:31,717: INFO/MainProcess] celery@ocean ready.
[2019-06-14 09:24:33,799: INFO/MainProcess] Received task: frontend.first_task[1c311ee3-436a-44a0-b849-fa938ec9be96]
[2019-06-14 09:24:35,802: WARNING/ForkPoolWorker-1] PRIORITY: i:5, p:5
[2019-06-14 09:24:35,818: INFO/ForkPoolWorker-1] Task frontend.first_task[1c311ee3-436a-44a0-b849-fa938ec9be96] succeeded in 2.01777482801117s: 5
[2019-06-14 09:24:35,819: INFO/MainProcess] Received task: frontend.second_task[fb1f1871-4b24-4f22-8ead-da74f24285d1]
[2019-06-14 09:24:37,822: WARNING/ForkPoolWorker-1] PRIORITYB: i:5, p:5
[2019-06-14 09:24:37,822: INFO/ForkPoolWorker-1] Task frontend.second_task[fb1f1871-4b24-4f22-8ead-da74f24285d1] succeeded in 2.0030388499144465s: 'Test0'
```
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
`first_task` has the priority of 5, but `second_task` has no priority.
Actual output:
```
[tasks]
. frontend.first_task
. frontend.second_task
[2019-06-14 09:24:30,628: INFO/MainProcess] Connected to amqp://admin:**@127.0.0.1:5672/mqfrontend
[2019-06-14 09:24:30,640: INFO/MainProcess] mingle: searching for neighbors
[2019-06-14 09:24:31,673: INFO/MainProcess] mingle: sync with 1 nodes
[2019-06-14 09:24:31,674: INFO/MainProcess] mingle: sync complete
[2019-06-14 09:24:31,717: INFO/MainProcess] celery@ocean ready.
[2019-06-14 09:24:33,799: INFO/MainProcess] Received task: frontend.first_task[1c311ee3-436a-44a0-b849-fa938ec9be96]
[2019-06-14 09:24:35,802: WARNING/ForkPoolWorker-1] PRIORITY: i:5, p:5
[2019-06-14 09:24:35,818: INFO/ForkPoolWorker-1] Task frontend.first_task[1c311ee3-436a-44a0-b849-fa938ec9be96] succeeded in 2.01777482801117s: 5
[2019-06-14 09:24:35,819: INFO/MainProcess] Received task: frontend.second_task[fb1f1871-4b24-4f22-8ead-da74f24285d1]
[2019-06-14 09:24:37,822: WARNING/ForkPoolWorker-1] PRIORITYB: i:5, p:0
[2019-06-14 09:24:37,822: INFO/ForkPoolWorker-1] Task frontend.second_task[fb1f1871-4b24-4f22-8ead-da74f24285d1] succeeded in 2.0030388499144465s: 'Test0'
```
| If this worked in older versions, could you try this in current master and find the root cause of the regression?
This one looks interesting -- will take a look probably this week.
I don't have a lot of time at the moment; I can look at it at the end of the month.
I think this is perhaps the PR where it broke; granted, I haven't tested reverting it, but it did change chain apply to apply kwargs, which it was ignoring before, so maybe that has something to do with it. I found this by diffing celery 4.2.1 with 4.3.0 (since you said it worked in 4.2.1), specifically looking for usages of chain.
https://github.com/celery/celery/pull/4952/files
Auto-reverting https://github.com/celery/celery/pull/4952 failed.
I would say that some piece of code "abused" the fact that kwargs were ignored, intentionally or not, and the change in #4952 revealed that bad behaviour.
In other words, I don't think the changes in #4952 are the root cause of this problem.
Not sure if this is related, but what happens if you define another Queue with `routing_key='frontend.second_task'`? I see that you've enabled priorities for the first_task but not for the other one.
> Queues can be configured to support priorities by setting the `x-max-priority` argument:
I would think that that's missing from your code.
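A hedged sketch of that suggestion, extending the queue setup from the report so every queue the chain touches has priorities enabled:
```python
app.conf.task_queues = (
    Queue('first_task', routing_key='frontend.first_task',
          queue_arguments={'x-max-priority': 10}),
    Queue('second_task', routing_key='frontend.second_task',
          queue_arguments={'x-max-priority': 10}),
)
```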
I think I have found a similar issue using the current latest released version 4.3.0.
My problem is that I have 2 queues. The first one is a regular one and the second one is a priority queue. I have a task routing set up. Priority task uses a priority queue and a regular task uses a regular queue.
Imagine this setup:
```py
priority_task_signature = priority_task.signature(args=('foo',), options={'priority': 3}, immutable=True)
regular_task_signature = regular_task.signature(args=('bar',), immutable=True)
result = (regular_task_signature | priority_task_signature)() # Creates and calls the chain.
```
Now, what happens is that a chain is created and sent to the RMQ. When the first regular task is done, it checks if there is a chained task. It pops the task from the list and creates a signature out of it. It calls the `apply_async()` and passes some parameters into the priority task signature ([source](https://github.com/celery/celery/blob/master/celery/app/trace.py#L443)). The parameter `task_priority` is set to `None`, because the parent task was sent into a regular queue without any priority. It goes next to [this line](https://github.com/celery/celery/blob/master/celery/canvas.py#L224) where it merges the signature options with the passed options.
The problem is that it overrides the options from the priority task signature I created explicitly. It replaces the value of priority `3` with `None` and sends the task into the RMQ.
I believe this is a bug. Can you confirm?
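As a plain-Python illustration of the merge behaviour described above (the variable names are invented; this only demonstrates the `dict(a, **b)` pattern used at the linked canvas.py line):
```python
signature_options = {'priority': 3}   # set explicitly on the chained signature
passed_options = {'priority': None}   # what the parent task passes along

merged = dict(signature_options, **passed_options)
print(merged)  # {'priority': None} -> the explicit priority 3 is overwritten by None
```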
@olii why does a regular task pass `task_priority=None`? It should just _not pass it_, right?
It's also strange that args/kwargs are merged with options. I would expect that they are handled separately... now there are "reserved argument names"!
The parent task should not pass the priority if the chained task has set it explicitly. You are right, this is unexpected behavior. See the source where the task priority is initialized from the parent task ([source](https://github.com/celery/celery/blob/master/celery/app/trace.py#L368)).
`Args` must be handled because in a chain you can pass the return value of the parent task into the chained task. Since the call uses the task's general `apply_async()` method, it also handles the `kwargs`, but they are always set to `None`; no merge of `kwargs` is done.
did any of you try this with celery==4.4.0rc3?
Thank you for the suggestion with celery `4.4.0rc3`. I just did the test and the issue is still present (https://github.com/celery/celery/issues/5597#issuecomment-524833030).
I've noticed that `_chain.apply()` is using `**dict(self.options, **options)` instead of `**dict(task.options, **options)`, so it loses any options that are set on the tasks in the chain. I'm not sure if it should also include `self.options`, but it seems like the omission of `task.options` could explain the loss of priority and other options (e.g. headers).
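A small illustration of the difference being pointed out, with invented names (the chain's own options win and the per-task options are never consulted):
```python
chain_options = {}                                    # options stored on the chain itself
task_options = {'priority': 3, 'headers': {'h': 1}}   # options stored on a task inside the chain
call_options = {}                                     # options passed to apply()

print(dict(chain_options, **call_options))  # {}  -> per-task priority/headers are lost
print(dict(task_options, **call_options))   # {'priority': 3, 'headers': {'h': 1}}
```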
would you mind sending a PR & mention me to review?
Yep, I looked at the issue and it's due to
https://github.com/celery/celery/blob/b268171d5d7b0ebf634956c0559883f28296c21c/celery/app/trace.py#L446
which set priority to None and then
https://github.com/celery/celery/blob/b268171d5d7b0ebf634956c0559883f28296c21c/celery/canvas.py#L240
which erases the `self.options['priority']` value
Do other primitives suffer from the same issue perhaps? Ie. Group? | 2019-10-02T15:01:41 |
celery/celery | 5,773 | celery__celery-5773 | [
"5772"
] | a6453043fbfa676cca22c4945f3f165ce8eb4ec0 | diff --git a/celery/app/amqp.py b/celery/app/amqp.py
--- a/celery/app/amqp.py
+++ b/celery/app/amqp.py
@@ -130,9 +130,9 @@ def add_compat(self, name, **options):
return self._add(Queue.from_dict(name, **options))
def _add(self, queue):
+ if queue.exchange is None or queue.exchange.name == '':
+ queue.exchange = self.default_exchange
if not queue.routing_key:
- if queue.exchange is None or queue.exchange.name == '':
- queue.exchange = self.default_exchange
queue.routing_key = self.default_routing_key
if self.ha_policy:
if queue.queue_arguments is None:
| diff --git a/t/unit/app/test_amqp.py b/t/unit/app/test_amqp.py
--- a/t/unit/app/test_amqp.py
+++ b/t/unit/app/test_amqp.py
@@ -188,6 +188,35 @@ def test_setting_default_queue(self, name, exchange, rkey):
assert queue.routing_key == rkey or name
+class test_default_exchange:
+
+ @pytest.mark.parametrize('name,exchange,rkey', [
+ ('default', 'foo', None),
+ ('default', 'foo', 'routing_key'),
+ ])
+ def test_setting_default_exchange(self, name, exchange, rkey):
+ q = Queue(name, routing_key=rkey)
+ self.app.conf.task_queues = {q}
+ self.app.conf.task_default_exchange = exchange
+ queues = dict(self.app.amqp.queues)
+ queue = queues[name]
+ assert queue.exchange.name == exchange
+
+ @pytest.mark.parametrize('name,extype,rkey', [
+ ('default', 'direct', None),
+ ('default', 'direct', 'routing_key'),
+ ('default', 'topic', None),
+ ('default', 'topic', 'routing_key'),
+ ])
+ def test_setting_default_exchange_type(self, name, extype, rkey):
+ q = Queue(name, routing_key=rkey)
+ self.app.conf.task_queues = {q}
+ self.app.conf.task_default_exchange_type = extype
+ queues = dict(self.app.amqp.queues)
+ queue = queues[name]
+ assert queue.exchange.type == extype
+
+
class test_AMQP_proto1:
def test_kwargs_must_be_mapping(self):
| task_default_exchange&task_default_exchange_type config not work
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [x] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- #3926
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.3.0
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.3.0 (rhubarb) kombu:4.6.4 py:3.6.0
billiard:3.6.1.0 librabbitmq:2.0.0
platform -> system:Linux arch:64bit, ELF
kernel version:4.19.71-1-lts imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:librabbitmq results:disabled
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 3.6.0
* **Minimal Celery Version**: 4.3.0
* **Minimal Kombu Version**: 4.6.4
* **Minimal Broker Version**: RabbitMQ 3.7.15
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: Linux 4.19.71-1-lts
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
amqp==2.5.1
asn1crypto==0.24.0
atomicwrites==1.3.0
attrs==19.1.0
Automat==0.7.0
backcall==0.1.0
billiard==3.6.1.0
case==1.5.3
celery==4.3.0
cffi==1.12.3
constantly==15.1.0
cryptography==2.7
cssselect==1.1.0
decorator==4.4.0
hyperlink==19.0.0
idna==2.8
importlib-metadata==0.23
incremental==17.5.0
ipython==7.8.0
ipython-genutils==0.2.0
jedi==0.15.1
kombu==4.6.4
librabbitmq==2.0.0
linecache2==1.0.0
lxml==4.4.1
mock==3.0.5
more-itertools==7.2.0
mysqlclient==1.4.4
nose==1.3.7
packaging==19.2
parsel==1.5.2
parso==0.5.1
pexpect==4.7.0
pickleshare==0.7.5
pluggy==0.13.0
prompt-toolkit==2.0.9
ptyprocess==0.6.0
py==1.8.0
pyasn1==0.4.7
pyasn1-modules==0.2.6
pycparser==2.19
PyDispatcher==2.0.5
Pygments==2.4.2
PyHamcrest==1.9.0
pyOpenSSL==19.0.0
pyparsing==2.4.2
pytest==5.2.1
pytz==2019.2
queuelib==1.5.0
Scrapy==1.7.3
scrapy-selenium==0.0.7
selenium==3.141.0
service-identity==18.1.0
six==1.12.0
SQLAlchemy==1.3.8
traceback2==1.4.0
traitlets==4.3.2
Twisted==19.7.0
unittest2==1.1.0
urllib3==1.25.6
vine==1.3.0
w3lib==1.21.0
wcwidth==0.1.7
zipp==0.6.0
zope.interface==4.6.0
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
celeryconfig.py:
```python
broker_url = 'amqp://guest:guest@localhost:5672//'
task_default_queue = 'default'
task_default_exchange = 'tasks'
task_default_exchange_type = 'topic'
task_default_routing_key = 'tasks.default'
task_queues = (
Queue('default', routing_key='tasks.#'),
Queue('test', routing_key='test.#'),
)
```
</p>
<p>
celery.py:
```python
app = Celery('scan_worker')
app.conf.task_default_exchange = 'tasks'
app.conf.task_default_exchange_type = 'topic'
app.config_from_object('test_celery.celeryconfig', force=True)
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
According to the [document](http://docs.celeryproject.org/en/latest/userguide/routing.html#manual-routing):
> If you don't set the exchange or exchange type values for a key, these will be taken from the task_default_exchange and task_default_exchange_type settings
The worker should automatically create queues bound to the exchange defined by `task_default_exchange` and `task_default_exchange_type`.
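Until that works as documented, a possible workaround (a sketch based on the configuration above, assuming it is acceptable to declare the exchange explicitly) is to bind the queues yourself:
```python
from kombu import Exchange, Queue

default_exchange = Exchange('tasks', type='topic')

task_queues = (
    Queue('default', default_exchange, routing_key='tasks.#'),
    Queue('test', default_exchange, routing_key='test.#'),
)
```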
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
The output of the command `celery worker -A test_celery -l info`:
```
-------------- celery@arch v4.3.0 (rhubarb)
---- **** -----
--- * *** * -- Linux-4.19.71-1-lts-x86_64-with-arch 2019-10-10 20:13:55
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: scan_worker:0x7efdc5430a58
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 9 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=(direct) key=tasks.#
.> test exchange=(direct) key=test.#
```
the queues are bound to an exchange that does not match `task_default_exchange` and `task_default_exchange_type`
| 2019-10-10T12:49:36 |
|
celery/celery | 5,795 | celery__celery-5795 | [
"5734"
] | ca83e250107aaad1992e87db594623b8e6698e97 | diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -157,6 +157,10 @@ def _get_connection(self):
# don't change self.options
conf = dict(self.options)
conf['host'] = host
+ if self.user:
+ conf['username'] = self.user
+ if self.password:
+ conf['password'] = self.password
self._connection = MongoClient(**conf)
| diff --git a/t/unit/backends/test_mongodb.py b/t/unit/backends/test_mongodb.py
--- a/t/unit/backends/test_mongodb.py
+++ b/t/unit/backends/test_mongodb.py
@@ -5,6 +5,7 @@
import pytest
from kombu.exceptions import EncodeError
+from pymongo.errors import ConfigurationError
from case import ANY, MagicMock, Mock, mock, patch, sentinel, skip
from celery import states, uuid
@@ -220,6 +221,42 @@ def test_get_connection_no_connection_mongodb_uri(self):
)
assert sentinel.connection == connection
+ def test_get_connection_with_authmechanism(self):
+ with patch('pymongo.MongoClient') as mock_Connection:
+ self.app.conf.mongodb_backend_settings = None
+ uri = ('mongodb://'
+ 'celeryuser:celerypassword@'
+ 'localhost:27017/'
+ 'celerydatabase?authMechanism=SCRAM-SHA-256')
+ mb = MongoBackend(app=self.app, url=uri)
+ mock_Connection.return_value = sentinel.connection
+ connection = mb._get_connection()
+ mock_Connection.assert_called_once_with(
+ host=['localhost:27017'],
+ username='celeryuser',
+ password='celerypassword',
+ authmechanism='SCRAM-SHA-256',
+ **mb._prepare_client_options()
+ )
+ assert sentinel.connection == connection
+
+ def test_get_connection_with_authmechanism_no_username(self):
+ with patch('pymongo.MongoClient') as mock_Connection:
+ self.app.conf.mongodb_backend_settings = None
+ uri = ('mongodb://'
+ 'localhost:27017/'
+ 'celerydatabase?authMechanism=SCRAM-SHA-256')
+ mb = MongoBackend(app=self.app, url=uri)
+ mock_Connection.side_effect = ConfigurationError(
+ 'SCRAM-SHA-256 requires a username.')
+ with pytest.raises(ConfigurationError):
+ mb._get_connection()
+ mock_Connection.assert_called_once_with(
+ host=['localhost:27017'],
+ authmechanism='SCRAM-SHA-256',
+ **mb._prepare_client_options()
+ )
+
@patch('celery.backends.mongodb.MongoBackend._get_connection')
def test_get_database_no_existing(self, mock_get_connection):
# Should really check for combinations of these two, to be complete.
| Celery does not consider authMechanism on mongodb backend URLs
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- #4454
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.3.0 with fixes in PR #5527
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.3.0 (rhubarb) kombu:4.6.4 py:3.7.4
billiard:3.6.1.0 py-amqp:2.5.1
platform -> system:Windows arch:64bit, WindowsPE
kernel version:10 imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
```
</p>
</details>
# Steps to Reproduce
1. Give Celery a Backend URL pointing to a MongoDB like below:
mongodb://admin:[email protected]/task?authSource=admin&authMechanism=SCRAM-SHA-256
2. Start Celery worker.
3. Send any task.
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 3.0
* **Minimal Celery Version**: 3.0
* **Minimal Kombu Version**: Unknown
* **Minimal Broker Version**: Unknown
* **Minimal Result Backend Version**: MongoDB 4.0
* **Minimal OS and/or Kernel Version**: N/A
* **Minimal Broker Client Version**: Unknown
* **Minimal Result Backend Client Version**: Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
amqp==2.5.1
asn1crypto==0.24.0
astroid==2.2.5
bcrypt==3.1.7
billiard==3.6.1.0
celery==4.3.0
cffi==1.12.3
Click==7.0
colorama==0.4.1
cryptography==2.7
irectory=backend
Flask==1.1.1
importlib-metadata==0.19
isort==4.3.21
itsdangerous==1.1.0
Jinja2==2.10.1
kombu==4.6.4
lazy-object-proxy==1.4.2
MarkupSafe==1.1.1
mccabe==0.6.1
more-itertools==7.2.0
paramiko==2.6.0
pycparser==2.19
pylint==2.3.1
pymodm==0.4.1
pymongo==3.9.0
PyNaCl==1.3.0
pytz==2019.2
six==1.12.0
typed-ast==1.4.0
vine==1.3.0
Werkzeug==0.15.5
wrapt==1.11.2
zipp==0.6.0
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<summary><b>MongoDB:</b></summary>
<p>
Version: >= 3.0.0
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
Support all the authentication options listed at [authentication-options](https://docs.mongodb.com/manual/reference/connection-string/#authentication-options), including _authSource_, _authMechanism_, _authMechanismProperties_ and _gssapiServiceName_.
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
An error occurred as below.
```python
Traceback (most recent call last):
File "D:\GitHub\org\dev\.venv\lib\site-packages\kombu\utils\objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'collection'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\GitHub\org\dev\.venv\lib\site-packages\kombu\utils\objects.py", line 42, in __get__
return obj.__dict__[self.__name__]
KeyError: 'database'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\user\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\user\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\lib\python\ptvsd\__main__.py", line 432, in main
run()
File "c:\Users\user\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\lib\python\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\Program Files\Python37\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Program Files\Python37\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Program Files\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "d:\GitHub\org\dev\backend\tests\workertest.py", line 22, in <module>
print('Task finished? ', result.ready())
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\result.py", line 313, in ready
return self.state in self.backend.READY_STATES
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\result.py", line 473, in state
return self._get_task_meta()['status']
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\result.py", line 412, in _get_task_meta
return self._maybe_set_cache(self.backend.get_task_meta(self.id))
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\backends\base.py", line 386, in get_task_meta
meta = self._get_task_meta_for(task_id)
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\backends\mongodb.py", line 206, in _get_task_meta_for
obj = self.collection.find_one({'_id': task_id})
File "D:\GitHub\org\dev\.venv\lib\site-packages\kombu\utils\objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\backends\mongodb.py", line 293, in collection
collection = self.database[self.taskmeta_collection]
File "D:\GitHub\org\dev\.venv\lib\site-packages\kombu\utils\objects.py", line 44, in __get__
value = obj.__dict__[self.__name__] = self.__get(obj)
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\backends\mongodb.py", line 288, in database
return self._get_database()
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\backends\mongodb.py", line 271, in _get_database
conn = self._get_connection()
File "D:\GitHub\org\dev\.venv\lib\site-packages\celery\backends\mongodb.py", line 161, in _get_connection
self._connection = MongoClient(**conf)
File "D:\GitHub\org\dev\.venv\lib\site-packages\pymongo\mongo_client.py", line 668, in __init__
username, password, dbase, opts)
File "D:\GitHub\org\dev\.venv\lib\site-packages\pymongo\client_options.py", line 151, in __init__
username, password, database, options)
File "D:\GitHub\org\dev\.venv\lib\site-packages\pymongo\client_options.py", line 39, in _parse_credentials
mechanism, source, username, password, options, database)
File "D:\GitHub\org\dev\.venv\lib\site-packages\pymongo\auth.py", line 107, in _build_credentials_tuple
raise ConfigurationError("%s requires a username." % (mech,))
pymongo.errors.ConfigurationError: SCRAM-SHA-256 requires a username.
```
| raise ConfigurationError("%s requires a username." % (mech,))
pymongo.errors.ConfigurationError: SCRAM-SHA-256 requires a username. did you provide the username?
@auvipy, thanks for your reply. The username has been provided in the connection string as 'mongodb://admin:[email protected]/task?authSource=admin&authMechanism=SCRAM-SHA-256'. A username must be provided when 'authMechanism' is used, but it is missing when MongoClient is called at line 161 of 'celery\backends\mongodb.py'.
```python
self._connection = MongoClient(**conf)
```
The value of conf is {'maxPoolSize': 10, 'authsource': 'admin', 'authmechanism': 'SCRAM-SHA-256', 'host': ['192.168.56.104:27017']}, there are no 'username' and 'password'. I've found a solution to fix this issue, will create a pull request later.
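The patch at the top of this entry takes essentially that approach; a minimal sketch of the idea (assuming pymongo 3.x, where `MongoClient` accepts `username` and `password` keyword arguments):
```python
from pymongo import MongoClient

def get_connection(conf, user=None, password=None):
    # conf is the parsed option dict shown above; the credentials parsed from
    # the backend URL have to be passed explicitly, otherwise authMechanism
    # fails with "requires a username".
    if user:
        conf['username'] = user
    if password:
        conf['password'] = password
    return MongoClient(**conf)
```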
| 2019-10-25T07:10:28 |
celery/celery | 5,820 | celery__celery-5820 | [
"5654"
] | 0346f77323ab1f51f463eebbc2d5a4920d3d0bbe | diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -595,10 +595,13 @@ def signature_from_request(self, request=None, args=None, kwargs=None,
args = request.args if args is None else args
kwargs = request.kwargs if kwargs is None else kwargs
options = request.as_execution_options()
+ delivery_info = request.delivery_info or {}
+ priority = delivery_info.get('priority')
+ if priority is not None:
+ options['priority'] = priority
if queue:
options['queue'] = queue
else:
- delivery_info = request.delivery_info or {}
exchange = delivery_info.get('exchange')
routing_key = delivery_info.get('routing_key')
if exchange == '' and routing_key:
diff --git a/t/integration/tasks.py b/t/integration/tasks.py
--- a/t/integration/tasks.py
+++ b/t/integration/tasks.py
@@ -145,6 +145,15 @@ def retry_once(self, *args, expires=60.0, max_retries=1, countdown=0.1):
max_retries=max_retries)
+@shared_task(bind=True, expires=60.0, max_retries=1)
+def retry_once_priority(self, *args, expires=60.0, max_retries=1, countdown=0.1):
+ """Task that fails and is retried. Returns the priority."""
+ if self.request.retries:
+ return self.request.delivery_info['priority']
+ raise self.retry(countdown=countdown,
+ max_retries=max_retries)
+
+
@shared_task
def redis_echo(message):
"""Task that appends the message to a redis list."""
| diff --git a/t/integration/test_tasks.py b/t/integration/test_tasks.py
--- a/t/integration/test_tasks.py
+++ b/t/integration/test_tasks.py
@@ -5,7 +5,7 @@
from celery import group
from .conftest import get_active_redis_channels
-from .tasks import add, add_ignore_result, print_unicode, retry_once, sleeping
+from .tasks import add, add_ignore_result, print_unicode, retry_once, retry_once_priority, sleeping
class test_tasks:
@@ -21,6 +21,11 @@ def test_task_retried(self):
res = retry_once.delay()
assert res.get(timeout=10) == 1 # retried once
+ @pytest.mark.flaky(reruns=5, reruns_delay=2)
+ def test_task_retried_priority(self):
+ res = retry_once_priority.apply_async(priority=7)
+ assert res.get(timeout=10) == 7 # retried once with priority 7
+
@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_unicode_task(self, manager):
manager.join(
diff --git a/t/unit/tasks/test_tasks.py b/t/unit/tasks/test_tasks.py
--- a/t/unit/tasks/test_tasks.py
+++ b/t/unit/tasks/test_tasks.py
@@ -212,6 +212,22 @@ def test_retry(self):
self.retry_task.apply([0xFF, 0xFFFF], {'max_retries': 10})
assert self.retry_task.iterations == 11
+ def test_retry_priority(self):
+ priority = 7
+
+ # Technically, task.priority doesn't need to be set here
+ # since push_request() doesn't populate the delivery_info
+ # with it. However, setting task.priority here also doesn't
+ # cause any problems.
+ self.retry_task.priority = priority
+
+ self.retry_task.push_request()
+ self.retry_task.request.delivery_info = {
+ 'priority': priority
+ }
+ sig = self.retry_task.signature_from_request()
+ assert sig.options['priority'] == priority
+
def test_retry_no_args(self):
self.retry_task_noargs.max_retries = 3
self.retry_task_noargs.iterations = 0
| task priority disappear after self.retry()
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [x] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [x] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- #5597
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.3.0 (rhubarb)
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.3.0 (rhubarb) kombu:4.4.0 py:3.6.7
billiard:3.6.0.0 py-amqp:2.4.1
platform -> system:Linux arch:64bit
kernel version:4.18.0-25-generic imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:disabled
task_queues: [<unbound Queue test -> <unbound Exchange test(direct)> -> test>]
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 3.6.7
* **Minimal Celery Version**: 4.3.0
* **Minimal Kombu Version**: 4.4.0
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
import celery
from kombu import Exchange
from kombu import Queue
class Config:
task_queues = [
Queue('test', Exchange('test'), routing_key='test', queue_arguments={'x-max-priority': 3})
] # yapf: disable
app = celery.Celery('test')
app.config_from_object(Config)
@app.task(bind=True)
def task(self):
print(self.request.delivery_info['priority'])
self.retry(countdown=1)
if __name__ == '__main__':
task.s().apply_async(priority=1, queue='test')
```
</p>
</details>
# Expected Behavior
Expect the task to have priority 1 after self.retry()
Expected output:
```[python]
[tasks]
. test.task
[2019-07-24 22:31:50,810: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2019-07-24 22:31:50,823: INFO/MainProcess] mingle: searching for neighbors
[2019-07-24 22:31:51,855: INFO/MainProcess] mingle: all alone
[2019-07-24 22:31:51,869: INFO/MainProcess] celery@ArtSobes-Home-Ubuntu-18 ready.
[2019-07-24 22:31:51,870: INFO/MainProcess] Received task: test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2]
[2019-07-24 22:31:51,973: WARNING/ForkPoolWorker-16] 1
[2019-07-24 22:31:51,995: INFO/MainProcess] Received task: test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2] ETA:[2019-07-24 19:31:52.974357+00:00]
[2019-07-24 22:31:51,995: INFO/ForkPoolWorker-16] Task test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2] retry: Retry in 1s
[2019-07-24 22:31:54,828: WARNING/ForkPoolWorker-2] 1
[2019-07-24 22:31:54,850: INFO/MainProcess] Received task: test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2] ETA:[2019-07-24 19:31:55.829945+00:00]
[2019-07-24 22:31:54,850: INFO/ForkPoolWorker-2] Task test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2] retry: Retry in 1s
[2019-07-24 22:31:56,831: WARNING/ForkPoolWorker-4] 1
[2019-07-24 22:31:56,853: INFO/MainProcess] Received task: test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2] ETA:[2019-07-24 19:31:57.832523+00:00]
[2019-07-24 22:31:56,853: INFO/ForkPoolWorker-4] Task test.task[df27b1cc-6a9e-4fdb-aa46-dd02d15e4df2] retry: Retry in 1s
[2019-07-24 22:31:58,833: WARNING/ForkPoolWorker-6] 1
```
# Actual Behavior
On the first call task priority is 1, but after retry it is None
```[python]
[tasks]
. test.task
[2019-07-24 22:30:51,901: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2019-07-24 22:30:51,913: INFO/MainProcess] mingle: searching for neighbors
[2019-07-24 22:30:52,940: INFO/MainProcess] mingle: all alone
[2019-07-24 22:30:52,956: INFO/MainProcess] celery@ArtSobes-Home-Ubuntu-18 ready.
[2019-07-24 22:30:52,957: INFO/MainProcess] Received task: test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e]
[2019-07-24 22:30:53,060: WARNING/ForkPoolWorker-16] 1
[2019-07-24 22:30:53,083: INFO/MainProcess] Received task: test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e] ETA:[2019-07-24 19:30:54.062046+00:00]
[2019-07-24 22:30:53,083: INFO/ForkPoolWorker-16] Task test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e] retry: Retry in 1s
[2019-07-24 22:30:54,958: WARNING/ForkPoolWorker-2] None
[2019-07-24 22:30:54,979: INFO/MainProcess] Received task: test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e] ETA:[2019-07-24 19:30:55.959481+00:00]
[2019-07-24 22:30:54,979: INFO/ForkPoolWorker-2] Task test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e] retry: Retry in 1s
[2019-07-24 22:30:56,960: WARNING/ForkPoolWorker-4] None
[2019-07-24 22:30:56,982: INFO/MainProcess] Received task: test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e] ETA:[2019-07-24 19:30:57.962134+00:00]
[2019-07-24 22:30:56,983: INFO/ForkPoolWorker-4] Task test.task[bc93cbd9-9a20-41cb-aaef-55c71245038e] retry: Retry in 1s
[2019-07-24 22:30:58,963: WARNING/ForkPoolWorker-6] None
```
| Hi, it looks like the problem could be potentially fixed here:
https://github.com/celery/celery/blob/8e34a67bdb95009df759d45c7c0d725c9c46e0f4/celery/app/task.py#L113
One might copy the priority here as well.
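A sketch of that idea, roughly what the patch at the top of this entry ends up doing inside `Task.signature_from_request()` (the function name below is invented; treat it as an outline, not the exact code):
```python
def copy_priority_into_options(request, options):
    """Sketch: carry the delivery priority of the original message over to
    the options used to re-publish the task on retry."""
    delivery_info = request.delivery_info or {}
    priority = delivery_info.get('priority')
    if priority is not None:
        options['priority'] = priority
    return options
```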
can you come up with a PR? | 2019-11-08T20:27:45 |
celery/celery | 5,869 | celery__celery-5869 | [
"4684"
] | cf829307991da3815e1f7b105e736d13dbc7a325 | diff --git a/celery/app/base.py b/celery/app/base.py
--- a/celery/app/base.py
+++ b/celery/app/base.py
@@ -460,11 +460,24 @@ def _task_from_fun(self, fun, name=None, base=None, bind=False, **options):
self._tasks[task.name] = task
task.bind(self) # connects task to this app
- autoretry_for = tuple(options.get('autoretry_for', ()))
- retry_kwargs = options.get('retry_kwargs', {})
- retry_backoff = int(options.get('retry_backoff', False))
- retry_backoff_max = int(options.get('retry_backoff_max', 600))
- retry_jitter = options.get('retry_jitter', True)
+ autoretry_for = tuple(
+ options.get('autoretry_for',
+ getattr(task, 'autoretry_for', ()))
+ )
+ retry_kwargs = options.get(
+ 'retry_kwargs', getattr(task, 'retry_kwargs', {})
+ )
+ retry_backoff = int(
+ options.get('retry_backoff',
+ getattr(task, 'retry_backoff', False))
+ )
+ retry_backoff_max = int(
+ options.get('retry_backoff_max',
+ getattr(task, 'retry_backoff_max', 600))
+ )
+ retry_jitter = options.get(
+ 'retry_jitter', getattr(task, 'retry_jitter', True)
+ )
if autoretry_for and not hasattr(task, '_orig_run'):
| diff --git a/t/unit/tasks/test_tasks.py b/t/unit/tasks/test_tasks.py
--- a/t/unit/tasks/test_tasks.py
+++ b/t/unit/tasks/test_tasks.py
@@ -43,6 +43,14 @@ class TaskWithPriority(Task):
priority = 10
+class TaskWithRetry(Task):
+ autoretry_for = (TypeError,)
+ retry_kwargs = {'max_retries': 5}
+ retry_backoff = True
+ retry_backoff_max = 700
+ retry_jitter = False
+
+
class TasksCase:
def setup(self):
@@ -152,6 +160,81 @@ def autoretry_backoff_jitter_task(self, url):
self.autoretry_backoff_jitter_task = autoretry_backoff_jitter_task
+ @self.app.task(bind=True, base=TaskWithRetry, shared=False)
+ def autoretry_for_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.autoretry_for_from_base_task = autoretry_for_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry,
+ autoretry_for=(ZeroDivisionError,), shared=False)
+ def override_autoretry_for_from_base_task(self, a, b):
+ self.iterations += 1
+ return a / b
+
+ self.override_autoretry_for = override_autoretry_for_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry, shared=False)
+ def retry_kwargs_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.retry_kwargs_from_base_task = retry_kwargs_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry,
+ retry_kwargs={'max_retries': 2}, shared=False)
+ def override_retry_kwargs_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.override_retry_kwargs = override_retry_kwargs_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry, shared=False)
+ def retry_backoff_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.retry_backoff_from_base_task = retry_backoff_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry,
+ retry_backoff=False, shared=False)
+ def override_retry_backoff_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.override_retry_backoff = override_retry_backoff_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry, shared=False)
+ def retry_backoff_max_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.retry_backoff_max_from_base_task = retry_backoff_max_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry,
+ retry_backoff_max=16, shared=False)
+ def override_retry_backoff_max_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.override_backoff_max = override_retry_backoff_max_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry, shared=False)
+ def retry_backoff_jitter_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.retry_backoff_jitter_from_base = retry_backoff_jitter_from_base_task
+
+ @self.app.task(bind=True, base=TaskWithRetry,
+ retry_jitter=True, shared=False)
+ def override_backoff_jitter_from_base_task(self, a, b):
+ self.iterations += 1
+ return a + b
+
+ self.override_backoff_jitter = override_backoff_jitter_from_base_task
+
@self.app.task(bind=True)
def task_check_request_context(self):
assert self.request.hostname == socket.gethostname()
@@ -373,6 +456,94 @@ def test_autoretry_backoff_jitter(self, randrange):
]
assert retry_call_countdowns == [0, 1, 3, 7]
+ def test_autoretry_for_from_base(self):
+ self.autoretry_for_from_base_task.iterations = 0
+ self.autoretry_for_from_base_task.apply((1, "a"))
+ assert self.autoretry_for_from_base_task.iterations == 6
+
+ def test_override_autoretry_for_from_base(self):
+ self.override_autoretry_for.iterations = 0
+ self.override_autoretry_for.apply((1, 0))
+ assert self.override_autoretry_for.iterations == 6
+
+ def test_retry_kwargs_from_base(self):
+ self.retry_kwargs_from_base_task.iterations = 0
+ self.retry_kwargs_from_base_task.apply((1, "a"))
+ assert self.retry_kwargs_from_base_task.iterations == 6
+
+ def test_override_retry_kwargs_from_base(self):
+ self.override_retry_kwargs.iterations = 0
+ self.override_retry_kwargs.apply((1, "a"))
+ assert self.override_retry_kwargs.iterations == 3
+
+ def test_retry_backoff_from_base(self):
+ task = self.retry_backoff_from_base_task
+ task.iterations = 0
+ with patch.object(task, 'retry', wraps=task.retry) as fake_retry:
+ task.apply((1, "a"))
+
+ assert task.iterations == 6
+ retry_call_countdowns = [
+ call[1]['countdown'] for call in fake_retry.call_args_list
+ ]
+ assert retry_call_countdowns == [1, 2, 4, 8, 16, 32]
+
+ @patch('celery.app.base.get_exponential_backoff_interval')
+ def test_override_retry_backoff_from_base(self, backoff):
+ self.override_retry_backoff.iterations = 0
+ self.override_retry_backoff.apply((1, "a"))
+ assert self.override_retry_backoff.iterations == 6
+ assert backoff.call_count == 0
+
+ def test_retry_backoff_max_from_base(self):
+ task = self.retry_backoff_max_from_base_task
+ task.iterations = 0
+ with patch.object(task, 'retry', wraps=task.retry) as fake_retry:
+ task.apply((1, "a"))
+
+ assert task.iterations == 6
+ retry_call_countdowns = [
+ call[1]['countdown'] for call in fake_retry.call_args_list
+ ]
+ assert retry_call_countdowns == [1, 2, 4, 8, 16, 32]
+
+ def test_override_retry_backoff_max_from_base(self):
+ task = self.override_backoff_max
+ task.iterations = 0
+ with patch.object(task, 'retry', wraps=task.retry) as fake_retry:
+ task.apply((1, "a"))
+
+ assert task.iterations == 6
+ retry_call_countdowns = [
+ call[1]['countdown'] for call in fake_retry.call_args_list
+ ]
+ assert retry_call_countdowns == [1, 2, 4, 8, 16, 16]
+
+ def test_retry_backoff_jitter_from_base(self):
+ task = self.retry_backoff_jitter_from_base
+ task.iterations = 0
+ with patch.object(task, 'retry', wraps=task.retry) as fake_retry:
+ task.apply((1, "a"))
+
+ assert task.iterations == 6
+ retry_call_countdowns = [
+ call[1]['countdown'] for call in fake_retry.call_args_list
+ ]
+ assert retry_call_countdowns == [1, 2, 4, 8, 16, 32]
+
+ @patch('random.randrange', side_effect=lambda i: i - 2)
+ def test_override_backoff_jitter_from_base(self, randrange):
+ task = self.override_backoff_jitter
+ task.iterations = 0
+ with patch.object(task, 'retry', wraps=task.retry) as fake_retry:
+ task.apply((1, "a"))
+
+ assert task.iterations == 6
+ retry_call_countdowns = [
+ call[1]['countdown'] for call in fake_retry.call_args_list
+ ]
+ assert retry_call_countdowns == [0, 1, 3, 7, 15, 31]
+
def test_retry_wrong_eta_when_not_enable_utc(self):
"""Issue #3753"""
self.app.conf.enable_utc = False
| autoretry_for and retry_kwargs for class-based tasks
python==3.6.4
celery==4.1.0
## Steps to reproduce
```
import celery
from project.celery import app
class test_task(celery.Task):
autoretry_for = (Exception,)
retry_kwargs = {
'max_retries': 5,
'countdown': 10,
}
def run(self, *args, **kwargs):
raise Exception('test')
test_task = app.register_task(test_task())
test_task.delay()
```
## Expected behavior
5 retries of this task with countdown of 10 seconds each.
## Actual behavior
No retries at all.
So the main question: is there any way to specify autoretry_for and retry_kwargs for class-based tasks? Just to mention, function-based tasks work properly with the shared_task decorator and these arguments.
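For comparison, the decorator-based form that does honour these options (a sketch; `app` is the same Celery instance imported above):
```python
@app.task(bind=True, autoretry_for=(Exception,),
          retry_kwargs={'max_retries': 5, 'countdown': 10})
def test_task_fn(self):
    raise Exception('test')
```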
| Exactly the same issue for me. It looks like the `autoretry_for` attribute is only touched in `_task_from_fun`, which isn't used for a class based task. At least it looks like this. Is there a special reason or was this just forgotten?
@CompadreP maybe as a little workaround a class similar to this could help you:
```python
import random
from functools import wraps

from celery import Task

class AutoRetryTask(Task):
default_retry_delay = 3
max_retries = 5
autoretry_for = None
retry_kwargs = {}
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.autoretry_for and not hasattr(self, '_orig_run'):
@wraps(self.run)
def run(*args, **kwargs):
try:
return self._orig_run(*args, **kwargs)
except self.autoretry_for as exc:
if 'countdown' not in self.retry_kwargs:
countdown = int(random.uniform(2, 4) ** self.request.retries)
retry_kwargs = self.retry_kwargs.copy()
retry_kwargs.update({'countdown': countdown})
else:
retry_kwargs = self.retry_kwargs
raise self.retry(exc=exc, **retry_kwargs)
self._orig_run, self.run = self.run, run
```
I think this would be a very neat feature (unless we want to actively discourage the use of class based tasks altogether) - if one of the maintainers could mention their opinion on this (and the `Is there a special reason or was this just forgotten?` question), I could take a stab at implementing it.
Any update on this?
I was able to fix that easily by overriding Celery task decorator:
```
import requests
from celery import Celery

# BaseTask here is the commenter's own base task class (definition not shown).
class CustomCelery(Celery):
def task(self, *args, **opts):
# Adds autoretry kwargs to @celery.task() decorators
if opts.get('base') == BaseTask:
opts['autoretry_for'] = (requests.ConnectionError, ConnectionError)
opts['retry_kwargs'] = {'max_retries': 5}
opts['retry_backoff'] = True
return super().task(*args, **opts)
```
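A possible usage sketch for that workaround (hypothetical; `BaseTask` and `requests` come from that snippet's own context):
```python
app = CustomCelery('proj', broker='amqp://guest@localhost//')

@app.task(base=BaseTask)
def fetch_status(url):
    # Because base=BaseTask, the overridden decorator injects the
    # autoretry options before registering the task.
    return requests.get(url).status_code
```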
@amir-hadi Thanks for the idea. With small modifications I've got working retries with exponential backoff.
Celery 4.3.0
Python 3.7.4
```python
from celery import Task
from celery.utils.time import get_exponential_backoff_interval
class AutoRetryTask(Task):
retry_kwargs = {
'max_retries': 5,
}
retry_backoff = True
retry_backoff_max = 600
retry_jitter = True
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.autoretry_for and not hasattr(self, '_orig_run'):
def run(*args, **kwargs):
try:
return self._orig_run(*args, **kwargs)
except self.autoretry_for as exc:
if 'countdown' not in self.retry_kwargs:
countdown = get_exponential_backoff_interval(
factor=self.retry_backoff,
retries=self.request.retries,
maximum=self.retry_backoff_max,
full_jitter=self.retry_jitter,
)
retry_kwargs = self.retry_kwargs.copy()
retry_kwargs.update({'countdown': countdown})
else:
retry_kwargs = self.retry_kwargs
retry_kwargs.update({'exc': exc})
raise self.retry(**retry_kwargs)
self._orig_run, self.run = self.run, run
```
Further improvements:
* Make it a base task
* Define `retry_*` properties from some config object/file | 2019-12-11T19:38:57 |
celery/celery | 5,870 | celery__celery-5870 | [
"4843"
] | cf829307991da3815e1f7b105e736d13dbc7a325 | diff --git a/celery/events/receiver.py b/celery/events/receiver.py
--- a/celery/events/receiver.py
+++ b/celery/events/receiver.py
@@ -90,7 +90,8 @@ def capture(self, limit=None, timeout=None, wakeup=True):
unless :attr:`EventDispatcher.should_stop` is set to True, or
forced via :exc:`KeyboardInterrupt` or :exc:`SystemExit`.
"""
- return list(self.consume(limit=limit, timeout=timeout, wakeup=wakeup))
+ for _ in self.consume(limit=limit, timeout=timeout, wakeup=wakeup):
+ pass
def wakeup_workers(self, channel=None):
self.app.control.broadcast('heartbeat',
| Continuous memory leak
There is a memory leak in the parent process of Celery's worker.
It is not in the child processes that execute tasks.
It starts suddenly every few days.
Unless you stop Celery, it consumes the server's memory within tens of hours.
This problem happens at least in Celery 4.1, and it also occurs in Celery 4.2.
Celery is running on Ubuntu 16 and brokers use RabbitMQ.

| Are you using Canvas workflows? Maybe #4839 is related.
Also I assume you are using prefork pool for worker concurrency?
Thanks georgepsarakis.
I am not using workflows.
I use prefork with concurrency 1 on a single server.
The increase rate seems quite linear, quite weird. Is the worker processing tasks during this time period? Also, can you add a note with the complete command you are using to start the worker?
Yes. The worker continues to process the task normally.
The worker is started with the following command.
`/xxxxxxxx/bin/celery worker --app=xxxxxxxx --loglevel=INFO --pidfile=/var/run/xxxxxxxx.pid`
This problem is occurring in both the production environment and the test environment.
I can add memory profiling and test output in the test environment.
If there is anything I can do, please let me know.
We need to understand what the worker is running during the time that the memory increase is observed. Any information and details you can possibly provide would definitely help. It is also good that you can reproduce this.
Although this case occurred at a different time than the one in the graph, the following log was output at the time the memory leak started.
```
[2018-02-24 07:50:52,953: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/xxxxxxxx/lib/python3.5/site-packages/celery/worker/consumer/consumer.py", line 320, in start
blueprint.start(self)
File "/xxxxxxxx/lib/python3.5/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/xxxxxxxx/lib/python3.5/site-packages/celery/worker/consumer/consumer.py", line 596, in start
c.loop(*c.loop_args())
File "/xxxxxxxx/lib/python3.5/site-packages/celery/worker/loops.py", line 88, in asynloop
next(loop)
File "/xxxxxxxx/lib/python3.5/site-packages/kombu/async/hub.py", line 293, in create_loop
poll_timeout = fire_timers(propagate=propagate) if scheduled else 1
File "/xxxxxxxx/lib/python3.5/site-packages/kombu/async/hub.py", line 136, in fire_timers
entry()
File "/xxxxxxxx/lib/python3.5/site-packages/kombu/async/timer.py", line 68, in __call__
return self.fun(*self.args, **self.kwargs)
File "/xxxxxxxx/lib/python3.5/site-packages/kombu/async/timer.py", line 127, in _reschedules
return fun(*args, **kwargs)
File "/xxxxxxxx/lib/python3.5/site-packages/kombu/connection.py", line 290, in heartbeat_check
return self.transport.heartbeat_check(self.connection, rate=rate)
File "/xxxxxxxx/lib/python3.5/site-packages/kombu/transport/pyamqp.py", line 149, in heartbeat_check
return connection.heartbeat_tick(rate=rate)
File "/xxxxxxxx/lib/python3.5/site-packages/amqp/connection.py", line 696, in heartbeat_tick
self.send_heartbeat()
File "/xxxxxxxx/lib/python3.5/site-packages/amqp/connection.py", line 647, in send_heartbeat
self.frame_writer(8, 0, None, None, None)
File "/xxxxxxxx/lib/python3.5/site-packages/amqp/method_framing.py", line 166, in write_frame
write(view[:offset])
File "/xxxxxxxx/lib/python3.5/site-packages/amqp/transport.py", line 258, in write
self._write(s)
ConnectionResetError: [Errno 104] Connection reset by peer
[2018-02-24 08:49:12,016: INFO/MainProcess] Connected to amqp://xxxxxxxx:**@xxx.xxx.xxx.xxx:5672/xxxxxxxx
```
It seems that it occurred when the connection with RabbitMQ was temporarily cut off.
@marvelph so it occurs during RabbitMQ reconnections? Perhaps these issues are related:
- https://github.com/celery/kombu/issues/843
- https://github.com/celery/celery/pull/4839#issuecomment-399633253
Yes.
It seems that reconnection triggers it.
It looks like I'm having the same issue... It is so hard for me to find out what triggers it and why there is a memory leak. It has annoyed me for at least a month. I fell back to Celery 3 and everything is fine.
For the memory leak issue, I'm using Ubuntu 16 and Celery 4.1.0 with RabbitMQ. I deployed it via Docker.
The memory leak is in the MainProcess, not the ForkPoolWorker processes. The memory usage of the ForkPoolWorker is normal, but the memory usage of the MainProcess is always increasing. Roughly 0.1 MB of memory is leaked every five seconds. The memory leak doesn't start immediately after the worker starts, but maybe after one or two days.
I used gdb and pyrasite to inject into the running process and tried `gc.collect()`, but nothing is collected.
I checked the log; `consumer: Connection to broker lost. Trying to re-establish the connection...` did happen, but for now I'm not sure that is the moment the memory leak happens.
Any hints for debugging this issue and finding out what really happens? Thanks.
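One way to dig further is the standard library's tracemalloc; a rough sketch (it has to run inside the worker's main process, e.g. started from a custom module or injected with pyrasite, and it only tracks allocations made after `start()`):
```python
import tracemalloc

tracemalloc.start(25)                      # keep up to 25 frames per allocation
baseline = tracemalloc.take_snapshot()

# ... later, after memory has grown for a while:
current = tracemalloc.take_snapshot()
for stat in current.compare_to(baseline, 'lineno')[:10]:
    print(stat)                            # top growing allocation sites
```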
Since @marvelph mentioned it may be related to RabbitMQ reconnection, I tried stopping my RabbitMQ server. The memory usage did increase after each reconnection; the log follows. So I can confirm the https://github.com/celery/kombu/issues/843 issue.
But after the connection is re-established, the memory usage stops gradually increasing. So I'm not sure this is the cause of the memory leak.
I will try using Redis to figure out whether this memory leak issue is related to RabbitMQ or not.
```
[2018-06-25 02:43:33,456: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 316, in start
blueprint.start(self)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 592, in start
c.loop(*c.loop_args())
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/loops.py", line 91, in asynloop
next(loop)
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/asynchronous/hub.py", line 354, in create_loop
cb(*cbargs)
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/transport/base.py", line 236, in on_readable
reader(loop)
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/transport/base.py", line 218, in _read
drain_events(timeout=0)
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/connection.py", line 491, in drain_events
while not self.blocking_read(timeout):
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/connection.py", line 496, in blocking_read
frame = self.transport.read_frame()
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/transport.py", line 243, in read_frame
frame_header = read(7, True)
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/transport.py", line 418, in _read
s = recv(n - len(rbuf))
ConnectionResetError: [Errno 104] Connection reset by peer
[2018-06-25 02:43:33,497: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 2.00 seconds...
[2018-06-25 02:43:35,526: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 4.00 seconds...
[2018-06-25 02:43:39,560: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 6.00 seconds...
[2018-06-25 02:43:45,599: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 8.00 seconds...
[2018-06-25 02:43:53,639: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 10.00 seconds...
[2018-06-25 02:44:03,680: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 12.00 seconds...
[2018-06-25 02:44:15,743: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 14.00 seconds...
[2018-06-25 02:44:29,790: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 16.00 seconds...
[2018-06-25 02:44:45,839: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 18.00 seconds...
[2018-06-25 02:45:03,890: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 20.00 seconds...
[2018-06-25 02:45:23,943: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 22.00 seconds...
[2018-06-25 02:45:46,002: ERROR/MainProcess] consumer: Cannot connect to amqp://***:**@***:***/***: [Errno 111] Connection refused.
Trying again in 24.00 seconds...
[2018-06-25 02:46:10,109: INFO/MainProcess] Connected to amqp://***:**@***:***/***
[2018-06-25 02:46:10,212: INFO/MainProcess] mingle: searching for neighbors
[2018-06-25 02:46:10,291: WARNING/MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 316, in start
blueprint.start(self)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/consumer/mingle.py", line 40, in start
self.sync(c)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/consumer/mingle.py", line 44, in sync
replies = self.send_hello(c)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/worker/consumer/mingle.py", line 57, in send_hello
replies = inspect.hello(c.hostname, our_revoked._data) or {}
File "/app/.heroku/python/lib/python3.6/site-packages/celery/app/control.py", line 132, in hello
return self._request('hello', from_node=from_node, revoked=revoked)
File "/app/.heroku/python/lib/python3.6/site-packages/celery/app/control.py", line 84, in _request
timeout=self.timeout, reply=True,
File "/app/.heroku/python/lib/python3.6/site-packages/celery/app/control.py", line 439, in broadcast
limit, callback, channel=channel,
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/pidbox.py", line 315, in _broadcast
serializer=serializer)
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/pidbox.py", line 290, in _publish
serializer=serializer,
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py", line 203, in _publish
mandatory=mandatory, immediate=immediate,
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/channel.py", line 1732, in _basic_publish
(0, exchange, routing_key, mandatory, immediate), msg
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/abstract_channel.py", line 50, in send_method
conn.frame_writer(1, self.channel_id, sig, args, content)
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/method_framing.py", line 166, in write_frame
write(view[:offset])
File "/app/.heroku/python/lib/python3.6/site-packages/amqp/transport.py", line 275, in write
self._write(s)
ConnectionResetError: [Errno 104] Connection reset by peer
[2018-06-25 02:46:10,375: INFO/MainProcess] Connected to amqp://***:**@***:***/***
[2018-06-25 02:46:10,526: INFO/MainProcess] mingle: searching for neighbors
[2018-06-25 02:46:11,764: INFO/MainProcess] mingle: all alone
```
When I checked the logs, I did find reconnection entries around the time the memory leak began, but there were also cases where a leak started without any reconnection happening.
I agree with jxltom's idea.
Also, when I was using Celery 3.x, I did not encounter this problem.
same problem here
<img width="802" alt="screenshot 2018-06-25 11 09 22" src="https://user-images.githubusercontent.com/1920678/41831344-48386766-7868-11e8-88bd-a2918fc43369.png">
Every few days I have to restart the workers because of this problem.
There are no significant clues in the logs, but I suspect that reconnects may play a role, since I have reconnect log entries around the time memory starts growing constantly.
My setup is Ubuntu 17, 1 server with 1 worker at concurrency 3; RabbitMQ as broker and Redis as backend; all packages are at their latest versions.
@marvelph @dmitry-kostin could you please provide your exact configuration (omitting sensitive information of course) and possibly a task, or sample, that reproduces the issue? Also, do you have any estimate of the average uptime after which the worker memory increase starts appearing?
The config is close to the defaults:
imports = ('app.tasks',)
result_persistent = True
task_ignore_result = False
task_acks_late = True
worker_concurrency = 3
worker_prefetch_multiplier = 4
enable_utc = True
timezone = 'Europe/Moscow'
broker_transport_options = {'visibility_timeout': 3600, 'confirm_publish': True, 'fanout_prefix': True, 'fanout_patterns': True}
<img width="777" alt="screenshot 2018-06-25 11 35 17" src="https://user-images.githubusercontent.com/1920678/41832003-e0ef4c74-786b-11e8-9591-7c75453f7f29.png">
Basically this is a newly deployed node; it was deployed on 06/21 at 18:50, started to grow on 06/23 around 05:00 and finally crashed on 06/23 around 23:00.
The task is pretty simple and there is no complex logic in it. I think I can reproduce the whole situation in a clean temporary project, but I have no free time right now; if I'm lucky I will try to put together a full example on the weekend.
UPDATE
As you can see, the task itself consumes some memory (the spikes on the graph), but at the time the memory started to leak no tasks were being produced and there was no other activity.
@marvelph @dmitry-kostin @jxltom I noticed you use Python3. Would you mind enabling [tracemalloc](https://docs.python.org/3/library/tracemalloc.html) for the process? You may need to patch the worker process though to log memory allocation traces, let me know if you need help with that.
@georgepsarakis You mean enable tracemalloc in worker and log stats, such as the top 10 memory usage files, at a specific interval such as 5 minutes?
@jxltom I think something like that would help locate the part of code that is responsible. What do you think?
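A minimal sketch of what such periodic logging could look like (a plain daemon thread inside the worker process; the interval, top-N limit and logger are illustrative, not a Celery API):
```python
# sketch only: periodically log the top allocation sites with tracemalloc
import logging
import threading
import time
import tracemalloc

logger = logging.getLogger(__name__)
tracemalloc.start()

def dump_top_allocations(interval=300, limit=10):
    while True:
        time.sleep(interval)
        snapshot = tracemalloc.take_snapshot()
        for stat in snapshot.statistics('lineno')[:limit]:
            logger.warning('tracemalloc: %s', stat)

threading.Thread(target=dump_top_allocations, daemon=True).start()
```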
@georgepsarakis I've tried using gdb and https://github.com/lmacken/pyrasite to inject into the leaking process and start debugging via tracemalloc. Here are the top 10 files with the highest memory usage.
I use ```resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024``` and the memory usage is indeed gradually increasing.
```
>>> import tracemalloc
>>>
>>> tracemalloc.start()
>>> snapshot = tracemalloc.take_snapshot()
>>> top_stats = snapshot.statistics('lineno')
>>> for stat in top_stats[:10]:
... print(stat)
...
/app/.heroku/python/lib/python3.6/site-packages/kombu/utils/eventio.py:84: size=12.0 KiB, count=1, average=12.0 KiB
/app/.heroku/python/lib/python3.6/site-packages/celery/worker/heartbeat.py:47: size=3520 B, count=8, average=440 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/method_framing.py:166: size=3264 B, count=12, average=272 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:142: size=3060 B, count=10, average=306 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:157: size=2912 B, count=8, average=364 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/abstract_channel.py:50: size=2912 B, count=8, average=364 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py:181: size=2816 B, count=12, average=235 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py:203: size=2816 B, count=8, average=352 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:199: size=2672 B, count=6, average=445 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/channel.py:1734: size=2592 B, count=8, average=324 B
```
Here is the difference between two snapshots after around 5 minutes.
```
>>> snapshot2 = tracemalloc.take_snapshot()
>>> top_stats = snapshot2.compare_to(snapshot, 'lineno')
>>> print("[ Top 10 differences ]")
[ Top 10 differences ]
>>> for stat in top_stats[:10]:
... print(stat)
...
/app/.heroku/python/lib/python3.6/site-packages/celery/worker/heartbeat.py:47: size=220 KiB (+216 KiB), count=513 (+505), average=439 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:142: size=211 KiB (+208 KiB), count=758 (+748), average=285 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/method_framing.py:166: size=210 KiB (+206 KiB), count=789 (+777), average=272 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:157: size=190 KiB (+187 KiB), count=530 (+522), average=366 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/abstract_channel.py:50: size=186 KiB (+183 KiB), count=524 (+516), average=363 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:199: size=185 KiB (+182 KiB), count=490 (+484), average=386 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py:203: size=182 KiB (+179 KiB), count=528 (+520), average=353 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py:181: size=179 KiB (+176 KiB), count=786 (+774), average=233 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/channel.py:1734: size=165 KiB (+163 KiB), count=525 (+517), average=323 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/async/hub.py:293: size=157 KiB (+155 KiB), count=255 (+251), average=632 B
```
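The same snapshots can also be aggregated per file rather than per line, which can help show whether one module or many are growing (standard tracemalloc API; purely illustrative):
```python
# sketch: compare two snapshots grouped by file instead of by line
import tracemalloc

tracemalloc.start()
snapshot_a = tracemalloc.take_snapshot()
# ... let the worker run for a while ...
snapshot_b = tracemalloc.take_snapshot()
for stat in snapshot_b.compare_to(snapshot_a, 'filename')[:10]:
    print(stat)
```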
Any suggestions for how to continue debugging this? I have no clue how to proceed. Thanks.
@georgepsarakis
I need a little time to extract a minimal project for reproduction.
These are the Celery settings.
```
BROKER_URL = [
'amqp://xxxxxxxx:[email protected]:5672/zzzzzzzz'
]
BROKER_TRANSPORT_OPTIONS = {}
```
The scheduler has the following settings.
```
CELERYBEAT_SCHEDULE = {
'aaaaaaaa_bbbbbbbb': {
'task': 'aaaa.bbbbbbbb_cccccccc',
'schedule': celery.schedules.crontab(minute=0),
},
'dddddddd_eeeeeeee': {
'task': 'dddd.eeeeeeee_ffffffff',
'schedule': celery.schedules.crontab(minute=0),
},
}
```
On EC2, I am using supervisord to run it.
@georgepsarakis
Since my test environment can tolerate the performance degradation, using tracemalloc there is fine.
Could you provide a patched Celery that dumps memory usage?
@jxltom I bet tracemalloc over 5 minutes won't help to locate the problem.
For example, I have 5 nodes and only 3 of them have had this problem over the last 4 days, while 2 worked fine the whole time, so it will be very tricky to locate the problem.
I feel like there is some toggle that switches on and then memory starts to grow; until that switch flips, memory consumption looks perfectly fine.
I tried to find out whether similar problems occur in our other running systems.
The frequency of occurrence varies, but a memory leak has occurred on three systems using Celery 4.x, and it has not happened on one system.
The systems with a memory leak run Python 3.5.x, and the system with no memory leak runs Python 2.7.x.
@dmitry-kostin What's the difference from the other two normal nodes? Are they all using the same rabbitmq as broker?
Since our discussion mentioned it may be related to rabbitmq, I started another new node with the same configuration except that it uses redis instead. So far, this node has no memory leak after running for 24 hours. I will post here if it leaks later.
@marvelph So do you mean that the three systems with memory leaks are using Python 3 while the one which is fine is using Python 2?
@jxltom no difference at all, and yes they are on Python 3, with rabbit as broker and redis as backend.
I made a test setup to reproduce this; if it succeeds in a couple of days I will give credentials for these servers to somebody who knows how to locate this bug.
@jxltom
Yes.
As far as my environment is concerned, problems do not occur in Python 2.
I tracked the memory leak via tracemalloc over a longer period.
The starting memory usage reported by the ```resource``` module is ```146.58MB```; after 3.5 hours it reports ```224.21MB```.
Following is the snapshot difference reported by ```tracemalloc```
```
>>> snapshot2 = tracemalloc.take_snapshot(); top_stats = snapshot2.compare_to(snapshot, 'lineno')
>>> for stat in top_stats[:10]:
... print(stat)
...
/app/.heroku/python/lib/python3.6/site-packages/celery/worker/heartbeat.py:47: size=3619 KiB (+3614 KiB), count=8436 (+8426), average=439 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:142: size=3470 KiB (+3466 KiB), count=12529 (+12514), average=284 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/method_framing.py:166: size=3418 KiB (+3414 KiB), count=12920 (+12905), average=271 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:157: size=3149 KiB (+3145 KiB), count=8762 (+8752), average=368 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/abstract_channel.py:50: size=3099 KiB (+3096 KiB), count=8685 (+8676), average=365 B
/app/.heroku/python/lib/python3.6/site-packages/celery/events/dispatcher.py:199: size=3077 KiB (+3074 KiB), count=8354 (+8345), average=377 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py:203: size=3020 KiB (+3017 KiB), count=8723 (+8713), average=355 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/messaging.py:181: size=2962 KiB (+2959 KiB), count=12952 (+12937), average=234 B
/app/.heroku/python/lib/python3.6/site-packages/amqp/channel.py:1734: size=2722 KiB (+2718 KiB), count=8623 (+8613), average=323 B
/app/.heroku/python/lib/python3.6/site-packages/kombu/async/hub.py:293: size=2588 KiB (+2585 KiB), count=4193 (+4188), average=632 B
```
Any ideas? It looks like it is not a single file that is leaking.
I also imported ```gc```, and ```gc.collect()``` returns ```0```...
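One way to dig further is to record full allocation tracebacks instead of single lines; a sketch using only the standard tracemalloc API (the 25-frame depth matches the dump that appears later in this thread):
```python
# sketch: capture the complete allocation traceback of the biggest consumer
import tracemalloc

tracemalloc.start(25)                      # keep up to 25 frames per allocation
# ... let the worker run until memory has grown noticeably ...
snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics('traceback')[0]  # largest aggregated allocation site
print('%s memory blocks: %.1f KiB' % (top.count, top.size / 1024))
for line in top.traceback.format():
    print(line)
```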
@georgepsarakis I was able to reproduce this, ping me for access creds
Update: I have switched the broker from rabbitmq to redis by changing the broker URL environment variable, keeping the docker image and code exactly the same. It has been running for 4 days and **there is no memory leak**.
So I believe this issue is related to the rabbitmq broker.
If possible, please try running the benchmark command mentioned here: https://github.com/celery/celery/issues/2927#issuecomment-171455414
This system runs workers on 20 servers.
A memory leak occurred yesterday, and it is occurring on almost all servers at the same time.

Don't know if it's related, leaving it here in case it helps.
I have a different issue with celery and rabbitmq (celery loses connection and starts reconnecting loads of times per second, cpu goes 100% on 1 core, beat can't send new tasks, need to restart celery).
The reason I am reporting this here is the trigger: after days of monitoring I think I located the start of the issue and it appears to be rabbitmq moving some messages from memory to disk. At that time celery starts trying to reconnect as fast as it can and rabbitmq logs show tens of connection/disconnection operations per second, in batches of ~10 or so at a time. Restarting rabbitmq doesn't fix the issue, restarting celery fixes it right away. I do not have a proper fix but as an example, setting an expire policy allowing messages to always stay in memory works around the issue and I haven't seen it since.
Given some details of this issue match what I saw (swapping rabbitmq with redis fixes it, there's not a clear starting point, it happens on more than one worker/server at the same time) I guess there might be a common trigger and it might be the same I spotted.
The stress test suite has moved from ```https://github.com/celery/celery/tree/master/funtests/stress``` to ```https://github.com/celery/cyanide```, and it only supports Python 2.
So I ran it under Python 2 with rabbitmq as broker. It raised ```!join: connection lost: error(104, 'Connection reset by peer')```. Is this related to the memory leak issue?
Here is the log for the test suite.
```
➜ cyanide git:(master) pipenv run python -m cyanide.bin.cyanide
Loading .env environment variables…
Cyanide v1.3.0 [celery 4.2.0 (windowlicker)]
Linux-4.13.0-45-generic-x86_64-with-debian-stretch-sid
[config]
.> app: cyanide:0x7fb097f31710
.> broker: amqp://**:**@**:**/cyanide
.> suite: cyanide.suites.default:Default
[toc: 12 tests total]
.> 1) manyshort,
.> 2) always_timeout,
.> 3) termbysig,
.> 4) timelimits,
.> 5) timelimits_soft,
.> 6) alwayskilled,
.> 7) alwaysexits,
.> 8) bigtasksbigvalue,
.> 9) bigtasks,
.> 10) smalltasks,
.> 11) revoketermfast,
.> 12) revoketermslow
+enable worker task events...
+suite start (repetition 1)
[[[manyshort(50)]]]
1: manyshort OK (1/50) rep#1 runtime: 15.00 seconds/15.01 seconds
1: manyshort OK (2/50) rep#1 runtime: 13.16 seconds/28.17 seconds
1: manyshort OK (3/50) rep#1 runtime: 13.29 seconds/41.46 seconds
1: manyshort OK (4/50) rep#1 runtime: 13.70 seconds/55.16 seconds
1: manyshort OK (5/50) rep#1 runtime: 13.77 seconds/1.15 minutes
1: manyshort OK (6/50) rep#1 runtime: 13.91 seconds/1.38 minutes
!join: connection lost: error(104, 'Connection reset by peer')
!Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 475/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 475/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',)
!join: connection lost: error(104, 'Connection reset by peer')
failed after 7 iterations in 3.12 minutes
Traceback (most recent call last):
File "/home/***/.pyenv/versions/2.7.15/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/home/***/.pyenv/versions/2.7.15/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/***/Documents/Python-Dev/cyanide/cyanide/bin/cyanide.py", line 62, in <module>
main()
File "/home/***/Documents/Python-Dev/cyanide/cyanide/bin/cyanide.py", line 58, in main
return cyanide().execute_from_commandline(argv=argv)
File "/home/***/.local/share/virtualenvs/cyanide-Vy3PQPTU/lib/python2.7/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/home/***/.local/share/virtualenvs/cyanide-Vy3PQPTU/lib/python2.7/site-packages/celery/bin/base.py", line 363, in handle_argv
return self(*args, **options)
File "/home/***/.local/share/virtualenvs/cyanide-Vy3PQPTU/lib/python2.7/site-packages/celery/bin/base.py", line 238, in __call__
ret = self.run(*args, **kwargs)
File "/home/***/Documents/Python-Dev/cyanide/cyanide/bin/cyanide.py", line 20, in run
return self.run_suite(names, **options)
File "/home/***/Documents/Python-Dev/cyanide/cyanide/bin/cyanide.py", line 30, in run_suite
).run(names, **options)
File "cyanide/suite.py", line 366, in run
self.runtest(test, iterations, j + 1, i + 1)
File "cyanide/suite.py", line 426, in runtest
self.execute_test(fun)
File "cyanide/suite.py", line 447, in execute_test
fun()
File "cyanide/suites/default.py", line 22, in manyshort
timeout=10, propagate=True)
File "cyanide/suite.py", line 246, in join
raise self.TaskPredicate('Test failed: Missing task results')
cyanide.suite.StopSuite: Test failed: Missing task results
```
Here is the log for worker.
```
➜ cyanide git:(master) pipenv run celery -A cyanide worker -c 1
Loading .env environment variables…
-------------- celery@** v4.2.0 (windowlicker)
---- **** -----
--- * *** * -- Linux-4.13.0-45-generic-x86_64-with-debian-stretch-sid 2018-07-03 12:59:28
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: cyanide:0x7fdc988b4e90
- ** ---------- .> transport: amqp://**:**@**:**/cyanide
- ** ---------- .> results: rpc://
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> c.stress exchange=c.stress(direct) key=c.stress
[2018-07-03 12:59:29,883: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [e6e71bed-8e58-4e7e-96c5-f56b583a37af, 42fd4f97-4ff5-4e0e-b874-89e7b3f0ff22, 3de45eeb-9b89-41bc-8284-95a4c07aa34a,...]: TimeoutError('The operation timed out.',) !
[2018-07-03 12:59:29,886: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [e6e71bed-8e58-4e7e-96c5-f56b583a37af, 42fd4f97-4ff5-4e0e-b874-89e7b3f0ff22, 3de45eeb-9b89-41bc-8284-95a4c07aa34a,...]: TimeoutError('The operation timed out.',) !
[2018-07-03 12:59:30,964: WARNING/ForkPoolWorker-1] + suite start (repetition 1) +
[2018-07-03 12:59:30,975: WARNING/ForkPoolWorker-1] --- 1: manyshort (0/50) rep#1 runtime: 0.0000/0.0000 ---
[2018-07-03 13:01:07,835: WARNING/ForkPoolWorker-1] ! join: connection lost: error(104, 'Connection reset by peer') !
[2018-07-03 13:01:17,918: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:01:27,951: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:01:38,902: WARNING/ForkPoolWorker-1] ! Still waiting for 475/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:01:48,934: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:01:58,961: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:02:09,906: WARNING/ForkPoolWorker-1] ! Still waiting for 475/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:02:19,934: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:02:29,964: WARNING/ForkPoolWorker-1] ! Still waiting for 1000/1000: [1624cd7a-3cc0-474a-b957-b0484f6b4937, 2b436525-73de-4062-bd6b-924cbd11ba74, ce04cb5e-a99e-41e2-95dc-e9bc351e606d,...]: TimeoutError(u'The operation timed out.',) !
[2018-07-03 13:02:37,900: WARNING/ForkPoolWorker-1] ! join: connection lost: error(104, 'Connection reset by peer') !
```
I have switched to celery 3.1.25 with the same stress test suite, and everything is fine.
BTW, for everybody looking for a fast fix: replacing rabbit with redis solves the problem, as @jxltom suggested; I now have more than a week of stable operation with redis only.
So the problem is definitely somewhere near the rabbit<->celery boundary.
@dieeasy we have experienced the same issue. I assume you are using RPC result backend. If so, try switching to DB result backend and see if that helps. The issue that causes this is: https://github.com/celery/kombu/pull/779 and is explained here: https://github.com/celery/kombu/pull/779#discussion_r134961611
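For reference, that switch is a one-line configuration change along these lines (a sketch; the database URL is a placeholder, not something taken from this thread):
```python
# sketch: switch from the rpc:// result backend to the SQLAlchemy database backend
result_backend = 'db+postgresql://user:password@localhost/celery_results'  # placeholder URL
# result_backend = 'rpc://'   # the backend affected by the kombu issue linked above
```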
I have the same memory leak problem.
**Memory**

**Version**
python `3.6.5` celery `4.2.1` backend `redis` broker `rabbitmq`
**Config**
```conf
[celery]
broker_url=amqp://taunt:[email protected]:5672/%2ftaunt
celery_result_backend=redis://xx.xx.xx.xx:6379
# 7days
celery_task_result_expires=604800
celery_task_serializer=msgpack
celery_result_serializer=json
celery_accept_content=json,msgpack
celery_timezone=Asia/Shanghai
celery_enable_utc=True
[cmd]
worker=True
proj=app.worker.celery
log_level=INFO
name=send_im%%h
queue=im
autoscale=10,3
concurrency=10
```
```python
# -*- coding: utf-8 -*-
from kombu import Queue, Exchange
from oslo_log import log as logging
from app.conf import CONF
LOG = logging.getLogger(__name__)
celery_queues = (
Queue('im', exchange=Exchange('sender'), routing_key='im'),
Queue('sms', exchange=Exchange('sender'), routing_key='sms'),
Queue('mail', exchange=Exchange('sender'), routing_key='mail'),
Queue('ivr', exchange=Exchange('sender'), routing_key='ivr')
)
celery_routes = {
'sender.im': {'queue': 'im', 'routing_key': 'im'},
'sender.sms': {'queue': 'sms', 'routing_key': 'sms'},
'sender.mail': {'queue': 'mail', 'routing_key': 'mail'},
'sender.ivr': {'queue': 'ivr', 'routing_key': 'ivr'}
}
config = {
'BROKER_URL': CONF.celery.broker_url,
'CELERY_RESULT_BACKEND': CONF.celery.celery_result_backend,
'CELERY_TASK_RESULT_EXPIRES': CONF.celery.celery_task_result_expires,
'CELERY_TASK_SERIALIZER': CONF.celery.celery_task_serializer,
'CELERY_RESULT_SERIALIZER': CONF.celery.celery_result_serializer,
'CELERY_ACCEPT_CONTENT': CONF.celery.celery_accept_content.split(','),
'CELERY_TIMEZONE': CONF.celery.celery_timezone,
'CELERY_ENABLE_UTC': CONF.celery.celery_enable_utc,
'CELERY_QUEUES': celery_queues,
'CELERY_ROUTES': celery_routes
}
```
**Startup**
```python
def make_command() -> list:
log_path = f'{CONF.log_dir}{os.sep}{CONF.log_file}'
command_name = f'{sys.path[0]}{os.sep}celery'
command = [command_name, 'worker', '-A', CONF.cmd.proj, '-E']
if CONF.cmd.log_level:
command.extend(['-l', CONF.cmd.log_level])
if CONF.cmd.queue:
command.extend(['-Q', CONF.cmd.queue])
if CONF.cmd.name:
command.extend(['-n', CONF.cmd.name])
# if CONF.cmd.autoscale:
# command.extend(['--autoscale', CONF.cmd.autoscale])
if CONF.cmd.concurrency:
command.extend(['--concurrency', CONF.cmd.concurrency])
command.extend(['-f', log_path])
return command
if CONF.cmd.worker:
LOG.info(make_command())
entrypoint = celery.start(argv=make_command())
```
**I can provide more information if needed.**
For what it's worth, I am having this issue and can reproduce it consistently by opening the rabbitmq management console, going to connections, and closing connections with traffic from celery to rabbitmq.
I've tested with celery 4.1 and 4.2 and rabbitmq 3.7.7-1
EDIT: also python version 3.6.5 and the ubuntu 16.04 (AWS EC2 image)
I'm having a memory leak with celery 4.2.1 and redis broker. The memory grows from 100 MiB to 500 MiB(limited) in 3 hours, and the workers are marked as offline in flower. Both prefork pool and gevent show the same issue.
@yifeikong this may not be the same issue, but for your case could you please try the solution proposed https://github.com/celery/celery/pull/4839#issuecomment-447739820 ?
@georgepsarakis I'm using Python 3.6.5, so I'm not affected by this bug. I will use tracemalloc to do some research. If it turns out to be a celery bug, I'll open a new issue. Thanks
Maybe the same cause as [#5047](https://github.com/celery/celery/issues/5047); it seems this bug can lead to different phenomena.
We are facing the same memory leak running Celery 4.2.1, Kombu 4.2.2 and python3.6 with RabbitMQ as broker.
```
$ celery --app=eventr.celery_app report
software -> celery:4.2.1 (windowlicker) kombu:4.2.2-post1 py:3.6.8
billiard:3.5.0.5 py-amqp:2.4.0
platform -> system:Linux arch:64bit imp:CPython
```
I can say we have tried many things that other people mentioned as possible workarounds (redis as broker, using jemalloc, libamqp, monkey patching `__del__` on `AsyncResult`) but we always ended up with leaked memory.
By analysing our log we noticed that we had a lot of messages related to missed heartbeats from gossip.
```
{"asctime": "2019-01-25 13:40:06,486", "levelname": "INFO", "name": "celery.worker.consumer.gossip", "funcName": "on_node_lost", "lineno": 147, "message": "missed heartbeat from celery@******"}
```
One last thing that we tried was disabling gossip by running the workers with `--without-gossip`; surprisingly, disabling gossip had an immediate effect.
You can see it here:

Since we deactivated gossip in the two projects running celery workers, memory consumption has improved.
If you look closely, we were previously having memory spikes similar to the ones described here https://github.com/celery/celery/issues/4843#issuecomment-399833781
One thing that I've been trying to fully understand is the implications of completely disabling gossip, since it's only described as worker <-> worker communication; if anyone could shed some light on this I would be very grateful.
Hope this helps and thanks for the hard work.
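A sketch of what disabling gossip looks like at startup, in the style of the make_command() helper shown earlier in this thread (the flags are real `celery worker` options; whether you also want to drop mingle and the worker heartbeat depends on your deployment):
```python
# sketch: build the worker command with gossip disabled
command = ['celery', 'worker', '-A', 'celery_app', '-l', 'info', '-P', 'eventlet']
command.append('--without-gossip')
# command.extend(['--without-mingle', '--without-heartbeat'])  # optional extras
print(' '.join(command))
```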
Why was this issue closed?
There is active feedback and interest in this issue, so I am reopening.
Well @georgepsarakis since we diagnosed my leak as not being #4839, and you [suspected](https://github.com/celery/celery/pull/4839#issuecomment-458817459) that it was #4843, I'll flip over to this leak thread at least for now. I'm not sure #4843 is my leak either. According to the initial issue on this thread:
> This problem happens at least in Celery 4.1, and it also occurs in Celery 4.2.
Celery is running on Ubuntu 16 and brokers use RabbitMQ.
I'm currently on:
python 2.7.12
Ubuntu 16.04.1 amd64
RabbitMQ 3.7.5
using:
Celery 4.1.1
librabbitmq 2.0.0
amqp 2.4.0
vine 1.1.4
billiard 3.5.0.5
kombu 4.2.2.post1
gevent 1.2.2
However, Celery 4.1.1 + gevent 1.2.2 doesn't leak for me (nor does Celery 3.1.25 + gevent 1.2.2 AFAICT); Celery 4.2.1 + gevent 1.3.7 does. Unfortunately, gevent 1.3.7 and gevent 1.2.2 are not interchangeable to demonstrate (or exclude) a gevent library as a possible source of the problem.
EDIT: Hmm...there seems to be a gevent patch (022f447dd) that looks like it could fix the error I encountered. I'll try and get that to work.
I applied 022f447 to Celery 4.1.1 and installed gevent 1.3.7. That Celery + gevent combination ran...and produced memory usage patterns consistent with the leak I've been experiencing. I'll install Celery 4.2.1 + gevent 1.2.2 (with the reverse patch) and see if I get the usual memory usage pattern.
I notice gevent 1.4.0 is out. Maybe I should give that a whirl as well to see how that behaves.
Celery 4.2.1 + gevent 1.2.2 + reverse patch for gevent 1.2.2 doesn't seem to produce the leak as does Celery 4.2.1 + gevent 1.3.7.
Celery 4.2.1 + gevent 1.4.0 does seem to leak at approximately the same rate as gevent 1.3.7 AFAICT.
https://github.com/celery/celery/blob/9f0a554dc2d28c630caf9d192873d040043b7346/celery/events/dispatcher.py
```python
def _publish(self, event, producer, routing_key, retry=False,
retry_policy=None, utcoffset=utcoffset):
exchange = self.exchange
try:
producer.publish(...)
except Exception as exc: # pylint: disable=broad-except
if not self.buffer_while_offline: # <-- False by default
raise
self._outbound_buffer.append((event, routing_key, exc)) # <---- Always buffered
def send(self, type, blind=False, utcoffset=utcoffset, retry=False,
...
if group in self.buffer_group: # <--- Never true for eventlet & gevent
...
if len(buf) >= self.buffer_limit:
self.flush() # <---- Never flushed even when grows above limit
...
else:
return self.publish(type, fields, self.producer, blind=blind,
Event=Event, retry=retry,
```
https://github.com/celery/celery/blob/b2668607c909c61becd151905b4525190c19ff4a/celery/worker/consumer/events.py
```python
def start(self, c):
# flush events sent while connection was down.
prev = self._close(c)
dis = c.event_dispatcher = c.app.events.Dispatcher(
...
# we currently only buffer events when the event loop is enabled
# XXX This excludes eventlet/gevent, which should actually buffer.
buffer_group=['task'] if c.hub else None,
on_send_buffered=c.on_send_event_buffered if c.hub else None,
)
if prev:
dis.extend_buffer(prev)
dis.flush() # <---- The only (!) chance to flush on [g]event[let] is on reconnect.
```
Now, if I understand correctly what AMQP does under the hood, it has its own heartbeat and, when it detects a broken connection, it goes ahead and reconnects under the hood. Depending on the types of events that are enabled (gossip, heartbeat), this can leak pretty fast.
This should be true for any version of eventlet & gevent, but some versions could exhibit connection issues that make things worse or more noticeable.
Hi,
I suspect that we are having the same issue.
Our configuration is below. Could someone help me confirm or rule out that this is the same issue discussed here?
Python: 2.7
Celery: 4.2.1
OS: CentOS release 6.10
Redis as broker
In the attached image you can see:
1. Memory consumption increasing constantly and dropping on restart.
2. On January 13 - we upgraded from celery 3.1.25 to 4.2.1. Memory consumption increasing pace grows.

**UPDATE**
Regardless this issue, we upgraded to python 3.6 and since then it seems like the leak does not happen anymore.

(the upgrade was on February 19)
@georgepsarakis
Not sure how relevant this is, but I'm having my 2GB of SWAP space exhausted by celery in production. Stopping Flower didn't clear the memory, but stopping Celery did.
could anyone try celery 4.3rc1?
@auvipy I installed Celery 4.3.0rc1 + gevent 1.4.0. pip upgraded billiard to 3.6.0.0 and kombu 4.3.0.
Kind of puzzled that vine 1.2.0 wasn't also required by the rc1 package, given that #4839 is fixed by that upgrade.
Anyway, Celery 4.3.0 rc1 seems to run OK.
@ldav1s thanks a lot for the feedback. So, vine is declared as a dependency in [py-amqp](https://github.com/celery/py-amqp/blob/master/requirements/default.txt) actually. In new installations the latest `vine` version will be installed but this might not happen in existing ones.
@thedrow perhaps we should declare the dependency in Celery requirements too?
Let's open an issue about it and discuss it there.
Celery 4.3.0rc1 + gevent 1.4.0 has been running a couple of days now. Looks like it's leaking in the same fashion as Celery 4.2.1 + gevent 1.4.0.

Having the same leak with celery 4.2.1, python 3.6
Any updates on this?
having the same problem here
Greetings,
I'm experiencing a similar issue, but I'm not sure it is the same.
After I migrated our celery app to a different environment/network, the celery workers started to leak. Previously the celery application and the rabbitmq instance were in the same environment/network.
My configuration is on Python 3.6.5:
```
amqp (2.4.2)
billiard (3.5.0.5)
celery (4.1.1)
eventlet (0.22.0)
greenlet (0.4.15)
kombu (4.2.1)
vine (1.3.0)
```
celeryconfig
```
broker_url = rabbitmq
result_backend = mongodb
task_acks_late = True
result_expires = 0
task_default_rate_limit = 2000
task_soft_time_limit = 120
task_reject_on_worker_lost = True
loglevel = 'INFO'
worker_pool_restarts = True
broker_heartbeat = 0
broker_pool_limit = None
```
The application is composed of several workers with the eventlet pool, started via commands in supervisord:
```
[program:worker1]
command={{ celery_path }} worker -A celery_app --workdir {{ env_path }} -l info -E -P eventlet -c 250 -n worker1@{{ hostname }} -Q queue1,queue2
```
The memory leak behaviour looks like this: every ~10 hours usually 1 worker, at most 2, starts leaking:

So I created a broadcast message to be executed on each worker in order to use tracemalloc. This is the result of the top command on the machine; only one worker is leaking, at 1464m:
```
217m 1% 2 0% /usr/bin/python3 -m celery worker -A celery_app --workdir 379
189m 1% 0 0% /usr/bin/python3 -m celery worker -A celery_app --workdir 377
1464m 9% 1 0% /usr/bin/python3 -m celery worker -A celery_app --workdir 378
218m 1% 0 0% /usr/bin/python3 -m celery worker -A celery_app --workdir 376
217m 1% 2 0% /usr/bin/python3 -m celery worker -A celery_app --workdir 375
217m 1% 3 0% /usr/bin/python3 -m celery worker -A celery_app --workdir 394
163m 1% 0 0% /usr/bin/python3 -m celery beat -A celery_app --workdir /app
```
tracemalloc TOP 10 results on the leaking worker
```
[2019-03-29 07:18:03,809: WARNING/MainProcess] [ Top 10: worker5@hostname ]
[2019-03-29 07:18:03,809: WARNING/MainProcess] /usr/lib/python3.6/site-packages/eventlet/greenio/base.py:207: size=17.7 MiB, count=26389, average=702 B
[2019-03-29 07:18:03,810: WARNING/MainProcess] /usr/lib/python3.6/site-packages/kombu/messaging.py:203: size=16.3 MiB, count=44422, average=385 B
[2019-03-29 07:18:03,811: WARNING/MainProcess] /usr/lib/python3.6/site-packages/celery/worker/heartbeat.py:49: size=15.7 MiB, count=39431, average=418 B
[2019-03-29 07:18:03,812: WARNING/MainProcess] /usr/lib/python3.6/site-packages/celery/events/dispatcher.py:156: size=13.0 MiB, count=40760, average=334 B
[2019-03-29 07:18:03,812: WARNING/MainProcess] /usr/lib/python3.6/site-packages/eventlet/greenio/base.py:363: size=12.9 MiB, count=19507, average=695 B
[2019-03-29 07:18:03,813: WARNING/MainProcess] /usr/lib/python3.6/site-packages/amqp/transport.py:256: size=12.7 MiB, count=40443, average=328 B
[2019-03-29 07:18:03,814: WARNING/MainProcess] /usr/lib/python3.6/site-packages/celery/events/dispatcher.py:138: size=12.4 MiB, count=24189, average=539 B
[2019-03-29 07:18:03,814: WARNING/MainProcess] /usr/lib/python3.6/site-packages/amqp/transport.py:256: size=12.3 MiB, count=19771, average=655 B
[2019-03-29 07:18:03,815: WARNING/MainProcess] /usr/lib/python3.6/site-packages/amqp/connection.py:505: size=11.9 MiB, count=39514, average=317 B
[2019-03-29 07:18:03,816: WARNING/MainProcess] /usr/lib/python3.6/site-packages/kombu/messaging.py:181: size=11.8 MiB, count=61362, average=201 B
```
TOP 1 with 25 frames
```
TOP 1
[2019-03-29 07:33:05,787: WARNING/MainProcess] [ TOP 1: worker5@hostname ]
[2019-03-29 07:33:05,787: WARNING/MainProcess] 26938 memory blocks: 18457.2 KiB
[2019-03-29 07:33:05,788: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 207
[2019-03-29 07:33:05,788: WARNING/MainProcess] mark_as_closed=self._mark_as_closed)
[2019-03-29 07:33:05,789: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 328
[2019-03-29 07:33:05,789: WARNING/MainProcess] timeout_exc=socket_timeout('timed out'))
[2019-03-29 07:33:05,790: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 357
[2019-03-29 07:33:05,790: WARNING/MainProcess] self._read_trampoline()
[2019-03-29 07:33:05,790: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/eventlet/greenio/base.py", line 363
[2019-03-29 07:33:05,791: WARNING/MainProcess] return self._recv_loop(self.fd.recv, b'', bufsize, flags)
[2019-03-29 07:33:05,791: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/amqp/transport.py", line 440
[2019-03-29 07:33:05,791: WARNING/MainProcess] s = recv(n - len(rbuf))
[2019-03-29 07:33:05,792: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/amqp/transport.py", line 256
[2019-03-29 07:33:05,792: WARNING/MainProcess] frame_header = read(7, True)
[2019-03-29 07:33:05,792: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 505
[2019-03-29 07:33:05,793: WARNING/MainProcess] frame = self.transport.read_frame()
[2019-03-29 07:33:05,793: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 500
[2019-03-29 07:33:05,793: WARNING/MainProcess] while not self.blocking_read(timeout):
[2019-03-29 07:33:05,793: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 103
[2019-03-29 07:33:05,794: WARNING/MainProcess] return connection.drain_events(**kwargs)
[2019-03-29 07:33:05,794: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 301
[2019-03-29 07:33:05,794: WARNING/MainProcess] return self.transport.drain_events(self.connection, **kwargs)
[2019-03-29 07:33:05,795: WARNING/MainProcess] File "/usr/lib/python3.6/site-packages/celery/worker/pidbox.py", line 120
[2019-03-29 07:33:05,795: WARNING/MainProcess] connection.drain_events(timeout=1.0)
```
I hope this helps. There are no errors in the logs other than the missed heartbeats between the workers. Now I'm trying to use the exact library versions we were using in the old environment.
UPDATE: Using the exact same dependency versions and a broker heartbeat every 5 minutes, the application looked stable for a longer time: more than 2 days, then it leaked again.
There were small spikes lasting about an hour from time to time, but they were "absorbed/collected"; the last one appears to be what started the ramp.
After the 1st spike (1st ramp) I restarted the leaking worker; as you can see, another worker started to leak after it, or probably it was already leaking (2nd ramp).

I'm going to test without heartbeat.
UPDATE: without the heartbeat it leaked again after 2 days, same behaviour.
```
440m 3% 1 0% /usr/bin/python3 -m celery worker -A celery_app --without-heartbeat --workdir /app -l info -E -P eventlet -c 250 -Ofair -n worker1@ -Q p_1_queue,p_2_queue
176m 1% 0 0% /usr/bin/python3 -m celery worker -A celery_app --without-heartbeat --workdir /app -l info -E -P eventlet -c 250 -Ofair -n worker2@ -Q p_1_queue,p_2_queue
176m 1% 2 0% /usr/bin/python3 -m celery worker -A celery_app --without-heartbeat --workdir /app -l info -E -P eventlet -c 250 -Ofair -n worker5@ -Q p_1_queue,p_2_queue
176m 1% 1 0% /usr/bin/python3 -m celery worker -A celery_app --without-heartbeat --workdir /app -l info -E -P eventlet -c 250 -Ofair -n worker3@ -Q p_1_queue,p_2_queue
176m 1% 1 0% /usr/bin/python3 -m celery worker -A celery_app --without-heartbeat --workdir /app -l info -E -P eventlet -c 250 -Ofair -n worker4@ -Q p_1_queue,p_2_queue
171m 1% 1 0% /usr/bin/python3 -m celery worker -A celery_app --without-heartbeat --workdir /app -l info -E -P eventlet -c 20 -n worker_p_root@ -Q p_root_queue
157m 1% 0 0% /usr/bin/python3 -m celery beat -A celery_app --workdir /app --schedule /app/beat.db -l info
```

UPDATE:
Using celery 4.3.0, it seems the problem is resolved and it has been stable for a week.

```
amqp (2.4.2)
billiard (3.6.0.0)
celery (4.3.0)
eventlet (0.24.1)
greenlet (0.4.15)
kombu (4.5.0)
vine (1.3.0)
```
Please let me know if I can help somehow, e.g. by instrumenting the code. If necessary, please provide links and an example.
Thank you
I'm also having a memory leak. It looks like I've managed to find the cause.
https://github.com/celery/celery/blob/master/celery/events/dispatcher.py#L75
I can see that this buffer starts to grow after connection issues with rabbit. I don't understand why it never gets cleared; it continues to grow over time and consumes more and more RAM. Passing `buffer_while_offline=False` here https://github.com/celery/celery/blob/master/celery/worker/consumer/events.py#L43 seems to fix the leak for me. Can someone please check if this is related?
https://github.com/celery/celery/pull/5482
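A sketch of what that flag changes, using the same constructor that the consumer/events.py snippet quoted earlier calls (illustrative only; the actual fix lives in the pull request above, and the hostname/broker values are placeholders):
```python
# illustrative only: an event dispatcher that drops events on publish failure
# instead of buffering them in _outbound_buffer forever
from celery import Celery

app = Celery(broker='amqp://guest@localhost//')   # placeholder broker URL

dispatcher = app.events.Dispatcher(
    app.connection_for_write(),
    hostname='debug@localhost',
    enabled=True,
    buffer_while_offline=False,   # the flag discussed in the comment above
)
```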
@yevhen-m thanks a lot! That helped us solve the memory leak!
It's good that we have a workaround, but can we please find a proper fix? | 2019-12-12T03:56:03 |
|
celery/celery | 5,898 | celery__celery-5898 | [
"5897"
] | 47d3ef152cb22ba1291d1935235e88d1fb2e5634 | diff --git a/celery/utils/timer2.py b/celery/utils/timer2.py
--- a/celery/utils/timer2.py
+++ b/celery/utils/timer2.py
@@ -102,7 +102,7 @@ def stop(self):
self.running = False
def ensure_started(self):
- if not self.running and not self.isAlive():
+ if not self.running and not self.is_alive():
if self.on_start:
self.on_start(self)
self.start()
| Python 3.9 compatibility issue regarding usage of threading.Thread.isAlive
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [ ] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [ ] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have verified that the issue exists against the `master` branch of Celery.
## Optional Debugging Information
`isAlive` was deprecated and removed in Python 3.9. Celery still uses it, so the deprecation warning will become an error on Python 3.9.
https://travis-ci.org/celery/celery/jobs/628813003#L3262-L3263
Relevant CPython PR : https://github.com/python/cpython/pull/15225
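A minimal illustration of the rename (standard library only):
```python
# Thread.isAlive() is gone in Python 3.9; Thread.is_alive() is the supported spelling
import threading

t = threading.Thread(target=lambda: None)
t.start()
t.join()
print(t.is_alive())   # False here; t.isAlive() raises AttributeError on 3.9+
```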
| 2020-01-02T13:25:44 |
||
celery/celery | 5,910 | celery__celery-5910 | [
"1013"
] | 77099b876814ec0008fd8da18f35de70deccbe03 | diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -244,6 +244,7 @@ def __repr__(self):
short_lived_sessions=Option(
False, type='bool', old={'celery_result_db_short_lived_sessions'},
),
+ table_schemas=Option(type='dict'),
table_names=Option(type='dict', old={'celery_result_db_tablenames'}),
),
task=Namespace(
diff --git a/celery/backends/database/__init__.py b/celery/backends/database/__init__.py
--- a/celery/backends/database/__init__.py
+++ b/celery/backends/database/__init__.py
@@ -88,6 +88,10 @@ def __init__(self, dburi=None, engine_options=None, url=None, **kwargs):
'short_lived_sessions',
conf.database_short_lived_sessions)
+ schemas = conf.database_table_schemas or {}
+ self.task_cls.__table__.schema = schemas.get('task')
+ self.taskset_cls.__table__.schema = schemas.get('group')
+
tablenames = conf.database_table_names or {}
self.task_cls.__table__.name = tablenames.get('task',
'celery_taskmeta')
| diff --git a/t/unit/backends/test_database.py b/t/unit/backends/test_database.py
--- a/t/unit/backends/test_database.py
+++ b/t/unit/backends/test_database.py
@@ -79,6 +79,15 @@ def test_missing_dburi_raises_ImproperlyConfigured(self):
with pytest.raises(ImproperlyConfigured):
DatabaseBackend(app=self.app)
+ def test_table_schema_config(self):
+ self.app.conf.database_table_schemas = {
+ 'task': 'foo',
+ 'group': 'bar',
+ }
+ tb = DatabaseBackend(self.uri, app=self.app)
+ assert tb.task_cls.__table__.schema == 'foo'
+ assert tb.taskset_cls.__table__.schema == 'bar'
+
def test_missing_task_id_is_PENDING(self):
tb = DatabaseBackend(self.uri, app=self.app)
assert tb.get_state('xxx-does-not-exist') == states.PENDING
| Support storing results in a specific schema of a PostgreSQL database
I'm using celery in a PostgreSQL environment where I only have a single database.
So I use the "schema" idiom to keep non-strongly-related data apart.
That's why I'd like to be able to store the celery task results in a different schema than the default "public" one.
Source code modification would take place in `backends/database/models.py`, both the Task & TaskSet tables and their related Sequence.
Plus of course adding a configuration hook to specify the schema name.
Thanks!
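A configuration sketch of the hook added by the patch above (the 'task' and 'group' keys come from the diff; the schema name and database URL are placeholders):
```python
# sketch: point the database result backend tables at a dedicated schema
from celery import Celery

app = Celery('proj', backend='db+postgresql://user:password@localhost/mydb')  # placeholder URL
app.conf.database_table_schemas = {
    'task': 'celery',   # schema for the task-result table
    'group': 'celery',  # schema for the group-result table
}
```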
| Sounds like a good idea, but if you want this then you would have to submit a patch using a pull request!
| 2020-01-08T15:17:16 |
celery/celery | 5,915 | celery__celery-5915 | [
"4556",
"4556"
] | d0563058f8f47f347ac1b56c44f833f569764482 | diff --git a/celery/worker/consumer/consumer.py b/celery/worker/consumer/consumer.py
--- a/celery/worker/consumer/consumer.py
+++ b/celery/worker/consumer/consumer.py
@@ -51,7 +51,7 @@
"""
CONNECTION_RETRY_STEP = """\
-Trying again {when}...\
+Trying again {when}... ({retries}/{max_retries})\
"""
CONNECTION_ERROR = """\
@@ -421,8 +421,11 @@ def ensure_connected(self, conn):
def _error_handler(exc, interval, next_step=CONNECTION_RETRY_STEP):
if getattr(conn, 'alt', None) and interval == 0:
next_step = CONNECTION_FAILOVER
- error(CONNECTION_ERROR, conn.as_uri(), exc,
- next_step.format(when=humanize_seconds(interval, 'in', ' ')))
+ next_step = next_step.format(
+ when=humanize_seconds(interval, 'in', ' '),
+ retries=int(interval / 2),
+ max_retries=self.app.conf.broker_connection_max_retries)
+ error(CONNECTION_ERROR, conn.as_uri(), exc, next_step)
# remember that the connection is lazy, it won't establish
# until needed.
| diff --git a/t/unit/worker/test_consumer.py b/t/unit/worker/test_consumer.py
--- a/t/unit/worker/test_consumer.py
+++ b/t/unit/worker/test_consumer.py
@@ -264,6 +264,22 @@ def test_connect_error_handler(self):
errback = conn.ensure_connection.call_args[0][0]
errback(Mock(), 0)
+ @patch('celery.worker.consumer.consumer.error')
+ def test_connect_error_handler_progress(self, error):
+ self.app.conf.broker_connection_retry = True
+ self.app.conf.broker_connection_max_retries = 3
+ self.app._connection = _amqp_connection()
+ conn = self.app._connection.return_value
+ c = self.get_consumer()
+ assert c.connect()
+ errback = conn.ensure_connection.call_args[0][0]
+ errback(Mock(), 2)
+ assert error.call_args[0][3] == 'Trying again in 2.00 seconds... (1/3)'
+ errback(Mock(), 4)
+ assert error.call_args[0][3] == 'Trying again in 4.00 seconds... (2/3)'
+ errback(Mock(), 6)
+ assert error.call_args[0][3] == 'Trying again in 6.00 seconds... (3/3)'
+
class test_Heart:
| Celery hangs indefinitely (instead of failing) if redis is not started
## Checklist
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
```
software -> celery:4.1.0 (latentcall) kombu:4.1.0 py:2.7.12
billiard:3.5.0.3 redis:2.10.6
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:disabled
broker_url: u'redis://localhost:6379/0'
```
## Steps to reproduce
Stop `redis-server`.
Initialize a celery instance configured with redis.
## Expected behavior
I expected an `OperationalError` or another exception communicating that celery cannot connect to the broker server.
## Actual behavior
It hangs indefinitely.
## Why this is important
In the current situation it is not possible to write automated functional and integration tests that check whether the software handles failures when brokers are not reachable.
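A sketch of the kind of fail-fast check such a test would need (URL and port are placeholders; depending on the kombu version the raised class may be a transport-specific connection error rather than OperationalError):
```python
# sketch: probe the broker and give up quickly instead of retrying for hours
from celery import Celery
from kombu.exceptions import OperationalError

app = Celery(broker='redis://localhost:6399/0')   # nothing listening on this port

conn = app.connection_for_read()
try:
    conn.ensure_connection(max_retries=1)          # fail after a single retry
except OperationalError as exc:
    print('broker unreachable:', exc)
finally:
    conn.release()
```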
| Sounds similar to #4328 to me, we've been running into a similar issue, although with RabbitMQ.
I have seen this issue with Redis stopped and when Celery tries to connect to the broker over IPv6 but the firewall is configured to silently drop the connection attempts. It would eventually fall back to IPv4 and get an instant connection refused. Could this be your issue?
@keaneokelley I'm testing this in a development environment with no firewall:
```
iptables -L -v -n
Chain INPUT (policy ACCEPT 799K packets, 2310M bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
0 0 ACCEPT udp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
0 0 ACCEPT udp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * lxcbr0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 592K packets, 47M bytes)
pkts bytes target prot opt in out source destination
```
`ip6tables` output:
```
ip6tables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
```
I have tracked similar issues here on github and it seems to me they have been closed without a real solution.
This issue is easy to test automatically, for example I created a test class which inserts a bogus broker url. The test hangs indefinitely.
@auvipy this is still an issue in v4.3
Could you install celery and kombu from master and try to find the root cause of this hang?
I ran into the same issue, both with Redis and RabbitMQ. I think this is a serious problem, since it could easily lead to a denial-of-service of a webservice.
Did anyone identify the root cause?
Still an issue... Plus fix could easily lead to DOS
You should probably try to set [`broker_connection_max_retries`](https://docs.celeryproject.org/en/latest/userguide/configuration.html#broker-connection-max-retries) to a much lower value.
+1 @thedrow that might be the reason.
I've seen that it's not actually indefinite. As @thedrow mentioned, `broker_connection_max_retries` is set to 100 by default. It means that it will eventually fail after 2+4+6+...+200 secs = 10100 secs (2 hours 48 mins 20 secs, to be exact; yeah, a long time indeed). When I modify the configuration and either:
1. Set `broker_connection_max_retries` to 1: celery will retry the connection once after 2 secs, and if it's still not available it will throw a ConnectionError.
2. Set `broker_connection_retry` to 0: celery will not even try to retry the connection and will throw a ConnectionError right away.
So you can either tell celery not to retry the connection at all, or set the max retries to something like 3, depending on your case (see the configuration sketch after this comment).
Or maybe I missed something? I'm not really sure what you mean by "hangs". Does it really hang, as in the process is not responding, or does it just retry the connection again and again?
@georgepsarakis
I think `broker_connection_max_retries` should be set to a lower value by default, maybe 3 retries, for example. 100 is kinda ridiculous, especially with the increasing retry interval.
Also, the worker keeps retrying without reporting the retry attempt (Attempt 1 of 100... Attempt 2 of 100... etc.) or how many attempts it will make to reconnect (Reconnecting to broker, up to 100 tries...). That leaves users confused about how long the celery worker will keep reconnecting to the broker and makes them assume it will retry indefinitely, especially those who do not notice this configuration variable.
That's just my suggestion, feel free to improve it or propose a better one :smile:
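A configuration sketch of the two options being discussed (values are illustrative; `broker_connection_max_retries` defaults to 100, which is where the roughly 2h48m of retries comes from):
```python
# sketch: fail fast on an unreachable broker instead of retrying for hours
from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')   # placeholder broker URL
app.conf.broker_connection_retry = True
app.conf.broker_connection_max_retries = 3    # default is 100
# or raise immediately without retrying at all:
# app.conf.broker_connection_retry = False
```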
Thank you for your analysis @marfgold1, what do you guys think?
@auvipy @georgepsarakis
Sounds similar to #4328 to me, we've been running into a similar issue, although with RabbitMQ.
I have seen this issue with Redis stopped and when Celery tries to connect to the broker over IPv6 but the firewall is configured to silently drop the connection attempts. It would eventually fall back to IPv4 and get an instant connection refused. Could this be your issue?
@keaneokelley I'm testing this in a development environment with no firewall:
```
iptables -L -v -n
Chain INPUT (policy ACCEPT 799K packets, 2310M bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
0 0 ACCEPT udp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
0 0 ACCEPT tcp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
0 0 ACCEPT udp -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * lxcbr0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- lxcbr0 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 592K packets, 47M bytes)
pkts bytes target prot opt in out source destination
```
`ip6tables` output:
```
ip6tables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
```
I have tracked similar issues here on github and it seems to me they have been closed without a real solution.
This issue is easy to test automatically, for example I created a test class which inserts a bogus broker url. The test hangs indefinitely.
@auvipy this is still an issue in v4.3
Could you install celery and kombu from master and try to find the root cause of this hang?
celery/celery | 5,918 | celery__celery-5918 | [
"5917"
] | d0563058f8f47f347ac1b56c44f833f569764482 | diff --git a/celery/backends/mongodb.py b/celery/backends/mongodb.py
--- a/celery/backends/mongodb.py
+++ b/celery/backends/mongodb.py
@@ -180,7 +180,9 @@ def encode(self, data):
def decode(self, data):
if self.serializer == 'bson':
return data
- return super(MongoBackend, self).decode(data)
+
+ payload = self.encode(data)
+ return super(MongoBackend, self).decode(payload)
def _store_result(self, task_id, result, state,
traceback=None, request=None, **kwargs):
| DecodeError - "the JSON object must be str, bytes or bytearray, not dict" Mongo backend
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [ ] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [x] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [x] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [x] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [x] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [x] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
I use mongo broker and mongo backend.
There is a decoding error while trying to get the task result with AsyncResult:
`File "task_result.py", line 9, in <module>
print(res.result)
......
\python37-32\Lib\json\__init__.py", line 341, in loads
raise TypeError(f'the JSON object must be str, bytes or bytearray, '
kombu.exceptions.DecodeError: the JSON object must be str, bytes or bytearray, not dict`
tasks.py
```python
@celery.task(name='web.add', bind=True)
def add_test(self, x):
    time.sleep(6)
    message = 'IN WORKER'
    self.update_state(state='PROGRESS', meta={'current': 50, 'total': 100, 'status': message})
    time.sleep(10)
    message = 'END'
    return {'current': 100, 'total': 100, 'status': message, 'result': {'video_url': 42, 'video_player_url': 'https://invidza.com'}}
```
task_result.py
```python
res = add.AsyncResult('d6605146-9296-463f-9463-9795d6b87f37')
print(res)
print(res.result)
```
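For context, a rough, self-contained sketch of what goes wrong in the Mongo backend's `decode()` and what the patch at the top of this entry does (plain `json` stands in for the configured serializer; the `meta` dict is illustrative, not actual stored data):
```python
import json

# pymongo already returns the stored task meta as a dict, not a serialized string
meta = {'status': 'PROGRESS', 'result': {'current': 50, 'total': 100, 'status': 'IN WORKER'}}

# before the patch: the generic decoder expects str/bytes and fails on a dict
try:
    json.loads(meta)
except TypeError as exc:
    print(exc)  # the JSON object must be str, bytes or bytearray, not dict

# after the patch: re-encode the dict first, then hand it to the generic decoder
payload = json.dumps(meta)
print(json.loads(payload))
```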
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
```
</p>
</details>
# Expected Behavior
result = {'current': 50, 'total': 100, 'status': 'IN WORKER'}
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
| 2020-01-14T10:17:14 |
||
celery/celery | 5,921 | celery__celery-5921 | [
"5919",
"5919"
] | f2ddd894c32f642a20f03b805b97e460f4fb3b4f | diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -3,6 +3,7 @@
from __future__ import absolute_import, unicode_literals
import time
+from contextlib import contextmanager
from functools import partial
from ssl import CERT_NONE, CERT_OPTIONAL, CERT_REQUIRED
@@ -78,6 +79,11 @@
E_LOST = 'Connection to Redis lost: Retry (%s/%s) %s.'
+E_RETRY_LIMIT_EXCEEDED = """
+Retry limit exceeded while trying to reconnect to the Celery redis result \
+store backend. The Celery application must be restarted.
+"""
+
logger = get_logger(__name__)
@@ -88,6 +94,8 @@ def __init__(self, *args, **kwargs):
super(ResultConsumer, self).__init__(*args, **kwargs)
self._get_key_for_task = self.backend.get_key_for_task
self._decode_result = self.backend.decode_result
+ self._ensure = self.backend.ensure
+ self._connection_errors = self.backend.connection_errors
self.subscribed_to = set()
def on_after_fork(self):
@@ -99,6 +107,31 @@ def on_after_fork(self):
logger.warning(text_t(e))
super(ResultConsumer, self).on_after_fork()
+ def _reconnect_pubsub(self):
+ self._pubsub = None
+ self.backend.client.connection_pool.reset()
+ # task state might have changed when the connection was down so we
+ # retrieve meta for all subscribed tasks before going into pubsub mode
+ metas = self.backend.client.mget(self.subscribed_to)
+ metas = [meta for meta in metas if meta]
+ for meta in metas:
+ self.on_state_change(self._decode_result(meta), None)
+ self._pubsub = self.backend.client.pubsub(
+ ignore_subscribe_messages=True,
+ )
+ self._pubsub.subscribe(*self.subscribed_to)
+
+ @contextmanager
+ def reconnect_on_error(self):
+ try:
+ yield
+ except self._connection_errors:
+ try:
+ self._ensure(self._reconnect_pubsub, ())
+ except self._connection_errors:
+ logger.critical(E_RETRY_LIMIT_EXCEEDED)
+ raise
+
def _maybe_cancel_ready_task(self, meta):
if meta['status'] in states.READY_STATES:
self.cancel_for(meta['task_id'])
@@ -124,9 +157,10 @@ def stop(self):
def drain_events(self, timeout=None):
if self._pubsub:
- message = self._pubsub.get_message(timeout=timeout)
- if message and message['type'] == 'message':
- self.on_state_change(self._decode_result(message['data']), message)
+ with self.reconnect_on_error():
+ message = self._pubsub.get_message(timeout=timeout)
+ if message and message['type'] == 'message':
+ self.on_state_change(self._decode_result(message['data']), message)
elif timeout:
time.sleep(timeout)
@@ -139,13 +173,15 @@ def _consume_from(self, task_id):
key = self._get_key_for_task(task_id)
if key not in self.subscribed_to:
self.subscribed_to.add(key)
- self._pubsub.subscribe(key)
+ with self.reconnect_on_error():
+ self._pubsub.subscribe(key)
def cancel_for(self, task_id):
+ key = self._get_key_for_task(task_id)
+ self.subscribed_to.discard(key)
if self._pubsub:
- key = self._get_key_for_task(task_id)
- self.subscribed_to.discard(key)
- self._pubsub.unsubscribe(key)
+ with self.reconnect_on_error():
+ self._pubsub.unsubscribe(key)
class RedisBackend(BaseKeyValueStoreBackend, AsyncBackendMixin):
| diff --git a/t/unit/backends/test_redis.py b/t/unit/backends/test_redis.py
--- a/t/unit/backends/test_redis.py
+++ b/t/unit/backends/test_redis.py
@@ -1,5 +1,6 @@
from __future__ import absolute_import, unicode_literals
+import json
import random
import ssl
from contextlib import contextmanager
@@ -26,6 +27,10 @@ def on_first_call(*args, **kwargs):
mock.return_value, = retval
+class ConnectionError(Exception):
+ pass
+
+
class Connection(object):
connected = True
@@ -55,9 +60,27 @@ def execute(self):
return [step(*a, **kw) for step, a, kw in self.steps]
+class PubSub(mock.MockCallbacks):
+ def __init__(self, ignore_subscribe_messages=False):
+ self._subscribed_to = set()
+
+ def close(self):
+ self._subscribed_to = set()
+
+ def subscribe(self, *args):
+ self._subscribed_to.update(args)
+
+ def unsubscribe(self, *args):
+ self._subscribed_to.difference_update(args)
+
+ def get_message(self, timeout=None):
+ pass
+
+
class Redis(mock.MockCallbacks):
Connection = Connection
Pipeline = Pipeline
+ pubsub = PubSub
def __init__(self, host=None, port=None, db=None, password=None, **kw):
self.host = host
@@ -71,6 +94,9 @@ def __init__(self, host=None, port=None, db=None, password=None, **kw):
def get(self, key):
return self.keyspace.get(key)
+ def mget(self, keys):
+ return [self.get(key) for key in keys]
+
def setex(self, key, expires, value):
self.set(key, value)
self.expire(key, expires)
@@ -144,7 +170,9 @@ class _RedisBackend(RedisBackend):
return _RedisBackend(app=self.app)
def get_consumer(self):
- return self.get_backend().result_consumer
+ consumer = self.get_backend().result_consumer
+ consumer._connection_errors = (ConnectionError,)
+ return consumer
@patch('celery.backends.asynchronous.BaseResultConsumer.on_after_fork')
def test_on_after_fork(self, parent_method):
@@ -194,6 +222,33 @@ def test_drain_events_before_start(self):
# drain_events shouldn't crash when called before start
consumer.drain_events(0.001)
+ def test_consume_from_connection_error(self):
+ consumer = self.get_consumer()
+ consumer.start('initial')
+ consumer._pubsub.subscribe.side_effect = (ConnectionError(), None)
+ consumer.consume_from('some-task')
+ assert consumer._pubsub._subscribed_to == {b'celery-task-meta-initial', b'celery-task-meta-some-task'}
+
+ def test_cancel_for_connection_error(self):
+ consumer = self.get_consumer()
+ consumer.start('initial')
+ consumer._pubsub.unsubscribe.side_effect = ConnectionError()
+ consumer.consume_from('some-task')
+ consumer.cancel_for('some-task')
+ assert consumer._pubsub._subscribed_to == {b'celery-task-meta-initial'}
+
+ @patch('celery.backends.redis.ResultConsumer.cancel_for')
+ @patch('celery.backends.asynchronous.BaseResultConsumer.on_state_change')
+ def test_drain_events_connection_error(self, parent_on_state_change, cancel_for):
+ meta = {'task_id': 'initial', 'status': states.SUCCESS}
+ consumer = self.get_consumer()
+ consumer.start('initial')
+ consumer.backend.set(b'celery-task-meta-initial', json.dumps(meta))
+ consumer._pubsub.get_message.side_effect = ConnectionError()
+ consumer.drain_events()
+ parent_on_state_change.assert_called_with(meta, None)
+ assert consumer._pubsub._subscribed_to == {b'celery-task-meta-initial'}
+
class test_RedisBackend:
def get_backend(self):
| Handle Redis connection errors in result consumer
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
enhancement requests which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Enhancement%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical enhancement to an existing feature.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Issue+Type%3A+Enhancement%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed enhancements.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the same enhancement was already implemented in the
master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
# Brief Summary
<!--
Please include a brief summary of what the enhancement is
and why it is needed.
-->
When there is a connection error with Redis while executing a command, in most cases, the redis client will [discard the connection](https://github.com/andymccurdy/redis-py/blob/272d3139e1c82c2d89551f87d12df7c18d938ea2/redis/client.py#L885-L886), causing the next command sent to Redis to open a new connection. This allows applications to recover from connection errors by simply retrying, a property that is used in Celery, for example when setting keys in the Redis result backend: https://github.com/celery/celery/blob/d0563058f8f47f347ac1b56c44f833f569764482/celery/backends/redis.py#L324-L325
This is not the case however when the [connection to Redis is in a pubsub state](https://github.com/andymccurdy/redis-py/blob/272d3139e1c82c2d89551f87d12df7c18d938ea2/redis/client.py#L3397-L3414). The reason for that is that some state is associated with the connection (namely the list of keys subscribed to). The Redis client doesn't keep track of this state, so it can't possibly restore it when creating a new connection, and leaves the connection handling to the application code.
The Celery Redis result consumer uses pubsub in order to be notified when results are available, but doesn't handle connection errors at all, causing a result consumer to end up in a state where it can't connect to the result backend any more after a single connection error, as any further attempt will reuse the same faulty connection.
The solution would be to add error handling logic to the result consumer, so it will recreate the connection on connection errors and initialize it to the proper state.
# Design
## Architectural Considerations
<!--
If more components other than Celery are involved,
describe them here and the effect it would have on Celery.
-->
None
## Proposed Behavior
<!--
Please describe in detail how this enhancement is going to change the behavior
of an existing feature.
Describe what happens in case of failures as well if applicable.
-->
Add error handling in all places the Redis result consumer sends a Redis command in a pubsub context:
* https://github.com/celery/celery/blob/d0563058f8f47f347ac1b56c44f833f569764482/celery/backends/redis.py#L127
* https://github.com/celery/celery/blob/d0563058f8f47f347ac1b56c44f833f569764482/celery/backends/redis.py#L142
* https://github.com/celery/celery/blob/d0563058f8f47f347ac1b56c44f833f569764482/celery/backends/redis.py#L148
We should catch all Redis connection errors, and call a new method that will reinitialize a pubsub connection in the proper state (discard the current connection from the pool, start the pubsub context, subscribe to all keys in `ResultConsumer.subscribed_to`) using the retry policy. If in `drain_events`, we should try to get new messages again.
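A rough sketch of that error handling (attribute names follow the patch at the top of this entry; the retry policy, logging and meta refresh are omitted here):
```python
from contextlib import contextmanager

@contextmanager
def reconnect_on_error(self):
    try:
        yield
    except self._connection_errors:
        # drop the faulty connection, then rebuild the pubsub state
        self.backend.client.connection_pool.reset()
        self._pubsub = self.backend.client.pubsub(ignore_subscribe_messages=True)
        if self.subscribed_to:
            self._pubsub.subscribe(*self.subscribed_to)
```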
This will take care of most issues with connection errors. I see two remaining issues:
1. Some messages might have been lost (sent between losing the connection and reconnecting). We could read all keys subscribed to right after reconnecting and before starting the pubsub context and call `on_state_change` for each existing key, but this might cause some messages to be delivered twice and I don't know how Celery will react to that.
2. If the connection can't be re-established despite the retries and reaches max-retries, the result consumer will end up with a faulty connection that can't be recovered from. This should be communicated somehow to the user (documentation, logging an explicit error message, custom exception).
## Proposed UI/UX
<!--
Please provide your ideas for the API, CLI options,
configuration key names etc. that will be adjusted for this enhancement.
-->
None
## Diagrams
<!--
Please include any diagrams that might be relevant
to the implementation of this enhancement such as:
* Class Diagrams
* Sequence Diagrams
* Activity Diagrams
You can drag and drop images into the text box to attach them to this issue.
-->
N/A
## Alternatives
<!--
If you have considered any alternative implementations
describe them in detail below.
-->
None
| 2020-01-15T13:11:47 |
|
celery/celery | 5,931 | celery__celery-5931 | [
"5930"
] | 9ee6c3bd31ffeb9ef4feb6c082e9c86022283143 | diff --git a/celery/backends/asynchronous.py b/celery/backends/asynchronous.py
--- a/celery/backends/asynchronous.py
+++ b/celery/backends/asynchronous.py
@@ -200,6 +200,7 @@ def _wait_for_pending(self, result,
return self.result_consumer._wait_for_pending(
result, timeout=timeout,
on_interval=on_interval, on_message=on_message,
+ **kwargs
)
@property
diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -114,7 +114,7 @@ def start(self, initial_task_id, **kwargs):
self._consume_from(initial_task_id)
def on_wait_for_pending(self, result, **kwargs):
- for meta in result._iter_meta():
+ for meta in result._iter_meta(**kwargs):
if meta is not None:
self.on_state_change(meta, None)
diff --git a/celery/result.py b/celery/result.py
--- a/celery/result.py
+++ b/celery/result.py
@@ -834,9 +834,9 @@ def join_native(self, timeout=None, propagate=True,
acc[order_index[task_id]] = value
return acc
- def _iter_meta(self):
+ def _iter_meta(self, **kwargs):
return (meta for _, meta in self.backend.get_many(
- {r.id for r in self.results}, max_iterations=1,
+ {r.id for r in self.results}, max_iterations=1, **kwargs
))
def _failed_join_report(self):
| GroupResult has minimum result latency of 500ms
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have verified that the issue exists against the `master` branch of Celery. 90fe53f
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [x] I have tried reproducing the issue on more than one workers pool.
- [x] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.4.0 (cliffs) kombu:4.6.7 py:3.7.6
billiard:3.6.1.0 redis:3.3.11
platform -> system:Linux arch:64bit
kernel version:5.4.13-3-MANJARO imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://redis:6379/2
```
</p>
</details>
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
amqp==2.5.2
apipkg==1.5
attrs==19.3.0
backcall==0.1.0
billiard==3.6.1.0
blinker==1.4
boltons==19.3.0
cachetools==4.0.0
celery==4.4.0
certifi==2019.11.28
cffi==1.13.2
chardet==3.0.4
Click==7.0
cryptography==2.8
decorator==4.4.1
defusedxml==0.6.0
dnspython==1.16.0
dotted==0.1.8
elasticsearch==7.1.0
elasticsearch-dsl==7.1.0
execnet==1.7.1
Flask==1.1.1
Flask-Caching==1.4.0
Flask-Cors==3.0.8
Flask-Limiter==1.1.0
Flask-Login==0.4.1
flask-mongoengine==0.9.5
flask-shell-ipython==0.4.1
Flask-WTF==0.14.2
gevent==1.4.0
google-api-core==1.15.0
google-auth==1.10.0
google-cloud-core==1.1.0
google-cloud-pubsub==1.1.0
google-cloud-storage==1.23.0
google-resumable-media==0.5.0
googleapis-common-protos==1.6.0
greenlet==0.4.15
grpc-google-iam-v1==0.12.3
grpcio==1.26.0
httpagentparser==1.9.0
httplib2==0.15.0
idna==2.8
importlib-metadata==1.4.0
ipython==7.10.2
ipython-genutils==0.2.0
itsdangerous==1.1.0
jedi==0.15.2
Jinja2==2.10.3
kombu==4.6.7
libthumbor==1.3.2
limits==1.3
MarkupSafe==1.1.1
marshmallow==3.3.0
mixpanel==4.5.0
mmh3==2.5.1
mongoengine==0.18.2
more-itertools==8.1.0
multidict==4.7.4
ndg-httpsclient==0.5.1
newrelic==5.4.1.134
nexmo==2.4.0
oauth2client==4.1.3
oauthlib==3.1.0
packaging==20.0
parso==0.5.2
pexpect==4.7.0
phonenumbers==8.11.1
pickleshare==0.7.5
Pillow-SIMD==6.0.0.post0
pluggy==0.13.1
prompt-toolkit==3.0.2
protobuf==3.9.0
ptyprocess==0.6.0
pusher==2.1.4
py==1.8.1
py-cpuinfo==5.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybase62==0.4.3
pycparser==2.19
Pygments==2.5.2
PyJWT==1.7.1
pymongo==3.10.0
PyNaCl==1.3.0
pyOpenSSL==19.1.0
pyparsing==2.4.6
pytelegraf==0.3.3
pytest==5.3.2
pytest-benchmark==3.2.3
pytest-forked==1.1.3
pytest-mock==1.13.0
pytest-sugar==0.9.2
pytest-xdist==1.31.0
python-dateutil==2.8.1
python-rapidjson==0.9.1
python3-openid==3.1.0
pytz==2019.3
PyYAML==5.2
redis==3.3.11
requests==2.22.0
requests-oauthlib==1.3.0
rsa==4.0
semantic-version==2.8.3
sentry-sdk==0.13.5
six==1.13.0
social-auth-app-flask==1.0.0
social-auth-core==3.2.0
social-auth-storage-mongoengine==1.0.1
termcolor==1.1.0
traitlets==4.3.3
twilio==6.35.1
urllib3==1.25.7
uWSGI==2.0.18
vine==1.3.0
wcwidth==0.1.8
webargs==5.5.2
Werkzeug==0.15.5
WTForms==2.2.1
yarl==1.4.2
zipp==0.6.0
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
import time
from flask import Flask
from celery import Celery, group
app = Flask(__name__)
celery = Celery('app', broker='redis://redis:6379/1', backend='redis://redis:6379/2')
@celery.task
def debug():
return
@app.route('/', methods={'GET', 'POST'})
def hello_world():
task = group([
debug.si() for i in range(10)
]).apply_async()
start = time.perf_counter()
task.get(timeout=5, interval=0.01)
print('END', (time.perf_counter() - start) * 1000)
return {}
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
Scheduling noop tasks and setting `interval` should make the response time close to the chosen `interval` in an ideal, clean environment.
Example: calling `task.get(interval=0.1)` on 5 noop tasks, I would expect a response in roughly `100ms`.
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
Regardless of the setting of `interval`, the response time is at least 500ms.
The cause is that `interval` is not passed all the way down to `get_many()`, where it defaults to `500ms`, which is where the minimum latency is coming from.
https://github.com/celery/celery/blob/cf829307991da3815e1f7b105e736d13dbc7a325/celery/result.py#L837-L840
https://github.com/celery/celery/blob/dc03b6d342a8008d123c97cb889d19add485f8a2/celery/backends/base.py#L663-L666
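To illustrate the floor, here is a toy polling loop in the spirit of `get_many()` (not the actual celery implementation, just a self-contained demonstration of why the sleep interval bounds the observed latency):
```python
import time

def wait_for(ready, interval=0.5, timeout=5.0):
    waited = 0.0
    while waited < timeout:
        if ready():
            return
        time.sleep(interval)   # with the default interval=0.5 this is the ~500ms floor
        waited += interval
    raise TimeoutError()

start = time.perf_counter()
wait_for(lambda: time.perf_counter() - start > 0.01)  # "task" is done after ~10ms
print('observed latency ~', round(time.perf_counter() - start, 2), 's')  # ~0.5
```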
| 2020-01-22T19:27:02 |
||
celery/celery | 5,952 | celery__celery-5952 | [
"5936"
] | 9ee6c3bd31ffeb9ef4feb6c082e9c86022283143 | diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -185,6 +185,8 @@ def __init__(self, host=None, port=None, db=None, password=None,
socket_timeout = _get('redis_socket_timeout')
socket_connect_timeout = _get('redis_socket_connect_timeout')
+ retry_on_timeout = _get('redis_retry_on_timeout')
+ socket_keepalive = _get('redis_socket_keepalive')
self.connparams = {
'host': _get('redis_host') or 'localhost',
@@ -193,6 +195,8 @@ def __init__(self, host=None, port=None, db=None, password=None,
'password': _get('redis_password'),
'max_connections': self.max_connections,
'socket_timeout': socket_timeout and float(socket_timeout),
+ 'retry_on_timeout': retry_on_timeout or False,
+ 'socket_keepalive': socket_keepalive or False,
'socket_connect_timeout':
socket_connect_timeout and float(socket_connect_timeout),
}
| Extend config of Redis backend
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
feature requests which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Issue+Type%3A+Feature+Request%22+)
for similar or identical feature requests.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?utf8=%E2%9C%93&q=is%3Apr+label%3A%22PR+Type%3A+Feature%22+)
for existing proposed implementations of this feature.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the same feature was already implemented in the
master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
# Description
I was trying to change the config of the Redis backend connections (not the Kombu _broker_transport_options_).
This is not a problem for `socket_timeout` and `socket_connect_timeout` (I've set `CELERY_REDIS_SOCKET_TIMEOUT` and `CELERY_REDIS_SOCKET_CONNECT_TIMEOUT`), but it is a problem to change `socket_keepalive` and `retry_on_timeout`.
I think this can be helpful - sometimes I catch a `TimeoutError` from Beat on the `'SUBSCRIBE'` event.
# Suggestions
I propose adding `socket_keepalive` and `retry_on_timeout` to the connparams of `RedisBackend`:
https://github.com/celery/celery/blob/240ef1f64c8340bfffc31359f842ea4a6c8c493a/celery/backends/redis.py#L185-L194
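With the patch at the top of this entry applied, usage could look something like the following (the setting names come from that patch; the URL and values are placeholders):
```python
from celery import Celery

app = Celery('proj', backend='redis://localhost:6379/0')  # placeholder backend URL
app.conf.update(
    redis_socket_timeout=5.0,
    redis_socket_connect_timeout=3.0,
    redis_retry_on_timeout=True,    # new: retry a command once after a socket timeout
    redis_socket_keepalive=True,    # new: enable TCP keepalive on backend connections
)
```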
There are all sorts of Redis connection errors that could cause problems with Celery, and we deal with them on a daily basis... The most typical ones are timeout errors. `retry_on_timeout`, as far as I know, retries only once, so it is almost useless...
> There are all sorts of Redis connection errors that could cause problems with Celery, and we deal with them on a daily basis... The most typical ones are timeout errors. `retry_on_timeout`, as far as I know, retries only once, so it is almost useless...
Sometimes I catch a TimeoutError at the start of a periodic task (on the "SUBSCRIBE" event). This is a rare and important task, and I think this retry can be helpful:
https://github.com/andymccurdy/redis-py/blob/ff69f0d77284643909462ee6d1e37233c6677672/redis/client.py#L877-L893
Yep, it will definitely help in this case. | 2020-02-04T10:22:37 |
|
celery/celery | 5,984 | celery__celery-5984 | [
"5947"
] | 6892beb33b4c6950d2fd28bf633ff320d972afe5 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -1366,6 +1366,8 @@ def _traverse_tasks(self, tasks, value=None):
task = stack.popleft()
if isinstance(task, group):
stack.extend(task.tasks)
+ elif isinstance(task, _chain) and isinstance(task.tasks[-1], group):
+ stack.extend(task.tasks[-1].tasks)
else:
yield task if value is None else value
| diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -742,6 +742,13 @@ def test_app_fallback_to_current(self):
x = chord([t1], body=t1)
assert x.app is current_app
+ def test_chord_size_with_groups(self):
+ x = chord([
+ self.add.s(2, 2) | group([self.add.si(2, 2), self.add.si(2, 2)]),
+ self.add.s(2, 2) | group([self.add.si(2, 2), self.add.si(2, 2)]),
+ ], body=self.add.si(2, 2))
+ assert x.__length_hint__() == 4
+
def test_set_immutable(self):
x = chord([Mock(name='t1'), Mock(name='t2')], app=self.app)
x.set_immutable(True)
| chord of chains with groups: body duplication and invalid task execution order
# Checklist
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: 4.4.0
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
software -> celery:4.4.0 (cliffs) kombu:4.6.7 py:3.7.3
billiard:3.6.1.0 py-amqp:2.5.2
platform -> system:Linux arch:64bit, ELF
kernel version:5.0.0-37-generic imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://redis/
CELERY_BROKER_URL: 'amqp://guest:********@rabbit:5672//'
CELERY_RESULT_BACKEND: 'redis://redis/'
CELERY_TASK_SERIALIZER: 'json'
is_overridden: <bound method Settings.is_overridden of <Settings "app.settings">>
beat_schedule: {}
task_routes: {
'app.tasks.*': {'queue': 'main'}}
```
</p>
</details>
# Steps to Reproduce
1. Start celery worker with `--concurrency 8` and `-O fair` using code from test case below
2. Call a broken task via `celery -A app.celery_app call app.tasks.bug.task`
3. Call a correct task via `celery -A app.celery_app call app.tasks.bug.task_correct`
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
aiohttp==3.6.2
amqp==2.5.2
asgiref==3.2.3
async-timeout==3.0.1
attrs==19.3.0
billiard==3.6.1.0
celery==4.4.0
certifi==2019.9.11
chardet==3.0.4
curlify==2.2.1
decorator==4.4.1
defusedxml==0.6.0
Django==3.0.2
django-braces==1.13.0
django-cors-headers==3.1.1
django-filter==2.2.0
django-oauth-toolkit==1.2.0
django-rest-framework-social-oauth2==1.1.0
django-templated-mail==1.1.1
djangorestframework==3.11.0
djoser==2.0.3
drf-nested-routers==0.91
facebook-business==5.0.0
ffmpeg-python==0.2.0
future==0.18.1
idna==2.8
ImageHash==4.0
imageio==2.6.1
imageio-ffmpeg==0.3.0
importlib-metadata==1.3.0
kombu==4.6.7
more-itertools==8.0.2
moviepy==1.0.1
multidict==4.5.2
numpy==1.17.4
oauthlib==3.1.0
Pillow==6.2.0
proglog==0.1.9
psycopg2-binary==2.8.3
PyJWT==1.7.1
pymongo==3.9.0
pyslack==0.5.0
python3-openid==3.1.0
pytz==2019.2
PyWavelets==1.1.1
redis==3.3.8
requests==2.22.0
requests-oauthlib==1.2.0
rest-social-auth==3.0.0
scipy==1.3.2
sentry-sdk==0.13.5
six==1.12.0
slackclient==2.3.0
social-auth-app-django==3.1.0
social-auth-core==3.2.0
sqlparse==0.3.0
tqdm==4.39.0
urllib3==1.25.6
vine==1.3.0
yarl==1.3.0
zipp==0.6.0
```
</p>
</details>
### Other Dependencies
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
from time import sleep
from celery import chord, group
from app.celery_app import app
@app.task
def task():
chord([
a.s(1) | group([b.si(), b.si()]),
a.s(3) | group([b.si(), b.si()]),
])(c.si())
@app.task
def task_correct():
chord([
a.s(1) | group([b.si(), b.si()]) | dummy.si(),
a.s(3) | group([b.si(), b.si()]) | dummy.si(),
])(c.si())
@app.task
def dummy():
pass
@app.task
def a(delay):
sleep(delay)
@app.task
def b():
pass
@app.task
def c():
pass
```
</p>
</details>
# Expected Behavior
I expect the tasks to complete in the following order:
`A B B A B B C`
# Actual Behavior
Task C gets duplicated:
```
[2020-01-31 13:48:40,765: INFO/MainProcess] Received task: app.tasks.bug.task[9d35ec5f-a268-4db9-9068-1e27fe64cef9]
[2020-01-31 13:48:40,817: INFO/MainProcess] Received task: app.tasks.bug.a[b2833570-bca5-4875-ab6d-e3ec339f36d1]
[2020-01-31 13:48:40,824: INFO/MainProcess] Received task: app.tasks.bug.a[3d49f0d7-6b1a-4937-ac1c-29bac230e533]
[2020-01-31 13:48:40,828: INFO/ForkPoolWorker-8] Task app.tasks.bug.task[9d35ec5f-a268-4db9-9068-1e27fe64cef9] succeeded in 0.06039188499562442s: None
[2020-01-31 13:48:41,864: INFO/MainProcess] Received task: app.tasks.bug.b[796bcd9f-f548-4282-9f6d-16ddb3ddc29b]
[2020-01-31 13:48:41,867: INFO/ForkPoolWorker-8] Task app.tasks.bug.b[796bcd9f-f548-4282-9f6d-16ddb3ddc29b] succeeded in 0.001961939036846161s: None
[2020-01-31 13:48:41,869: INFO/MainProcess] Received task: app.tasks.bug.b[eacaa3ce-cc71-4daf-8e62-8d24f9322c7b]
[2020-01-31 13:48:41,871: INFO/ForkPoolWorker-9] Task app.tasks.bug.a[b2833570-bca5-4875-ab6d-e3ec339f36d1] succeeded in 1.0521306470036507s: None
[2020-01-31 13:48:41,884: INFO/ForkPoolWorker-8] Task app.tasks.bug.b[eacaa3ce-cc71-4daf-8e62-8d24f9322c7b] succeeded in 0.013368424959480762s: None
[2020-01-31 13:48:41,884: INFO/MainProcess] Received task: app.tasks.bug.c[8e50636b-a440-454e-a6fe-57bb7595b82f]
[2020-01-31 13:48:41,886: INFO/ForkPoolWorker-8] Task app.tasks.bug.c[8e50636b-a440-454e-a6fe-57bb7595b82f] succeeded in 0.0010153759503737092s: None
[2020-01-31 13:48:43,880: INFO/MainProcess] Received task: app.tasks.bug.b[07eb9c18-1303-4517-b95b-d555de1d6315]
[2020-01-31 13:48:43,884: INFO/ForkPoolWorker-8] Task app.tasks.bug.b[07eb9c18-1303-4517-b95b-d555de1d6315] succeeded in 0.0018030810169875622s: None
[2020-01-31 13:48:43,887: INFO/MainProcess] Received task: app.tasks.bug.b[7ebaf58a-28f5-4170-86fb-61c3b5ee1909]
[2020-01-31 13:48:43,887: INFO/ForkPoolWorker-2] Task app.tasks.bug.a[3d49f0d7-6b1a-4937-ac1c-29bac230e533] succeeded in 3.0580567660508677s: None
[2020-01-31 13:48:43,892: INFO/ForkPoolWorker-8] Task app.tasks.bug.b[7ebaf58a-28f5-4170-86fb-61c3b5ee1909] succeeded in 0.004260396934114397s: None
[2020-01-31 13:48:43,893: INFO/MainProcess] Received task: app.tasks.bug.c[8e50636b-a440-454e-a6fe-57bb7595b82f]
[2020-01-31 13:48:43,896: INFO/ForkPoolWorker-8] Task app.tasks.bug.c[8e50636b-a440-454e-a6fe-57bb7595b82f] succeeded in 0.000999139971099794s: None
```
Actual execution order is weird, and task C is executed twice.
But when you add a dummy task to the end of each chain, execution order becomes correct (as seen in `task_correct` task):
```
[2020-01-31 15:13:00,867: INFO/MainProcess] Received task: app.tasks.bug.task_correct[d9dd4fb3-8df9-4c39-8b3d-e8892f9d6fef]
[2020-01-31 15:13:00,928: INFO/MainProcess] Received task: app.tasks.bug.a[e7eb341f-3c10-4bb0-9dee-2cfa38354005]
[2020-01-31 15:13:00,939: INFO/MainProcess] Received task: app.tasks.bug.a[c13967b3-b53a-4005-9249-dffa4b6122af]
[2020-01-31 15:13:00,941: INFO/ForkPoolWorker-8] Task app.tasks.bug.task_correct[d9dd4fb3-8df9-4c39-8b3d-e8892f9d6fef] succeeded in 0.07147384795825928s: None
[2020-01-31 15:13:01,974: INFO/MainProcess] Received task: app.tasks.bug.b[88ceb430-5056-40a1-ae82-cfd95d27845e]
[2020-01-31 15:13:01,977: INFO/MainProcess] Received task: app.tasks.bug.b[3a354338-edb7-447b-9d40-855e9e386531]
[2020-01-31 15:13:01,977: INFO/ForkPoolWorker-9] Task app.tasks.bug.a[e7eb341f-3c10-4bb0-9dee-2cfa38354005] succeeded in 1.0470000900095329s: None
[2020-01-31 15:13:01,978: INFO/ForkPoolWorker-8] Task app.tasks.bug.b[88ceb430-5056-40a1-ae82-cfd95d27845e] succeeded in 0.0024882910074666142s: None
[2020-01-31 15:13:02,008: INFO/MainProcess] Received task: app.tasks.bug.dummy[2b9094d8-fe73-43b3-9a2a-c5ad756b3e22]
[2020-01-31 15:13:02,009: INFO/ForkPoolWorker-3] Task app.tasks.bug.b[3a354338-edb7-447b-9d40-855e9e386531] succeeded in 0.02914690098259598s: None
[2020-01-31 15:13:02,013: INFO/ForkPoolWorker-8] Task app.tasks.bug.dummy[2b9094d8-fe73-43b3-9a2a-c5ad756b3e22] succeeded in 0.002397596021182835s: None
[2020-01-31 15:13:03,989: INFO/MainProcess] Received task: app.tasks.bug.b[fafd28c6-55d0-41c5-a035-1d4da0d6067b]
[2020-01-31 15:13:03,992: INFO/MainProcess] Received task: app.tasks.bug.b[ea8c9c21-f860-4268-b44c-6faa8ba66dc2]
[2020-01-31 15:13:03,993: INFO/ForkPoolWorker-8] Task app.tasks.bug.b[fafd28c6-55d0-41c5-a035-1d4da0d6067b] succeeded in 0.0024769710144028068s: None
[2020-01-31 15:13:03,994: INFO/ForkPoolWorker-2] Task app.tasks.bug.a[c13967b3-b53a-4005-9249-dffa4b6122af] succeeded in 3.050520417978987s: None
[2020-01-31 15:13:04,001: INFO/ForkPoolWorker-9] Task app.tasks.bug.b[ea8c9c21-f860-4268-b44c-6faa8ba66dc2] succeeded in 0.006369335926137865s: None
[2020-01-31 15:13:04,001: INFO/MainProcess] Received task: app.tasks.bug.dummy[998aa572-9010-4991-856a-191cf1741680]
[2020-01-31 15:13:04,021: INFO/MainProcess] Received task: app.tasks.bug.c[976f5323-1505-41c6-ad01-e0e720c4b5c7]
[2020-01-31 15:13:04,021: INFO/ForkPoolWorker-8] Task app.tasks.bug.dummy[998aa572-9010-4991-856a-191cf1741680] succeeded in 0.018995660939253867s: None
[2020-01-31 15:13:04,023: INFO/ForkPoolWorker-9] Task app.tasks.bug.c[976f5323-1505-41c6-ad01-e0e720c4b5c7] succeeded in 0.0012671550503000617s: None
```
I've figured out what exactly is happening. After a task in a group finishes (`b` in my example) it calls `on_chord_part_return` and pushes the task result onto a redis list. When the redis list size becomes equal to `chord_size`, the chord callback is triggered. But `chord_size` in my example is 2, because the header consists of two chains, so after two `b` tasks finish the chord callback is triggered. And after the callback task has been queued, the redis list is deleted. The other two `b` tasks then trigger this process again, and the callback is called twice.
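A toy, self-contained model of that counting difference (plain lists stand in for groups and tuples for chains here - these are not the real celery classes):
```python
from collections import deque

def chord_size(header, descend_into_trailing_group=True):
    count, stack = 0, deque(header)
    while stack:
        node = stack.popleft()
        if isinstance(node, list):                      # "group": count its members
            stack.extend(node)
        elif (descend_into_trailing_group
              and isinstance(node, tuple) and isinstance(node[-1], list)):
            stack.extend(node[-1])                      # "chain" ending in a "group" (the fix)
        else:
            count += 1
    return count

# header from the report: two chains, each ending in a group of two b() tasks
header = [('a', ['b', 'b']), ('a', ['b', 'b'])]
print(chord_size(header, descend_into_trailing_group=False))  # 2 -> body fires too early
print(chord_size(header))                                      # 4 -> correct count
```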
This problem can be fixed by adding two lines to the `chord._traverse_tasks` method:
```diff
def _traverse_tasks(self, tasks, value=None):
stack = deque(tasks)
while stack:
task = stack.popleft()
if isinstance(task, group):
stack.extend(task.tasks)
+ elif isinstance(task, _chain) and isinstance(task.tasks[-1], group):
+ stack.extend(task.tasks[-1].tasks)
else:
yield task if value is None else value
```
With this fix `chord_size` becomes 4 in my example and everything runs correctly. I'm not familiar enough with Celery internals to tell whether this is the right way to fix it.
It seems that the fix does not break any existing tests, but there were some failed tests even without the fix, so I'm not sure about correctness.
feel free to come up with a PR :) and sorry for the late response, I was on vacation :+1: | 2020-03-01T17:45:58 |
celery/celery | 5,997 | celery__celery-5997 | [
"5996"
] | 78f864e69ffdb736817ba389454971a7e38629fb | diff --git a/celery/app/control.py b/celery/app/control.py
--- a/celery/app/control.py
+++ b/celery/app/control.py
@@ -217,13 +217,15 @@ def election(self, id, topic, action=None, connection=None):
def revoke(self, task_id, destination=None, terminate=False,
signal=TERM_SIGNAME, **kwargs):
- """Tell all (or specific) workers to revoke a task by id.
+ """Tell all (or specific) workers to revoke a task by id
+ (or list of ids).
If a task is revoked, the workers will ignore the task and
not execute it after all.
Arguments:
- task_id (str): Id of the task to revoke.
+ task_id (Union(str, list)): Id of the task to revoke
+ (or list of ids).
terminate (bool): Also terminate the process currently working
on the task (if any).
signal (str): Name of signal to send to process if terminate.
@@ -240,7 +242,8 @@ def revoke(self, task_id, destination=None, terminate=False,
def terminate(self, task_id,
destination=None, signal=TERM_SIGNAME, **kwargs):
- """Tell all (or specific) workers to terminate a task by id.
+ """Tell all (or specific) workers to terminate a task by id
+ (or list of ids).
See Also:
This is just a shortcut to :meth:`revoke` with the terminate
| Control.revoke can accept a list of task ids
# Checklist
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
#### Related Issues
- None
#### Possible Duplicates
- None
# Description
The existing documentation does not show the option to pass a list of task ids.
# Suggestions
Update the document
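For example, the documentation could show something like this sketch (the task ids are made-up placeholders):

```python
# Sketch only: the task ids below are made-up placeholders.
from celery import Celery

app = Celery('proj', broker='amqp://')

# Revoke a single task by id...
app.control.revoke('d9078da5-9915-40a0-bfa1-392c7bde42ed')

# ...or several at once by passing a list, optionally terminating them.
app.control.revoke(
    ['d9078da5-9915-40a0-bfa1-392c7bde42ed',
     '74f8ab4f-dd7a-421f-87a6-41dbfd08f86f'],
    terminate=True,
)
```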
| 2020-03-09T19:26:27 |
||
celery/celery | 6,000 | celery__celery-6000 | [
"5994"
] | 78d04b3758f882127c9a21e6cc5e6c1f4820927c | diff --git a/celery/backends/redis.py b/celery/backends/redis.py
--- a/celery/backends/redis.py
+++ b/celery/backends/redis.py
@@ -232,11 +232,14 @@ def __init__(self, host=None, port=None, db=None, password=None,
'max_connections': self.max_connections,
'socket_timeout': socket_timeout and float(socket_timeout),
'retry_on_timeout': retry_on_timeout or False,
- 'socket_keepalive': socket_keepalive or False,
'socket_connect_timeout':
socket_connect_timeout and float(socket_connect_timeout),
}
+ # absent in redis.connection.UnixDomainSocketConnection
+ if socket_keepalive:
+ self.connparams['socket_keepalive'] = socket_keepalive
+
# "redis_backend_use_ssl" must be a dict with the keys:
# 'ssl_cert_reqs', 'ssl_ca_certs', 'ssl_certfile', 'ssl_keyfile'
# (the same as "broker_use_ssl")
| diff --git a/t/unit/backends/test_redis.py b/t/unit/backends/test_redis.py
--- a/t/unit/backends/test_redis.py
+++ b/t/unit/backends/test_redis.py
@@ -324,6 +324,7 @@ def test_socket_url(self):
assert 'port' not in x.connparams
assert x.connparams['socket_timeout'] == 30.0
assert 'socket_connect_timeout' not in x.connparams
+ assert 'socket_keepalive' not in x.connparams
assert x.connparams['db'] == 3
@skip.unless_module('redis')
| TypeError in make_connection, unexpected keyword argument 'socket_keepalive'
# Checklist
- [ ] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [ ] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
#### Related Issues
- #5952
- #2903
#### Possible Duplicates
- None
## Environment & Settings
**Celery version**: 4.4.1
# Steps to Reproduce
## Required Dependencies
### Python Packages
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
redis==3.4.1
billiard==3.6.3.0
celery==4.4.1
kombu==4.6.8
```
</p>
</details>
### Other Dependencies
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<details>
<p>
```python
```
</p>
</details>
# Expected Behavior
Celery just works
# Actual Behavior
After upgrading Celery from 4.4.0 to 4.4.1 I suddenly see `TypeError: __init__() got an unexpected keyword argument 'socket_keepalive'`.
This seems to be caused by PR #5952, which adds this kwarg for Redis connections. However, not all Redis connection constructors take the same arguments (e.g. `UnixDomainSocketConnection` doesn't take `socket_keepalive`). This seems to be happened before as noted in issue #2903.
Everything works fine if I downgrade to 4.4.0 again.
| can you check the commit log after 4.4.1 release? this might have already resolved in master?
I did check the commit log (I also checked that option in the checklist). None of the (now) 4 commits since the release of 4.4.1 seem related to this issue.
I'm unable to test against master as the only place I can test this reliably is a production server. I was hoping Celery's tests would catch an issue like this, but it seems that the Travis pipeline is not running the integration tests.
ops sorry to know that! thanks for letting us know! will try to push another minor release ASAP | 2020-03-12T11:15:12 |
celery/celery | 6,020 | celery__celery-6020 | [
"6019"
] | 6957f7de13fa6925508a4c9b6e823eb11d88496c | diff --git a/celery/backends/database/session.py b/celery/backends/database/session.py
--- a/celery/backends/database/session.py
+++ b/celery/backends/database/session.py
@@ -39,7 +39,9 @@ def get_engine(self, dburi, **kwargs):
engine = self._engines[dburi] = create_engine(dburi, **kwargs)
return engine
else:
- return create_engine(dburi, poolclass=NullPool)
+ kwargs = dict([(k, v) for k, v in kwargs.items() if
+ not k.startswith('pool')])
+ return create_engine(dburi, poolclass=NullPool, **kwargs)
def create_session(self, dburi, short_lived_sessions=False, **kwargs):
engine = self.get_engine(dburi, **kwargs)
| diff --git a/t/unit/backends/test_database.py b/t/unit/backends/test_database.py
--- a/t/unit/backends/test_database.py
+++ b/t/unit/backends/test_database.py
@@ -317,6 +317,14 @@ def test_get_engine_forked(self, create_engine):
engine2 = s.get_engine('dburi', foo=1)
assert engine2 is engine
+ @patch('celery.backends.database.session.create_engine')
+ def test_get_engine_kwargs(self, create_engine):
+ s = SessionManager()
+ engine = s.get_engine('dbur', foo=1, pool_size=5)
+ assert engine is create_engine()
+ engine2 = s.get_engine('dburi', foo=1)
+ assert engine2 is engine
+
@patch('celery.backends.database.session.sessionmaker')
def test_create_session_forked(self, sessionmaker):
s = SessionManager()
| Unable to use mysql SSL parameters in create_engine()
PR for proposed fix to this issue: https://github.com/celery/celery/pull/6020
# Checklist
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] I have included the contents of ``pip freeze`` in the issue.
- [x] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [x] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [x] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
https://github.com/celery/celery/commit/94dae1b899aae6ae2ca333773fddbc6dd603213c
This PR was made to address the following issue, which has resulted in the issue I am having now. https://github.com/celery/celery/issues/1930
#### Related Issues
https://github.com/celery/celery/issues/1930
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: celery>=4.0.0 (using it in Airflow)
</p>
</details>
# Steps to Reproduce
(see Minimally Reproducible Test Case for step by step commands. This contains information leading to the issue and a proposed fix)
In Airflow, you can set celery configs. I was setting up cloudsql to use a private IP instead of a proxy. Currently, we use mysql as the `results_backend`. Changing the host address from local host to the private IP caused some errors, as expected.
```
OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.x.x.xxx' (using password: YES)")
```
In order to use the private IP, I need to use the SSL cert, key, and ca. I confirmed that by logging into the Airflow worker and scheduler pods that my url and engine arg params worked.
```
from airflow.models import DagRun
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
e = create_engine({AIRFLOW__CELERY__SQL_ALCHEMY_CONN},connect_args= {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}})
s = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=e))
s.query(DagRun).all()
```
This worked fine, so I know that the my ssl certs are accessible, the engine can be created, and a session used. Non-celery mysql connections no longer gave an error.
The Celery documentation (https://docs.celeryproject.org/en/stable/userguide/configuration.html#conf-database-result-backend) outlines how to add engine args to via `database_engine_options`. Therefore, I added
```
'database_engine_options': {
'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}
```
However, I still get the same error.
```
OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.x.x.xxx' (using password: YES)")
```
Additionally, I get logs in the scheduler like the following:
```
{{__init__.py:56}} WARNING - Failed operation _get_task_meta_for. Retrying 1 more times.
68918-Traceback (most recent call last):
68919- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py", line 51, in _inner
68920- return fun(*args, **kwargs)
68921- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py", line 154, in _get_task_meta_for
68922: session = self.ResultSession()
68923: File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/__init__.py", line 113, in ResultSession
68924- **self.engine_options)
68925- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/session.py", line 59, in session_factory
68926- self.prepare_models(engine)
68927- File "/usr/local/lib/python2.7/dist-packages/celery/backends/database/session.py", line 54, in prepare_models
68928- ResultModelBase.metadata.create_all(engine)
```
After digging through the code with @dangermike, we noticed that `get_engine` will not use the kwargs passed to it unless it has been forked.(https://github.com/celery/celery/blob/master/celery/backends/database/session.py#L34) Therefore, the SSL params will not be passed in our case. The only place that self.forked = True is after the fork cleanup session. This used to not be the case (https://github.com/celery/celery/commit/94dae1b899aae6ae2ca333773fddbc6dd603213c), but after an issue was made about passing pool_size (https://github.com/celery/celery/issues/1930), `**kwargs` were taken out of create_engine() entirely.
Possibly something like the following would allow for kwargs to be passed in, while still addressing the pool params issue.
```
class SessionManager(object):
# ...
def get_engine(self, dburi, **kwargs):
if self.forked:
try:
return self._engines[dburi]
except KeyError:
engine = self._engines[dburi] = create_engine(dburi, **kwargs)
return engine
else:
kwargs = dict([(k, v) for k, v in kwargs.items() if not k.startswith('pool')])
return create_engine(dburi, poolclass=NullPool, **kwargs)
```
where `kwargs = dict([(k, v) for k, v in kwargs.items() if not k.startswith('pool')])` omits any pool args while keeping the rest.
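A quick sketch of how the engine options would then flow through (the database URI and SSL file paths are placeholders):

```python
# Sketch only; the URI and SSL file paths are placeholders.
from celery.backends.database.session import SessionManager

session_manager = SessionManager()
engine = session_manager.get_engine(
    'mysql://airflow:secret@10.0.0.10/airflow',
    pool_size=5,          # pool* options are dropped for the NullPool engine
    connect_args={'ssl': {'ca': '/path/to/server-ca',
                          'cert': '/path/to/client-cert',
                          'key': '/path/to/client-key'}},
)
# connect_args (and other non-pool kwargs) now reach create_engine(), so the
# SSL settings from database_engine_options are actually used.
```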
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: >=2.7
* **Minimal Celery Version**: >=4.0.0
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
Used Airflow
### Other Dependencies
N/A
## Minimally Reproducible Test Case
In a python shell,
get the url with a private mysql IP to make result_backend, giving something like `db+mysql://airflow:***@10.x.xx.xx/airflow`
and the celery config
```
celery_configuration =
{'broker_transport_options': {'visibility_timeout': 21600},
'result_serializer': 'pickle',
'task_acks_late': True,
'database_engine_options': { 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}},
'task_default_queue': 'default',
'worker_concurrency': 32,
'worker_prefetch_multiplier': 1,
'event_serializer': 'json',
'accept_content': ['json', 'pickle'],
'broker_url': 'redis://{URL}/1',
'result_backend': 'db+mysql://airflow:***@10.x.xx.xx/airflow',
'task_default_exchange': 'default'}
```
the line most important here is:
` 'database_engine_options': { 'connect_args': {'ssl': {'ca': '/path-to-mysql-sslcert/server-ca', 'cert': '/path-to-mysql-sslcert/client-cert', 'key': '/path-to-mysql-sslcert/client-key'}}}`
then try to connect to result_backend by creating app.
```
app = Celery(celery_app_name=airflow.executors.celery_executor,
config_source=celery_configuration)
```
create a database backend
```
dbbe = database.DatabaseBackend(url={results_backend url without the 'db+' in the beginning}, engine_options=celery_configuration['database_engine_options'], app=app)
```
and you will get the error again
```
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.xx.xx.xxx' (using password: YES)")
(Background on this error at: http://sqlalche.me/e/e3q8)
```
# Expected Behavior
It seems like the expected behavior here would be for the connection to be successful and use the SSL certs in the **kwargs passed into `get_engine`.
# Actual Behavior
Since self.fork is not True, and will not be True, create_engine is made by:
```
return create_engine(dburi, poolclass=NullPool)
```
since the SSL certs are not included, an error is returned and the connection is _not_ successful.
```
sqlalchemy.exc.OperationalError: (_mysql_exceptions.OperationalError) (1045, "Access denied for user 'airflow'@'10.xx.xx.xxx' (using password: YES)")
(Background on this error at: http://sqlalche.me/e/e3q8)
```
| I am working on a PR for the line I proposed to change. | 2020-04-01T22:56:00 |
celery/celery | 6,059 | celery__celery-6059 | [
"5973"
] | dd28a0fdf620f6ba177264cdb786068cfa5db4f3 | diff --git a/celery/canvas.py b/celery/canvas.py
--- a/celery/canvas.py
+++ b/celery/canvas.py
@@ -408,8 +408,12 @@ def __or__(self, other):
other = maybe_unroll_group(other)
if isinstance(self, _chain):
# chain | group() -> chain
+ tasks = self.unchain_tasks()
+ if not tasks:
+ # If the chain is empty, return the group
+ return other
return _chain(seq_concat_item(
- self.unchain_tasks(), other), app=self._app)
+ tasks, other), app=self._app)
# task | group() -> chain
return _chain(self, other, app=self.app)
@@ -622,7 +626,7 @@ def clone(self, *args, **kwargs):
return signature
def unchain_tasks(self):
- # Clone chain's tasks assigning sugnatures from link_error
+ # Clone chain's tasks assigning signatures from link_error
# to each task
tasks = [t.clone() for t in self.tasks]
for sig in self.options.get('link_error', []):
@@ -878,7 +882,9 @@ def __new__(cls, *tasks, **kwargs):
if not kwargs and tasks:
if len(tasks) != 1 or is_list(tasks[0]):
tasks = tasks[0] if len(tasks) == 1 else tasks
- return reduce(operator.or_, tasks)
+ # if is_list(tasks) and len(tasks) == 1:
+ # return super(chain, cls).__new__(cls, tasks, **kwargs)
+ return reduce(operator.or_, tasks, chain())
return super(chain, cls).__new__(cls, *tasks, **kwargs)
| diff --git a/celery/contrib/testing/app.py b/celery/contrib/testing/app.py
--- a/celery/contrib/testing/app.py
+++ b/celery/contrib/testing/app.py
@@ -34,6 +34,7 @@ def __getattr__(self, name):
# in Python 3.8 and above.
if name == '_is_coroutine':
return None
+ print(name)
raise RuntimeError('Test depends on current_app')
diff --git a/t/integration/test_canvas.py b/t/integration/test_canvas.py
--- a/t/integration/test_canvas.py
+++ b/t/integration/test_canvas.py
@@ -123,6 +123,10 @@ def test_group_results_in_chain(self, manager):
res = c()
assert res.get(timeout=TIMEOUT) == [4, 5]
+ def test_chain_of_chain_with_a_single_task(self, manager):
+ sig = signature('any_taskname', queue='any_q')
+ chain([chain(sig)]).apply_async()
+
def test_chain_on_error(self, manager):
from .tasks import ExpectedException
diff --git a/t/unit/tasks/test_canvas.py b/t/unit/tasks/test_canvas.py
--- a/t/unit/tasks/test_canvas.py
+++ b/t/unit/tasks/test_canvas.py
@@ -269,6 +269,10 @@ def test_chunks(self):
class test_chain(CanvasCase):
+ def test_chain_of_chain_with_a_single_task(self):
+ s = self.add.s(1, 1)
+ assert chain([chain(s)]).tasks == list(chain(s).tasks)
+
def test_clone_preserves_state(self):
x = chain(self.add.s(i, i) for i in range(10))
assert x.clone().tasks == x.tasks
| maximum recursion depth exceeded for a canvas in Celery 4.4.0 (cliffs)
<!--
-->
# Checklist
<!--
-->
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first. (No option to post for me)
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [x] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected). - Celery 4.4.0 (cliffs)
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [x] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies. Works in Celery 4.1
## Related Issues and Possible Duplicates
<!--
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version 4.4.0 (cliffs)**:
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
```
</p>
</details>
# Steps to Reproduce
```
sig = signature('any_taskname', queue='any_q')
chain( [ chain( sig ) ] ).apply_async()
```
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: 2.7
* **Minimal Celery Version**: 4.4
* **Minimal Kombu Version**: 4.6.7
* **Minimal Broker Version**: N/A
* **Minimal Result Backend Version**: N/A
* **Minimal OS and/or Kernel Version**: N/A
* **Minimal Broker Client Version**: N/A
* **Minimal Result Backend Client Version**: N/A
### Python Packages
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
```
</p>
</details>
### Other Dependencies
<!--
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
-->
<details>
<p>
```
sig = signature('any_taskname', queue='any_q')
chain( [ chain( sig ) ] ).apply_async()
```
</p>
</details>
# Expected Behavior
It should publish task 'any_taskname' to queue 'any_q'
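For comparison, a sketch of the expected behaviour (mirroring the unit test added in this PR; same placeholder task name and queue):

```python
# Sketch: wrapping a single-task chain in another chain should keep the same
# task list, and apply_async() should publish instead of recursing forever.
from celery import chain, signature

sig = signature('any_taskname', queue='any_q')
assert chain([chain(sig)]).tasks == list(chain(sig).tasks)
```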
# Actual Behavior
Max recursion depth exceeded
```
Traceback (most recent call last):
File "test.py", line 30, in <module>
chain([chain(s2)]).apply_async() # issue
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 642, in apply_async
dict(self.options, **options) if options else self.options))
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 660, in run
task_id, group_id, chord,
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 721, in prepare_steps
task = task.clone(args, kwargs)
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 620, in clone
for sig in signature.kwargs['tasks']
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 1513, in maybe_signature
d = d.clone()
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 620, in clone
for sig in signature.kwargs['tasks']
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 1513, in maybe_signature
d = d.clone()
...
..
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 620, in clone
for sig in signature.kwargs['tasks']
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 1513, in maybe_signature
d = d.clone()
keeps repeating
..
..
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 617, in clone
signature = Signature.clone(self, *args, **kwargs)
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 272, in clone
app=self._app)
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 153, in from_dict
return target_cls.from_dict(d, app=app)
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 599, in from_dict
return _upgrade(d, _chain(tasks, app=app, **d['options']))
File "/bb/bin/dl/celery/4.4/celery/canvas.py", line 602, in __init__
tasks = (regen(tasks[0]) if len(tasks) == 1 and is_list(tasks[0])
File "/bb/bin/dl/celery/4.4/kombu/utils/functional.py", line 256, in is_list
return isinstance(l, iters) and not isinstance(l, scalars or ())
File "/opt/bb/lib/python2.7/abc.py", line 132, in __instancecheck__
if subclass is not None and subclass in cls._abc_cache:
File "/opt/bb/lib/python2.7/_weakrefset.py", line 72, in __contains__
wr = ref(item)
RuntimeError: maximum recursion depth exceeded
```
| I just ran the test case and indeed I see the same error.
It seems like the error occurs when you apply the task. If you don't, everything proceeds correctly.
The problem is in line 881.
https://github.com/celery/celery/blob/01dd66ceb5b9167074c2f291b165055e7377641b/celery/canvas.py#L876-L882
reduce is called on a pair of arguments, of which there are currently none in this test case. | 2020-04-26T09:40:58 |
celery/celery | 6,103 | celery__celery-6103 | [
"5598"
] | 6e091573f2ab0d0989b8d7c26b677c80377c1721 | diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -505,7 +505,7 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
)
ack = self.task.acks_on_failure_or_timeout
if reject:
- requeue = not self.delivery_info.get('redelivered')
+ requeue = True
self.reject(requeue=requeue)
send_failed_event = False
elif ack:
| diff --git a/t/unit/worker/test_request.py b/t/unit/worker/test_request.py
--- a/t/unit/worker/test_request.py
+++ b/t/unit/worker/test_request.py
@@ -653,7 +653,7 @@ def test_on_failure_acks_late_reject_on_worker_lost_enabled(self):
job.delivery_info['redelivered'] = True
job.on_failure(exc_info)
- assert self.mytask.backend.get_status(job.id) == states.FAILURE
+ assert self.mytask.backend.get_status(job.id) == states.PENDING
def test_on_failure_acks_late(self):
job = self.xRequest()
| Document and code are inconsistent about task_reject_on_worker_lost config
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
# Description
<!--
Please describe what's missing or incorrect about our documentation.
Include links and/or screenshots which will aid us to resolve the issue.
-->
In the latest version of the documentation about [task_reject_on_worker_lost](http://docs.celeryproject.org/en/latest/userguide/configuration.html?highlight=task_reject_on_worker_lost), it says `Enabling this can cause message loops`
But actually, enabling this will not cause message loops, tasks only execute twice.Tasks that have been redelivered will not be redelivered again, [source code](https://github.com/celery/celery/blob/master/celery/worker/request.py#L518)
# Suggestions
<!-- Please provide us suggestions for how to fix the documentation -->
If it is a documentation error, it is best to remove the warning from the document.
If the document is ok, the code needs to be modified.
I can help modify the document or code.
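For context, a minimal sketch of the configuration this setting applies to (the broker URL and task body are placeholders):

```python
# Sketch only; broker URL and task body are placeholders.
from celery import Celery

app = Celery('proj', broker='amqp://guest@localhost//')
app.conf.task_acks_late = True
app.conf.task_reject_on_worker_lost = True

@app.task
def fragile():
    # If the worker process running this task is killed, the message is
    # rejected and requeued, so the task is executed again by another worker.
    ...
```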
| feel free to come with a PR? | 2020-05-15T17:58:42 |
celery/celery | 6,134 | celery__celery-6134 | [
"4116",
"4116"
] | 2479f9571e89857bd53c48289b9a243bc3fd5242 | diff --git a/celery/fixups/django.py b/celery/fixups/django.py
--- a/celery/fixups/django.py
+++ b/celery/fixups/django.py
@@ -151,7 +151,7 @@ def on_worker_process_init(self, **kwargs):
self._maybe_close_db_fd(c.connection)
# use the _ version to avoid DB_REUSE preventing the conn.close() call
- self._close_database()
+ self._close_database(force=True)
self.close_cache()
def _maybe_close_db_fd(self, fd):
@@ -180,10 +180,13 @@ def close_database(self, **kwargs):
self._close_database()
self._db_recycles += 1
- def _close_database(self):
+ def _close_database(self, force=False):
for conn in self._db.connections.all():
try:
- conn.close()
+ if force:
+ conn.close()
+ else:
+ conn.close_if_unusable_or_obsolete()
except self.interface_errors:
pass
except self.DatabaseError as exc:
| diff --git a/t/unit/fixups/test_django.py b/t/unit/fixups/test_django.py
--- a/t/unit/fixups/test_django.py
+++ b/t/unit/fixups/test_django.py
@@ -145,7 +145,7 @@ def test_on_worker_process_init(self, patching):
f.on_worker_process_init()
mcf.assert_called_with(conns[1].connection)
f.close_cache.assert_called_with()
- f._close_database.assert_called_with()
+ f._close_database.assert_called_with(force=True)
f.validate_models = Mock(name='validate_models')
patching.setenv('FORKED_BY_MULTIPROCESSING', '1')
@@ -213,13 +213,35 @@ def test__close_database(self):
f._db.connections = Mock() # ConnectionHandler
f._db.connections.all.side_effect = lambda: conns
- f._close_database()
+ f._close_database(force=True)
conns[0].close.assert_called_with()
+ conns[0].close_if_unusable_or_obsolete.assert_not_called()
conns[1].close.assert_called_with()
+ conns[1].close_if_unusable_or_obsolete.assert_not_called()
conns[2].close.assert_called_with()
+ conns[2].close_if_unusable_or_obsolete.assert_not_called()
+
+ for conn in conns:
+ conn.reset_mock()
+
+ f._close_database()
+ conns[0].close.assert_not_called()
+ conns[0].close_if_unusable_or_obsolete.assert_called_with()
+ conns[1].close.assert_not_called()
+ conns[1].close_if_unusable_or_obsolete.assert_called_with()
+ conns[2].close.assert_not_called()
+ conns[2].close_if_unusable_or_obsolete.assert_called_with()
conns[1].close.side_effect = KeyError(
'omg')
+ f._close_database()
+ with pytest.raises(KeyError):
+ f._close_database(force=True)
+
+ conns[1].close.side_effect = None
+ conns[1].close_if_unusable_or_obsolete.side_effect = KeyError(
+ 'omg')
+ f._close_database(force=True)
with pytest.raises(KeyError):
f._close_database()
| Django celery fixup doesn't respect Django settings for PostgreSQL connections
When using the Django-Celery fixup to run background tasks for a Django web service, the tasks in the background do not respect the settings in Django for PostgreSQL (possibly other) connections. Every task will always create a new connection no matter the Django settings. Although it is possible to bypass this with the environment variable CELERY_DB_REUSE_MAX, it is preferred for it to follow the settings given in Django.
## Checklist
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
Celery 4.0.2 with potentially all versions of Django (tested on 1.10.3 and 1.11.2)
- [X] I have verified that the issue exists against the `master` branch of Celery.
This line causes this "issue":
https://github.com/celery/celery/blob/master/celery/fixups/django.py#L186
## Steps to reproduce
Note that these steps require some monitoring service to be used, we have New Relic.
Note also that we use Heroku for this app in question.
1) Have a web facing process with Django that connects to your PostgreSQL database for ORM purposes
2) Have a worker process that also connects to the PostgreSQL for ORM purposes
3) Have the DATABASES['default']['CONN_MAX_AGE'] setting set to anything that isn't 0 (easiest to see with `None` for persistent connections)
4) Make multiple requests to the web portion of Django to cause some ORM activity (easiest to see if it happens on every request)
5) Get multiple tasks to execute on the worker that will cause some ORM activity (easiest to see if it happens on every task)
6) Use your monitoring service (New Relic in our case) to view a breakdown of all of the requests and worker activity. In New Relic you can check this using the transaction tracing; select the endpoint/task that made the db queries and check the breakdown.
## Expected behavior
psycopg2:connect would occur rarely with an average calls per transaction <<< 1
## Actual behavior
psycopg2:connect occurs very rarely with an average calls per transaction of <<< 1 for the web processes.
psycopg2:connect occurs every time with an average calls per transaction of 1 for the worker processes.
## Potential Resolution
With my limited knowledge of Celery's inner workings, it feels like a fairly simple fix that I could make on a PR myself, but I wanted some input before I spend the time setting that all up.
This fix seems to work when monkey patched into the `DjangoWorkerFixup` class.
``` Python
def _close_database(self):
try:
# Use Django's built in method of closing old connections.
# This ensures that the database settings are respected.
self._db.close_old_connections()
except AttributeError:
# Legacy functionality if we can't use the old connections for whatever reason.
for conn in self._db.connections.all():
try:
conn.close()
except self.interface_errors:
pass
except self.DatabaseError as exc:
str_exc = str(exc)
if 'closed' not in str_exc and 'not connected' not in str_exc:
raise
celery.fixups.django.DjangoWorkerFixup._close_database = _close_database
```
| plz proceed with the PR
if it is django-celery package only then that doesn't support celery 4.x | 2020-06-01T10:41:37 |
celery/celery | 6,138 | celery__celery-6138 | [
"6135"
] | 52aef4bf7041ef4b8e42a95e17d87b0a828f97bf | diff --git a/celery/app/base.py b/celery/app/base.py
--- a/celery/app/base.py
+++ b/celery/app/base.py
@@ -23,7 +23,7 @@
_register_app, _set_current_app, _task_stack,
connect_on_app_finalize, get_current_app,
get_current_worker_task, set_default_app)
-from celery.exceptions import AlwaysEagerIgnored, ImproperlyConfigured, Ignore
+from celery.exceptions import AlwaysEagerIgnored, ImproperlyConfigured, Ignore, Retry
from celery.five import (UserDict, bytes_if_py2, python_2_unicode_compatible,
values)
from celery.loaders import get_loader_cls
@@ -492,6 +492,8 @@ def run(*args, **kwargs):
# If Ignore signal occures task shouldn't be retried,
# even if it suits autoretry_for list
raise
+ except Retry:
+ raise
except autoretry_for as exc:
if retry_backoff:
retry_kwargs['countdown'] = \
| diff --git a/t/unit/tasks/test_tasks.py b/t/unit/tasks/test_tasks.py
--- a/t/unit/tasks/test_tasks.py
+++ b/t/unit/tasks/test_tasks.py
@@ -95,7 +95,7 @@ def retry_task_noargs(self, **kwargs):
self.retry_task_noargs = retry_task_noargs
@self.app.task(bind=True, max_retries=3, iterations=0, shared=False)
- def retry_task_without_throw(self, **kwargs):
+ def retry_task_return_without_throw(self, **kwargs):
self.iterations += 1
try:
if self.request.retries >= 3:
@@ -105,7 +105,60 @@ def retry_task_without_throw(self, **kwargs):
except Exception as exc:
return self.retry(exc=exc, throw=False)
- self.retry_task_without_throw = retry_task_without_throw
+ self.retry_task_return_without_throw = retry_task_return_without_throw
+
+ @self.app.task(bind=True, max_retries=3, iterations=0, shared=False)
+ def retry_task_return_with_throw(self, **kwargs):
+ self.iterations += 1
+ try:
+ if self.request.retries >= 3:
+ return 42
+ else:
+ raise Exception("random code exception")
+ except Exception as exc:
+ return self.retry(exc=exc, throw=True)
+
+ self.retry_task_return_with_throw = retry_task_return_with_throw
+
+ @self.app.task(bind=True, max_retries=3, iterations=0, shared=False, autoretry_for=(Exception,))
+ def retry_task_auto_retry_with_single_new_arg(self, ret=None, **kwargs):
+ if ret is None:
+ return self.retry(exc=Exception("I have filled now"), args=["test"], kwargs=kwargs)
+ else:
+ return ret
+
+ self.retry_task_auto_retry_with_single_new_arg = retry_task_auto_retry_with_single_new_arg
+
+ @self.app.task(bind=True, max_retries=3, iterations=0, shared=False)
+ def retry_task_auto_retry_with_new_args(self, ret=None, place_holder=None, **kwargs):
+ if ret is None:
+ return self.retry(args=[place_holder, place_holder], kwargs=kwargs)
+ else:
+ return ret
+
+ self.retry_task_auto_retry_with_new_args = retry_task_auto_retry_with_new_args
+
+ @self.app.task(bind=True, max_retries=3, iterations=0, shared=False, autoretry_for=(Exception,))
+ def retry_task_auto_retry_exception_with_new_args(self, ret=None, place_holder=None, **kwargs):
+ if ret is None:
+ return self.retry(exc=Exception("I have filled"), args=[place_holder, place_holder], kwargs=kwargs)
+ else:
+ return ret
+
+ self.retry_task_auto_retry_exception_with_new_args = retry_task_auto_retry_exception_with_new_args
+
+ @self.app.task(bind=True, max_retries=3, iterations=0, shared=False)
+ def retry_task_raise_without_throw(self, **kwargs):
+ self.iterations += 1
+ try:
+ if self.request.retries >= 3:
+ return 42
+ else:
+ raise Exception("random code exception")
+ except Exception as exc:
+ raise self.retry(exc=exc, throw=False)
+
+ self.retry_task_raise_without_throw = retry_task_raise_without_throw
@self.app.task(bind=True, max_retries=3, iterations=0,
base=MockApplyTask, shared=False)
@@ -365,7 +418,22 @@ def test_retry_kwargs_can_be_empty(self):
self.retry_task_mockapply.pop_request()
def test_retry_without_throw_eager(self):
- assert self.retry_task_without_throw.apply().get() == 42
+ assert self.retry_task_return_without_throw.apply().get() == 42
+
+ def test_raise_without_throw_eager(self):
+ assert self.retry_task_raise_without_throw.apply().get() == 42
+
+ def test_return_with_throw_eager(self):
+ assert self.retry_task_return_with_throw.apply().get() == 42
+
+ def test_eager_retry_with_single_new_params(self):
+ assert self.retry_task_auto_retry_with_single_new_arg.apply().get() == "test"
+
+ def test_eager_retry_with_new_params(self):
+ assert self.retry_task_auto_retry_with_new_args.si(place_holder="test").apply().get() == "test"
+
+ def test_eager_retry_with_autoretry_for_exception(self):
+ assert self.retry_task_auto_retry_exception_with_new_args.si(place_holder="test").apply().get() == "test"
def test_retry_eager_should_return_value(self):
self.retry_task.max_retries = 3
| Retry args change BUG
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [x] This has already been asked to the [discussion group](https://groups.google.com/forum/#!forum/celery-users) first.
- [x] I have read the relevant section in the
[contribution guide](http://docs.celeryproject.org/en/latest/contributing.html#other-bugs)
on reporting bugs.
- [ ] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Bug+Report%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical bug reports.
- [ ] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22PR+Type%3A+Bugfix%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [ ] I have included all related issues and possible duplicate issues
in this issue (If there are none, check this box anyway).
## Mandatory Debugging Information
- [ ] I have included the output of ``celery -A proj report`` in the issue.
(if you are not able to do this, then at least specify the Celery
version affected).
- [x] I have verified that the issue exists against the `master` branch of Celery.
- [ ] I have included the contents of ``pip freeze`` in the issue.
- [ ] I have included all the versions of all the external dependencies required
to reproduce this bug.
## Optional Debugging Information
<!--
Try some of the below if you think they are relevant.
It will help us figure out the scope of the bug and how many users it affects.
-->
- [ ] I have tried reproducing the issue on more than one Python version
and/or implementation.
- [ ] I have tried reproducing the issue on more than one message broker and/or
result backend.
- [ ] I have tried reproducing the issue on more than one version of the message
broker and/or result backend.
- [ ] I have tried reproducing the issue on more than one operating system.
- [ ] I have tried reproducing the issue on more than one workers pool.
- [ ] I have tried reproducing the issue with autoscaling, retries,
ETA/Countdown & rate limits disabled.
- [ ] I have tried reproducing the issue after downgrading
and/or upgrading Celery and its dependencies.
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**: celery==4.4.2
<!-- Include the output of celery -A proj report below -->
<details>
<summary><b><code>celery report</code> Output:</b></summary>
<p>
```
[2020-05-31 23:28:34,434: INFO/MainProcess] Connected to amqp://remote_worker:**@127.0.0.1:5672//
[2020-05-31 23:28:34,453: INFO/MainProcess] mingle: searching for neighbors
[2020-05-31 23:28:35,487: INFO/MainProcess] mingle: all alone
[2020-05-31 23:28:35,528: WARNING/MainProcess] /home/ubuntu/.local/lib/python3.7/site-packages/celery/fixups/django.py:203: UserWarning: Using settings.DEBUG leads to a memory
leak, never use this setting in production environments!
leak, never use this setting in production environments!''')
[2020-05-31 23:28:35,529: INFO/MainProcess] celery@testroom ready.
[2020-05-31 23:28:47,351: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906]
[2020-05-31 23:28:47,689: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:28:47,690: WARNING/ForkPoolWorker-1] retry
[2020-05-31 23:28:47,721: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] ETA:[2020-05-31 23:28:57.692348+00:00]
[2020-05-31 23:28:47,722: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] ETA:[2020-05-31 23:28:57.716321+00:00]
[2020-05-31 23:28:47,777: INFO/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] retry: Retry in 10s: Retry(Retry(...), Exception('i have filled now'), 10)
[2020-05-31 23:28:57,999: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:28:58,000: WARNING/ForkPoolWorker-1] ended
[2020-05-31 23:28:58,062: INFO/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] succeeded in 0.34440315900428686s: None
[2020-05-31 23:28:58,301: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:28:58,302: WARNING/ForkPoolWorker-1] retry
[2020-05-31 23:28:58,304: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] ETA:[2020-05-31 23:29:08.303091+00:00]
[2020-05-31 23:28:58,307: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] ETA:[2020-05-31 23:29:08.306141+00:00]
[2020-05-31 23:28:58,368: INFO/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] retry: Retry in 10s: Retry(Retry(...), Exception('i have filled now'), 10)
[2020-05-31 23:29:08,572: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:29:08,573: WARNING/ForkPoolWorker-1] ended
[2020-05-31 23:29:08,633: INFO/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] succeeded in 0.3256059319974156s: None
[2020-05-31 23:29:08,872: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:29:08,873: WARNING/ForkPoolWorker-1] retry
[2020-05-31 23:29:08,875: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] ETA:[2020-05-31 23:29:18.873799+00:00]
[2020-05-31 23:29:08,880: INFO/MainProcess] Received task: api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] ETA:[2020-05-31 23:29:18.877550+00:00]
[2020-05-31 23:29:08,940: INFO/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] retry: Retry in 10s: Retry(Retry(...), Exception('i have filled now'), 10)
[2020-05-31 23:29:19,144: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:29:19,145: WARNING/ForkPoolWorker-1] ended
[2020-05-31 23:29:19,205: INFO/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] succeeded in 0.326258520995907s: None
[2020-05-31 23:29:19,444: WARNING/ForkPoolWorker-1] started
[2020-05-31 23:29:19,445: WARNING/ForkPoolWorker-1] retry
[2020-05-31 23:29:19,505: ERROR/ForkPoolWorker-1] Task api_v3.tests.execute[e97d93b5-b0e5-4b87-96ab-1aab66119906] raised unexpected: Exception('i have filled now')
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/app/trace.py", line 650, in __protected_call__
return self.run(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/app/base.py", line 500, in run
raise task.retry(exc=exc, **retry_kwargs)
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/app/task.py", line 704, in retry
raise_with_context(exc)
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/app/base.py", line 487, in run
return task._orig_run(*args, **kwargs)
File "/var/www/django_projects/earthalytics-api/api_v3/tests.py", line 26, in execute
self.retry(exc=Exception("i have filled now"), args=[param_a, param_b], kwargs=kwargs)
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/app/task.py", line 704, in retry
raise_with_context(exc)
File "/home/ubuntu/.local/lib/python3.7/site-packages/celery/utils/serialization.py", line 288, in raise_with_context
_raise_with_context(exc, exc_info[1])
File "<string>", line 1, in _raise_with_context
Exception: i have filled now
```
</p>
</details>
# Steps to Reproduce
Make a celery task with a retry changing one parameters.
Set the max_retries and countdown.
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Python Version**: N/A or Unknown
* **Minimal Celery Version**: N/A or Unknown
* **Minimal Kombu Version**: N/A or Unknown
* **Minimal Broker Version**: N/A or Unknown
* **Minimal Result Backend Version**: N/A or Unknown
* **Minimal OS and/or Kernel Version**: N/A or Unknown
* **Minimal Broker Client Version**: N/A or Unknown
* **Minimal Result Backend Client Version**: N/A or Unknown
### Python Packages
<!-- Please fill the contents of pip freeze below -->
<details>
<summary><b><code>pip freeze</code> Output:</b></summary>
<p>
```
```
</p>
</details>
### Other Dependencies
<!--
Please provide system dependencies, configuration files
and other dependency information if applicable
-->
<details>
<p>
N/A
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
<details>
<p>
```python
@app_cluster.task(bind=True, autoretry_for=(Exception,), max_retries=3,
default_retry_delay=10)
def execute(self, param_a, param_b=None, **kwargs):
print("started")
if param_b is None:
param_b = "filled"
print("retry")
self.retry(exc=Exception("i have filled now"), args=[param_a, param_b], kwargs=kwargs)
print("ended")
def test_celery(self):
sig = execute.si("something")
t = sig.delay()
t = 0
```
</p>
</details>
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
I expect the task to be overridden with the updated parameters
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
--> The task goes in retry with origianl parameters.
A new task is made with new parameters and get scheduled.
The "old" task get executed.
The "new" task get executed.
The "old" task goes in retry making a "new" "new task" with updated parameters and the "old" goes scheduled again with original parameters.
| I tried to fix that by myself but code part is pretty weird...a function get called and never return jumping in another code part.
task.py
```
if is_eager:
# if task was executed eagerly using apply(),
# then the retry must also be executed eagerly.
S.apply().get() # This never return
if throw:
raise ret
return ret
```
Anyway, i figured maybe what happens. A signature is called updated and another signature is re-called not updated, making a split in "def retry"(Maybe Retry dosn't get counted as "Safe Exception"?).
EDIT:
Pretty sure that "Retry" launch an exception for a real retry(so a new job get executed updated) BUT Retry is an exception too, so "this job" goes in exception and need to be re-executed.
Here is the guilty:
`autoretry_for=(Exception,)`
OR WORSE:
Retry fail before the real end and can't handle the exception.
Ok, i fixed it.
2 Main error and exception overlap when in eager.
Fix in minutes.
#6137
I saw in master
```
if is_eager:
# if task was executed eagerly using apply(),
# then the retry must also be executed eagerly.
S.apply()
if throw:
raise ret
return ret
```
got changed removing "S.apply()". I can't run the master branch but...works? Cause in my "4.4.2" this one run the task in sync locally when eager.
| 2020-06-01T19:48:30 |
celery/celery | 6,142 | celery__celery-6142 | [
"6136",
"4412"
] | 574b616f0a1570e9a91a2d15e9bdaf9c91b3cac6 | diff --git a/celery/apps/multi.py b/celery/apps/multi.py
--- a/celery/apps/multi.py
+++ b/celery/apps/multi.py
@@ -151,11 +151,11 @@ def _setdefaultopt(self, d, alt, value):
return d[opt]
except KeyError:
pass
- value = os.path.normpath(value)
+ value = d.setdefault(alt[0], os.path.normpath(value))
dir_path = os.path.dirname(value)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
- return d.setdefault(alt[0], value)
+ return value
def _prepare_expander(self):
shortname, hostname = self.name.split('@', 1)
| diff --git a/t/unit/apps/test_multi.py b/t/unit/apps/test_multi.py
--- a/t/unit/apps/test_multi.py
+++ b/t/unit/apps/test_multi.py
@@ -64,7 +64,7 @@ def test_parse(self, gethostname):
'-c:jerry,elaine', '5',
'--loglevel:kramer=DEBUG',
'--flag',
- '--logfile=foo', '-Q', 'bar', 'jerry',
+ '--logfile=/var/log/celery/foo', '-Q', 'bar', 'jerry',
'elaine', 'kramer',
'--', '.disable_rate_limits=1',
])
@@ -86,19 +86,19 @@ def assert_line_in(name, args):
assert_line_in(
'*P*jerry@*S*',
['COMMAND', '-n *P*jerry@*S*', '-Q bar',
- '-c 5', '--flag', '--logfile=foo',
+ '-c 5', '--flag', '--logfile=/var/log/celery/foo',
'-- .disable_rate_limits=1', '*AP*'],
)
assert_line_in(
'*P*elaine@*S*',
['COMMAND', '-n *P*elaine@*S*', '-Q bar',
- '-c 5', '--flag', '--logfile=foo',
+ '-c 5', '--flag', '--logfile=/var/log/celery/foo',
'-- .disable_rate_limits=1', '*AP*'],
)
assert_line_in(
'*P*kramer@*S*',
['COMMAND', '--loglevel=DEBUG', '-n *P*kramer@*S*',
- '-Q bar', '--flag', '--logfile=foo',
+ '-Q bar', '--flag', '--logfile=/var/log/celery/foo',
'-- .disable_rate_limits=1', '*AP*'],
)
expand = nodes[0].expander
@@ -278,6 +278,33 @@ def test_logfile(self):
assert self.node.logfile == self.expander.return_value
self.expander.assert_called_with(os.path.normpath('/var/log/celery/%n%I.log'))
+ @patch('celery.apps.multi.os.path.exists')
+ def test_pidfile_default(self, mock_exists):
+ n = Node.from_kwargs(
+ '[email protected]',
+ )
+ assert n.options['--pidfile'] == '/var/run/celery/%n.pid'
+ mock_exists.assert_any_call('/var/run/celery')
+
+ @patch('celery.apps.multi.os.makedirs')
+ @patch('celery.apps.multi.os.path.exists', return_value=False)
+ def test_pidfile_custom(self, mock_exists, mock_dirs):
+ n = Node.from_kwargs(
+ '[email protected]',
+ pidfile='/var/run/demo/celery/%n.pid'
+ )
+ assert n.options['--pidfile'] == '/var/run/demo/celery/%n.pid'
+
+ try:
+ mock_exists.assert_any_call('/var/run/celery')
+ except AssertionError:
+ pass
+ else:
+ raise AssertionError("Expected exists('/var/run/celery') to not have been called.")
+
+ mock_exists.assert_any_call('/var/run/demo/celery')
+ mock_dirs.assert_any_call('/var/run/demo/celery')
+
class test_Cluster:
| Celery 4.4.3 always tries to create the /var/run/celery directory, even if it's not needed.
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
#6017 Celery Multi creates pid and log files in the wrong directory
## Environment & Settings
<!-- Include the contents of celery --version below -->
**Celery version**:
<!-- Include the output of celery -A proj report below -->
<summary><b><code>celery report</code> Output:</b></summary>
```
# celery report
software -> celery:4.4.3 (cliffs) kombu:4.6.9 py:3.7.7
billiard:3.6.3.0 py-amqp:2.6.0
platform -> system:Linux arch:64bit, ELF
kernel version:4.19.0-8-amd64 imp:CPython
loader -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled
```
# Steps to Reproduce
## Required Dependencies
<!-- Please fill the required dependencies to reproduce this issue -->
* **Minimal Celery Version**: 4.4.3
</p>
</details>
## Minimally Reproducible Test Case
<!--
Please provide a reproducible test case.
Refer to the Reporting Bugs section in our contribution guide.
We prefer submitting test cases in the form of a PR to our integration test suite.
If you can provide one, please mention the PR number below.
If not, please attach the most minimal code example required to reproduce the issue below.
If the test case is too large, please include a link to a gist or a repository below.
-->
```
celery multi start ... --pidfile=/var/run/demo/celeryd-%n.pid
```
# Expected Behavior
<!-- Describe in detail what you expect to happen -->
celery runs
# Actual Behavior
<!--
Describe in detail what actually happened.
Please include a backtrace and surround it with triple backticks (```).
In addition, include the Celery daemon logs, the broker logs,
the result backend logs and system logs below if they will help us debug
the issue.
-->
start failed
```
celery multi v4.4.3 (cliffs)
_annotate_with_default_opts: print options
OrderedDict([('--app', 'service.celery:app'),
('--pidfile', '/var/run/demo/celeryd-%n.pid'),
('--logfile', '/var/log/demo/celeryd-%n%I.log'),
('--loglevel', 'INFO'),
('--workdir', '/var/lib/demo-celery'),
('--events', None),
('--heartbeat-interval', '5'),
('--without-gossip', None),
('--queues', 'high'),
('--concurrency', '1'),
('-n', '[email protected]')])
Traceback (most recent call last):
File "/var/www/misc/ve-2006011156/bin/celery", line 8, in <module>
sys.exit(main())
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/__main__.py", line 16, in main
_main()
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/celery.py", line 495, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/base.py", line 305, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/celery.py", line 487, in handle_argv
return self.execute(command, argv)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/celery.py", line 419, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/celery.py", line 335, in run_from_argv
return cmd.execute_from_commandline([command] + argv)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/multi.py", line 266, in execute_from_commandline
return self.call_command(argv[0], argv[1:])
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/multi.py", line 273, in call_command
return self.commands[command](*argv) or EX_OK
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/multi.py", line 143, in _inner
return fun(self, *args, **kwargs)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/multi.py", line 151, in _inner
return fun(self, self.cluster_from_argv(argv), **kwargs)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/multi.py", line 361, in cluster_from_argv
_, cluster = self._cluster_from_argv(argv, cmd=cmd)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/bin/multi.py", line 366, in _cluster_from_argv
return p, self.Cluster(list(nodes), cmd=cmd)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/apps/multi.py", line 306, in <genexpr>
for name in names
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/apps/multi.py", line 314, in _node_from_options
p.optmerge(namespace, options), p.passthrough)
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/apps/multi.py", line 136, in __init__
options or OrderedDict())
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/apps/multi.py", line 145, in _annotate_with_default_opts
self._setdefaultopt(options, ['--pidfile', '-p'], '/var/run/celery/%n.pid')
File "/var/www/misc/ve-2006011156/lib/python3.7/site-packages/celery/apps/multi.py", line 159, in _setdefaultopt
os.makedirs(dir_path)
File "/var/www/misc/ve-2006011156/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/var/run/celery'
systemd[1]: [email protected]: Control process exited, code=exited, status=1/FAILURE
```
Request on_timeout should ignore soft time limit exception
When Request.on_timeout receives a soft timeout from billiard, it does the same as if it were receiving a hard time limit exception. This is run by the controller.
But the task may catch this exception and e.g. return (this is what soft timeouts are for).
This causes:
1. the result to be saved once as an exception by the controller (on_timeout) and another time with the result returned by the task
2. the task status to be set to failure and then to success in the same manner
3. if the task is participating in a chord, the chord result counter (at least with redis) to be incremented twice (instead of once), making the chord return prematurely and eventually lose tasks…
1, 2 and 3 can of course lead to strange race conditions…
## Steps to reproduce (Illustration)
with the program in test_timeout.py:
```python
import time
import celery
app = celery.Celery('test_timeout')
app.conf.update(
result_backend="redis://localhost/0",
broker_url="amqp://celery:celery@localhost:5672/host",
)
@app.task(soft_time_limit=1)
def test():
try:
time.sleep(2)
except Exception:
return 1
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([test.s().set(link_error=on_error.s()), test.s().set(link_error=on_error.s())])(add.s())
result.get()
```
start a worker and the program:
```
$ celery -A test_timeout worker -l WARNING
$ python3 test_timeout.py
```
## Expected behavior
add method is called with `[1, 1]` as argument and test_timeout.py return normally
## Actual behavior
The test_timeout.py fails, with
```
celery.backends.base.ChordError: Callback error: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",
```
On the worker side, the **on_error is called but the add method as well !**
```
[2017-11-29 23:07:25,538: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[15109e05-da43-449f-9081-85d839ac0ef2]
[2017-11-29 23:07:25,546: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,546: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:25,547: WARNING/MainProcess] Soft time limit (1s) exceeded for test_timeout.test[38f3f7f2-4a89-4318-8ee9-36a987f73757]
[2017-11-29 23:07:25,553: ERROR/MainProcess] Chord callback for 'ef6d7a38-d1b4-40ad-b937-ffa84e40bb23' raised: ChordError("Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)",)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in on_chord_part_return
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 290, in <listcomp>
callback.delay([unpack(tup, decode) for tup in resl])
File "/usr/local/lib/python3.4/dist-packages/celery/backends/redis.py", line 243, in _unpack_chord_result
raise ChordError('Dependency {0} raised {1!r}'.format(tid, retval))
celery.exceptions.ChordError: Dependency 15109e05-da43-449f-9081-85d839ac0ef2 raised SoftTimeLimitExceeded('SoftTimeLimitExceeded(True,)',)
[2017-11-29 23:07:25,565: WARNING/MainProcess] ### on_error:
[2017-11-29 23:07:25,565: WARNING/MainProcess] SoftTimeLimitExceeded(True,)
[2017-11-29 23:07:27,262: WARNING/PoolWorker-2] ### adding
[2017-11-29 23:07:27,264: WARNING/PoolWorker-2] [1, 1]
```
Of course, I chose on purpose to call test.s() twice, to show how the count in the chord keeps going. In fact:
- the chord result is incremented twice by the soft time limit error
- the chord result is incremented twice more when the `test` tasks return correctly
## Conclusion
Request.on_timeout should not process soft time limit exceptions.
Here is a quick monkey patch (the correction in celery itself is trivial):
```python
def patch_celery_request_on_timeout():
from celery.worker import request
orig = request.Request.on_timeout
def patched_on_timeout(self, soft, timeout):
if not soft:
orig(self, soft, timeout)
request.Request.on_timeout = patched_on_timeout
patch_celery_request_on_timeout()
```
## version info
software -> celery:4.1.0 (latentcall) kombu:4.0.2 py:3.4.3
billiard:3.5.0.2 py-amqp:2.1.4
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:redis://10.0.3.253/0
| PermissionError: [Errno 13] Permission denied: '/var/run/celery'
can you try sudo?
@mchataigner can you chime in?
> PermissionError: [Errno 13] Permission denied: '/var/run/celery'
>
> can you try sudo?
Do you mean
`sudo celery multi start ...` ?
I use systemd with RuntimeDirectory option
```config
[Unit]
Description = Demo celery workers
# When systemd stops or restarts the app.service, the action is propagated to this unit
PartOf = demo.target
# Start this unit after the demo.service start
After = demo.target
After = redis-server.service
Requires = redis-server.service
[Service]
Type = forking
User = www-data
Group = www-data
PermissionsStartOnly = true
RuntimeDirectory = demo
RuntimeDirectoryMode = 0775
EnvironmentFile = /var/www/misc/conf/celery.conf
ExecStart = /usr/local/sbin/demo-exec celery multi start ${CELERYD_NODES} \
--app=${CELERY_APP} \
--hostname=celeryd.worker \
--pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} \
--loglevel=${CELERYD_LOG_LEVEL} \
${CELERYD_OPTS}
ExecStop = /usr/local/sbin/demo-exec celery multi stopwait ${CELERYD_NODES} \
--hostname=celeryd.worker \
--pidfile=${CELERYD_PID_FILE}
ExecReload = /usr/local/sbin/demo-exec celery multi restart ${CELERYD_NODES} \
--app=${CELERY_APP} \
--hostname=celeryd.worker \
--pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} \
--loglevel=${CELERYD_LOG_LEVEL} \
${CELERYD_OPTS}
PrivateTmp = true
[Install]
WantedBy = multi-user.target
```
with celery v4.4.2 everything is working
The problem is in the method [`Node._setdefaultopt`](https://github.com/celery/celery/blob/master/celery/apps/multi.py#L148)
```python
def _setdefaultopt(self, d, alt, value):
for opt in alt[1:]:
try:
return d[opt]
except KeyError:
pass
value = os.path.normpath(value)
dir_path = os.path.dirname(value)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
return d.setdefault(alt[0], value)
```
Proof of Concept:
```python
import os
def _setdefaultopt(d, alt, value):
for opt in alt[1:]:
try:
return d[opt]
except KeyError:
pass
value = os.path.normpath(value)
dir_path = os.path.dirname(value)
if not os.path.exists(dir_path):
print("make dir!!!: ", dir_path)
return d.setdefault(alt[0], value)
```
Run in console:
```python
In [23] _setdefaultopt({'--pidfile': '/var/run/demo/celeryd-%n.pid'}, ['--pidfile', '-p'], '/var/run/celery/%n.pid')
make dir!!!: /var/run/celery
Out[23]: '/var/run/demo/celeryd-%n.pid'
```
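For reference, the merged patch shown at the top of this entry takes exactly that direction: resolve the user-supplied option first, and only then create that value's parent directory. Re-sketched here from the diff above:
```python
def _setdefaultopt(self, d, alt, value):
    for opt in alt[1:]:
        try:
            return d[opt]
        except KeyError:
            pass
    # prefer the user-supplied value for alt[0]; fall back to the default
    value = d.setdefault(alt[0], os.path.normpath(value))
    # only the directory of the value actually in use gets created
    dir_path = os.path.dirname(value)
    if not os.path.exists(dir_path):
        os.makedirs(dir_path)
    return value
```
With `--pidfile=/var/run/demo/celeryd-%n.pid`, this creates `/var/run/demo` if needed instead of unconditionally touching `/var/run/celery`.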
can you come up with a fix? with a proper test?
> can you come up with a fix? with a proper test?
Yes
@ask I think this is quite a big problem (with a trivial fix).
It requires attention though, as it brings a new behaviour (but the previous behaviour is not well documented, and, in my opinion, the new behaviour was the one expected).
This change in behaviour is what kept my team from upgrading to celery 4. Indeed, the chord callback was often not called at all.
I don't know if it is related, but I modified your code sample and it resulted in some `Exception raised outside body` errors and multiple other errors if you try running `python test_timeout.py` multiple times.
Here is my script:
```python
import time
import celery
app = celery.Celery(
'test_timeout',
broker='amqp://localhost',
backend='redis://localhost')
@app.task(soft_time_limit=1)
def test(nb_seconds):
try:
time.sleep(nb_seconds)
return nb_seconds
except:
print("### error handled")
return nb_seconds
@app.task()
def add(args):
print("### adding", args)
return sum(args)
@app.task()
def on_error(context, exception, traceback, **kwargs):
print("### on_error: ", exception)
if __name__ == "__main__":
result = celery.chord([
test.s(i).set(link_error=on_error.s()) for i in range(0, 2)
])(add.s())
result.get()
```
NB: if you update the range to range(2, 4), for instance, the `Exception raised outside body` does not seem to happen. It seems this particular issue happens when the `SoftTimeLimitExceeded` is raised exactly during the `return`.
could you please send a PR with your proposed fix/workaround on master branch?
hi @auvipy, I won't have the time until January. I'll need help on how to write the tests also (that's the reason why I didn't propose a PR).
OK, don't be afraid of sending logical changes just because you don't know how to write the tests. We will certainly try to help you.
thanks a lot! | 2020-06-02T08:00:52 |