Which, which, which
ask committed Aug 2, 2016
1 parent f2622c6 commit 2b2e4b2
Showing 42 changed files with 250 additions and 232 deletions.
2 changes: 1 addition & 1 deletion celery/app/amqp.py
@@ -228,7 +228,7 @@ class AMQP(object):

# Exchange class/function used when defining automatic queues.
# E.g. you can use ``autoexchange = lambda n: None`` to use the
- # AMQP default exchange, which is a shortcut to bypass routing
+ # AMQP default exchange: a shortcut to bypass routing
# and instead send directly to the queue named in the routing key.
autoexchange = None

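A minimal sketch of what this comment describes (not part of the commit; the app name, broker URL, and the way the attribute is set are assumptions):

    from celery import Celery

    app = Celery('proj', broker='amqp://')

    # Returning None selects the AMQP default exchange for automatically
    # defined queues, so messages bypass routing and go straight to the
    # queue named in the routing key.
    app.amqp.autoexchange = lambda name: None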
12 changes: 6 additions & 6 deletions celery/app/base.py
@@ -88,7 +88,7 @@ def _after_fork_cleanup_app(app):

class PendingConfiguration(UserDict, AttributeDictMixin):
# `app.conf` will be of this type before being explicitly configured,
- # which means the app can keep any configuration set directly
+ # meaning the app can keep any configuration set directly
# on `app.conf` before the `app.config_from_object` call.
#
# accessing any key will finalize the configuration,
@@ -216,7 +216,7 @@ def __init__(self, main=None, loader=None, backend=None,
self._tasks = TaskRegistry(self._tasks or {})

# If the class defines a custom __reduce_args__ we need to use
- # the old way of pickling apps, which is pickling a list of
+ # the old way of pickling apps: pickling a list of
# args instead of the new way that pickles a dict of keywords.
self._using_v1_reduce = app_has_custom(self, '__reduce_args__')

@@ -284,8 +284,8 @@ def __exit__(self, *exc_info):
def close(self):
"""Clean up after the application.
- Only necessary for dynamically created apps for which you can
- use the :keyword:`with` statement instead
+ Only necessary for dynamically created apps, and you should
+ probably use the :keyword:`with` statement instead.
Example:
>>> with Celery(set_as_current=False) as app:
@@ -575,8 +575,8 @@ def autodiscover_tasks(self, packages=None,
This argument may also be a callable, in which case the
value returned is used (for lazy evaluation).
related_name (str): The name of the module to find. Defaults
to "tasks", which means it look for "module.tasks" for every
module in ``packages``.
to "tasks": meaning "look for 'module.tasks' for every
module in ``packages``."
force (bool): By default this call is lazy so that the actual
auto-discovery won't happen until an application imports
the default modules. Forcing will cause the auto-discovery
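A short sketch of the two behaviors touched above (module and queue names are illustrative, not from the commit):

    from celery import Celery

    app = Celery('proj')

    # app.conf is a PendingConfiguration at this point: keys set directly
    # on it are kept after config_from_object() finalizes the config.
    app.conf.task_default_queue = 'early'
    app.config_from_object('proj.celeryconfig')

    # With related_name at its default of 'tasks', this looks for
    # proj.orders.tasks and proj.users.tasks (lazily, unless force=True).
    app.autodiscover_tasks(['proj.orders', 'proj.users'])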
4 changes: 2 additions & 2 deletions celery/app/control.py
@@ -79,8 +79,8 @@ def clock(self):

def active(self, safe=None):
# safe is ignored since 4.0
- # as we now have argsrepr/kwargsrepr which means no objects
- # will need to be serialized.
+ # as no objects will need serialization now that we
+ # have argsrepr/kwargsrepr.
return self._request('active')

def scheduled(self, safe=None):
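For context, a usage sketch (assumes a configured app; as the comment says, `safe` is still accepted but ignored since 4.0):

    insp = app.control.inspect()
    active = insp.active()        # per-worker lists of running tasks
    scheduled = insp.scheduled()  # tasks with an ETA/countdown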
9 changes: 4 additions & 5 deletions celery/app/task.py
@@ -213,20 +213,19 @@ class Task(object):
#: finished, or waiting to be retried.
#:
#: Having a 'started' status can be useful for when there are long
- #: running tasks and there's a need to report which task is currently
+ #: running tasks and there's a need to report what task is currently
#: running.
#:
#: The application default can be overridden using the
#: :setting:`task_track_started` setting.
track_started = None

#: When enabled messages for this task will be acknowledged **after**
- #: the task has been executed, and not *just before* which is the
- #: default behavior.
+ #: the task has been executed, and not *just before* (the
+ #: default behavior).
#:
#: Please note that this means the task may be executed twice if the
- #: worker crashes mid execution (which may be acceptable for some
- #: applications).
+ #: worker crashes mid execution.
#:
#: The application default can be overridden with the
#: :setting:`task_acks_late` setting.
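Both flags documented in this hunk can also be set per task; a sketch (the task itself is hypothetical):

    @app.task(bind=True, track_started=True, acks_late=True)
    def transcode(self, path):
        # With acks_late the message is acknowledged only after the task
        # returns, so a crash right here may cause a second execution.
        ...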
10 changes: 5 additions & 5 deletions celery/canvas.py
@@ -130,7 +130,7 @@ class Signature(dict):
Signatures can also be created from tasks:
- - Using the ``.signature()`` method which has the same signature
+ - Using the ``.signature()`` method that has the same signature
as ``Task.apply_async``:
.. code-block:: pycon
@@ -512,8 +512,8 @@ class chain(Signature):
Note:
If called with only one argument, then that argument must
- be an iterable of tasks to chain, which means you can
- use this with a generator expression.
+ be an iterable of tasks to chain: this allows us
+ to use generator expressions.
Example:
This is effectively :math:`((2 + 2) + 4)`:
@@ -853,8 +853,8 @@ class group(Signature):
Note:
If only one argument is passed, and that argument is an iterable
- then that'll be used as the list of tasks instead, which
- means you can use ``group`` with generator expressions.
+ then that'll be used as the list of tasks instead: this
+ allows us to use ``group`` with generator expressions.
Example:
>>> lazy_group = group([add.s(2, 2), add.s(4, 4)])
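A sketch of the single-iterable forms both notes refer to (assumes an `add` task exists, and a result backend for `.get()`):

    from celery import chain, group

    # group: one iterable argument, so a generator expression works.
    g = group(add.s(i, i) for i in range(4))
    res = g()      # four add tasks in parallel
    res.get()      # -> [0, 2, 4, 6]

    # chain: same rule; later signatures are partial because each task
    # receives the previous result as its first argument.
    c = chain(add.s(i) for i in range(4))
    c(0)           # add(0, 0) -> add(0, 1) -> add(1, 2) -> add(3, 3)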
10 changes: 6 additions & 4 deletions celery/concurrency/asynpool.py
@@ -331,7 +331,7 @@ def _flush_outqueue(self, fd, remove, process_index, on_state_change):
proc = process_index[fd]
except KeyError:
# process already found terminated
- # which means its outqueue has already been processed
+ # this means its outqueue has already been processed
# by the worker lost handler.
return remove(fd)

@@ -1045,8 +1045,10 @@ def create_process_queues(self):

def on_process_alive(self, pid):
"""Handler called when the :const:`WORKER_UP` message is received
- from a child process, which marks the process as ready
- to receive work."""
+ from a child process.
+
+ Marks the process as ready to receive work.
+ """
try:
proc = next(w for w in self._pool if w.pid == pid)
except StopIteration:
@@ -1142,7 +1144,7 @@ def _find_worker_queues(self, proc):
raise ValueError(proc)

def _setup_queues(self):
- # this is only used by the original pool which uses a shared
+ # this is only used by the original pool that used a shared
# queue for all processes.

# these attributes make no sense for us, but we'll still
2 changes: 1 addition & 1 deletion celery/contrib/abortable.py
@@ -111,7 +111,7 @@ class AbortableAsyncResult(AsyncResult):
"""Represents a abortable result.
Specifically, this gives the `AsyncResult` a :meth:`abort()` method,
- which sets the state of the underlying Task to `'ABORTED'`.
+ that sets the state of the underlying Task to `'ABORTED'`.
"""

def is_aborted(self):
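The module's documented pattern, sketched (the task body is illustrative; aborting requires a result backend):

    from celery.contrib.abortable import AbortableTask

    @app.task(bind=True, base=AbortableTask)
    def long_running(self):
        while not self.is_aborted():
            ...                      # do a bounded chunk of work

    result = long_running.delay()    # an AbortableAsyncResult
    result.abort()                   # sets the task's state to 'ABORTED'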
6 changes: 3 additions & 3 deletions celery/contrib/migrate.py
@@ -125,7 +125,7 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
"""Find tasks by filtering them and move the tasks to a new queue.
Arguments:
- predicate (Callable): Filter function used to decide which messages
+ predicate (Callable): Filter function used to decide the messages
to move. Must accept the standard signature of ``(body, message)``
used by Kombu consumer callbacks. If the predicate wants the
message to be moved it must return either:
@@ -134,11 +134,11 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
2) a :class:`~kombu.entity.Queue` instance, or
- 3) any other true value which means the specified
+ 3) any other true value means the specified
``exchange`` and ``routing_key`` arguments will be used.
connection (kombu.Connection): Custom connection to use.
source: List[Union[str, kombu.Queue]]: Optional list of source
- queues to use instead of the default (which is the queues
+ queues to use instead of the default (queues
in :setting:`task_queues`). This list can also contain
:class:`~kombu.entity.Queue` instances.
exchange (str, kombu.Exchange): Default destination exchange.
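A hedged sketch of the predicate contract described above (queue and task names are hypothetical; assumes the default app connection):

    from kombu import Queue
    from celery.contrib.migrate import move

    def is_add_task(body, message):
        # Returning a Queue (option 2) moves the message there; any other
        # true value (option 3) uses the exchange/routing_key arguments
        # given to move().
        if body.get('task') == 'tasks.add':
            return Queue('debug')

    move(is_add_task)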
4 changes: 2 additions & 2 deletions celery/contrib/rdb.py
@@ -29,8 +29,8 @@ def add(x, y):
``CELERY_RDB_HOST``
-------------------
- Hostname to bind to. Default is '127.0.0.1', which means the socket
- will only be accessible from the local host.
+ Hostname to bind to. Default is '127.0.0.1' (only accessible from
+ localhost).
.. envvar:: CELERY_RDB_PORT
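The module's documented usage, for reference (with CELERY_RDB_HOST='0.0.0.0' the session would also be reachable from other hosts):

    from celery.contrib import rdb

    @app.task()
    def add(x, y):
        result = x + y
        rdb.set_trace()   # opens a remote debugger session for this task
        return result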
10 changes: 5 additions & 5 deletions celery/platforms.py
@@ -79,7 +79,7 @@
"""

ROOT_DISCOURAGED = """\
- You're running the worker with superuser privileges, which is
+ You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the -u option.
@@ -127,8 +127,8 @@ class Pidfile(object):
See Also:
Best practice is to not use this directly but rather use
- the :func:`create_pidlock` function instead,
- which is more convenient and also removes stale pidfiles (when
+ the :func:`create_pidlock` function instead:
+ more convenient and also removes stale pidfiles (when
the process holding the lock is no longer running).
"""

@@ -481,7 +481,7 @@ def setgroups(groups):


def initgroups(uid, gid):
"""Compat version of :func:`os.initgroups` which was first
"""Compat version of :func:`os.initgroups` that was first
added to Python 2.7."""
if not pwd: # pragma: no cover
return
@@ -725,7 +725,7 @@ def get_errno_name(n):
def ignore_errno(*errnos, **kwargs):
"""Context manager to ignore specific POSIX error codes.
- Takes a list of error codes to ignore, which can be either
+ Takes a list of error codes to ignore: this can be either
the name of the code, or the code integer itself::
>>> with ignore_errno('ENOENT'):
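A sketch of the context manager named in this hunk (the path is hypothetical; the errno name or the integer both work):

    import os
    from celery.platforms import ignore_errno

    with ignore_errno('ENOENT'):
        os.remove('/tmp/file-that-may-be-gone')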
2 changes: 1 addition & 1 deletion celery/schedules.py
@@ -105,7 +105,7 @@ def is_due(self, last_run_at):
it does not need to be accurate but will influence the precision
of your schedule. You must also keep in mind
the value of :setting:`beat_max_loop_interval`,
- which decides the maximum number of seconds the scheduler can
+ that decides the maximum number of seconds the scheduler can
sleep between re-checking the periodic task intervals. So if you
have a task that changes schedule at run-time then your next_run_at
check will decide how long it will take before a change to the
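The setting referenced above, as it might appear in configuration (the value is illustrative):

    # Beat sleeps at most this many seconds between re-checking the
    # schedule, so run-time schedule changes are noticed within this window.
    app.conf.beat_max_loop_interval = 10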
2 changes: 1 addition & 1 deletion celery/utils/collections.py
@@ -426,7 +426,7 @@ class LimitedSet(object):
``maxlen`` is enforced at all times, so if the limit is reached
we'll also remove non-expired items.
- You can also configure ``minlen``, which is the minimal residual size
+ You can also configure ``minlen``: this is the minimal residual size
of the set.
All arguments are optional, and no limits are enabled by default.
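A sketch of the limits being described (values are illustrative):

    from celery.utils.collections import LimitedSet

    # maxlen is enforced at all times; minlen is the minimal residual
    # size kept even when expired items are purged.
    s = LimitedSet(maxlen=1000, expires=3600, minlen=100)
    s.add('some-task-id')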
2 changes: 1 addition & 1 deletion celery/utils/timeutils.py
@@ -172,7 +172,7 @@ def delta_resolution(dt, delta):
:class:`~datetime.datetime` will be rounded to the nearest days,
if the :class:`~datetime.timedelta` is in hours the
:class:`~datetime.datetime` will be rounded to the nearest hour,
- and so on until seconds which will just return the original
+ and so on until seconds, which will just return the original
:class:`~datetime.datetime`.
"""
delta = max(delta.total_seconds(), 0)
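A worked sketch of the rounding rule (return values follow the docstring above):

    from datetime import datetime, timedelta
    from celery.utils.timeutils import delta_resolution

    dt = datetime(2016, 8, 2, 14, 30, 59)
    delta_resolution(dt, timedelta(days=2))     # -> datetime(2016, 8, 2)
    delta_resolution(dt, timedelta(hours=2))    # -> datetime(2016, 8, 2, 14)
    delta_resolution(dt, timedelta(seconds=30)) # -> dt, unchanged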
4 changes: 2 additions & 2 deletions celery/worker/consumer/consumer.py
@@ -242,8 +242,8 @@ def _update_prefetch_count(self, index=0):
Note:
Currently pool grow operations will end up with an offset
- of +1 if the initial size of the pool was 0 (which could
- be the case with old deprecated autoscale option, may consider
+ of +1 if the initial size of the pool was 0 (this could
+ be the case with the old deprecated autoscale option, may consider
removing this now that it's no longer supported).
"""
num_processes = self.pool.num_processes
2 changes: 1 addition & 1 deletion celery/worker/request.py
@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
"""This module defines the :class:`Request` class, which specifies
"""This module defines the :class:`Request` class, that specifies
how tasks are executed."""
from __future__ import absolute_import, unicode_literals

2 changes: 1 addition & 1 deletion docs/THANKS
@@ -1,6 +1,6 @@
Thanks to Rune Halvorsen <[email protected]> for the name.
Thanks to Anton Tsigularov <[email protected]> for the previous name (crunchy)
- which we had to abandon because of an existing project with that name.
+ that we had to abandon because of an existing project with that name.
Thanks to Armin Ronacher for the Sphinx theme.
Thanks to Brian K. Jones for bunny.py (https://github.com/bkjones/bunny), the
tool that inspired 'celery amqp'.
2 changes: 1 addition & 1 deletion docs/contributing.rst
@@ -728,7 +728,7 @@ is following the conventions.
set textwidth=78
If adhering to this limit makes the code less readable, you have one more
- character to go on, which means 78 is a soft limit, and 79 is the hard
+ character to go on. This means 78 is a soft limit, and 79 is the hard
limit :)

* Import order
2 changes: 1 addition & 1 deletion docs/django/first-steps-with-django.rst
@@ -64,7 +64,7 @@ for the :program:`celery` command-line program:
You don't need this line, but it saves you from always passing in the
settings module to the ``celery`` program. It must always come before
- creating the app instances, which is what we do next:
+ creating the app instances, as we do next:

.. code-block:: python
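For context, the line in question sits in the documented proj/celery.py roughly like this ('proj' is the docs' placeholder project):

    import os
    from celery import Celery

    # Must come before the app instance is created:
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

    app = Celery('proj')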
4 changes: 2 additions & 2 deletions docs/faq.rst
@@ -270,7 +270,7 @@ When using the RabbitMQ (AMQP) and Redis transports it should work
out of the box.

For other transports the compatibility prefork pool is
- used which requires a working POSIX semaphore implementation,
+ used and requires a working POSIX semaphore implementation,
this is enabled in FreeBSD by default since FreeBSD 8.x.
For older version of FreeBSD, you have to enable
POSIX semaphores in the kernel and manually recompile billiard.
@@ -445,7 +445,7 @@ setting to "json" or "yaml" instead of pickle.
Similarly for task results you can set :setting:`result_serializer`.

For more details of the formats used and the lookup order when
- checking which format to use for a task see :ref:`calling-serializers`
+ checking what format to use for a task see :ref:`calling-serializers`

Can messages be encrypted?
--------------------------
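The settings this answer refers to, sketched with JSON (lowercase 4.0 setting names):

    app.conf.accept_content = ['json']   # refuse pickle on the consumer side
    app.conf.task_serializer = 'json'
    app.conf.result_serializer = 'json'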
4 changes: 2 additions & 2 deletions docs/getting-started/brokers/rabbitmq.rst
@@ -140,8 +140,8 @@ be `rabbit@myhost`, as verified by :command:`rabbitmqctl`:
...done.
This is especially important if your DHCP server gives you a host name
- starting with an IP address, (e.g. `23.10.112.31.comcast.net`), because
- then RabbitMQ will try to use `rabbit@23`, which is an illegal host name.
+ starting with an IP address, (e.g. `23.10.112.31.comcast.net`). In this
+ case RabbitMQ will try to use `rabbit@23`: an illegal host name.

.. _rabbitmq-macOS-start-stop:

6 changes: 3 additions & 3 deletions docs/getting-started/brokers/sqs.rst
@@ -78,8 +78,8 @@ Polling Interval

The polling interval decides the number of seconds to sleep between
unsuccessful polls. This value can be either an int or a float.
- By default the value is 1 second, which means that the worker will
- sleep for one second whenever there are no more messages to read.
+ By default the value is *one second*: this means the worker will
+ sleep for one second when there are no more messages to read.

You must note that **more frequent polling is also more expensive, so increasing
the polling interval can save you money**.
@@ -89,7 +89,7 @@ setting::

broker_transport_options = {'polling_interval': 0.3}

- Very frequent polling intervals can cause *busy loops*, which results in the
+ Very frequent polling intervals can cause *busy loops*, resulting in the
worker using a lot of CPU time. If you need sub-millisecond precision you
should consider using another transport, like `RabbitMQ <broker-amqp>`,
or `Redis <broker-redis>`.