Taskkit is an experimental, lightweight distributed task runner designed as an alternative to Celery, with improved resource efficiency for handling asynchronous tasks.
At Nailbook, we initially used Celery for asynchronous task processing. However, Celery assigns one process per worker, which resulted in inefficient resource utilization—especially since most tasks in Nailbook are I/O-bound.
To solve this issue, we developed Taskkit, a task runner that enables worker execution on a per-thread basis. This approach optimizes resource usage, making it more efficient for I/O-heavy workloads.
Taskkit is designed with an extremely simple API in mind.
All major APIs (except for the encoding/decoding of task data) are fully type-annotated using Python’s `typing` module.
This ensures:
- Low implementation cost – The API remains lightweight and easy to integrate.
- Readability & predictability – Developers can quickly understand the expected behavior of tasks.
By prioritizing type safety and simplicity, Taskkit provides a clear and minimalistic approach to distributed task execution.
Taskkit uses a backend-based queue to manage tasks. Each worker:
- Fetches the oldest due task from the queue.
- Assigns the task to itself for execution.
- Processes the task asynchronously.

This approach allows scaling by adding more workers, which increases processing capacity. However, the backend queue and the exclusivity of task assignment can become bottlenecks under high load.
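Conceptually, each worker runs a fetch-and-claim loop along these lines. This is only an illustrative sketch: the `queue` object and its `fetch_due_task`/`complete`/`release` methods are hypothetical stand-ins, not Taskkit's actual internals.

```python
import time


def worker_loop(queue, handler):
    while True:
        # Atomically fetch and claim the oldest task whose due time has
        # passed, so no other worker can pick up the same task.
        task = queue.fetch_due_task()
        if task is None:
            # Nothing is due yet; poll again shortly.
            time.sleep(0.1)
            continue
        try:
            result = handler.handle(task)
        except Exception as e:
            # Ask the handler whether (and when) to retry, mirroring
            # TaskHandler.get_retry_interval described below.
            queue.release(task, retry_in=handler.get_retry_interval(task, e))
        else:
            queue.complete(task, result)
```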
At Nailbook, we use Amazon Aurora as our main database.
Taskkit runs on this shared Aurora instance, whose total cost is approximately $2,000/month.
Despite running alongside other database operations, this setup has been observed to process up to 100 tasks per second without issues.
Taskkit has been running in production at Nailbook for over two years. However, it has been extensively tested only in a Django-based backend using Aurora MySQL.
While an experimental Redis implementation is available, it has never been used in production, and its stability is not guaranteed. Other backends may also not work as expected.
As a result, Taskkit remains highly experimental, and its functionality outside of this specific environment has not been thoroughly verified. Use at your own discretion.
You can install Taskkit via pip:

```
pip install taskkit
```
Each task must be handled by a `TaskHandler` implementation. This class defines how Taskkit should process tasks, encode/decode data, and handle retries.
```python
import json
from typing import Any

from taskkit import TaskHandler, Task, DiscardTask


class Handler(TaskHandler):
    def handle(self, task: Task) -> Any:
        # Use `task.group` and `task.name` to determine how to handle the task.
        # If this returns any data, it must be encodable by `self.encode_result`.
        if task.group == '...':
            if task.name == 'foo':
                # Decode the data, which was encoded by `self.encode_data`, if needed.
                data = json.loads(task.data)
                # Do something with the `data`.
                ...
                # Return the result of the task.
                return ...
            elif task.name == 'bar':
                # Do something.
                return ...

        # Raise DiscardTask if you want to discard the task.
        raise DiscardTask

    def get_retry_interval(self,
                           task: Task,
                           exception: Exception) -> float | None:
        # This method is called when the handle method raises an exception.
        # Return how long to wait before retrying the task, in seconds, as a
        # float. If you don't want to retry the task, return None to make the
        # task fail, or raise DiscardTask to discard it.
        return task.retry_count if task.retry_count < 10 else None

    def encode_data(self, group: str, task_name: str, data: Any) -> bytes:
        # Encode task data for serialization.
        return json.dumps(data).encode()

    def encode_result(self, task: Task, result: Any) -> bytes:
        # Encode the result of the task.
        return json.dumps(result).encode()

    def decode_result(self, task: Task, encoded: bytes) -> Any:
        # Decode the result of the task.
        return json.loads(encoded)
```
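The `get_retry_interval` above waits `retry_count` seconds between attempts and fails the task after ten tries. If you prefer capped exponential backoff instead, a sketch using the same hook could look like the following; it subclasses the `Handler` defined above, and the base of 2 and the 60-second cap are arbitrary illustrative choices, not Taskkit defaults.

```python
class BackoffHandler(Handler):
    def get_retry_interval(self,
                           task: Task,
                           exception: Exception) -> float | None:
        # Fail the task after 10 attempts.
        if task.retry_count >= 10:
            return None
        # Wait 1, 2, 4, 8, ... seconds, capped at 60 seconds.
        return min(2.0 ** task.retry_count, 60.0)
```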
An experimental Redis backend is available, but it has never been used in production, and its stability is not guaranteed.
If you would like to try it, you can use the following setup:
```python
from redis.client import Redis
from taskkit.impl.redis import make_kit

REDIS_HOST = '...'
REDIS_PORT = '...'

redis = Redis(host=REDIS_HOST, port=REDIS_PORT)
kit = make_kit(redis, Handler())
```
To use the Django backend:

- Add `'taskkit.contrib.django'` to `INSTALLED_APPS` in the settings.
- Run `python manage.py migrate`.
- Make a `kit` instance like below:

```python
from taskkit.impl.django import make_kit

kit = make_kit(Handler())
```
Once you have a `kit` instance, you can start workers:

```python
GROUP_NAME = 'Any task group name'

# This starts a busy loop.
kit.start(
    # number of processes
    num_processes=3,
    # number of worker threads per group in each process
    num_worker_threads_per_group={GROUP_NAME: 3})

# You can use `start_processes` instead to avoid the busy loop.
kit.start_processes(
    num_processes=3,
    num_worker_threads_per_group={GROUP_NAME: 3},
    daemon=True)
```
To initiate a task and wait for its result:

```python
from datetime import datetime, timedelta

from taskkit import ResultGetTimedOut

result = kit.initiate_task(
    GROUP_NAME,
    # task name
    'your task name',
    # task data, which must be encodable by `Handler.encode_data`
    dict(some_data=1),
    # run the task after 10 or more seconds
    due=datetime.now() + timedelta(seconds=10))

try:
    value = result.get(timeout=10)
except ResultGetTimedOut:
    ...
```
Taskkit can also initiate tasks on a regular schedule:

```python
from datetime import timezone, timedelta

from taskkit import (ScheduleEntry, ScheduleEntryDict, RegularSchedule,
                     ScheduleEntriesCompatMapping)

# Define the entries. Each key is a name for a scheduler; each value is a
# list of ScheduleEntry instances or a list of dicts conforming to
# ScheduleEntryDict.
#
# ScheduleEntryCompat: TypeAlias = ScheduleEntry | ScheduleEntryDict
# ScheduleEntriesCompat: TypeAlias = Sequence[ScheduleEntryCompat]
# ScheduleEntriesCompatMapping: TypeAlias = Mapping[str, ScheduleEntriesCompat]
#
schedule_entries: ScheduleEntriesCompatMapping = {
    'scheduler_name': [
        # You can use a ScheduleEntry instance as follows. Note that the data
        # MUST be encoded by the same algorithm as `Handler.encode_data`.
        ScheduleEntry(
            # A key which identifies the schedule within the list
            key='...',
            # group name
            group=GROUP_NAME,
            # task name
            name='test2',
            # The data MUST BE encoded by the same algorithm as
            # `Handler.encode_data`, so it would look like:
            data=Handler().encode_data(GROUP_NAME, 'test2', 'SOME DATA'),
            # This schedule makes the scheduler initiate the task twice an
            # hour, at **:00:00 and **:30:00.
            schedule=RegularSchedule(
                seconds={0},
                minutes={0, 30},
            ),
        ),

        # You can also use the dict form of a schedule entry (recommended).
        # Note that in the dict form the data MUST NOT be encoded, because
        # `Kit` takes care of the encoding for convenience. The other
        # properties are the same as for ScheduleEntry. You can use
        # ScheduleEntryDict as the annotation.
        {
            'key': '...',
            'group': GROUP_NAME,
            'name': 'test3',
            # IT MUST NOT BE ENCODED
            'data': 2,
            'schedule': RegularSchedule(seconds={0}, minutes={30}),
        },
    ],

    # You can have multiple schedulers.
    'another_scheduler': [
        # other entries ...
    ],
}

# Pass the entries to `kit.start`.
kit.start(
    num_processes=3,
    num_worker_threads_per_group={GROUP_NAME: 3},
    schedule_entries=schedule_entries,
    tzinfo=timezone(timedelta(hours=9), 'JST'))
```