bq is a very simple program that maintains a queue of jobs. Unlike exqueue, which has a "server" component managing the queue and firing off tasks as each one completes, bq simply uses a database and locking to manage the queue. (You will need to install the DBM::Deep Perl module for bq to work.)
Here's the usage message.
Usage:  bq [options] [subcommand]
        bq [options] shell-command [args]

Options:
    -q                  queue ID to use (default: 'default')
    -s, -sleep N[mh]    sleep N (s|m|h) before queueing 'shell-command'
    -lf, -logfile PATH  also log messages to given PATH; also works by
                        setting the BQ_LOGFILE env var

Subcommands:
    -h                  show this help
    -t, -tail           "tail" running and queued jobs' stdout and stderr files
    -f, -flush          print and "flush" completed jobs' stdout and stderr files
    -c, -history        show "completed" jobs
    -e, -errors         show jobs with "errors"
    -purge              purge history records from DB
    -limit N            set limit to N for given queue

If no subcommand is given, print status for the given queue. If you supply
more than one subcommand, behaviour is undefined, so don't do that.

See docs for more, especially for details on '-q', '-s', etc.
It's mostly self-explanatory, or at least I hope it is.
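A typical first session might look like this (the URL and filename are made up; any shell command works):

    bq wget -c https://example.com/big.iso    # queue a job on the 'default' queue
    bq                                        # no subcommand: show queue status
    bq -t                                     # "tail" the running job's output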
Sometimes you need more than one queue. I usually use the default queue for download jobs, and a different one (arbitrarily called "CPU") for CPU-hogging jobs:
bq ... # uses queue called 'default'
bq -q CPU ... # uses queue called 'CPU'
This is even more useful if you want different limits for different jobs. Just run these commands once and you're all set (the settings persist even across reboots, because state is maintained by a database, not a process):
bq -q IO -limit 1
bq -q net -limit 2
bq -q CPU -limit 4
You can also block a queue by setting its limit to 0. When you do this, jobs will be queued but won't start. I sometimes do this to queue up disk- or CPU-intensive tasks, then open up the queue just before I go to lunch.
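For example, a session like this (ffmpeg is just a stand-in for any heavy command):

    bq -q CPU -limit 0      # block the queue
    bq -q CPU ffmpeg ...    # queued, but won't start yet
    bq -q CPU ffmpeg ...    # ditto
    bq -q CPU -limit 4      # open the queue; the queued jobs start running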
The -s option is useful when you want to run a job after some delay. You might think this should work:
bq sleep 2h
bq wget ...
But that sleep is now needlessly taking up a runnable slot, and if your queue limit is 1 you can't run anything else now. Instead, this:
bq -s 2h wget ...
sleeps before doing anything at all, so the queue is not held up and you can continue to use it in the meantime. In fact, until the 2 hours (in this example) have passed, you won't even see the job in the status output; you can only see it using ps or some variant (like pgrep).
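For instance, something like this finds it (assuming your pgrep supports -a to show the full command line, as the Linux procps version does):

    pgrep -af bq    # list PIDs and command lines matching 'bq'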
Normally, all output is ephemeral (running 'bq -f' flushes the saved STDOUT and STDERR data). Since STDERR includes bq's job start/end messages, they're also gone.
But sometimes you may want those messages to be stored somewhere for later examination; that's what -logfile (-lf) does.
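For example, either of these will do (~/bq.log is just an example path):

    bq -lf ~/bq.log wget ...
    export BQ_LOGFILE=~/bq.log    # set once, e.g. in your shell startup file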
Note that the actual STDOUT/STDERR of the command itself is not saved to this logfile; after all, it could be arbitrarily large. If you need that, your command should do its own redirection.
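For example (build.log is a made-up name; quoting the whole command makes the redirection part of the job itself):

    bq 'make > build.log 2>&1'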
- When you do something like this:

      bq wget -c --progress=dot:mega http://media.risky.biz/RB391.mp3
      bq id3v2 -D RB391.mp3

  bq does not check if the wget finished OK or not; as far as it is concerned, these are two separate jobs. If you need conditional evaluation, you need to run them together, like bq 'command1 && command2' (see the example after these notes).
- Forcing a job to run right now (which was a feature available in exqueue) is trivial here. Just use a different queue!
  Of course there's a risk of creating all sorts of queues whose names you forgot, so you should probably do this one-time setup:

      bq -q now -limit 99

  and simply use the now queue for all "do it now" jobs. Simple, easy to remember :)
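Here's the earlier wget/id3v2 example as a single, conditional job; the && ensures id3v2 only runs if the wget succeeded:

    bq 'wget -c --progress=dot:mega http://media.risky.biz/RB391.mp3 && id3v2 -D RB391.mp3'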
Still on the wishlist: a list of queues available, canceling a queued job, redoing the last failed job, ...