sup — SuperPipe
Usage: sup
SuperPipe is a utility to define transformations on files. These transformations may be triggered regularly like a cron job or through file system events.
This is a work in progress. See the feature completion list for an overview of what has been done and what is left to do.
- `config`: Configure SuperPipe
- `help`: Prints this message or the help of the given subcommand(s)
- `init`: Ensure config files are in place
- `path`: Path-related commands
- `pipe`: Pipeline-related commands
- `run`: Manually fire all pipelines, or a single one if specified
- `watch`: Watch all paths and run pipelines when a path changes (write and create events)
Run `sup <cmd> help` for more information on any one of the commands.
SuperPipe lets you manage a set of files and attach arbitrary transformations to them that run when a file changes. A transformation is just a set of commands to run, which may or may not actually have anything to do with the file it is attached to. You could use this to keep a file in sync between different file systems (e.g. a file you need to keep up to date between Dropbox and Keybase) or to run Pandoc on a file to publish it as a blog post.
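As a mental model, a watched file with its attached pipelines might look something like the following (a hypothetical sketch in Rust; these are not SuperPipe's actual types):

```rust
use std::path::PathBuf;

/// Hypothetical sketch of the core idea: a pipeline is just an
/// ordered list of commands attached to a path.
struct Pipeline {
    name: String,
    /// Commands to run, in order, when the pipeline fires; they may
    /// or may not touch the file itself.
    commands: Vec<String>,
}

struct WatchedPath {
    path: PathBuf,
    /// Several pipelines can hang off the same file.
    pipelines: Vec<Pipeline>,
}
```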
SuperPipe does keep state (right now using a SQLite database), so you will need to run `sup init` before running any further commands.
Status | Feature | Notes |
---|---|---|
DONE | CRUD paths to manage | |
TODO | Add file system watcher | See the sketch below this table |
TODO | Add cron-like support | |
TODO | Add round-trip detection | This might become its own command |
TODO | Use git to resolve merge conflicts | |
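For the file system watcher, the `notify` crate is the usual building block in Rust. A minimal sketch of the watch loop, assuming notify 4.x and its debounced-event API (illustrative only, not SuperPipe's actual code), reacting to the write and create events that `sup watch` describes:

```rust
use std::sync::mpsc::channel;
use std::time::Duration;

use notify::{watcher, DebouncedEvent, RecursiveMode, Watcher};

fn main() -> notify::Result<()> {
    let (tx, rx) = channel();
    // Debounce events so a burst of writes fires a pipeline once.
    let mut watcher = watcher(tx, Duration::from_secs(2))?;
    watcher.watch("/path/to/watched/file", RecursiveMode::NonRecursive)?;

    loop {
        match rx.recv() {
            // Write and create events are the ones we care about.
            Ok(DebouncedEvent::Write(path)) | Ok(DebouncedEvent::Create(path)) => {
                println!("would run pipelines for {:?}", path);
            }
            Ok(_) => {}
            Err(e) => {
                eprintln!("watch error: {}", e);
                break;
            }
        }
    }
    Ok(())
}
```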
The config file lives in the standard configuration location for your operating system. On macOS this is ~/Library/Preferences/super_pipe. On Linux this is ~/.config/.
- Where to put the database
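Resolving that directory in Rust is commonly done with something like the `directories` crate; a sketch assuming its 2019-era API, which maps the config dir to ~/Library/Preferences on macOS and ~/.config on Linux (not necessarily how SuperPipe does it):

```rust
use directories::ProjectDirs;

fn main() {
    // ProjectDirs picks the platform-appropriate config location,
    // e.g. ~/Library/Preferences/super_pipe on macOS.
    if let Some(dirs) = ProjectDirs::from("", "", "super_pipe") {
        println!("config dir: {}", dirs.config_dir().display());
    }
}
```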
init
- State “DONE” from “TODO” [2019-10-04 Fri 14:20]
I just thought of a use case for two pipelines with one file: one task runs every time a file changes, another happens daily. Same file, two different pipelines.
- State “DONE” from “IN_PROGRESS” [2019-11-05 Tue 19:03]
- State “IN_PROGRESS” from “TODO” [2019-09-28 Sat 20:19]
- State “DONE” from “TODO” [2019-10-10 Thu 13:52]
I’ve decided to move away from SQLite because 1.) I can execute bash files from Rust, but strings are tricky; moving the pipelines into files is more ergonomic, and 2.) this is a 100% self-contained binary!
I think it would be cool if I could add some guarantees to the program: what if I could make sure everything that happened was atomic? Maybe all the commands run in a particular sandbox (e.g. a directory with only the file in question present), and then they define which products of the transformation they want to extract and put elsewhere. If any point of the pipeline fails, the whole run can be rolled back.
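A rough sketch of that sandbox idea (hypothetical, assuming the `tempfile` crate; nothing here is committed design): copy the file into a temporary directory, run each command there, and only copy the declared products out if every command succeeds, so a failure leaves the real file system untouched:

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::process::Command;

/// Run `commands` against a copy of `file` in a throwaway sandbox.
/// On success, copy the named `products` out to `dest`; on any
/// failure, the sandbox is simply dropped, which acts as a rollback.
fn run_sandboxed(file: &Path, commands: &[&str], products: &[&str], dest: &Path) -> io::Result<()> {
    let sandbox = tempfile::tempdir()?;
    fs::copy(file, sandbox.path().join(file.file_name().unwrap()))?;

    for cmd in commands {
        let status = Command::new("sh")
            .arg("-c")
            .arg(cmd)
            .current_dir(sandbox.path())
            .status()?;
        if !status.success() {
            // Nothing has escaped the sandbox yet, so bailing out
            // here is the rollback.
            return Err(io::Error::new(io::ErrorKind::Other, "pipeline step failed"));
        }
    }

    // Every step succeeded: extract the declared products.
    for product in products {
        fs::copy(sandbox.path().join(product), dest.join(product))?;
    }
    Ok(())
}
```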