To run all unit tests:

```shell
./test_unit.sh
```

Code coverage:

```shell
./test_coverage.sh
```

and open /var/tmp/capillaries.html in a web browser.
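For example, on Linux:

```shell
# Open the generated coverage report (use `open` instead of `xdg-open` on macOS)
xdg-open /var/tmp/capillaries.html
```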
Some integration tests use data and config files stored in S3. Make sure you have the test bucket and IAM user credentials set up as described in s3 data access.
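If you pass the credentials through environment variables, a minimal sketch using the standard AWS SDK variable names (the values below are placeholders) looks like this:

```shell
# Standard AWS SDK environment variables; values are placeholders
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export AWS_DEFAULT_REGION=us-east-1
```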
To run all integration tests, make sure you have RabbitMQ, Cassandra, and the Capillaries Daemon running (either in Docker containers or as regular applications) and the test data present (see ./copy_demo_data.sh), then run:

```shell
./test_integration.sh
```
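Before kicking off the suite, a quick sanity check that the prerequisite containers are up can save time; the container name pattern below is an assumption, so adjust it to your compose project:

```shell
# List running containers whose names suggest the required services
docker ps --format '{{.Names}}\t{{.Status}}' | grep -E 'cassandra|rabbitmq|daemon'
```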
There are a number of extensive integration tests that cover a big part of Capillaries script, database, and workflow functionality:
- lookup: comprehensive lookup test
- py_calc: focuses on a custom processor implementation - py_calc
- tag_and_denormalize: focuses on a custom processor implementation - tag_and_denormalize
- portfolio: exercises lookups, py_calc, and tag_and_denormalize
- proto_file_reader_creator: exercises the Toolbelt proto_file_reader_creator command and CSV/Parquet file read/write
- fannie_mae: distinct_table test
All tests require running Cassandra and (in most cases) RabbitMQ containers (see Getting started for details). All tests run the Toolbelt to send work batches to the queue and to check Capillaries workflow status.
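For reference, checking workflow status by hand looks roughly like the sketch below; the pkg/exe/toolbelt path is assumed by analogy with the Daemon and Webapi paths used elsewhere in this document, and the keyspace name is an assumption borrowed from the lookup test:

```shell
# Illustrative sketch - see the test's .sh scripts for the exact commands and flags
cd pkg/exe/toolbelt
go run capitoolbelt.go get_run_history -keyspace=lookup_quicktest
```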
Before running an integration test (and before building Docker containers, if you choose to do so), make sure you have copied all test configuration and data files to the /tmp/capi_* directories as described in Prepare Data Directories.
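An illustrative layout only; the authoritative directory list is in Prepare Data Directories:

```shell
# Directory names are an assumption based on the /tmp/capi_* convention above
mkdir -p /tmp/capi_cfg /tmp/capi_in /tmp/capi_out /tmp/capi_log
```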
How to run integration tests?
Run test_exec_nodes.sh - the Toolbelt executes the test's script.json nodes one by one, without invoking the RabbitMQ workflow. Running nodes one by one is not something you want to do in a production environment, but it can be particularly convenient when troubleshooting specific script nodes.
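Under the hood, each such script issues per-node Toolbelt calls along these lines; the flag names, paths, and node name below are assumptions, so treat the test's own test_exec_nodes.sh as the source of truth:

```shell
# Illustrative sketch - flags, paths, and node name are assumptions
cd pkg/exe/toolbelt
go run capitoolbelt.go exec_node \
  -script_file=/tmp/capi_cfg/lookup_quicktest/script.json \
  -params_file=/tmp/capi_cfg/lookup_quicktest/script_params.json \
  -keyspace=lookup_quicktest \
  -node_id=read_orders
```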
Make sure the Daemon is running:
- either run `go run capidaemon.go` to start it in pkg/exe/daemon
- or start the Daemon container (`docker compose -p "test_capillaries_containers" start daemon`)
Run test_one_run.sh - the Toolbelt publishes batch messages to RabbitMQ, and the Daemon consumes them and executes all script nodes in parallel as part of a single run.
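The publishing step boils down to a single Toolbelt start_run call, roughly like the sketch below; the flags and paths are assumptions, and the exact invocation is in the test's test_one_run.sh:

```shell
# Illustrative sketch - flags and paths are assumptions
cd pkg/exe/toolbelt
go run capitoolbelt.go start_run \
  -script_file=/tmp/capi_cfg/lookup_quicktest/script.json \
  -params_file=/tmp/capi_cfg/lookup_quicktest/script_params.json \
  -keyspace=lookup_quicktest
```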
Make sure the Daemon is running:
- either run `go run capidaemon.go` to start it in pkg/exe/daemon
- or start the Daemon container (`docker compose -p "test_capillaries_containers" start daemon`)
Run test_two_runs.sh (if it is available for the specific test) - the Toolbelt publishes batch messages to RabbitMQ, and the Daemon consumes them and executes the script nodes that load data from files as part of the first run. After the first run is complete, the Toolbelt publishes batch messages again, and the Daemon consumes them and executes the script nodes that process the data as part of the second run. This test mimics the "operator validation" scenario, in which processing starts only after the loaded data has been verified. This test variation is supported only by the tag_and_denormalize integration test.
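Conceptually, the two runs are two start_run calls scoped to different starting nodes; everything in the sketch below (the -start_nodes flag, node names, and paths) is an assumption, so see tag_and_denormalize's test_two_runs.sh for the real invocation:

```shell
# Illustrative sketch - flag, node names, and paths are assumptions
cd pkg/exe/toolbelt
# First run: only the nodes that load data from files
go run capitoolbelt.go start_run -keyspace=tag_and_denormalize_quicktest \
  -script_file=/tmp/capi_cfg/tag_and_denormalize_quicktest/script.json \
  -params_file=/tmp/capi_cfg/tag_and_denormalize_quicktest/script_params.json \
  -start_nodes=read_products
# Second run, after verifying the loaded data: the nodes that process it
go run capitoolbelt.go start_run -keyspace=tag_and_denormalize_quicktest \
  -script_file=/tmp/capi_cfg/tag_and_denormalize_quicktest/script.json \
  -params_file=/tmp/capi_cfg/tag_and_denormalize_quicktest/script_params.json \
  -start_nodes=tag_products
```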
Make sure the Daemon is running:
- either run `go run capidaemon.go` to start it in pkg/exe/daemon
- or start the Daemon container (`docker compose -p "test_capillaries_containers" start daemon`)
Make sure that the Daemon can connect to github.com.

Run test_one_run_input_https.sh - same as test_one_run.sh, but uses GitHub as the source of configuration and input data: the Toolbelt publishes batch messages to RabbitMQ, and the Daemon consumes them and executes all script nodes in parallel as part of a single run.
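A quick connectivity check; if the Daemon runs in Docker, run this inside its container via docker exec:

```shell
# Should print an HTTP status line; a hang points at the container's network or proxy settings
curl -sI https://github.com | head -n 1
```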
Make sure the Daemon is running:
- either run `go run capidaemon.go` to start it in pkg/exe/daemon
- or start the Daemon container (`docker compose -p "test_capillaries_containers" start daemon`)
Make sure the Webapi is running:
- either run `go run capiwebapi.go` to start it in pkg/exe/webapi
- or start the Webapi container (`docker compose -p "test_capillaries_containers" start webapi`)
Navigate to http://localhost:8080 and start a new run from the UI as described in Getting started: Run 100% dockerized demo.
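Before opening the browser, you can confirm something is being served on port 8080 (the port comes from the URL above; adjust it if yours differs):

```shell
# Expect HTML output; an error here means the UI container is not up
curl -s http://localhost:8080 | head -n 5
```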