This document is part of the OpenBB Terminal library documentation.
It aims to provide the necessary information in order to:
- Run unit tests
- Build unit tests
- Fix failing unit tests
- Maintain unit tests
The purpose of unit tests is to allow developers to update the code without fear. Writing tests that are easy to run helps whoever modifies the code do so quickly and with more confidence. Having tests available also decreases the chance that bugs make it into production and break other parts of the code.
To achieve that, unit tests need to be:
- fast to write
- fast to run

FAST TO WRITE
A unit test checks a single piece of functionality: given an input, it verifies the corresponding output. Unit tests are about ensuring that low-level implementation details are rock solid.

FAST TO RUN
Tests can be slow at times, due to the need to connect to external services, for example sending an HTTP request or connecting to a database. We can speed this up by using mocks. If that is not possible, make sure to mark the test as slow, as shown below.
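For instance, here is a minimal sketch of marking a test as slow (assuming the slow marker is registered in the project's pytest configuration; the test name and body are hypothetical):

import pytest


@pytest.mark.slow
def test_function_hitting_a_real_service():
    # hypothetical placeholder: the real test would exercise the slow code path
    ...

Tests marked this way can then be deselected, for example with pytest -m "not slow".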
In this section we explain everything you need to run the unit tests on OpenBB Terminal.
To run the tests you will first need to install the dev-dependencies. By default poetry installs the dev-dependencies when you run this command:
poetry install
If you do not want to install the dev-dependencies, you will have to add the option --no-dev, like this:
poetry install --no-dev
You can run tests on a specific package/module by specifying the path of this package/module, as follows:
pytest tests/openbb_terminal/some_package
pytest tests/openbb_terminal/some_package/some_module.py
You can run tests by their name with the argument -k:
pytest -k "test_function1"
You can run tests only on specific markers, like this:
pytest -m slow
pytest -m "not slow"
pytest -m "not slow and not network"
You can list the available markers using this command:
pytest --markers
You can use the decorator @pytest.mark.skip as below:

import pytest


@pytest.mark.skip
def test_some_function(mocker):
    pass


@pytest.mark.skip(reason="This time with a comment")
def test_another_function(mocker):
    pass

You can also skip every test in a module:

import pytest

pytest.skip("Some optional comment.", allow_module_level=True)


def test_some_function(mocker):
    pass
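A related pattern (standard pytest functionality, shown here only as a sketch) is to skip a test conditionally with @pytest.mark.skipif, for example on a given platform:

import sys

import pytest


@pytest.mark.skipif(sys.platform == "win32", reason="Not supported on Windows")
def test_unix_only_behaviour():
    pass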
When you contribute a new feature to the Terminal, it is important that tests are added for this particular feature. It is part of the checklist for the PR to be approved.
All the unit tests should be inside the tests folder. There should be at most one test module for each module of OpenBB Terminal.
Each test module should follow the same path as the module that it is testing. For instance:
- in order to test the following module: openbb_terminal/stocks/due_diligence/dd_controller.py
- a test module should be added here: tests/openbb_terminal/stocks/due_diligence/test_dd_controller.py
Now that you know where to add tests, let's go through 3 types of unit tests that we have:
As mentioned in the main repo's README, we are following the MVC pattern. The _model.py's job is to get the required data, e.g. from an external API or by scraping.
Therefore, in unit tests for _model.py, the function output is captured and saved when the test is first run. Then, the next time the test is run, the function's returned value is compared with the value captured in the first run, to ensure parity.
Depending on the data type of the function output, we generate the following files:
- csv: for pandas.DataFrame
- json: for python native data types (list, dict, tuple)
- txt: for str or printed output
Examples:
import pytest


@pytest.fixture(scope="module")
def vcr_config():
    return {
        "filter_query_parameters": [
            ("token", "MOCK_TOKEN"),
        ],
    }


@pytest.mark.vcr
@pytest.mark.parametrize(
    "ticker",
    [
        ("BTC"),
        ("BTC-USD"),
    ],
)
def test_check_supported_ticker(ticker, recorder):
    df = sentimentinvestor_model.check_supported_ticker(ticker)
    recorder.capture(df)
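The same recorder fixture also works for outputs that are not DataFrames. A minimal sketch with a hypothetical model function (some_model.get_supported_symbols does not exist in the code base; it only illustrates the pattern):

@pytest.mark.vcr
def test_get_supported_symbols(recorder):
    # hypothetical model call returning a plain python list
    symbols = some_model.get_supported_symbols()

    # a list is saved as a json fixture on the first run
    recorder.capture(symbols)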
Several things worth noting here:
- If the model utilizes an API key or token, make sure to filter it inside the vcr_config fixture in your test module, similar to the example. This hides your actual API key / token for security purposes.
- Record HTTP requests using the decorator @pytest.mark.vcr
  - When run the first time, the VCR will record the requests to cassettes/test_*.yaml
  - Run it again, and VCR will replay the recorded response when the HTTP request is made. The test is now fast (no real HTTP requests are made anymore), deterministic (the test will continue to pass, even if you have no internet connection or the external API goes down) and accurate (the response will contain the same headers and body you get from a real request).
- Test multiple input variables to ensure tests cover multiple edge cases
  - Use the decorator @pytest.mark.parametrize to specify the inputs for your tests.
- Capture the returned df using recorder.capture(<return_value>)
  - The next time the test runs, the returned output will be compared against the captured file
- Each fixture / cassette file for a test module should be stored within the same folder. An example with tests/openbb_terminal/some_package/some_module.py:
  - tests/openbb_terminal/some_package/cassettes/some_module/test_function1.yaml
  - tests/openbb_terminal/some_package/txt/some_module/test_function1.txt
  - tests/openbb_terminal/some_package/csv/some_module/test_function1.csv
  - tests/openbb_terminal/some_package/json/some_module/test_function1.json
- You can choose the cassette name using the @pytest.mark.default_cassette marker. Here is an example:
import pytest
import requests


@pytest.mark.default_cassette("example.yaml")
@pytest.mark.vcr
def test_default():
    assert requests.get("http://httpbin.org/get").text == '{"get": true}'
Different from _model.py, the _view.py module handles how data is displayed, e.g. as a chart or a table.
The _view.py functions have no return statement and simply print to stdout. Hence, our unit tests capture stdout values and compare them against the recorded fixture in subsequent runs.
Example:
@pytest.mark.vcr
@pytest.mark.record_stdout
def test_display_historical(mocker):
    # MOCK VISUALIZE_OUTPUT
    mocker.patch(target="openbb_terminal.helper_classes.TerminalStyle.visualize_output")

    sentimentinvestor_view.display_historical(
        ticker="AAPL", start="2021-12-12", end="2021-12-15", export="", raw=True
    )
Important notes to go through:
- Use @pytest.mark.record_stdout to capture stdout.
  - The recorded stdout is saved as a fixture in a txt file
  - If your view function only plots a chart, and does not output anything to stdout, you can skip unit testing your view.py
- Make sure to mock the visualize_output method.
  - Inside visualize_output, we use ion to enable interactive mode. However, under Windows, having ion enabled would make tests fail to run. Hence, you should always mock visualize_output, and hence ion.
Unit tests for _controller.py are the last component you need to add. Most of the time you won't need to create a _controller.py from scratch. Hence, we will only cover how to add tests for a new command to an existing test_*_controller.py file.
This test in the test_*_controller.py captures the help text printed to stdout.
Example:
@pytest.mark.vcr(record_mode="none")
@pytest.mark.record_stdout
def test_print_help():
    controller = alt_controller.AlternativeDataController(queue=None)
    controller.print_help()
Adding an additional feature to a controller.py will alter the help text. Make sure to run the tests with the argument --rewrite-expected to re-generate the text file.
Example: pytest tests/openbb_terminal/cryptocurrency/defi/test_defi_controller.py --rewrite-expected
This test checks that, when controller.py calls a command, the corresponding function from view.py is called with the correct arguments.
Example:
@pytest.mark.vcr(record_mode="none")
@pytest.mark.parametrize(
    "tested_func, other_args, mocked_func, called_args, called_kwargs",
    [
        (
            "call_aterra",
            [
                "ust",
                "--address=terra1wg2mlrxdmnnkkykgqg4znky86nyrtc45q336yv",  # pragma: allowlist secret
            ],
            "terraengineer_view.display_terra_asset_history",
            [],
            dict(),
        ),
        (
            "call_newsletter",
            [],
            "substack_view.display_newsletters",
            [],
            dict(),
        ),
    ],
)
def test_call_func(
    tested_func, mocked_func, other_args, called_args, called_kwargs, mocker
):
    path_controller = "openbb_terminal.cryptocurrency.defi.defi_controller"

    if mocked_func:
        mock = mocker.Mock()
        mocker.patch(
            target=f"{path_controller}.{mocked_func}",
            new=mock,
        )

        controller = defi_controller.DefiController(queue=None)
        getattr(controller, tested_func)(other_args)

        if called_args or called_kwargs:
            mock.assert_called_once_with(*called_args, **called_kwargs)
        else:
            mock.assert_called_once()
    else:
        controller = defi_controller.DefiController(queue=None)
        getattr(controller, tested_func)(other_args)
When adding a new feature to an existing menu, there should already be a test_*_controller.py file with the test_call_func() function. You can simply add another tuple under @pytest.mark.parametrize(), which contains (see the sketch after this list):
1. The call_<command> function from the controller.py file
2. The arguments you want to pass to call_<command>
3. The corresponding view function, which will be mocked (here we only want to check that the view function is called; we mock it so that the function won't execute)
4. The arguments you expect to be passed to the view function. Note that the arguments here need to be consistent with the arguments in (2). This is to ensure that arguments passed from the controller.py file are passed correctly to the function in view.py.
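A minimal sketch of such a tuple, using hypothetical command and view names (these are placeholders, not real OpenBB functions):

(
    "call_somecmd",                  # (1) hypothetical controller command
    ["--limit=5"],                   # (2) arguments forwarded to call_somecmd
    "some_view.display_some_data",   # (3) hypothetical view function expected to be called
    [],                              # (4) positional arguments the view should receive
    dict(limit=5),                   #     keyword arguments the view should receive
),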
It is not uncommon that your new feature or bug fix breaks existing tests. Understanding a few concepts can help you navigate and fix the most common cases.
Before continuing with this section, please have a look at our FIXTURES documentation.
TL;DR:
You can run the unit tests under a few modes:
- --rewrite-expected will re-generate the fixtures (csv, json and txt files) if a change is detected.
- --record-mode impacts both the cassettes (yaml files) and the fixtures (csv, json, txt files):
  - --record-mode=none: searches for existing cassettes / fixtures. If there are none, an exception is thrown.
  - --record-mode=once: creates new fixtures and cassettes if they do not exist. If fixtures or cassettes already exist, the existing ones are used.
  - --record-mode=rewrite: rewrites both the cassettes and fixtures, even if they already exist.
Let's say you just updated the code in a _model.py file, which altered the recorded output. You can update the fixture files by running the tests with the --rewrite-expected argument. This updates the fixtures with the latest changes.
Example: pytest tests/openbb_terminal/cryptocurrency/defi/test_defi_controller.py --rewrite-expected
If you want to update existing cassettes and fixtures, use the argument --record-mode=rewrite when running the tests.
As existing cassettes and fixtures will be overwritten (if there are any), make sure that you check both and that they yield the expected output.
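For example (the test module path below is only a placeholder):
pytest tests/openbb_terminal/some_package/test_some_module.py --record-mode=rewrite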
PYTEST-MOCK
A mocker fixture is available through the pytest-mock package.
Here is an example of how the mocker fixture can be used:
def test_something(mocker):
    mocker.patch("some_package.some_module")
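A slightly fuller sketch with hypothetical names (some_package.some_module.request_price does not exist in the code base; it only illustrates return_value and call assertions):

import some_package.some_module  # hypothetical module, for illustration only


def test_get_price(mocker):
    mocked = mocker.patch(
        "some_package.some_module.request_price",
        return_value=123.45,
    )

    assert some_package.some_module.request_price("AAPL") == 123.45
    mocked.assert_called_once_with("AAPL")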
More information about pytest-mock is available in the pytest-mock documentation.
MONKEYPATCH
There is also a monkeypatch fixture available by default in pytest.
def test_double(monkeypatch):
    monkeypatch.setattr("some_package.some_module", "Some value")
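The monkeypatch fixture is also handy for environment variables such as API keys. A minimal sketch (the variable name is a hypothetical placeholder):

def test_uses_api_key(monkeypatch):
    # hypothetical environment variable, for illustration only
    monkeypatch.setenv("SOME_API_KEY", "MOCK_API_KEY")
    # ... call the code that reads SOME_API_KEY here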
More information about monkeypatch is available in the pytest documentation.
You can find the available helpers inside the following package/module:
- tests/helpers/
- tests/conftest.py
See also the pytest fixtures, which are autoloaded helpers.
You can list all the available pytest fixtures using the following command:
pytest --fixtures
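A minimal sketch of a custom fixture you could add to a conftest.py (the fixture name and value are hypothetical, for illustration only):

import pytest


@pytest.fixture
def mock_ticker():
    # hypothetical fixture returning a fixed ticker symbol
    return "AAPL"


def test_uses_mock_ticker(mock_ticker):
    assert mock_ticker == "AAPL"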
More on custom fixtures is available in the pytest documentation.
You can list the available markers using this command:
pytest --markers
More information on marker locations is available in the pytest documentation.
ION Usage
As mentioned earlier, if you are testing a _view.py module that uses ION, you need to mock it. Under Windows, the chart / graph windows cannot be closed by the tests, which makes the tests fail when running on Windows.
You can use the example below. Once you have mocked the graphs / charts, simply call the display function, which now uses the mocked object.
@pytest.mark.vcr
@pytest.mark.record_stdout
def test_display_defi_tvl(mocker):
    # MOCK EXPORT_DATA
    mocker.patch(target="openbb_terminal.cryptocurrency.defi.llama_view.export_data")

    # MOCK VISUALIZE_OUTPUT
    mocker.patch(target="openbb_terminal.helper_classes.TerminalStyle.visualize_output")

    llama_view.display_defi_tvl(20)
YFINANCE
If a method uses the yfinance library but does not let you pick the start/end dates, it will pick the current date each time, so your test might fail the next day.
To solve that, you can filter the start/end dates like this:
@pytest.fixture(scope="module")
def vcr_config():
    return {
        "filter_headers": [("User-Agent", None)],
        "filter_query_parameters": [
            ("period1", "MOCK_PERIOD_1"),
            ("period2", "MOCK_PERIOD_2"),
            ("date", "MOCK_DATE"),
        ],
    }
You can also refactor the method to give access to the start/end dates, as in the sketch below.
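A minimal sketch of such a refactor, with a hypothetical function name (not an existing OpenBB function):

from datetime import datetime, timedelta

import yfinance as yf


def get_prices(symbol: str, start: str = None, end: str = None):
    # default to the last year, but let callers (and tests) pin explicit dates
    if end is None:
        end = datetime.now().strftime("%Y-%m-%d")
    if start is None:
        start = (datetime.now() - timedelta(days=365)).strftime("%Y-%m-%d")
    return yf.download(symbol, start=start, end=end, progress=False)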
USER-AGENT
Some functions use a random User-Agent in the HTTP header when fetching data from an API.
Here is how to filter this random User-Agent:
@pytest.fixture(scope="module")
def vcr_config():
    return {
        "filter_headers": [("User-Agent", None)],
    }
YFINANCE
If you do something like this with the yfinance library:
import yfinance as yf
yf.download(tickers="VCYT QSI")
Chances are your requests will be multi-threaded.
Issue: as of now, vcrpy seems to be incompatible with multi-threading.
The vcrpy library is used to record cassettes (network calls) into yaml files.
Here is a solution to still combine yfinance and vcrpy:
import pytest
import yfinance as yf

yf_download = yf.download


def mock_yf_download(*args, **kwargs):
    kwargs["threads"] = False
    return yf_download(*args, **kwargs)


@pytest.mark.vcr
def test_ark_orders_view(mocker):
    mocker.patch("yfinance.download", side_effect=mock_yf_download)

    yf.download(tickers="VCYT QSI")
BROTLI
The requests library doesn't support brotli compression, unless the brotli library is installed in the environment. Also, brotli is not in the OpenBBTerminal requirements.
So if both of these conditions are fulfilled:
- someone generates a cassette with brotli installed in his/her environment
- the server chooses to send brotli-compressed data
Then the test might work locally but crash during the Pull Request checks.
BEFORE_RECORD_RESPONSE
The vcrpy library has a before_record_response hook which accepts a filtering function that can be used to filter your cassette content:
import pytest


def my_custom_filter(response):
    return response


@pytest.mark.vcr(before_record_response=my_custom_filter)
def test_simple(recorder):
    pass
For instance, this can be used to reduce the size of a cassette if it is too heavy, or to filter sensitive data in the response; see the sketch below.
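A minimal sketch of such a filter (the header and token values are placeholders; the dictionary layout follows the recorded-response structure vcrpy passes to the hook, so adjust it to your actual cassette content):

def scrub_response(response):
    # drop a bulky / sensitive header from the recorded response
    response["headers"].pop("Set-Cookie", None)

    # redact a secret from the recorded body (placeholder values)
    body = response["body"]["string"]
    if body:
        response["body"]["string"] = body.replace(b"MY_REAL_TOKEN", b"MOCK_TOKEN")

    return response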
The issue with this before_record_response hook is that it is not run on the first run of the test.
More on this here: kevin1024/vcrpy#594
A solution for now is to run the following command twice while initializing the cassettes:
# THE SAME COMMAND NEEDS TO BE RUN TWICE
pytest tests/.../test_some_test_module.py --record-mode=once --rewrite-expected
pytest tests/.../test_some_test_module.py --record-mode=once --rewrite-expected
VSCODE TESTING
The default testing tool of vscode should let you add breakpoints and run the debugger on a specific test.
It is a convenient way to see what is happening inside your test while it runs.
More information on this tool is available in the vscode documentation.
UPDATE PYPROJECT
If a library needs to be added to or removed from the dev-dependencies, this can be done directly in the pyproject.toml file.
All the dev-dependencies should be under the tool.poetry.dev-dependencies section, like in this example:
[tool.poetry.dev-dependencies]
pytest = "^6.2.2"
pylint = "^2.7.2"
flake8 = "^3.9.0"
...
EXPORT REQUIREMENTS
After updating the pyproject.toml file, you will have to export the requirements files using the following commands:
poetry export -f requirements.txt -o requirements.txt --without-hashes --dev
poetry export -f requirements.txt -o requirements-full.txt --extras prediction --extras optimization --without-hashes --dev
Here are the steps to write proper tests for OpenBB Terminal:
A. Find the right place
B. Verify coverage is above 90%
C. Set the right markers

A. Find the right place
Put the code following the same module and package structure as the openbb_terminal package.

B. Verify coverage is above 90%
Once you have made your Pull Request, an automation will let you know whether or not you have the proper amount of test coverage.
You can also run the following command to check your coverage manually:
pytest --cov --cov-fail-under=90

C. Set the right markers
If parts of your tests have specific requirements, such as:
- being slow
- requiring network connectivity
- requiring authentication to APIs
then the proper markers should be set on each test.
The rest of this document provides a deeper explanation of these steps.
A test update might be asked for whenever the answer to one of the following questions is yes:
A. Is this PR fixing a bug which was not detected by the current tests?
B. Is this PR reducing code coverage?
C. Is this PR adding features?
D. Is this PR updating features?
PULL REQUEST
A github action should comment on every Pull Request with the code coverage.
This automation will not enforce any rules regarding tests.
PULL REQUEST AUTOMATION
You should be able to see the code coverage in a comment on your Pull Request.
MANUALLY
This is how to manually check code coverage.
pytest --cov=openbb_terminal --cov=terminal --cov-report term-missing
You can also select a specific package/module with the option --cov.
Here is an example where we select only the package stocks/due_diligence:
pytest --cov=openbb_terminal/stocks/due_diligence --cov-report term-missing
PRE-COMMIT
Code coverage and unit tests are not run in pre-commit hooks, since that would slow down each commit.