Tips on Writing Automated Tests in Jest

Jest is a testing framework we use to ensure our most mission-critical libraries are as stable as possible. Here are a few things to consider about our app's architecture when testing in Jest.

Asynchronous Testing

  • Much of the logic in the app is asynchronous: react-native-onyx writes data asynchronously before updating subscribers.
  • Actions do not typically return a Promise, so they can't always be "awaited" before running an assertion.
  • To test a result after some asynchronous code has run, we can use Onyx.connect() and the helper method waitForBatchedUpdates(), which returns a Promise and ensures that all other Promises have finished running before it resolves.
  • Important Note: When writing any asynchronous Jest test, it's very important that the test itself return a Promise (see the sketch below).
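
Here's a minimal sketch of that pattern, assuming a hypothetical Onyx key (real keys live in ONYXKEYS):

let session;
Onyx.connect({
    key: 'session',
    callback: (val) => (session = val),
});

it('stores the authToken in Onyx', () => {
    // Kick off an asynchronous Onyx write
    Onyx.merge('session', {authToken: 'test-token'});

    // Return the Promise so Jest waits for all batched updates
    // to flush before the test is considered finished
    return waitForBatchedUpdates().then(() => {
        expect(session.authToken).toBe('test-token');
    });
});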

Mocking Network Requests

  • Network requests made in tests do not run against any test database, so we must mock them with a jest.fn().
  • To simulate a network request succeeding or failing, mock the expected response first and then manually trigger the action that calls that API command.
  • Mocking the response of HttpUtils.xhr() is the best way to simulate various API conditions so we can verify whether a result occurs or not (see the sketch below).
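
For example, success and failure can both be simulated by controlling what the mocked HttpUtils.xhr() resolves with (the failing jsonCode and error shape below are assumptions for illustration):

HttpUtils.xhr = jest.fn();

// Simulate the next request succeeding
HttpUtils.xhr.mockImplementation(() => Promise.resolve({jsonCode: 200}));

// ...or simulate it failing
HttpUtils.xhr.mockImplementation(() => Promise.resolve({jsonCode: 400, message: 'Something went wrong'}));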

Mocking collections / collection items

When unit testing with Jest or performance testing with Reassure, you might need to work with collections of data. These can be tricky to generate and maintain, so we have a few helper methods located in tests/utils/collections/.

  • createCollection() - Creates a collection of data (Record<string, T>) with a given number of items (default=500). This is useful, for example, for testing the performance of a component with a large number of items. You can use it to populate Onyx.
  • createRandom*() - Functions like createRandomPolicy generate a randomised object of the given type. You can use them as the default generator when calling createCollection() or as standalone utilities.

Basic example:

const policies = createCollection<Policy>(item => `policies_${item.id}`, createRandomPolicy);

/**
    Output:
    {
        "policies_0": policyItem0,
        "policies_1": policyItem1,
        ...
    }
*/

Example with overrides:

const policies = createCollection<Policy>(
    item => `policies_${item.id}`,
    index => ({ ...createRandomPolicy(index), isPinned: true })
);
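
Since these helpers produce a Record<string, T> keyed by Onyx key, the result can be written straight into Onyx; a minimal sketch:

// Seed Onyx with the generated collection before running assertions
await Onyx.multiSet(policies);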

Mocking node_modules, user modules, and what belongs in jest/setup.ts

If you need to mock a library that exists in node_modules, add it to the __mocks__ folder in the root of the project (see the Jest documentation on mocking node_modules for details). If you need to mock an individual user module, create a mock module in a __mocks__ subdirectory adjacent to the module. Keep in mind that when you do this you must also manually activate the mock by calling something like jest.mock('../../src/libs/Log'); at the top of the individual test file. If every test in the app needs something to be mocked, that's a good case for adding it to jest/setup.ts, but we should generally avoid adding user-module or node_modules mocks to this file. Please use the __mocks__ subdirectories wherever appropriate.
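
As a minimal sketch, a user-module mock and its activation might look like this (the exported method names are assumptions about the Log module's API):

// src/libs/__mocks__/Log.ts
export default {
    info: jest.fn(),
    alert: jest.fn(),
};

// In the individual test file -- user-module mocks are not applied
// automatically, so opt in explicitly:
jest.mock('../../src/libs/Log');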

Assertions

  • There are a ton of matchers that Jest offers for making assertions.
  • When testing an Action, it is often best to test that the Onyx data matches our expectations after the action runs. Since Onyx data is usually an object or array, use a deep-equality matcher:

expect(onyxData).toEqual(expectedOnyxData);
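
A few other standard Jest matchers that come up often:

expect(value).toBe(1);                    // strict equality (Object.is)
expect(list).toContain('item');           // array or string membership
expect(HttpUtils.xhr).toHaveBeenCalled(); // a mocked function was called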

Documenting Tests

Tests aren't always clear about what exactly is being tested. To make this easier, we recommend adopting the following format for code comments:

// Given <initial_condition>
...  code that sets initial condition

// When <something_happens>
... code that does something

// Then <expectation>
... code that performs the assertion

Example Test

// Import paths here are illustrative and depend on where the test file lives
import Onyx from 'react-native-onyx';
import HttpUtils from '../../src/libs/HttpUtils';
import * as Report from '../../src/libs/actions/Report';
import ONYXKEYS from '../../src/ONYXKEYS';
import waitForBatchedUpdates from '../utils/waitForBatchedUpdates';

HttpUtils.xhr = jest.fn();

describe('actions/Report', () => {
    it('adds an optimistic comment', () => {
        // Given an Onyx subscription to a report's `reportActions`
        const ACTION_ID = 1;
        const REPORT_ID = 1;
        let reportActions;
        Onyx.connect({
            key: `${ONYXKEYS.COLLECTION.REPORT_ACTIONS}${REPORT_ID}`,
            callback: val => reportActions = val,
        });

        // Mock Report_AddComment command so it can succeed
        HttpUtils.xhr.mockImplementation(() => Promise.resolve({
            jsonCode: 200,
        }));

        // When we add a new action to that report
        Report.addComment(REPORT_ID, 'Hello!');
        return waitForBatchedUpdates()
            .then(() => {
                const action = reportActions[ACTION_ID];

                // Then the action set in the Onyx callback should match
                // the comment we left and it will be in a loading state because
                // it's an "optimistic comment"
                expect(action.message[0].text).toBe('Hello!');
                expect(action.isPending).toBe(true);
            });
    });
});

When to Write a Test

Many of the UI features of our application should go through rigorous testing by you, your PR reviewer, and finally QA before deployment. UI tests are also difficult to maintain when the UI changes often, so it's not valuable for us to place every single part of the application UI under test at this time. The manual testing steps should catch most major UI bugs; if we are writing any test, there should be a good reason for it.

What's a "good reason" to write a test?

  • Anything that is difficult or impossible to run a manual test on
    • e.g. a test to verify an outcome after an authentication token expires (which normally takes two hours)
  • Areas of the code that are changing often, breaking often, and would benefit from the resiliency an automated test would provide
  • Lower JS libraries that might have many downstream effects
  • Actions. It's important to verify that data is being saved as expected after one or more actions have finished doing their work.

Debugging Tests

If you are using Visual Studio Code, it's easy to debug a test you are writing or to fix one that is now failing as a result of your changes. To step through a test while it's running, install the Jest extension and make sure your launch.json settings match this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "name": "vscode-jest-tests",
            "request": "launch",
            "args": [
                "--runInBand"
            ],
            "cwd": "${workspaceFolder}",
            "console": "integratedTerminal",
            "internalConsoleOptions": "neverOpen",
            "disableOptimisticBPs": true,
            "program": "${workspaceFolder}/node_modules/jest/bin/jest"
        }
    ]
}

You should now be able to set breakpoints anywhere in the code and run your test from within Visual Studio Code.