# Automated Testing of Cockpit

This directory contains automated tests for Cockpit, and the support files for them.

To run the tests, you need to install the following packages on Fedora:

    # yum install npm python-libguestfs qemu mock qemu-kvm python \
        curl libvirt-client openssl rpmdevtools libguestfs-tools \
        rpm-build krb5-workstation python-lxml expect selinux-policy-devel

Testing requires phantomjs to be installed globally on the system, not just inside the cockpit package:

    # npm -g install phantomjs

## Quick list of tools

For setting up the host:

 * vm-prep, for creating the test network etc.

For managing test machines:

 * vm-create, for creating (or downloading) test machine images.
 * vm-reset, for starting over from a clean slate.
 * vm-install, for installing RPMs into a test machine image.

For debugging:

 * vm-run, for running a test machine image.
 * vm-copy, for copying files into a running test machine.
 * vm-sh, for executing commands in a running test machine.

The test suite itself:

 * testsuite-prepare, for preparing everything so that tests can run.
 * check-verify, for running all tests.
 * check-*, for running selected tests.

## Configuration

You can set these environment variables to configure the test suite:

 * TEST_OS: The OS to run the tests in. Currently "fedora-22", "fedora-rawhide", and "rhel-7" are supported, with "fedora-22" as the default.

   Note on "rhel-7": The installation routine expects subscription credentials to be present on the host in ~/.rhel/login and ~/.rhel/pass, which will be copied to the guest. Without these, the system can't be registered and the runtime dependencies of cockpit won't be installed.

 * TEST_ARCH: The machine architecture to use. Currently, the default and only supported value is "x86_64".

 * TEST_DATA: Where to find and store test machine images. The default is the same directory that this README file is in.

 * TEST_JOBS: How many tests to run in parallel. The default is 1.

## Quick start

First, run ./testsuite-prepare. This will download the necessary images, compile cockpit, install the result, and set up some virtual networking. It might ask for the sudo password in order to set up the network. Please make sure that mock does not issue a warning; you may have to add your user to the mock system group.

    $ ./testsuite-prepare

Then you can run some tests:

    $ ./check-connection

To run all tests in parallel, run check-verify like this:

    $ TEST_JOBS=4 ./check-verify

## Test machines and their images

The code under test is executed in one or more dedicated virtual machines, called the "test machines". Fresh test machines are started for each test.

A test machine runs a "test machine image". Such a test machine image contains the root filesystem that is booted inside the virtual machine. A running test machine can write to its root filesystem, of course, but these changes are (usually) not propagated back to its image. Thus, you can run multiple test machines from the same image, at the same time or one after the other, and each test machine starts with a clean state.

A test machine image is created with vm-create, like so:

    $ ./vm-create -v -f cockpit

The image will be created in $TEST_DATA/images/.

The vm-create command will not overwrite existing test machine images; it will simply report success in that case. Use the "--force" option to create a new image no matter what. The vm-create command will try to download an existing image from file.cockpit-project.org instead of creating a new image.
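As an illustration of how the configuration variables above combine with vm-create, a command like the sketch below could prepare an image for a different OS in a different data directory. It assumes that vm-create honors TEST_OS and TEST_DATA in the same way as the rest of the test suite; the path /var/tmp/cockpit-test is just a placeholder:

    # Illustrative sketch: build a rhel-7 "cockpit" image in a custom TEST_DATA location
    $ TEST_OS=rhel-7 TEST_DATA=/var/tmp/cockpit-test ./vm-create -v -f cockpit --force

With these settings the resulting image would end up under /var/tmp/cockpit-test/images/ instead of next to this README.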
There can be more than one test machine image. For example, you might want to test a scenario where Cockpit on one machine talks to FreeIPA on another, and you want those two machines to use different images. This is handled by passing the "--flavor" / "-f" option to vm-create and other scripts that work with test machine images. The following flavors are available:

 * "cockpit" -- The basic image for running the development version of Cockpit. This is the default.

 * "stock" -- A stock installation of the given OS, including the stock version of Cockpit. This is used to test compatibility between released versions of Cockpit and the development version.

 * "ipa" -- A FreeIPA server.

 * "openshift" -- An Openshift Origin server.

To define a new flavor, create a script called "FLAVOR.setup" in this directory. Then, calling "vm-create -f FLAVOR ARGS..." will run "FLAVOR.setup ARGS..." inside the virtual machine.

By default, launching a test machine from an image will dynamically assign it a 'random' MAC address, which results in DHCP assigning it a random IP address. This means that you can launch multiple test machines from a single image, maybe to use multiple machines inside a single test case, or to run multiple test cases in parallel.

If you want a fixed MAC for a given flavor, you can set a cockpit:flavor attribute in the guest/network-cockpit.xml file, like this:

    <host mac='52:54:00:9e:00:F0' ip='10.111.112.100' name='f0.cockpit.lan'
          cockpit:flavor="ipa"/>

Whenever a test machine of that flavor is launched, it will receive this MAC, and you can thus only run one of them at any given time.

A test machine image created by vm-create doesn't contain any Cockpit code in it yet. You can build and install the currently checked out working copy of Cockpit like this:

    $ ./vm-install $(../tools/make-rpms)

This will modify the test machine image that is used for the next run, but will not modify the saved version in $TEST_DATA/images. Use vm-reset to revert the test machine images for the next run to the versions in $TEST_DATA/images.

A typical sequence of steps would thus be the following:

    $ ./vm-create ...    # Create a fresh image
    $ ./vm-reset         # Start over
    $ ./vm-install ...   # Install code to test
    $ ./check-...        # Run some tests
    $ ./vm-reset         # Start over
    $ ./vm-install ...   # Install code to test
    $ ./check-...        # Run some tests

    etc.

To update a test machine image for a given flavor (after making changes to the setup scripts, for example, or simply to install new versions of distribution packages), you should increase its 'tag'. The tags are stored in guest/FLAVOR.conf, one per OS. Changing the tag will cause testsuite-prepare to create new images for that tag. You can have images for multiple tags in $TEST_DATA/images, and switching between branches that use different tags works as expected.

## Running tests

Once you have a test machine image that contains the version of Cockpit that you want to test, you can run tests by picking a program and just executing it:

    $ ./check-connection

## Debugging tests

If you pass the "-s" ("sit on failure") option to a test program, it will pause when a failure occurs so that you can log into the test machine and investigate the problem. You can log into a running test machine with "vm-sh". You can also put calls to sit() into the tests themselves to stop them at strategic places.
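As an illustration of this workflow, a debugging session could look like the sketch below. It assumes that vm-sh, when given no explicit "--machine" option, targets the single running test machine; the commands run inside the machine are ordinary examples, not part of the test suite:

    # Terminal 1: run a single test and pause when it fails
    $ ./check-connection -s

    # Terminal 2: once the test sits, inspect the paused test machine
    $ ./vm-sh 'systemctl status cockpit'
    $ ./vm-sh 'journalctl -b --no-pager | tail -n 50'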
You can use the "--vm-start-hook" option to execute an arbitrary shell command after a test machine has been booted but before the test begins. That way, you can run a test cleanly while still being able to make quick changes, such as adding debugging output to JavaScript. The environment variable TEST_MACHINE is set to the identifier of the machine that has just been booted.

For example, one might run a full "make install DESTDIR=..." and copy the resulting installation directory to the test machine. This could be done with a script like the following:

    inst=/var/tmp/INST
    f20 rm -rf "$inst"
    f20 make-top -k install DESTDIR="$inst"
    f20 tar -C "$inst" -cf - . | ./vm-sh --machine $TEST_MACHINE 'cd / && cpio -idu'

The hypothetical "f20" command executes a command in the build environment.

## Guidelines for writing tests

It is OK for a test to destroy the test machine OS installation, or otherwise modify it without cleaning up. For example, it is OK to remove all of /etc just to see what happens. The next test will get a pristine test machine.

A fast-running test suite is more important than independent, small test cases. Thus, it is OK for tests to be long. Starting the test machine is so slow that we should run as many checks within a single session as make sense.

Still, within a long test, try to have independent sections, where each section returns the machine to more or less the state that it was in before the section. This makes it easier to run these sections ad hoc when doing incremental development.