3.5 Software Testing Framework for R Packages
The objective of this section is:
- Create unit tests for an R package using the testthat package
Once you’ve written code for an R package and have gotten that code to a point where you believe it’s working, it may be a good time to step back and consider a few things about your code.
How do you know it’s working? Given that you wrote the functions, you have a certain set of expectations about how the functions should behave. Specifically, for a given set of inputs you expect a certain output. Having these expectations clearly in mind is an important aspect of knowing whether code is “working.”
Have you already tested your code? Chances are, throughout the development of your code, you ran little tests to see if your functions were working. Assuming these tests were valid for the code you were testing, it’s worth keeping these tests on hand and making them part of your package.
Setting up a battery of tests for the code in your package can play a big role in maintaining the smooth operation of the package and in hunting down bugs, should they arise. Over time, many aspects of a package can change. Specifically:
As you actively develop your code, you may change/break older code without knowing it. For example, modifying a helper function that lots of other functions rely on may be better for some functions but may break behavior for other functions. Without a comprehensive testing framework, you might not know that some behavior is broken until a user reports it to you.
The environment in which your package runs can change. The version of R, libraries, web sites and any other external resources, and packages can all change without warning. In such cases, your code may be unchanged, but because of an external change, your code may not produce the expected output given a set of inputs. Having tests in place that are run regularly can help to catch these changes even if your package isn’t under active development.
As you fix bugs in your code, it’s often a good idea to include a specific test that addresses each bug so that you can be sure that the bug does not “return” in a future version of the package (this is also known as a regression).
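A regression test of this kind can be a small, self-contained check that pins down the corrected behavior. The sketch below is hypothetical: `col_means()` is a made-up function that, in this scenario, once failed on data frames containing non-numeric columns, and the test guards against that bug returning.

```r
library(testthat)

## Hypothetical example: suppose an earlier version of col_means() failed
## when the data frame contained non-numeric columns. After the fix, this
## regression test ensures the bug does not quietly return.
col_means <- function(df) {
        numeric_cols <- vapply(df, is.numeric, logical(1))
        colMeans(df[, numeric_cols, drop = FALSE])
}

test_that("col_means() skips non-numeric columns (regression test)", {
        df <- data.frame(x = 1:4, y = letters[1:4])
        expect_that(col_means(df), equals(c(x = 2.5)))
})
```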
Testing your code effectively has some implications for code design. In particular, it may be more useful to divide your code into smaller functions so that you can test individual pieces more effectively. For example, if you have one large function that returns TRUE or FALSE, it is easy to test this function, but ultimately it may not be possible to identify problems deep in the code by simply checking if the function returns the correct logical value. It may be better to divide a large function into smaller functions so that core elements of the function can be tested separately to ensure that they are behaving appropriately.
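As a sketch of this idea, a single TRUE/FALSE validator can be split into smaller pieces that are each testable on their own. All of the function names here (`parse_record()`, `validate_range()`, `is_valid_record()`) are made up for illustration.

```r
## Hypothetical decomposition: rather than one opaque function that only
## returns TRUE or FALSE, expose the intermediate steps so each can be
## tested separately.
parse_record <- function(x) {
        ## Parse a comma-separated string into a numeric vector
        as.numeric(strsplit(x, ",", fixed = TRUE)[[1]])
}

validate_range <- function(values, lo = 0, hi = 100) {
        ## Check that all values fall within [lo, hi]
        all(values >= lo & values <= hi)
}

is_valid_record <- function(x) {
        values <- parse_record(x)
        !anyNA(values) && validate_range(values)
}
```

With this structure, a failing `is_valid_record()` test can be narrowed down by testing `parse_record()` and `validate_range()` individually.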
3.5.1 The testthat Package
The testthat package is designed to make it easy to set up a battery of tests for your R package. A nice introduction to the package can be found in Hadley Wickham's article in the R Journal. Essentially, the package contains a suite of functions for comparing function/expression output with the expected output. The simplest use of the package is for testing a simple expression:
library(testthat)
expect_that(sqrt(3) * sqrt(3), equals(3))
Note that the equals() function allows for some numerical fuzz, which is why this expression actually passes the test. When a test fails, expect_that() throws an error and does not return anything.
## Use a strict test of equality (this test fails)
expect_that(sqrt(3) * sqrt(3), is_identical_to(3))
Error: sqrt(3) * sqrt(3) not identical to 3.
Objects equal but not identical
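The difference between the two tests comes down to floating-point arithmetic, which can be seen directly in base R: identical() compares values bit for bit, while all.equal() (which underlies the fuzzy comparison) allows a small tolerance.

```r
## Why the fuzzy test passes but the strict one fails:
x <- sqrt(3) * sqrt(3)
print(x, digits = 17)        # not exactly 3 in floating-point arithmetic
identical(x, 3)              # FALSE: bit-for-bit comparison
isTRUE(all.equal(x, 3))      # TRUE: equal within a small numerical tolerance
```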
The expect_that() function can be used to wrap many different kinds of tests, beyond just numerical output. The table below provides a brief summary of the types of comparisons that can be made.
Expectation | Description
---|---
equals() | checks for equality with numerical fuzz
is_identical_to() | strict equality via identical()
is_equivalent_to() | like equals() but ignores object attributes
is_a() | checks the class of an object (using inherits())
matches() | checks that a string matches a regular expression
prints_text() | checks that an expression prints to the console
shows_message() | checks for a message being generated
gives_warning() | checks that an expression gives a warning
throws_error() | checks that an expression (properly) throws an error
is_true() | checks that an expression is TRUE
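A few of these expectations in action, as a minimal sketch using the classic expect_that() interface (more recent versions of testthat favor the expect_equal()-style functions, but these expectations are still exported); all of the checks below pass:

```r
library(testthat)

expect_that(1 + 1, equals(2))                      # equality with numerical fuzz
expect_that(2L, is_identical_to(2L))               # strict identical()
expect_that(c(a = 1), is_equivalent_to(1))         # ignores the names attribute
expect_that(lm(Ozone ~ Wind, data = airquality),
            is_a("lm"))                            # class check via inherits()
expect_that("testthat", matches("^test"))          # regular expression match
expect_that(message("done"), shows_message())      # a message is generated
expect_that(warning("careful"), gives_warning())   # a warning is generated
expect_that(stop("bad input"), throws_error())     # an error is thrown
expect_that(all(1:3 > 0), is_true())               # expression is TRUE
```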
A collection of calls to expect_that() can be put together with the test_that() function, as in
test_that("model fitting", {
        data(airquality)
        fit <- lm(Ozone ~ Wind, data = airquality)
        expect_that(fit, is_a("lm"))
        expect_that(1 + 1, equals(2))
})
Test passed 😸
Typically, you would put your tests in an R file. If you have multiple sets of tests that test different domains of a package, you might put those tests in different files. Individual files can have their tests run with the test_file() function. A collection of test files can be placed in a directory and tested all together with the test_dir() function.
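The sketch below shows both functions at work. To keep the example self-contained it first writes a small test file to a temporary directory; the file name and its contents are made up for illustration (test_dir() picks up files whose names start with "test").

```r
library(testthat)

## Write a tiny test file into a temporary directory so the example runs
## anywhere; in a real package these files would live under tests/.
dir <- tempfile("tests-")
dir.create(dir)
writeLines(c(
        'test_that("arithmetic works", {',
        '        expect_that(1 + 1, equals(2))',
        '})'
), file.path(dir, "test-arithmetic.R"))

test_file(file.path(dir, "test-arithmetic.R"))  # run one file's tests
test_dir(dir)                                   # run every test file in the directory
```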
In the context of an R package, it makes sense to put the test files in the tests
directory. This way, when running R CMD check
(see the next section) all of the tests will be run as part of the process of checking the entire package. If any of your tests fail, then the entire package checking process will fail and will prevent you from distributing buggy code. If you want users to be able to easily see the tests from an installed package, you can place the tests in the inst/tests
directory and have a separate file in the tests
directory to run all of the tests.
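Under the inst/tests convention, that separate runner file might look like the sketch below, where "mypackage" is a placeholder for your package's name and test_package() is the classic testthat entry point for running a package's installed tests (newer testthat setups instead place tests in tests/testthat and call test_check() from tests/testthat.R).

```r
## tests/run-all.R -- a minimal runner sketch; "mypackage" is a placeholder
library(testthat)
library(mypackage)

test_package("mypackage")
```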