## Test 1: A simple smoke test
For our first test, let's simply instantiate the ``RTAnalysis`` class and ensure that the resulting object is not empty. We call this a "smoke test" since it mostly just makes sure that things run and don't break --- it doesn't actually test the functionality. This is done in [test_1_smoketest.py](tests/test_1_smoketest.py):
```python
import pytest

from rtanalysis.rtanalysis import RTAnalysis
```
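The body of the smoke test itself is short; a minimal sketch of the pattern looks like the following (with a trivial stand-in class defined inline so the snippet is self-contained --- the real test imports ``RTAnalysis`` from the package instead):

```python
class RTAnalysis:
    """Trivial stand-in for rtanalysis.rtanalysis.RTAnalysis."""
    pass

def test_rtanalysis_smoke():
    # the smoke test: construction should succeed and yield a non-empty object
    rta = RTAnalysis()
    assert rta is not None
```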
We can run the test using ``pytest`` from the command line.
This data frame includes two series, called ``rt`` and ``accuracy``, that can then be passed to the ``fit`` method:

```python
rta.fit(test_df.rt, test_df.accuracy)
```
Here is what our test function looks like ([test_2_fit.py](tests/test_2_fit.py)):
```python
def test_rtanalysis_fit():
    rta = RTAnalysis()
    ...
```
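The rest of the test follows a generate-then-recover pattern: simulate data with known parameters, fit the model, and check that the estimates come back close to those parameters. A self-contained sketch of that pattern, with a toy estimator and a simplified ``generate_test_df`` standing in for the package's versions (all names here are illustrative):

```python
import random
import statistics

def generate_test_df(mean_rt, sd_rt, mean_acc, n=1000, seed=1):
    # simplified stand-in: returns response times and 0/1 accuracies as lists
    rng = random.Random(seed)
    rt = [max(rng.gauss(mean_rt, sd_rt), 0.01) for _ in range(n)]
    accuracy = [1 if rng.random() < mean_acc else 0 for _ in range(n)]
    return rt, accuracy

class ToyRTAnalysis:
    # toy estimator: "fitting" just computes mean RT and mean accuracy
    def fit(self, rt, accuracy):
        self.meanrt_ = statistics.mean(rt)
        self.meanacc_ = statistics.mean(accuracy)

def test_rtanalysis_fit_toy():
    mean_rt, sd_rt, mean_acc = 2.0, 1.0, 0.8
    rt, accuracy = generate_test_df(mean_rt, sd_rt, mean_acc)
    rta = ToyRTAnalysis()
    rta.fit(rt, accuracy)
    # estimates should be close to the generating parameters
    assert abs(rta.meanrt_ - mean_rt) < 0.2
    assert abs(rta.meanacc_ - mean_acc) < 0.1
```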
Test 2 checked whether our program performed as advertised. However, as Myers wrote in *The Art of Software Testing*:

> Examining a program to see if it does not do what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do.
That is, we need to try to cause the program to make errors, and make sure that it avoids them appropriately. In this case, we will start by seeing what happens if our rt and accuracy series are of different sizes. Let's first write a test to see what happens if we do this ([test_3_type_fail.py](tests/test_3_type_fail.py)):
```python
@pytest.mark.xfail
def test_dataframe_error():
    rta = RTAnalysis()
    test_df = generate_test_df(2, 1, 0.8)
    rta.fit(test_df.rt, test_df.accuracy.loc[1:])
```
If we run this test, we will see that it fails, due to the error that is raised by the function when the data are incorrectly sized. (Note that we have told pytest to ignore this failure, so that it won't cause our entire test run to fail, using the ``@pytest.mark.xfail`` decorator.) This is the correct behavior on the part of our function, but it's not the correct behavior on the part of our test! Instead, we want the test to succeed *if and only if* the correct exception is raised. To do this, we can use the ``pytest.raises`` function as a context manager ([test_3_type_success.py](tests/test_3_type_success.py)):
```python
def test_dataframe_error_with_raises():
    rta = RTAnalysis()
    ...
```
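The essential pattern is that the test passes *only* when the expected exception is raised inside the ``with`` block. A self-contained sketch (using a hypothetical ``fit`` function with the length check inlined, since the real check lives inside ``RTAnalysis.fit``):

```python
import pytest

def fit(rt, accuracy):
    # stand-in for RTAnalysis.fit: reject inputs of mismatched length
    if len(rt) != len(accuracy):
        raise ValueError("rt and accuracy must have the same length")

def test_mismatched_lengths_raise():
    # pytest.raises fails the test unless a ValueError escapes the block
    with pytest.raises(ValueError):
        fit([0.5, 0.6, 0.7], [1, 0])
```

If ``fit`` returned normally, ``pytest.raises`` would itself fail the test --- exactly the if-and-only-if behavior we want.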
## Test 4: Making a persistent fixture for testing
Let's say that we want to create several tests, all of which use the same object. In this case, let's say that we want to create several tests that use the same simulated dataset. We can do that by creating what we call a *fixture* in pytest, which is an object that can be passed into a test. In addition to a fixture containing the dataset, we also create a fixture to contain our parameters, so that they can be used for testing (see [test_4_fixture.py](tests/test_4_fixture.py)):
```python
@pytest.fixture
...
```
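A self-contained sketch of the fixture pattern (the fixture names and parameter values here are illustrative, not the ones from test_4_fixture.py):

```python
import random

import pytest

def make_dataset(mean_rt, sd_rt, mean_acc, n=100, seed=1):
    # plain helper that does the actual data generation
    rng = random.Random(seed)
    rt = [max(rng.gauss(mean_rt, sd_rt), 0.01) for _ in range(n)]
    accuracy = [1 if rng.random() < mean_acc else 0 for _ in range(n)]
    return rt, accuracy

@pytest.fixture
def params():
    # parameters shared by every test that requests this fixture
    return {"mean_rt": 2.0, "sd_rt": 1.0, "mean_acc": 0.8}

@pytest.fixture
def test_data(params):
    # pytest resolves the params fixture and injects its value here
    return make_dataset(**params)

def test_dataset_sizes(test_data):
    rt, accuracy = test_data
    assert len(rt) == len(accuracy) == 100
```

Any test that lists ``test_data`` or ``params`` as an argument receives the fixture's return value, so several tests can share the same simulated dataset without regenerating it in each test body.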
## Test 5: Parametric tests
Sometimes we wish to test a function across multiple values of a parameter. For example, let's say that we want to make sure that our function works for response times that are coded either in seconds or milliseconds. We can run the same test with different parameters in pytest using the ``@pytest.mark.parametrize`` decorator ([test_5_parametric.py](tests/test_5_parametric.py)).
```python
@pytest.mark.parametrize("meanRT, sdRT, meanAcc",
                         ...)
```
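A self-contained sketch of the same idea (the parameter sets and the toy estimator here are illustrative; the tutorial's test file parametrizes ``meanRT``, ``sdRT``, and ``meanAcc``):

```python
import statistics

import pytest

class ToyRTAnalysis:
    # toy stand-in: "fitting" just averages the response times
    def fit(self, rt):
        self.meanrt_ = statistics.mean(rt)

@pytest.mark.parametrize("mean_rt, scale", [(0.8, 1.0), (800.0, 1000.0)])
def test_fit_handles_units(mean_rt, scale):
    # the same test body runs once for seconds and once for milliseconds
    rt = [mean_rt - 0.1 * scale, mean_rt, mean_rt + 0.1 * scale]
    rta = ToyRTAnalysis()
    rta.fit(rt)
    assert abs(rta.meanrt_ - mean_rt) < 1e-6 * scale
```

pytest collects this as two separate test cases, one per parameter tuple, and reports each one individually.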
This loops through each of the sets of parameters for the three variables.

## Assessing test coverage

It can be useful to know which portions of our code are actually being exercised by our tests. There are various types of test coverage; we will focus here on simply assessing whether each line in the code has been covered, but see [The Art of Software Testing](http://barbie.uta.edu/~mehra/Book1_The%20Art%20of%20Software%20Testing.pdf) for much more on this topic.
We can assess the degree to which our tests cover our code using the Coverage.py tool (``pip install coverage``) with the pytest-cov extension (``pip install pytest-cov``). With these installed, we simply add the ``--cov`` argument to our pytest command, which will give us a coverage report. We will specify the code directory so that coverage is only computed for our code of interest, not for the tests themselves:
```
==================== test session starts ====================
platform darwin -- Python 3.8.3, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
...
=============== 8 passed, 1 xfailed in 1.10s ================
```
Now we see that our pytest output also includes a coverage report, which tells us that we have only covered 85% of the statements in rtanalysis.py. We can look further at which statements we are missing using the ``coverage annotate`` command, which generates a set of files annotated to show which statements have been covered:
```
pytest_tutorial % coverage annotate
pytest_tutorial % ls -1 rtanalysis
__init__.py
__init__.py,cover
__pycache__
generate_testdata.py
generate_testdata.py,cover
rtanalysis.py
rtanalysis.py,cover
```
We see here that the annotation function has generated a set of files with the suffix ``,cover``. Each line in these files is marked with a ``>`` symbol if it was covered by the tests, and with a ``!`` symbol if it was not. From this, we can see that there were two sections in the code that were not covered.