Chapter 3: Unit Testing#
What is Unit Testing?#
Unit testing is a common software testing technique to check individual code components (e.g., functions, methods, classes, or modules) in isolation from the rest of the program.
The main idea is to run such isolated components with a variety of inputs and check the outputs against expected results.
For unit testing to be achievable and effective, the code design must facilitate easy isolation of components and their dependencies. In line with the design principles we discussed earlier, below are some key practices to follow:
Small, cohesive functions
Explicit interfaces
Avoid side effects
Use dependency injection
Benefits of Unit Testing#
Catches bugs early
Facilitates code changes
Improves code design
Documentation
Unit Testing in Action#
Take the diffusive_flux function from the previous chapter as an example.
vec = list[float]
def diffusive_flux(f_out: vec, c: vec, kappa: float, dx: float) -> None:
"""Given a cell field (c), compute the diffusive flux (f_out)."""
assert len(f_out) == len(c) + 1, "Size mismatch"
assert dx > 0 and kappa > 0, "Non-positive dx or kappa"
for i in range(1, len(f_out) - 1):
f_out[i] = -kappa * (c[i] - c[i-1]) / dx
Unit testing this function is as simple as calling it with some test inputs and checking the outputs. Here’s how you might write a unit test for the diffusive_flux function using pytest:
from numpy import isclose
def test_diffusive_flux():
"""Constant field leads to zero flux"""
u = [100.0, 100.0, 100.0]
F = [0.0] * (len(u) + 1)
diffusive_flux(F, u, kappa=0.1, dx=1.0)
assert all(isclose(F[:], 0)), f"Expected all zeros, got {F[1:-1]}"
test_diffusive_flux()
That’s it! We unit tested the diffusive_flux function. And the test passed successfully.
How do we know? The assertion did not raise an error.
What are some of the limitations of this test?
We only did so for a specific case, where we confirmed that a constant field
and zero boundary fluxes lead to zero flux. We’ll have to add more tests to cover different scenarios,
but first let’s introduce the pytest framework, which makes writing and running tests easier.
The pytest Library#
While manual testing is useful, it can be time-consuming and error-prone. Automated testing with a framework like pytest allows us to quickly and easily run our tests, for instance, when added into a continuous integration (CI) pipeline.
While there are other commonly used frameworks such as unittest, we prefer
pytest for its simplicity and powerful features. It’s also worth noting that pytest
is not just for unit testing: it’s a general purpose testing framework that can be used
for a wide range of testing needs in an automated fashion.
Note that pytest is a command-line tool. As such, we will follow the following
workflow to run our tests in a Jupyter notebook environment:
Using the
%%writefilemagic command, we will save our test codes to Python files.We will then run the tests using the command line command
pytest. Recall, in Jupyter notebooks, we can run such shell commands by prefixing them with!.
But first, let’s load the solver code that was saved in the previous chapter.
%load heat1d.py
Key Features of pytest#
Here, we highlight some of the key features of pytest:
Assertion
Test discovery
Fixtures
Marking
Parameterization
Assertions#
pytest uses plain Python assert statements, no special API, to decide whether a test passes. When an assertion fails, pytest reports the failure along with the values of the expressions involved.
In cases where you have property specifications (preconditions, postconditions, invariants) specified as part of the actual code, you can automatically leverage them in your tests. Otherwise, you can implement them as asserts that precede or follow the function under test.
Be careful with floating-point comparisons: exact equality is brittle. In tests,
prefer pytest.approx for tolerant comparisons. Using numpy.isclose inside your
library code is preferable in production code. But for test assertions,
approx tends to produce clearer failure messages.
Let’s write a minimal test for the div function. In practice, source code and test are
often located in separate files. But for brevity, we’ll keep them together and save to
test_div.py:
%%writefile test_div.py
def div(x, y):
assert y != 0 # P (precondition)
res = x / y # code (implementation)
assert res * y == x # Q (postcondition)
return res
def test_division():
div(7, 25)
Overwriting test_div.py
To run this test, we can simply run the pytest test_div.py command:
!pytest test_div.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 1 item
test_div.py
F [100%]
=================================== FAILURES ===================================
________________________________ test_division _________________________________
def test_division():
> div(7, 25)
test_div.py:9:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
x = 7, y = 25
def div(x, y):
assert y != 0 # P (precondition)
res = x / y # code (implementation)
> assert res * y == x # Q (postcondition)
^^^^^^^^^^^^^^^^^^^
E assert (0.28 * 25) == 7
test_div.py:5: AssertionError
=========================== short test summary info ============================
FAILED test_div.py::test_division - assert (0.28 * 25) == 7
============================== 1 failed in 0.27s ===============================
Notice the test fails because the assertion is violated due to floating point precision of division operation. We may address this test failure in several ways. First, we can use the raises context manager:
%%writefile test_div.py
from pytest import raises
def div(x, y):
assert y != 0 # P (precondition)
res = x / y # code (implementation)
assert res * y == x # Q (postcondition)
return res
def test_division():
with raises(AssertionError):
res = div(7, 25)
Overwriting test_div.py
!pytest test_div.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 1 item
test_div.py . [100%]
============================== 1 passed in 0.20s ===============================
Or, more appropriately for this situation, we can weaken the postcondition by replacing the equality assertion with an approximate equality assertion using the pytest.approx function:
%%writefile test_div.py
from pytest import approx
def div(x, y):
assert y != 0 # P (precondition)
res = x / y # code (implementation)
assert res * y == approx(x) # Q (postcondition)
return res
def test_division():
res = div(7, 25)
Overwriting test_div.py
!pytest test_div.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 1 item
test_div.py
. [100%]
============================== 1 passed in 0.38s ===============================
Back to the heat-equation solver. - Let’s try this out!:
Let’s save the test_flux_simple unit test in a file named test_flux.py. This time,
instead of the numpy.isclose function, we will use pytest.approx, which serves
the same purpose but provides better failure messages in unit testing contexts.
%%writefile test_simple.py
from heat1d import diffusive_flux
def test_flux_simple():
Overwriting test_simple.py
To execute this test, we will simply run the pytest command in the terminal.
!pytest test_simple.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 0 items / 1 error
==================================== ERRORS ====================================
_______________________ ERROR collecting test_simple.py ________________________
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/python.py:498: in importtestmodule
mod = import_path(
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/pathlib.py:587: in import_path
importlib.import_module(module_name)
../../miniconda3/envs/r3sw/lib/python3.12/importlib/__init__.py:90: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
<frozen importlib._bootstrap>:1387: in _gcd_import
???
<frozen importlib._bootstrap>:1360: in _find_and_load
???
<frozen importlib._bootstrap>:1331: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _load_unlocked
???
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:177: in exec_module
source_stat, co = _rewrite_test(fn, self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:357: in _rewrite_test
tree = ast.parse(source, filename=strfn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/r3sw/lib/python3.12/ast.py:52: in parse
return compile(source, filename, mode, flags,
E File "/Users/altuntas/r3sw/notebooks/test_simple.py", line 4
E def test_flux_simple():
E ^
E IndentationError: expected an indented block after function definition on line 4
=========================== short test summary info ============================
ERROR test_simple.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 0.47s ===============================
The output confirms that the test has passed successfully, i.e., no (unexpected) assertion errors were raised during the test execution.
Test Discovery#
pytest automatically discovers tests by looking for files that start with test_ or end with _test.py. Within each of these files, it looks for functions that start with test_ and classes starting with Test. All discovered tests are then executed when you run pytest.
Say, you run pytest in a directory with the following structure:
heat_solver/
└── heat1d.py
└── unit_tests/
├── test_simple.py
└── test_flux_via_params.py
When you execute pytest from the root directory, it will recursively discover and run the tests in all the modules starting with test_. Since we have saved two test files so far,
this means that both test_simple.py and test_flux_via_params.py will be executed.
!pytest
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collecting 0 items
collected 21 items / 1 error
==================================== ERRORS ====================================
_______________________ ERROR collecting test_simple.py ________________________
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/python.py:498: in importtestmodule
mod = import_path(
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/pathlib.py:587: in import_path
importlib.import_module(module_name)
../../miniconda3/envs/r3sw/lib/python3.12/importlib/__init__.py:90: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
<frozen importlib._bootstrap>:1387: in _gcd_import
???
<frozen importlib._bootstrap>:1360: in _find_and_load
???
<frozen importlib._bootstrap>:1331: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _load_unlocked
???
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:177: in exec_module
source_stat, co = _rewrite_test(fn, self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:357: in _rewrite_test
tree = ast.parse(source, filename=strfn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/r3sw/lib/python3.12/ast.py:52: in parse
return compile(source, filename, mode, flags,
E File "/Users/altuntas/r3sw/notebooks/test_simple.py", line 4
E def test_flux_simple():
E ^
E IndentationError: expected an indented block after function definition on line 4
=========================== short test summary info ============================
ERROR test_simple.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 1.22s ===============================
Fixtures#
— live coding —
# TODO: live coding --- fixtures
— end of live coding —
Why do we use fixtures?#
pytest fixtures give you small, named pieces of test state, e.g., parameters, meshes, boundary conditions, that pytest builds and injects into tests by name. They remove duplicated setup,
keep tests independent, and make intent explicit. Pytest discovers fixtures in any test file
and in a shared conftest.py, so you can reuse them across modules.
In this chapter we’ll use a few simple fixtures throughout:
dx,kappa: canonical numerical parameters.mesh3,mesh5: tiny meshes for hand-checkable and slightly larger cases.insulated,linear_bc: boundary-condition objects.F3: Flux array sized to mesh3.u_spike,u_uniform: representative initial conditions.
You’ll see these fixtures appear as function arguments in the tests that follow. pytest creates them automatically and passes them in. This keeps each test concise and focused.
%%writefile conftest.py
"""
Shared pytest fixtures for the heat-1D solver. These create small, well-labeled
objects we can reuse across tests without repeating setup code.
"""
import pytest
from heat1d import Mesh
@pytest.fixture
def dx(): return 1.0
@pytest.fixture
def kappa(): return 0.1
@pytest.fixture
def mesh3(dx): return Mesh(dx=dx, N=3)
@pytest.fixture
def mesh5(dx): return Mesh(dx=dx, N=5)
@pytest.fixture
def insulated(): return [0.0, 0.0]
@pytest.fixture
def linear_bc(): return [1.0, 1.0]
@pytest.fixture
def F3(mesh3): return mesh3.face_field()
@pytest.fixture
def u_spike(): return [0.0, 100.0, 0.0]
@pytest.fixture
def u_uniform(mesh5): return [7.5] * mesh5.N
Overwriting conftest.py
Marking#
%%writefile test_fast_slow.py
import pytest
def test_fast():
assert True
def test_slow():
import time
time.sleep(3)
assert True
def test_fail():
assert False
Overwriting test_fast_slow.py
!pytest test_fast_slow.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 3 items
test_fast_slow.py .
.
F [100%]
=================================== FAILURES ===================================
__________________________________ test_fail ___________________________________
def test_fail():
> assert False
E assert False
test_fast_slow.py:13: AssertionError
=========================== short test summary info ============================
FAILED test_fast_slow.py::test_fail - assert False
========================= 1 failed, 2 passed in 3.29s ==========================
%%writefile test_fast_slow_marked.py
import pytest
def test_fast():
assert True
def test_slow():
import time
time.sleep(3)
assert True
def test_fail():
assert False
Overwriting test_fast_slow_marked.py
!pytest test_fast_slow_marked.py -m "xfail"
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 3 items / 3 deselected / 0 selected
============================ 3 deselected in 0.23s =============================
How to register a custom mark!
# In a file called pytest.ini
[pytest]
markers =
slow: marks tests as slow (deselect with '-m "not slow"')
Parameterization#
You may have realized that running a test with different inputs can be tedious if we have to write separate test functions for each case. To ease this process, you can use the @pytest.mark.parametrize decorator to run a test function with different sets of input data.
Let’s return to our simple flux test.
%%writefile test_manual_parametrize.py
import pytest
from pytest import approx
from heat1d import diffusive_flux
def test_flux_simple():
"""Constant field leads to zero flux"""
u = [100.0, 100.0, 100.0]
F = [0.0] * (len(u) + 1)
diffusive_flux(F, u, kappa=0.1, dx=1.0)
assert all(f == approx(0.0) for f in F), f"Expected all zeros, got {F[1:-1]}"
def test_flux_non_constant():
"""Non-constant fields lead to non-zero flux"""
Overwriting test_manual_parametrize.py
!pytest test_manual_parametrize.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 2 items
test_manual_parametrize.py
.. [100%]
============================== 2 passed in 0.35s ===============================
%%writefile test_flux_via_params.py
import pytest
from pytest import approx
from heat1d import diffusive_flux
@pytest.mark.parametrize(
"u,kappa,dx,expected",
[
([100,100,100], 0.1, 1.0, [0.0, 0.0]),
([0,10,20], 0.5, 2.0, [-0.5*(10/2), -0.5*(10/2)]),
],
)
def test_flux_param(u, kappa, dx, expected):
print(f"\nTesting u={u}, kappa={kappa}, dx={dx}")
F = [0.0]*(len(u)+1)
diffusive_flux(F, u, kappa, dx)
assert F[1:-1] == approx(expected)
Overwriting test_flux_via_params.py
!pytest -s test_flux_via_params.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 2 items
test_flux_via_params.py
Testing u=[100, 100, 100], kappa=0.1, dx=1.0
.
Testing u=[0, 10, 20], kappa=0.5, dx=2.0
.
============================== 2 passed in 0.36s ===============================
Note: The -s flag in the above call is to allow print statements in the test to be displayed,
and so to confirm that all specified inputs via the parameterization mechanism are being tested.
In the absence of this flag, the print statements are suppressed.
Telescoping Property#
In finite-volume discretizations, fluxes between neighboring cells telescope: the flux leaving one cell enters the next (except in the presence of sources, variable cell volumes or densities, or numerical errors.)
Consequently, when we sum the discrete divergence over all cells, the interior fluxes cancel pairwise, leaving only the boundary contributions. in other words, the total divergence equals the net flux through the boundaries:
\(\qquad \sum_{i=0}^{N-1} (\nabla \cdot F)_i = F_0 - F_N \qquad\)
Recall the encoding of this property:
def telescoping(c, f, dx: float) -> bool:
"""Check the finite volume telescoping property."""
total_divergence = sum(c) * dx
boundary_flux = f[0] - f[-1]
return total_divergence == approx(boundary_flux)
Exercise 3.1#
Write a test named test_divergence_telescopes that verifies the telescoping property of the divergence function.
%%writefile test_divergence.py
from pytest import approx
from heat1d import divergence
def telescoping(c, f, dx: float) -> bool:
"""Check the finite volume telescoping property."""
total_divergence = sum(c) * dx
boundary_flux = f[0] - f[-1]
return total_divergence == approx(boundary_flux)
def test_divergence_telescopes(dx=1.0):
...
Overwriting test_divergence.py
!pytest test_divergence.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 1 item
test_divergence.py . [100%]
============================== 1 passed in 0.17s ===============================
Answer:
Since the divergence_telescoping function is already incorporated in the divergence function, we can directly test this property by simply running it with some arbitrary inputs. This shows the value of specifying critical properties as pre- and postconditions in terms of making testing and debugging easier.
%%writefile test_divergence.py
from pytest import approx
from heat1d import divergence
def telescoping(c, f, dx: float) -> bool:
"""Check the finite volume telescoping property."""
total_divergence = sum(c) * dx
boundary_flux = f[0] - f[-1]
return total_divergence == approx(boundary_flux)
def test_divergence_telescopes(dx=1.0):
"""Sum of divF * dx must equal net boundary flux F[0] - F[-1]."""
F = [2.0, 7.0, -5.0, -3.0]
divF = [0.0, 0.0, 0.0]
divergence(divF, F, dx)
assert telescoping(divF, F, dx)
Overwriting test_divergence.py
!pytest test_divergence.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 1 item
test_divergence.py
. [100%]
============================== 1 passed in 0.28s ===============================
Step & Solve (physical invariants + stability)#
Finally, we check several physical invariants and stability constraints end-to-end. The purpose of each test is summarized in the docstrings("""...""") within each test function.
%%writefile test_step_solve.py
from pytest import approx, raises
from heat1d import Mesh, step_heat_eqn, solve_heat_eqn
def test_step_moves_spike_toward_neighbors(mesh3, insulated, u_spike):
"""Given insulated BCs and a stable dt, a single step should diffuse the spike:
- middle cell decreases, neighbors increase."""
F = mesh3.face_field()
u = u_spike[:]
step_heat_eqn(u, kappa=0.1, dt=0.1, mesh=mesh3, bc=insulated)
assert u[1] < 100.0 and u[0] > 0.0 and u[2] > 0.0
def test_conservation_insulated_solve(insulated):
"""With qL=qR=0, total discrete heat (sum(u)*dx) is invariant across step_heat_eqn."""
u0 = [0.0, 100.0, 0.0]
u = solve_heat_eqn(u0=u0, kappa=0.1, dt=0.1, nt=20, dx=1.0, bc=insulated)
assert sum(u) == approx(sum(u0))
def test_conservation_with_boundary_work():
"""With qL!=qR, total heat changes by dt*(qL - qR) per step."""
u0 = [10.0, 10.0, 10.0]
dx, dt, nt = 1.0, 0.05, 4
bc = [2.0, -3.0] # net in = 5
u = solve_heat_eqn(u0=u0, kappa=0.1, dt=dt, nt=nt, dx=dx, bc=bc)
expected = sum(u0)*dx + nt*dt*(bc[0] - bc[1])
assert sum(u)*dx == approx(expected)
# NOTE: bug above on purpose to show failing message; fix to bc[1] in next test.
def test_symmetry_preserved_one_step(mesh3, insulated):
"""A symmetric initial state (a,b,a) under insulated BCs remains symmetric after 1 step."""
u0 = [0.0, 100.0, 0.0]
u = solve_heat_eqn(u0, kappa=0.1, dt=0.1, nt=1, dx=1.0, bc=insulated)
assert u[0] == approx(u[2])
def test_unstable_dt_raises(insulated):
""" Stability guard for dx=1, kappa=0.1. Pick dt=10 to force assert."""
u0 = [0.0, 100.0, 0.0]
with raises(AssertionError):
solve_heat_eqn(u0=u0, kappa=0.1, dt=10.0, nt=1, dx=1.0, bc=insulated)
def test_uniform_is_fixed_point(mesh5, insulated, u_uniform):
"""Uniform field is a fixed point (steady state) under insulated BCs for any stable dt/kappa."""
u = solve_heat_eqn(u_uniform, kappa=5.0, dt=0.05, nt=10, dx=mesh5.dx, bc=insulated)
assert u == approx(u_uniform)
def test_equal_flux_bc_trends_toward_linear_profile(mesh5, kappa, linear_bc):
""" If qL==qR==c (nonzero), steady state has constant interior flux == c and thus a linear
gradient. This test checks that after many steps the cell differences approach a constant."""
u0 = [0.0, 0.0, 0.0, 0.0, 0.0]
# stable dt: r = kappa*dt/dx^2; choose small dt to be safe
u = solve_heat_eqn(u0, kappa=0.1, dt=0.5, nt=400, dx=mesh5.dx, bc=linear_bc)
diffs = [u[i]-u[i-1] for i in range(1, len(u))]
# Differences should be (approximately) equal across cells
avg = sum(diffs)/len(diffs)
assert diffs == approx([avg]*len(diffs), rel=1e-3, abs=1e-3)
Overwriting test_step_solve.py
!pytest test_step_solve.py
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collected 7 items
test_step_solve.py .
...... [100%]
============================== 7 passed in 0.37s ===============================
Summary#
We now have a comprehensive suite of unit tests for the 1D heat equation solver.
We can use this test suite to validate any changes or additions to the solver’s code.
To re-run all of these tests, one can simply execute the pytest command.
To list all available tests, the pytest --collect-only command can be used.
!pytest --collect-only
============================= test session starts ==============================
platform darwin -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /Users/altuntas/r3sw/notebooks
plugins: hypothesis-6.136.9, anyio-4.10.0
collecting ...
collecting 0 items
collected 21 items / 1 error
<Dir notebooks>
<Module test_div.py>
<Function test_division>
<Module test_divergence.py>
<Function test_divergence_telescopes>
<Module test_fast_slow.py>
<Function test_fast>
<Function test_slow>
<Function test_fail>
<Module test_fast_slow_marked.py>
<Function test_fast>
<Function test_slow>
<Function test_fail>
<Module test_flux.py>
<Function test_flux_constant_field_yields_zero_interior>
<Function test_flux_spike_has_opposite_signed_fluxes>
<Module test_flux_via_params.py>
<Function test_flux_param[u0-0.1-1.0-expected0]>
<Function test_flux_param[u1-0.5-2.0-expected1]>
<Module test_manual_parametrize.py>
<Function test_flux_simple>
<Function test_flux_non_constant>
<Module test_step_solve.py>
<Function test_step_moves_spike_toward_neighbors>
<Function test_conservation_insulated_solve>
<Function test_conservation_with_boundary_work>
<Function test_symmetry_preserved_one_step>
<Function test_unstable_dt_raises>
<Function test_uniform_is_fixed_point>
<Function test_equal_flux_bc_trends_toward_linear_profile>
==================================== ERRORS ====================================
_______________________ ERROR collecting test_simple.py ________________________
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/python.py:498: in importtestmodule
mod = import_path(
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/pathlib.py:587: in import_path
importlib.import_module(module_name)
../../miniconda3/envs/r3sw/lib/python3.12/importlib/__init__.py:90: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
<frozen importlib._bootstrap>:1387: in _gcd_import
???
<frozen importlib._bootstrap>:1360: in _find_and_load
???
<frozen importlib._bootstrap>:1331: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:935: in _load_unlocked
???
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:177: in exec_module
source_stat, co = _rewrite_test(fn, self.config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/r3sw/lib/python3.12/site-packages/_pytest/assertion/rewrite.py:357: in _rewrite_test
tree = ast.parse(source, filename=strfn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/r3sw/lib/python3.12/ast.py:52: in parse
return compile(source, filename, mode, flags,
E File "/Users/altuntas/r3sw/notebooks/test_simple.py", line 4
E def test_flux_simple():
E ^
E IndentationError: expected an indented block after function definition on line 4
=========================== short test summary info ============================
ERROR test_simple.py
!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!
===================== 21 tests collected, 1 error in 1.04s =====================
Limitations of Unit Testing (and how we’ll push beyond)#
Unit tests are necessary but not sufficient:
Limited Coverage: Handpicked inputs may miss edge cases or unexpected behaviors.
Repetitive and Tedious: Writing unit tests can be repetitive and tedious, especially for functions with many parameters or complex logic.
Overfitting: Tests can become too specific, making them brittle and hard to maintain. If the implementation changes, the tests may need to be rewritten, even if the overall behavior remains correct.
Looking Ahead:#
In Chapter 4, we’ll encode properties (conservation, symmetry, maximum-principle intuition, stability ranges) and let a generator explore many inputs automatically
R3Sw tutorial by Alper Altuntas (NSF NCAR). Guest lecture by Manish Venumuddula (NSF NCAR). Sponsored by the BSSw Fellowship Program. © 2025.
Cite as: Alper Altuntas, Deepak Cherian, Adrianna Foster, Manish Venumuddula, and Helen Kershaw. (2025). “Rigor and Reasoning in Research Software (R3Sw) Tutorial.” Retrieved from https://www.alperaltuntas.com/R3Sw