MRKISS Library
MR RK (Runge-Kutta) KISS (Keep It Super Simple)
Author: Mitch Richling
Updated: 2025-08-03 10:09:46
Generated: 2025-08-03 10:09:48
Copyright © 2025 Mitch Richling. All rights reserved.
Table of Contents
- 1. Introduction
- 2. Features & Requirements
- 3. Vocabulary & Definitions
- 4. Defining Runge-Kutta Methods in MRKISS
- 5. Predefined Runge-Kutta Methods in MRKISS
- 6. Homogeneous vs Non-Homogeneous IVPs Naming Conventions
- 7. Providing ODE Equations For Solvers
- 8. High Level Solvers
- 9. Low Level, One Step Solvers
- 10. Quick Start – The Absolute Minimum
- 11. Using MRKISS In Your Projects
- 12. MRKISS Testing
- 13. FAQ
- 13.1. What's with the name?
- 13.2. Why Fortran?
- 13.3. Why did you write another ODE solver when so many good options exist?
- 13.4. Why don't you use package XYZ?
- 13.5. What are those zotero links in the references?
- 13.6. Are high order RK methods overkill for strange attractors?
- 13.7. I need a more comprehensive solution. Do you have advice?
- 13.8. I need something faster. Do you have advice?
- 13.9. It seems like things other than Fortran are used. Are there really no external dependencies?
1. Introduction
MRKISS is a simple, tiny library with zero dependencies[*] that aims to make it easy to use and experiment with explicit Runge-Kutta methods.
From a feature standpoint this library doesn't do anything you can't find in more comprehensive packages like SUNDIALS or Julia's DifferentialEquations.jl; however, it has the rather charming feature of being super simple and self contained. This makes it ideal for sharing results and code with others without asking them to recreate my technical computing environment or learn new software. It also makes it easy to get things up and running quickly when moving to a new supercomputer, cloud, or HPC cluster.
Here is a complete example in 23 lines that solves Lorenz's system and generates a CSV file of the results.
!-------------------------------------------------------------------------------------------------
program minimal
  use :: mrkiss_config,      only: rk, ik
  use :: mrkiss_solvers_nt,  only: steps_fixed_stab_nt
  use :: mrkiss_utils,       only: print_solution
  use :: mrkiss_erk_kutta_4, only: a, b, c
  implicit none
  real(kind=rk), parameter :: y_iv(3)  = [1.0_rk, 0.0_rk, 0.0_rk]
  real(kind=rk), parameter :: param(3) = [10.0_rk, 28.0_rk, 8.0_rk/3.0_rk]
  real(kind=rk), parameter :: t_end    = 50.0_rk
  real(kind=rk)            :: solution(4, 10000)
  integer(kind=ik)         :: status, istats(16)
  call steps_fixed_stab_nt(status, istats, solution, eq, y_iv, param, a, b, c, t_end_o=t_end)
  call print_solution(status, solution, filename_o="minimal.csv")
contains
  subroutine eq(status, dydt, y, param)
    integer(kind=ik), intent(out) :: status
    real(kind=rk),    intent(out) :: dydt(:)
    real(kind=rk),    intent(in)  :: y(:)
    real(kind=rk),    intent(in)  :: param(:)
    dydt = [ param(1)*(y(2)-y(1)), y(1)*(param(2)-y(3))-y(2), y(1)*y(2)-param(3)*y(3) ]
    status = 0
  end subroutine eq
end program minimal
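For readers without a Fortran toolchain handy, the same computation can be sketched in a few lines of Python. This is my own illustrative code, not part of the MRKISS API: fixed-step RK4 on the Lorenz system, with the results written to a CSV file just like the Fortran example.

```python
# Fixed-step RK4 on the Lorenz system -- a rough analogue of the Fortran example.
import csv

def lorenz(y, p):
    """Right hand side of the Lorenz system; p = (sigma, rho, beta)."""
    return [p[0] * (y[1] - y[0]),
            y[0] * (p[1] - y[2]) - y[1],
            y[0] * y[1] - p[2] * y[2]]

def rk4_steps(f, y0, p, t_end, n):
    """Take n fixed RK4 steps from t=0 to t=t_end; return a list of (t, y) rows."""
    h, t, y = t_end / n, 0.0, list(y0)
    rows = [(t, list(y))]
    for _ in range(n):
        k1 = f(y, p)
        k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)], p)
        k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)], p)
        k4 = f([yi + h * ki for yi, ki in zip(y, k3)], p)
        y = [yi + h / 6.0 * (ka + 2 * kb + 2 * kc + kd)
             for yi, ka, kb, kc, kd in zip(y, k1, k2, k3, k4)]
        t += h
        rows.append((t, list(y)))
    return rows

rows = rk4_steps(lorenz, [1.0, 0.0, 0.0], (10.0, 28.0, 8.0 / 3.0), 50.0, 10000)
with open("minimal.csv", "w", newline="") as fh:
    w = csv.writer(fh)
    w.writerow(["t", "y1", "y2", "y3"])
    w.writerows((t, *y) for t, y in rows)
```

Of course the Fortran version is both faster and closer to what the rest of this document describes; the sketch is only meant to make the structure of the computation plain.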
You can also check out a couple of larger examples in the examples/ directory.
If you just want to jump in, then take a look at the Quick Start section.
2. Features & Requirements
The IVPs I work with are generally pretty well behaved:
- Generally non-stiff
- Time forward (\(\Delta{t} \ge 0\))
- Defined by a small (<50) set of equations expressible in closed form.
Typical examples are strange attractors and chaotic systems from celestial/classical mechanics, population dynamics, oscillating chemical reactions, and electronic circuits.
My motivation for solving IVPs generally revolves around generative art and visualization, and this shows in the code and feature set of the library.
Things I care about:
- Simple to use for simple problems.
- Easily create custom solvers for the, admittedly bizarre, demands of generative art.
- Graceful response to evaluation failure in derivative functions
- A good selection of predefined RK methods
- Easy to use, hardwired methods for fixed step size visualization use cases:
- Fixed \(t\) step size solvers
- Fixed \(\mathbf{y}\) space step size solvers
- Solutions include derivative values by default for visualization tools that perform Hermite interpolation.
- Interpolate entire solutions to new time points (Hermite & linear).
- Programmable step processing. Examples:
- Stop the routine if the solution curve is too long in y-space
- Stop the routine if the step delta, or some components of it, are too long in y-space
- Stop the routine if the solution has returned to the IV
- Stop the routine if the solution intersects itself
- Provide an alternate y-delta and redo the step based on some condition.
- Trigger a bisection search for a t_delta fitting some condition based on t-space and/or y-space. Examples:
- Find t_delta so that y-delta, or some components of it, are the perfect length.
- Find where a step crosses over a boundary in space (ex: root finding)
- Find where a step approaches closest to a point (ex: like the problem's IV)
- Runge-Kutta Research
- Try out new RK methods by simply feeding the solvers a Butcher tableau.
- Directly accessible one step routines for assembling custom solvers.
- Simple code flow to facilitate instrumentation and deep runtime analysis and reporting.
- Individual access to each method in an embedded tableau, and control over how each is used.
- Maple worksheets with rational coefficient values, floating point approximations, and stability graphs for every tableau.
- I have included a few RK methods more for research interests than practical usefulness.
- Easy deployment & sharing
- Easy to compile and tune for a new architecture.
- Zero external dependencies[*] except a Fortran compiler.
- 100% standard Fortran that works with various compilers (Intel, Cray, NAG, gfortran, clang fortran, Nvidia, etc…).
- Simple text output that can be compressed and sent back home or shared with others.
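One item in the list above deserves a concrete illustration: because solutions carry derivative values, a visualization tool can interpolate between stored points with a cubic Hermite polynomial. Here is a minimal Python sketch of that idea (my own code, not part of MRKISS):

```python
# Cubic Hermite interpolation between two solution points (t0, y0, f0) and
# (t1, y1, f1), where f = dy/dt is the stored derivative value.
def hermite(t, t0, y0, f0, t1, y1, f1):
    """Cubic matching values and derivatives at t0 and t1, evaluated at t."""
    h = t1 - t0
    s = (t - t0) / h                  # normalized position in [0, 1]
    h00 = (1 + 2 * s) * (1 - s) ** 2  # standard Hermite basis functions
    h10 = s * (1 - s) ** 2
    h01 = s * s * (3 - 2 * s)
    h11 = s * s * (s - 1)
    return h00 * y0 + h10 * h * f0 + h01 * y1 + h11 * h * f1

# For y = t**3 (so dy/dt = 3*t**2) the cubic is reproduced exactly:
mid = hermite(0.5, 0.0, 0.0, 0.0, 1.0, 1.0, 3.0)   # exact value is 0.5**3
```

This is why the solution arrays include the derivatives by default: no extra function evaluations are needed at plot time.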
Things I don't care about:
- Usage error checking. For example, the code makes no attempt to check that the user has supplied consistent Butcher tableau arguments, or that t_delta values are positive, etc.
- Performance. I can generally perform hundreds of thousands of RK steps in a few milliseconds for the problems I work with. This gives me a lot of performance headroom, allowing me to not worry about sophisticated techniques to avoid RK steps. In fact, this library diverges from best practices in a couple of significant ways:
  - I don't use interpolating polynomials for intrastep approximations. I even have a bisection routine that takes an RK step for every bisection!
  - I use generic loops to compute RK steps over the Butcher tableau instead of optimized formulas.
  - Butcher tableau arrays are not sparse. In fact, I even include the top and final row full of zeros!
3. Vocabulary & Definitions
Within the confines of this software, we define a system of ODEs as:
\[ \frac{\mathrm{d}\mathbf{y}}{\mathrm{d}t} = \mathbf{f}(t, \mathbf{y}) = \left[\begin{array}{c} \frac{\mathrm{d}y_1}{\mathrm{d}t} \\ \vdots \\ \frac{\mathrm{d}y_n}{\mathrm{d}t} \\ \end{array}\right] = \left[\begin{array}{c} f_1(t, \mathbf{y}) \\ \vdots \\ f_n(t, \mathbf{y}) \\ \end{array}\right] = \left[\begin{array}{c} f_1(t, [y_1, \cdots, y_n]^\mathrm{T}) \\ \vdots \\ f_n(t, [y_1, \cdots, y_n]^\mathrm{T}) \\ \end{array}\right] \]
The goal is to find numerical values for the unknown function \(\mathbf{y}:\mathbb{R}\rightarrow\mathbb{R}^{n}\).
We define an embedded explicit Runge-Kutta method via a set of coefficients organized into a Butcher tableau:
\[ \begin{array}{l|llll} c_1 & a_{11} & a_{12} & \dots & a_{1s} \\ c_2 & a_{21} & a_{22} & \dots & a_{2s} \\ c_3 & a_{31} & a_{32} & \dots & a_{3s} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_s & a_{s1} & a_{s2} & \dots & a_{ss} \\ \hline \rule{0pt}{12pt} & \check{b}_1 & \check{b}_2 & \dots & \check{b}_s \\ & \hat{b}_1 & \hat{b}_2 & \dots & \hat{b}_s \\ \end{array} \]
Explicit methods, which are the focus of MRKISS, have \(c_1=0\) and \(a_{ij}=0\) for \(i\le j\).
The word embedded indicates that we actually have two explicit Runge-Kutta methods sharing the same \(\mathbf{a}\) matrix and \(\mathbf{c}\) vector. That is to say, each \(\mathbf{b}\) vector defines a unique explicit Runge-Kutta method. MRKISS supports both embedded and non-embedded (no \(\mathbf{\hat{b}}\) vector defined) methods.
Given \(\Delta{t}\) and initial conditions (\(t_0\) and \(\mathbf{y_0}\)), we may form an approximation of \(\mathbf{y}(t_0+\Delta{t})\) as:
\[ \mathbf{y}(t_0+\Delta{t}) \approx \mathbf{y_0}+\mathbf{\Delta\check{y}} \]
and, for embedded methods, an estimate of this approximation's error from:
\[\vert\mathbf{\Delta\check{y}} - \mathbf{\Delta\hat{y}} \vert\]
With \(\mathbf{\Delta\check{y}}\) and \(\mathbf{\Delta\hat{y}}\) (we only have \(\mathbf{\Delta\hat{y}}\) for embedded methods) computed as follows:
\[ \begin{array}{l} \mathbf{\Delta\check{y}} = \Delta{t}\sum_{i=1}^s \check{b}_i \mathbf{k}_i \\ \mathbf{\Delta\hat{y}} = \Delta{t}\sum_{i=1}^s \hat{b}_i \mathbf{k}_i \\ \end{array} \]
and the \(\mathbf{k}_i\) defined as:
\[ \mathbf{k}_i = \mathbf{f}\left(t + c_i \Delta{t},\, \mathbf{y} + \Delta{t} \sum_{j=1}^{i-1} a_{ij} \mathbf{k}_j\right) \]
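The formulas above translate directly into generic loops over a Butcher tableau. Here is a minimal Python sketch (my own code, not MRKISS source) that computes one embedded step, using the well-known Heun-Euler 2(1) pair as the tableau:

```python
# One explicit embedded RK step via generic loops over a Butcher tableau.
# bc is the b-check vector, bh is the b-hat vector.
def embedded_rk_step(f, t, y, dt, a, c, bc, bh):
    """Return (dy_check, dy_hat) for one step of size dt."""
    s = len(c)
    k = []
    for i in range(s):
        # Explicit method: the stage value only uses k_j with j < i.
        yi = y + dt * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * dt, yi))
    dy_check = dt * sum(bi * ki for bi, ki in zip(bc, k))
    dy_hat   = dt * sum(bi * ki for bi, ki in zip(bh, k))
    return dy_check, dy_hat

# Heun-Euler 2(1): the order 2 and order 1 methods share a and c.
a_he  = [[0.0, 0.0], [1.0, 0.0]]
c_he  = [0.0, 1.0]
bc_he = [0.5, 0.5]   # order 2 (Heun)
bh_he = [1.0, 0.0]   # order 1 (Euler)

# One step on dy/dt = y from y(0) = 1 with dt = 0.1.
dy_check, dy_hat = embedded_rk_step(lambda t, y: y, 0.0, 1.0, 0.1,
                                    a_he, c_he, bc_he, bh_he)
err_est = abs(dy_check - dy_hat)   # the embedded error estimate
```

This loop structure is exactly the "generic loops over the Butcher tableau" trade-off mentioned in the features section: slower than hardwired formulas, but it works unchanged for any tableau you feed it.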
4. Defining Runge-Kutta Methods in MRKISS
In MRKISS an explicit Runge-Kutta method is specified by directly providing the Butcher tableau via arguments to subroutines.
4.1. Non-embedded Methods
a – The \(\mathbf{a}\) matrix.
c – The \(\mathbf{c}\) vector.
p – The order of the method.
b – The \(\mathbf{\check{b}}\) vector.

Wherever the arguments a, c, and b appear together, they must have consistent sizes:

size(a, 1) > 0
size(a, 1) == size(a, 2)
size(b, 1) == size(a, 1)
size(c, 1) == size(a, 1)

The value of p must be a positive integer.
4.2. Embedded Method
Instead of single b and p arguments, we have b1, p1, b2, and p2:

a – The \(\mathbf{a}\) matrix.
c – The \(\mathbf{c}\) vector.
p1 – The order of the method associated with the \(\mathbf{\check{b}}\) vector.
b1 – The \(\mathbf{\check{b}}\) vector.
p2 – The order of the method associated with the \(\mathbf{\hat{b}}\) vector.
b2 – The \(\mathbf{\hat{b}}\) vector.

Wherever the arguments a, c, b1, and b2 appear together, they must have consistent sizes:

size(a, 1) > 0
size(a, 1) == size(a, 2)
size(b1, 1) == size(a, 1)
size(b2, 1) == size(a, 1)
size(c, 1) == size(a, 1)

The values of p1 and p2 must be positive integers.
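The size rules above can be restated as a few assertions. Here is a Python sketch of such a checker (my own code; remember that MRKISS itself deliberately performs no such usage error checking):

```python
# Check the consistency rules for an embedded tableau's arguments.
def check_embedded_tableau(a, b1, b2, c, p1, p2):
    s = len(a)
    assert s > 0                              # size(a, 1) > 0
    assert all(len(row) == s for row in a)    # size(a, 1) == size(a, 2)
    assert len(b1) == s and len(b2) == s      # b vectors match the stage count
    assert len(c) == s                        # size(c, 1) == size(a, 1)
    assert p1 >= 1 and p2 >= 1                # orders are positive integers
    return True

# The Heun-Euler 2(1) tableau passes:
ok = check_embedded_tableau([[0.0, 0.0], [1.0, 0.0]],
                            [0.5, 0.5], [1.0, 0.0], [0.0, 1.0], 2, 1)
```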
5. Predefined Runge-Kutta Methods in MRKISS
MRKISS provides several predefined methods in modules found in the "lib/" directory. Each module defines a single tableau via parameters with names mirroring the Butcher tableau arguments documented in the previous section. In addition, these modules have a parameter containing the number of stages for the overall method, and the number of stages for any embedded method that differs from the overall method:

s – The number of stages for the entire method.
s1 – The number of stages for the b1 method if it differs from s.
s2 – The number of stages for the b2 method if it differs from s.

In some special cases an EERK may have more than two methods embedded. If so, you may find variables for these additional methods following the same naming conventions. See mrkiss_eerk_cash_karp_5_4.f90 for an example.
The modules follow a simple naming convention:

- They have one of two prefixes:
  - mrkiss_eerk_ – The module contains an embedded explicit Runge-Kutta method.
  - mrkiss_erk_ – The module contains an explicit Runge-Kutta method – i.e. it is not embedded.
- The names end with numbers indicating the orders of the b1 and b2 methods. These numbers are separated from the rest of the name by an underscore.
In addition to the parameters, the comments in these files normally include at least the following three sections:

IMO – Personal commentary about the method in question. Please note this material is simply my personal opinion.
Known Aliases – These include names used in the literature as well as names in some common ODE software.
References – I try to include the original reference if I have it. I also frequently include discussions found in other texts.

To make all this concrete, here is what one of these modules looks like (mrkiss_erk_kutta_4.f90):
!-----------------------------------------------------------------------------------------------------------------------
!> Butcher tableau for the classic 4 stage Runge-Kutta method of O(4)
!!
!! IMO: Useful for low accuracy applications; however, I find I rarely use it.
!!
!! Known Aliases: 'RK4' (OrdinaryDiffEq.jl), 'RK41' (Butcher), & 'The Runge-Kutta Method'.
!!
!! References:
!!   Kutta (1901); Beitrag zur näherungsweisen Integration totaler Differentialgleichungen; Z. Math. Phys. 46; p435-53
!!   Hairer, Norsett & Wanner (2009). Solving Ordinary Differential Equations. I: Nonstiff Problems. p138
!!   Butcher (2016); Numerical Methods for Ordinary Differential Equations. 3rd Ed; Wiley; p102
!!
module mrkiss_erk_kutta_4
  use mrkiss_config, only: rk, ik
  implicit none
  public
  integer(kind=ik), parameter :: s      = 4
  real(kind=rk),    parameter :: a(s,s) = reshape([ 0.0_rk, 0.0_rk, 0.0_rk, 0.0_rk, &
                                                    1.0_rk, 0.0_rk, 0.0_rk, 0.0_rk, &
                                                    0.0_rk, 1.0_rk, 0.0_rk, 0.0_rk, &
                                                    0.0_rk, 0.0_rk, 2.0_rk, 0.0_rk], [s, s]) / 2.0_rk
  real(kind=rk),    parameter :: c(s)   = [ 0.0_rk, 1.0_rk, 1.0_rk, 2.0_rk] / 2.0_rk
  integer(kind=ik), parameter :: p      = 4
  real(kind=rk),    parameter :: b(s)   = [ 1.0_rk, 2.0_rk, 2.0_rk, 1.0_rk] / 6.0_rk
end module mrkiss_erk_kutta_4
Also note all the zeros. KISS! Seriously, it takes up a tiny bit of extra space and simplifies the code considerably…
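As a quick plausibility check on coefficients like these, one can verify the classical order conditions. The Python sketch below (my own code, not part of MRKISS) checks the RK4 coefficients through order 3; note that a is written here in the usual mathematical, strictly lower triangular layout rather than the Fortran column-major storage of the module above:

```python
# Classic RK4 coefficients in mathematical (row i, column j) layout.
a = [[0.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.0, 0.0],
     [0.0, 0.5, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0.0, 0.5, 0.5, 1.0]

cond1 = sum(b)                                   # order 1: sum b_i       = 1
cond2 = sum(bi * ci for bi, ci in zip(b, c))     # order 2: sum b_i c_i   = 1/2
cond3 = sum(bi * ci**2 for bi, ci in zip(b, c))  # order 3: sum b_i c_i^2 = 1/3
cond4 = sum(b[i] * a[i][j] * c[j]                # order 3: sum b_i a_ij c_j = 1/6
            for i in range(4) for j in range(4))
```

Checks like these cannot prove a tableau is correct, but they catch most transcription errors; the testing section below describes the more thorough checks actually used.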
Each embedded method defines two Runge-Kutta methods. Normally these two methods are used in conjunction to simultaneously estimate the solution and the error. In this library, the p1 & b1 method is recommended for approximating the solution, while the p2 & b2 method should be used to estimate error. This is a recommendation, and is in no way enforced by the library. When the higher order method is used for the solution, we say we are using local extrapolation. Note that each of the methods in an embedded Butcher tableau may be used individually as a non-embedded method.
In addition to the module files, several Maple worksheets may be found in the "rk_methods_maple/" directory. The filenames mirror the names of the modules. These worksheets contain the coefficients for the method's Butcher tableau, code to convert the coefficients into floating point values, and a plot of the method's stability region.
5.1. Predefined Non-embedded Methods
Module Name | Order | Stages | Status
---|---|---|---
mrkiss_erk_euler_1 | 1 | 1 | BOO
mrkiss_erk_midpoint_2 | 2 | 2 |
mrkiss_erk_ralston_2 | 2 | 2 | BOO
mrkiss_erk_ralston_3 | 3 | 3 |
mrkiss_erk_knoth_wolke_3 | 3 | 3 |
mrkiss_erk_ralston_4 | 4 | 4 |
mrkiss_erk_kutta_4 | 4 | 4 |
mrkiss_erk_kutta_three_eight_4 | 4 | 4 |
mrkiss_erk_feagin_10 | 10 | 17 | EXP
5.2. Predefined Embedded Methods
Module Name | Ord_1 | Ord_2 | Stages | Status
---|---|---|---|---
mrkiss_eerk_heun_euler_2_1 | 2 | 1 | 2 |
mrkiss_eerk_bogacki_shampine_3_2 | 3 | 2 | 4 | BOO
mrkiss_eerk_fehlberg_4_5 | 4 | 5 | 6 |
mrkiss_eerk_sofroniou_spaletta_4_3 | 4 | 3 | 5 | BOO
mrkiss_eerk_cash_karp_5_4 | 5 | 4 | 6 |
mrkiss_eerk_bogacki_shampine_4_5 | 4 | 5 | 7 |
mrkiss_eerk_dormand_prince_5_4 | 5 | 4 | 7 | BOO
mrkiss_eerk_verner_7_6 | 7 | 6 | 10 |
mrkiss_eerk_fehlberg_7_8 | 7 | 8 | 13 |
mrkiss_eerk_dormand_prince_7_8 | 7 | 8 | 13 | BOO
mrkiss_eerk_verner_8_7 | 8 | 7 | 13 |
mrkiss_eerk_verner_9_8 | 9 | 8 | 16 | BOO
6. Homogeneous vs Non-Homogeneous IVPs Naming Conventions
Throughout the code you will see subroutines, functions, and types suffixed with "_nt" or "_wt":

_nt stands for "No T" – homogeneous problems.
_wt stands for "With T" – non-homogeneous problems.

In the documentation below you will see "_*t" in subroutine names as shorthand to indicate both the "_nt" and "_wt" versions.
7. Providing ODE Equations For Solvers
The equation to be solved is implemented in a user-provided subroutine with one of the following two signatures:
For Non-Homogeneous (with t) problems:
subroutine deq_iface_wt(status, dydt, t, y, param)   ! Non-Homogeneous Case (with t)
  integer(kind=ik), intent(out) :: status
  real(kind=rk),    intent(out) :: dydt(:)
  real(kind=rk),    intent(in)  :: t
  real(kind=rk),    intent(in)  :: y(:)
  real(kind=rk),    intent(in)  :: param(:)
For Homogeneous (no t) problems:
subroutine deq_iface_nt(status, dydt, y, param)      ! Homogeneous Case (no t)
  integer(kind=ik), intent(out) :: status
  real(kind=rk),    intent(out) :: dydt(:)
  real(kind=rk),    intent(in)  :: y(:)
  real(kind=rk),    intent(in)  :: param(:)
The arguments are as follows:
status ........ A status code. A positive value indicates failure. Do not return a value larger than 255!
dydt .......... The value of f(t, y) is returned in this argument
t ............. The time (only for deq_iface_wt)
y ............. Values for the dependent variables
param ......... Constant parameters
This subroutine should return the value of \( \mathbf{f}(t, \mathbf{y}) \) in dydt. The value of status should be non-positive, \((-\infty, 0]\), if everything worked, and a value between 1 and 255 inclusive, \([1, 255]\), if something went wrong. This value will be passed back via the status argument of higher level routines to indicate an error condition.
8. High Level Solvers
steps_adapt_etab_*t() – uses traditional adaptive step sizes
- This solver is very similar to solvers found in other ODE packages.
- Programmable step processing
- A programmable bisection option to solve for interesting t_delta values
- Sophisticated curve length computations, and exit options when a maximum length is reached
- It can end precisely on a time value, or it can simply quit when a step goes beyond a maximum time value.
- These last two could be achieved with the programmable step processing and bisection features, but these requirements are so common that it is convenient to have them directly available.

steps_fixed_stab_*t() – uses fixed time steps
- Solution points separated by fixed time steps allow animations of the solution to naturally display velocity.
- This is a good place to start when writing a custom solver.
- With most modern ODE packages, this would be done with interpolation.
- This routine has the option to use Richardson extrapolation.

steps_condy_stab_*t() – uses fixed (y-space) steps
- Produces solution points separated by fixed deltas in y-space, or some subset of y-space.
- This is a good place to start when writing a custom solver with a bisection step.
- A parametric plot of the first two components of a solution looks better when the points are uniformly separated.
- With most modern ODE packages, this would be done with interpolation.
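The "traditional adaptive step size" scheme behind solvers like the first one above can be sketched in Python. This is my own heavily simplified code, not the MRKISS implementation: take an embedded step, compare the two answers, accept or retry, and steer the step size toward the error target.

```python
import math

def adapt(f, t, y, t_end, tol, dt=0.1):
    """Adaptive Heun-Euler 2(1) integration; returns a list of (t, y) points."""
    out = [(t, y)]
    while t < t_end:
        dt = min(dt, t_end - t)          # land exactly on t_end
        k1 = f(t, y)
        k2 = f(t + dt, y + dt * k1)
        dy2 = dt * (k1 + k2) / 2.0       # order 2 update (b-check)
        dy1 = dt * k1                    # order 1 update (b-hat)
        err = abs(dy2 - dy1)             # embedded error estimate
        if err <= tol:                   # accept the step
            t, y = t + dt, y + dy2
            out.append((t, y))
        # Standard controller: grow/shrink dt toward the error target, with a
        # safety factor and clamped growth.
        dt *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return out

pts = adapt(lambda t, y: y, 0.0, 1.0, 1.0, 1e-6)   # dy/dt = y, y(0) = 1
final_err = abs(pts[-1][1] - math.e)               # compare against exp(1)
```

The real solvers add the step processing, bisection, and curve length machinery described above, but the accept/reject loop is the heart of every adaptive method.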
8.1. High Level Solver Common Arguments
The first several arguments are common across the higher level solvers.
8.1.1. Results (first three arguments):
status – An integer return code. A positive value means failure – see the documentation for each routine for details.

istats – Statistics regarding the solver run.
- istats(1): number of computed solution points
- istats(2): number of one_step_* calls not triggered by an event
- istats(3): number of one_step_* calls triggered by the y_delta length constraint
- istats(4): number of one_step_* calls triggered by the y_delta error constraint
- istats(5): number of one_step_* calls triggered by step processing with a new t_delta
- istats(6): number of one_step_* calls triggered by SDF bisection
- istats(7): number of times bisection failed because of max_bisect_o
- istats(8): number of times bisection failed because the target was not contained

solution – Array for the solution. Each column is one solution point. By default each column contains \(t\), the coordinates of y, and the derivatives. The coordinates of y start at index sol_y_idx_o, by default 2. The inclusion of \(t\) may be suppressed via sol_no_t_o, and the inclusion of the derivatives may be suppressed via sol_no_dy_o.
8.1.2. The IVP
deq – The subroutine used to evaluate the derivative function.
t – The initial value for \(t\).
y – The initial value for \(\mathbf{y}\).
param – A set of real values passed to deq(). These are usually constants in the defining equation.
8.1.3. The Butcher Tableau
These arguments vary a bit, but mirror the names documented in the section on predefined Runge-Kutta methods.
9. Low Level, One Step Solvers
Behind all of the above high level solvers are single step routines to carry out the step calculations. These are handy for creating DIY solvers.
one_step_stab_*t() – non-embedded RK methods
one_richardson_step_stab_*t() – uses Richardson extrapolation with non-embedded RK methods
one_step_etab_*t() – embedded RK methods
one_step_rk4_*t() – hardwired RK4 for unit tests
one_step_rkf45_*t() – hardwired RKF45 for unit tests
10. Quick Start – The Absolute Minimum
If you are interested in playing around with MRKISS as quickly as possible, then this section is for you.
10.1. Getting MRKISS
10.2. Check Out The Examples
The newly cloned repository will contain a directory called "examples/
".
Change into the examples/
directory.
cd MRKISS/examples
10.2.1. Using something other than gfortran
This directory contains a makefile used to build all the examples. This makefile may require modification if you are not using gfortran. At the top of each makefile you will find something like this:
###############################################################
MRKISS_PATH = ..
include $(MRKISS_PATH)/make_includes/tools_gfortran.mk
# include $(MRKISS_PATH)/make_includes/tools_flang.mk
# include $(MRKISS_PATH)/make_includes/tools_ifx.mk
# include $(MRKISS_PATH)/make_includes/tools_lfortran.mk
# include $(MRKISS_PATH)/make_includes/tools_nvfortran.mk
include $(MRKISS_PATH)/make_includes/include.mk
###############################################################
If you want to use a different compiler, then you may be able to simply uncomment the appropriate line if your system is configured similarly to mine. If you are unlucky, then you may need to set some variables. In particular, you might need to comment out the gfortran include and add something like this:
AR     := ar
FC     := nvfortran
FFLAGS := -O3 -Wall -W -Xlinker -z -Xlinker execstack
FSHFLG  = -o $(MRKISS_SHARED_LIB_FILE) -shared $(MRKISS_OBJ_FILES)
The only tricky one is the FSHFLG variable. Luckily you only need FSHFLG if you plan on building a shared library. The shared library is completely unnecessary for making full use of the modules, so you can safely ignore it unless you really, really want to use a shared library. ;)
10.2.2. Build An Example
Once you have the makefile worked out, pick an example to build. For example, we might try the one called lorenz.f90:
make lorenz
rm -f mrkiss_config.obj mrkiss_config.mod
gfortran -O3 -Wsurprising -W -std=f2023 -c ../src/mrkiss_config.f90 -o mrkiss_config.obj
rm -f mrkiss_utils.obj mrkiss_utils.mod
gfortran -O3 -Wsurprising -W -std=f2023 -c ../src/mrkiss_utils.f90 -o mrkiss_utils.obj
rm -f mrkiss_solvers_wt.obj mrkiss_solvers_wt.mod
......
gfortran -O3 -Wsurprising -W -std=f2023 lorenz.f90
....
Assuming the build worked, we can now run the code. On UNIX systems the binary will be called lorenz, and on Windows it will be called lorenz.exe. On Windows running it looks like this:
./lorenz.exe
Milliseconds: 0.000
Solution Points: 10000
Total one_step calls: 9999
Adjustment one_step calls: 0
That's not very interesting. The fun part is what it did in the background. The program should produce a file called lorenz.csv that contains the solution curve. If you have gnuplot, you can graph it with something like this:
gnuplot -p < lorenz.gplt
11. Using MRKISS In Your Projects
All of the code is in the module source files with no external dependencies at all. So you just need to use the modules from your code, and then compile/link everything together.
With most Fortran compilers you can do that by just listing all the source files on the command line. For example, we could compile the lorenz.f90 example in the examples/ directory like this:
cd examples
gfortran.exe lorenz.f90 ../src/*.f90
That said, most people will probably want to use a build system. If GNU Make is your thing, then the files in the make_includes/ directory may be of help. In particular the makefile fragment include.mk provides useful targets and variables. The makefile in the examples/ directory is a good guide on how to use include.mk. In essence you do the following in your makefile:

- Set MRKISS_PATH to the path of the MRKISS source directory – that's the one with the include.mk file.
- Set FC, FFLAGS, & AR if necessary – most of the time you can use the defaults.
- Include the "include.mk" file in the MRKISS source directory.
- Add a build rule for your program.
Your makefile will look something like this:
MRKISS_PATH = ../MRKISS

# Set FC, FFLAGS, & AR here.  The include below has the settings I use on my system.
include $(MRKISS_PATH)/tools_gfortran.mk
include $(MRKISS_PATH)/include.mk

your_program : your_program.f90 $(MRKISS_OBJ_FILES)
	$(FC) $(FFLAGS) $^ -o $@
Note the rule for your_program in the makefile above takes the lazy approach of adding every MRKISS module as a dependency regardless of whether your program actually needs them all. This is how most people use the modules because it's simple. The cost might be a couple of seconds of extra compile time. You can explicitly list the modules in the makefile if you wish. Such a rule might look like the following:
your_program : your_program.f90 mrkiss_config$(OBJ_SUFFIX) mrkiss_solvers_wt$(OBJ_SUFFIX) mrkiss_utils$(OBJ_SUFFIX)
	$(FC) $(FFLAGS) $^ -o $@
11.1. Notes about include.mk
11.1.1. Names of files
- File extensions on Windows (outside of WSL):
  - Executable files use .exe
  - Shared libraries use .dll
  - Object files use .obj
- On UNIX systems (not including MSYS2):
  - Executable files have no extension
  - Shared libraries use .so
  - Object files use .o
11.1.2. Useful Variables
MRKISS_MOD_FILES – All the module (.mod) files. These will appear in your build directory.
MRKISS_OBJ_FILES – All the object (.obj or .o) files. These will appear in your build directory.
MRKISS_STATIC_LIB_FILE – The name of the static library file. It's not created by default. It will appear in your build directory if it is listed as a dependency of one of your targets.
MRKISS_SHARED_LIB_FILE – The name of the shared library file. It's not created by default. It will appear in your build directory if it is listed as a dependency of one of your targets.
11.1.3. Useful Targets
all_mrkiss_lib – Builds the library files.
all_mrkiss_mod – Builds the module (.mod) files.
all_mrkiss_obj – Builds the object (.obj or .o) files.
clean_mrkiss_mod – Deletes all the MRKISS module (.mod) files in the build directory.
clean_mrkiss_obj – Deletes all the MRKISS object (.obj or .o) files in the build directory.
clean_mrkiss_lib – Deletes all the library files in the build directory.
clean_mrkiss – Simply calls the following targets: clean_mrkiss_mod, clean_mrkiss_obj, & clean_mrkiss_lib.
clean_multi_mrkiss – The previous clean targets only remove products for the current platform. For example, the clean_mrkiss_obj target will delete object files with an extension of .obj on Windows and an extension of .o on UNIX'ish platforms. I use the same directories to build for all platforms, so I sometimes want to clean up the build products from all platforms at once. That's what this target does.
11.1.4. Static Library
A rule to make a static library is included in include.mk. A build rule like the following should build that library and link it into your executable. Note I'm just including the library file on the command line instead of using linker-style options (i.e. -L and -l for GNU compilers). That's because simply including the library on the command line is broadly supported across more compilers – this way I don't have to document how to do the same thing for each one. ;)
your_program : your_program.f90 $(MRKISS_STATIC_LIB_FILE)
	$(FC) $(FFLAGS) $^ $(MRKISS_STATIC_LIB_FILE) -o $@
11.1.5. Dynamic Library (.so and .dll files)
A rule to make a shared library is included in include.mk. You can build it with the target all_mrkiss_lib, or by using $(MRKISS_SHARED_LIB_FILE) as a dependency in your build rule. As the options to link to a shared library differ wildly across platforms and compilers/linkers, I don't provide an example of how to do that.
12. MRKISS Testing
This section is about how I test MRKISS.
The tests/ directory contains code I primarily use for testing MRKISS, while the examples/ directory contains code I primarily use to demonstrate how to use MRKISS. The difference between a "test" and an "example" in MRKISS is a little bit slippery. Some of the tests, like tc1_* and tc2_*, could be considered demonstrations. In addition, I use all of the code in examples/ for tests.
The tests can be run by changing into the appropriate directory (tests/ or examples/), and building the make target tests. For example:
cd tests
make -j 16 tests
Note the -j 16 argument to make. When running all the tests, especially in the tests/ directory, I strongly recommend running in parallel.
In addition, the makefiles in tests/ and examples/ have numerous additional targets to run various individual tests or subsets of the test suite. These targets are documented in the subsections below.
12.1. Reference One Step Solvers
The production solvers in MRKISS all consume Butcher tableaux from the mrkiss_erk* and mrkiss_eerk* modules. Therefore the accuracy of these tableaux is critical to the proper operation of MRKISS.
I don't have a way to automatically test that the data in every tableau is correct. That said, I can check some of the most important ones by comparing the output of one_step*() solvers using them against hand coded solvers implemented from alternate reference sources.
Some of the most used RK methods in history are the classical O(4) method (mrkiss_erk_kutta_4), Fehlberg's embedded O(4,5) method (mrkiss_eerk_fehlberg_4_5), and Dormand & Prince's embedded O(5,4) method (mrkiss_eerk_dormand_prince_5_4). To this end the mrkiss_solvers_wt module contains hand written versions of these solvers (one_step_rk4_wt(), one_step_rkf45_wt(), & one_step_dp54_wt()). The MRKISS test suite contains three tests corresponding to these hand written solvers:
test_dp54 – Runs the following tests:
- test_dp54_ref_vs_stab_5 – Tests tableau data of the O(5) method vs hand coded results.
- test_dp54_ref_vs_stab_4 – Tests tableau data of the O(4) method vs hand coded results.
- test_dp54_stab_vs_etab_5 – Tests consistency of one_step_stab_wt() and one_step_etab_wt() on the O(5) method.
- test_dp54_stab_vs_etab_4 – Tests consistency of one_step_stab_wt() and one_step_etab_wt() on the O(4) method.
- test_dp54_archive – Compares new output with archived output in data/.

test_rkf45 – Runs the following tests:
- test_rkf45_ref_vs_stab_5 – Tests tableau data of the O(5) method vs hand coded results.
- test_rkf45_ref_vs_stab_4 – Tests tableau data of the O(4) method vs hand coded results.
- test_rkf45_stab_vs_etab_5 – Tests consistency of one_step_stab_wt() and one_step_etab_wt() on the O(5) method.
- test_rkf45_stab_vs_etab_4 – Tests consistency of one_step_stab_wt() and one_step_etab_wt() on the O(4) method.
- test_rkf45_archive – Compares new output with archived output in data/.

test_rk4 – Runs the following tests:
- test_rk4_stab_vs_steps – Tests consistency of steps_fixed_stab_wt() vs a hand coded loop.
- test_rk4_ref_vs_stab – Tests tableau data of the method vs hand coded results.
- test_rk4_ref_vs_hnd – Tests the hand coded method vs hand computed results.
- test_rk4_archive – Compares new output with archived output in data/.
A note about the tests for consistency between one_step_stab_wt() and one_step_etab_wt() is probably in order. The subroutine one_step_stab_wt() is generated from one_step_etab_wt() – see the comment above one_step_etab_wt() in mrkiss_solvers_wt.f90 for details. These tests make sure my code generation is working and produces correct results.
12.2. Tableau Plausibility Tests
In the previous section we had spot checks for three tableaux. In this section we have tests that verify the plausibility of the remaining methods. What do I mean by "plausibility"? I mean that we can verify the tableaux define methods that at least seem to act like well behaved RK methods of the appropriate order.
These tests also serve as "change detection". That is to say, if I change something in the code and get different results, then I may have introduced a bug.
- tc1_png :: Runs the tests and displays diagnostic visualizations a human can check for plausibility. Note the tc1.R file contains R code that may be useful in investigating these results.
- test_tc1 :: Compares the output of the tests to archived results in data/ to detect changes.
Note these tests include about 40 individual test cases, each with its own Fortran source. These tests are generated from a single seed Fortran source file named tc1_template.f90 that is expanded into the remaining source files via the script tc_make_make.rb. All of this is handled in the makefile, which will regenerate all of the test sources if the template is modified.
The test equation used is:
\[ \frac{\mathrm{d}y}{\mathrm{d}t} = e^{-t^2} \]
The primary diagnostic plot is of global error.
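The order "plausibility" idea can be sketched outside of Fortran. The following Python snippet (illustrative only, not MRKISS code) integrates this test equation with a hand coded classical RK4 step and checks that the global error at t=2 shrinks by a factor of roughly 2^4 when the step size is halved, which is what a well behaved O(4) method should do. The exact solution used for comparison is y(t) = y(0) + (√π/2)·erf(t).

```python
import math

# tc1 test equation: dy/dt = exp(-t^2).  Its exact solution is
# y(t) = y(0) + (sqrt(pi)/2) * erf(t), which we use to measure global error.
def f(t, y):
    return math.exp(-t * t)

# Hand coded classical RK4 taking n fixed steps from t0 to t_end.
def rk4_solve(t0, y0, t_end, n):
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

exact = math.sqrt(math.pi) / 2 * math.erf(2.0)     # y(2) with y(0) = 0
e1 = abs(rk4_solve(0.0, 0.0, 2.0, 50) - exact)     # global error with h = 0.04
e2 = abs(rk4_solve(0.0, 0.0, 2.0, 100) - exact)    # global error with h = 0.02
observed_order = math.log2(e1 / e2)                # should be close to 4
```

Halving the step size twice more and re-checking the ratio is a cheap way to confirm the asymptotic regime has been reached.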
12.3. Homogeneous Solvers and More Tableau Plausibility Tests
In the previous section we documented plausibility tests for the tableaux via the mrkiss_solvers_wt module. This set of tests continues that work with a new test equation:
\[ \frac{\mathrm{d}y}{\mathrm{d}t} = -2y \]
For these tests we use the homogeneous solvers in the mrkiss_solvers_nt module. The source code for this module is entirely generated from mrkiss_solvers_wt.f90 via wt2nt.sed, so these tests also serve as a way to make sure this code gets generated and produces reasonable results.
Like the tests in the previous section, these tests also serve as "change detection". That is to say, if I change something in the code and get different results, then I may have introduced a bug.
- tc2_png :: Runs the tests and displays diagnostic visualizations a human can check for plausibility. Note the tc2.R file contains R code that may be useful in investigating these results.
- test_tc2 :: Compares the output of the tests to archived results in data/ to detect changes.
Note these tests include about 40 individual test cases, each with its own Fortran source. These tests are generated from a single seed Fortran source file named tc2_template.f90 that is expanded into the remaining source files via the script tc_make_make.rb. All of this is handled in the makefile, which will regenerate all of the test sources if the template is modified.
The primary diagnostic plot is of global error.
12.4. Richardson Extrapolation
This is another plausibility and change detection test. This test compares one_step_stab_wt()
and one_richardson_step_stab_wt()
using mrkiss_erk_euler_1
as the method.
- test_rich :: Runs the code and compares the output to archived output in data/.
- rich_png :: Produces diagnostic plots for a human to verify the behavior is plausible.
The primary diagnostic plot is of global error.
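The mechanics of a Richardson step can be sketched in a few lines of Python. This is illustrative only and not the MRKISS one_richardson_step_stab_wt() API: take one Euler step of size h and two of size h/2, then combine them. For a method of order p, the combination (2^p·y_small − y_big)/(2^p − 1) gains one order of accuracy, and the difference of the two results gives a local error estimate.

```python
import math

# Test equation in the style of tc2: dy/dt = -2*y, exact solution y0*exp(-2*t).
def f(t, y):
    return -2.0 * y

def euler_step(t, y, h):
    return y + h * f(t, y)

# One Richardson-extrapolated Euler step (p = 1 for Euler's method).
def richardson_euler_step(t, y, h, p=1):
    y_big = euler_step(t, y, h)                       # one step of size h
    y_half = euler_step(t, y, h / 2)                  # two steps of size h/2
    y_small = euler_step(t + h / 2, y_half, h / 2)
    y_extrap = (2**p * y_small - y_big) / (2**p - 1)  # order p+1 result
    err_est = abs(y_small - y_big)                    # local error estimate
    return y_extrap, err_est

y_rich, err_est = richardson_euler_step(0.0, 1.0, 0.1)
y_plain = euler_step(0.0, 1.0, 0.1)                   # plain Euler for contrast
exact = math.exp(-0.2)                                # true y(0.1)
```

With h = 0.1 the extrapolated value lands noticeably closer to exp(-0.2) than the plain Euler step, which is exactly the behavior the rich_png diagnostic plots let a human eyeball.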
12.5. Short Stages
The one_step_stab*()
, steps_fixed*()
, steps_condy*()
, and steps_sloppy_condy*()
solvers all use the b
vector to determine the number of stages –
not the size of a
. This allows us to use a shorter b
vector cutting off trailing zero entries. This test makes sure this functionality works.
- test_short_b :: Runs the following tests:
  - test_short_b_all_vs_sub :: Compares full stage results with truncated (short b) results.
  - test_short_b_sub_vs_arc :: Compares the truncated results with archived output in data/.
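Why truncating trailing zeros of b is safe can be seen from a generic explicit RK step written in Python (illustrative only, not the MRKISS solver code; the padded fifth tableau row below is made up for the example). Because the stage loop runs over len(b), a trailing zero weight can be dropped without changing the answer, and the unused stage is never computed.

```python
# Generic explicit RK step: the stage count comes from len(b), not from
# the size of the a matrix, mirroring the MRKISS short-b convention.
def erk_step(f, t, y, h, a, b, c):
    k = []
    for i in range(len(b)):                      # stages driven by b
        yi = y + h * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

f = lambda t, y: -2.0 * y                        # simple test equation

# Classical RK4 tableau padded with a made-up, unused fifth stage.
a = [[0.0,  0.0,  0.0,  0.0,  0.0],
     [0.5,  0.0,  0.0,  0.0,  0.0],
     [0.0,  0.5,  0.0,  0.0,  0.0],
     [0.0,  0.0,  1.0,  0.0,  0.0],
     [0.25, 0.25, 0.25, 0.25, 0.0]]
b_full  = [1/6, 1/3, 1/3, 1/6, 0.0]              # trailing zero weight
b_short = [1/6, 1/3, 1/3, 1/6]                   # zero weight cut off
c = [0.0, 0.5, 0.5, 1.0, 0.5]

y_full  = erk_step(f, 0.0, 1.0, 0.1, a, b_full, c)   # computes 5 stages
y_short = erk_step(f, 0.0, 1.0, 0.1, a, b_short, c)  # computes only 4
```

The two results agree exactly because the fifth stage's weight is zero; the short form simply skips one function evaluation per step.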
12.6. Examples Are Tests Too
The tdata/ directory in examples/ is used to archive old output files generated by running the examples. The makefile contains tests that run the examples and compare the results to what was archived.
These tests serve as "change detection", helping me to identify the introduction of bugs.
- tests :: Runs the following tests:
  - minimal_test :: Creates and checks minimal.csv.
  - brusselator_test :: Creates and checks brusselator.csv.
  - lorenz_test :: Creates and checks the CSV files created by lorenz.
  - three_body_test :: Creates and checks the CSV files created by three_body.
13. FAQ
13.1. What's with the name?
It's an overlapping acronym
MRKISS => MR RK KISS => Mitch Richling's Runge-Kutta Keep It Super Simple
It amuses me, perhaps more than it should, to have such a complex name for a super simple library.
13.2. Why Fortran?
I do most of my programming in other languages, but I really like Fortran specifically for this kind of work. It's just good at math, especially when vectors and matrices are involved.
13.3. Why did you write another ODE solver when so many good options exist?
For a long time I have had a few annoyances related to available packages:
- Sharing results and code required others to install and learn a complex software tool chain.
- Some generative art use cases drive odd requirements that can be frustratingly difficult to satisfy with some packages.
- Getting tools installed on new supercomputers and HPC clusters can be a challenge. It can even be annoying in the cloud.
The "last straw" was the frustration of spending four hours trying to get my normal technical computing environment deployed to a new supercomputer with insufficient user privilege and a broken user space package manager.
In short, sometimes I just want something to work without downloading and installing gigabytes of stuff.
Oh. And I enjoy writing this kind of code.
13.4. Why don't you use package XYZ?
Don't get me wrong, I do use other packages!
One of my favorites is Julia's DifferentialEquations.jl
. For bare-metal, I'm quite fond of
SUNDIALS. I also find myself using higher level tools like
R, MATLAB/Octave, and
Maple/Maxima.
13.5. What are those zotero links in the references?
Zotero is a bibliography tool. On my computer, those links take me to the Zotero application with the reference in question highlighted. This allows me to see the full bibliography entry and related documents (like personal notes, etc…).
Unfortunately they are not of much use to anyone but me.
13.6. Are high order RK methods overkill for strange attractors?
Yes. In fact, Euler's method is normally good enough for strange attractors.
13.7. I need a more comprehensive solution. Do you have advice?
My favorite is Julia's DifferentialEquations.jl
. It is comprehensive, well designed, fast, and
pretty easy to use.
If you are looking for something you can call from C, C++, or Fortran then my first choice is SUNDIALS.
The ode*
set of commands in MATLAB/Octave are easy to use, work well, and are extensively
documented. In addition, Octave has lsode
built-in which is pretty cool.
Maple has a good selection of numerical solvers, a well designed interface, and rich ODE related graphics. It also has some of the best symbolic ODE capabilities available.
If you are doing statistics in combination with ODEs, then R is a fantastic choice.
13.8. I need something faster. Do you have advice?
All of the options listed for the question "I need a more comprehensive solution. Do you have
advice?" are faster than MRKISS
. In particular Julia's
DifferentialEquations.jl
and SUNDIALS.
If you are looking for something small without a lot of dependencies, then you might like Hairer's classic
codes – they are faster than MRKISS
.
13.9. It seems like tools other than Fortran are used. Are there really no external dependencies?
I use several tools in the development of MRKISS. In addition, several of the examples use external tools to draw graphs. None of these tools are required to compile and use the package because I have included all the generated code in the repository. Here is a summary:
- POSIX shell (sh)
  - Used to generate one_step_stab_wt from one_step_etab_wt in mrkiss_solvers_wt.f90.
  - Used in some makefile constructs for code generation, plotting, testing, etc.
- sed
  - Used to generate one_step_stab_wt from one_step_etab_wt in mrkiss_solvers_wt.f90.
  - Generates mrkiss_solvers_nt.f90 from mrkiss_solvers_wt.f90.
- ruby
  - Generates code and make files for testing.
  - My float_diff.rb script, used by the tests.
- R
  - Used to visualize output files.
- GNUplot
  - Used to visualize output files.
- Maple
  - Used for Butcher tableau computations.
- nomacs
  - Used to display images.
- ImageMagick
  - Used to process and/or convert image files.