Updated May 2019 to be much more comprehensive than the original September 2015 version
A little background
The motivation for the original version of this article was to flag Docker as a possible solution to:
- the problem of installing OpenFOAM® on anything other than a mainstream Linux distribution
- providing a consistent CFD environment across a range of operating systems
- making it easier to work with several different versions (&/or different releases) of OpenFOAM on the same machine
Since then, both OpenCFD & The OpenFOAM Foundation have started distributing Docker-ised versions of their respective releases, easing these issues. But I think there’s still some confusion about using OpenFOAM with Docker. What is Docker? Why is it useful for CFD? What is it good/not-so-good for? And, most importantly, how do you actually use it? That’s what I hope this updated version might achieve.
In the olden days…
Back in late 2015, running OpenFOAM® on something other than a mainstream Linux distribution was a daunting proposition. Running OpenFOAM on Windows, macOS (or even a niche Linux distribution) involved virtual machines, patching &/or compiling from source, or using an unofficial port. Not impossible, but not ideal, particularly if you were just looking to take OpenFOAM for a quick spin.
On Windows that problem has largely gone away with the introduction of the Windows Subsystem for Linux, a feature that lets you run Linux command-line tools alongside your usual Windows programs. This means you can install OpenFOAM on Windows as easily as you can on Linux (especially if you’re running the Foundation version & using their Ubuntu packs).
But if you want to run OpenFOAM on multiple operating systems with a high-degree of confidence that they’re all running the same version of the code and that they’ll all behave consistently, then perhaps Docker could be useful?
I typically run OpenFOAM on macOS, and on AWS instances running Ubuntu Linux. It would be pretty helpful to me if…
- all machines had the same CFD environment (identical OpenFOAM releases, OS versions & other software tools)
- behaviour was consistent across the board (scripts that worked locally, also worked in the cloud)
- the process of updating/upgrading OpenFOAM was consistent & could be done just once
- I could deploy the same CFD environment anywhere, without doing any extra work.
That’s all possible, thanks to an idea called containerisation & an open-source project called Docker.
Never heard of either? Read on. But, if you don’t run on multiple operating systems, don’t need to manage multiple versions/releases or you need to juice every last drop of performance from OpenFOAM, then this may not be the post for you…sorry.
What is Docker?
Docker is a set of tools that makes running, building and managing software containers much easier than it otherwise would be.
But what is a container in this context? Think of a shipping container. They’re standardised so they can be handled & transported wherever they end up in the world. The container doesn’t need to be modified to go on a ship, on a train, to be lifted by a crane or hauled on a truck. In the same way, a Docker container lets you package up all of the code you need to run a particular set of computing tasks. That container can then be used on any machine running the Docker Engine. And, as the Docker Engine can be installed on almost any infrastructure, your container can be used, without modification, almost anywhere.
So, by packaging your CFD environment (OpenFOAM & whatever other tools you may need) into a container you can, thanks to Docker, run OpenFOAM pretty much anywhere.
What problem does Docker solve for CFD?
The number one benefit of Docker for CFD is how easy it becomes to provide identical, isolated CFD environments on any platform or operating system. This reduces system administration time and increases confidence that everything will work (just about) everywhere🤞.
Why not just use a virtual machine?
We could get broadly the same effect using a virtual machine running Linux. Except with Docker we get these extra benefits:
- Lightweight — unlike a virtual machine, containers don’t need to contain an entire operating system in order to function. As such, containers can be smaller than a virtual machine whilst providing equivalent functionality. Docker also packages up the contents of containers in layers, rather than as a complete snapshot. So if only one layer of a container’s contents changes, the other layers can be re-used rather than being re-downloaded and duplicated.
- Easily Configured — Docker container images can also be described by a plain-text config file, a Dockerfile. This is the recipe that describes the ingredients & the steps required to create the contents of a container. Being plain-text it’s readable, understandable, sharable, archivable, version controllable and easily edited. Need to share your exact CFD environment with a client so that they can do some runs themselves? No problem — email them the Dockerfile & away they go. Try doing that with a virtual machine image.
- Quick Start — Docker is architected in such a way that starting a process in a container is almost instantaneous. So, no waiting for your virtual machine to boot or leaving it running just in case you want to access it quickly.
- Isolated — Containers are effectively self-contained sandboxes. By isolating different OpenFOAM versions in their own containers you can have multiple OpenFOAM releases (&/or versions) on one system without needing to worry about incompatible libraries or clashing language versions. Plus, you can update or modify any one of those different versions completely independent of the others.
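To make the isolation point concrete, here’s a sketch of running two different releases side by side. Each release lives in its own image, so the containers can’t interfere with each other (the second image tag is illustrative & may not match what’s currently on Docker Hub):

```shell
# Two releases, two images - each container is a fully isolated sandbox
docker container run -ti --rm cfdengine/openfoam             # OpenFOAM v6 (my image)
docker container run -ti --rm openfoam/openfoam7-paraview56  # illustrative: a Foundation v7 image
```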
Performance Comparison: Docker vs. Native
But, surely the performance takes a hit when running OpenFOAM in a container, doesn’t it?
Docker introduces an additional interface between our code & the underlying system resources on Windows & Mac (less so on Linux) which can reduce performance. This is the main reason why it probably isn’t the approach for someone looking to juice the last drops of crunch power from their system.
As a very simple illustration, I timed the execution of the standard motorbike tutorial in OpenFOAM v6. And the winner was…
- Foundation Ubuntu package running in Docker = 5m 10s
- Foundation Ubuntu package running natively = 5m 13s
Surprised? Me too. Your mileage may vary. The takeaway here is not who won, but that the performance gap can be negligible.
Installing OpenFOAM via Docker
Installing OpenFOAM can be ridiculously easy with Docker - it can also be ridiculously complicated & frustrating if you’re not familiar with it. Here are some options for getting started.
The 5-min Version
These are the very basic steps to get OpenFOAM running on any machine in 5 mins:
- Download & Install Docker Community Edition
- On Mac - Docker Desktop
- On Windows - Docker Desktop
- On Linux - Docker Engine
- Open a new terminal window & see if Docker’s there
docker --version
- Search the online Docker repository (Docker Hub) for an OpenFOAM image
docker search openfoam
- Choose an existing image & start it (it will download the contents the first time you run it)
docker container run -ti cfdengine/openfoam
- Do OpenFOAM-y things in your new container, just to show it worked
The “5 mins” bit depends on your internet connection, but you can get the idea from the more-or-less real-time animation below.
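Once you’re inside the container, a quick sanity check might look like the following (paths & variables assume the Foundation’s v6 layout with the environment already sourced, as in my image; your image may differ):

```shell
# Inside the container: copy a tutorial case & run a quick meshing step
cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity/cavity .
cd cavity
blockMesh    # if this completes, OpenFOAM is alive & well
```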
Installing the Official Releases
Both OpenCFD & The OpenFOAM Foundation use Docker to package releases for Windows, Mac and other Linux versions.
Their approaches vary slightly, but you can find their detailed installation & usage instructions in their respective documentation.
Using OpenFOAM in a Docker container
Installation is really only half of the story. How you use any given container depends heavily on how it was built. So, before we go on to show you how to use OpenFOAM via Docker, you need to understand a little bit more about containers and how their images are built.
Terminology primer - you run a container, which is created from an image which was built from a Dockerfile (or it should’ve been 😉).
Here’s our command from our 5-min install:
docker container run -ti cfdengine/openfoam
Running this fetches my simple OpenFOAM image from Docker Hub and starts a container on our local machine. That drops us into a new Bash shell, as a user called “foam”, with access to all of the code within the container (a minimal Ubuntu install with OpenFOAM v6 but no ParaView). Most of this functionality is down to how the image was built (using this Dockerfile).
An OpenFOAM Docker image that was built differently will need to be used differently. Hence the official OpenFOAM Docker versions (from The OpenFOAM Foundation & OpenCFD) have “helper” scripts to start the containers. These scripts set up how we access the containers and determine how we work with them.
To help us understand a bit more about how this all works, let’s piece together our own command to work with OpenFOAM in a container, starting from our earlier example:
docker container run -ti cfdengine/openfoam
The docker container run bit is self-explanatory, and the -ti switches tell Docker that we want to use the container as an interactive terminal (so we can type stuff in there).
However, if we use this command as-is, we’re going to create a new container every time we type it. It won’t have to download the image every time (it only does this the first time around), but we will end up with unused containers hanging around our system, looking untidy & taking up space. So let’s add a new switch, --rm, to automatically remove the container after we’ve finished with it.
docker container run -ti --rm cfdengine/openfoam
But that means anything we do in the container, any files we create (or change) or any data we produce, will be lost when we exit that container. Less than ideal. How do we avoid that?
My number one Docker recommendation here is: do not store your simulation data in the container. Instead, let’s give our container access to just a little bit of our local filesystem.
docker container run -ti --rm -v $PWD:/data -w /data cfdengine/openfoam
By adding the -v switch we’re asking Docker to mount our current working directory ($PWD) as /data in the container. We’ve also added the -w switch to tell Docker that we’d like to be in /data when the container starts.
This is the command I use to run OpenFOAM on my Mac. I navigate to my project directory, fire off the above command, do OpenFOAM-things & then exit the container. You can always create an alias for it if you don’t want to have to type it every time.
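To save on typing, that full command can be wrapped in a small shell function in ~/.bashrc (or ~/.zshrc); the name of below is just my choice:

```shell
# Start OpenFOAM in the current directory; extra arguments are passed through,
# so "of" gives an interactive shell & "of ./Allrun" runs a script
of() {
  docker container run -ti --rm -v "$PWD":/data -w /data cfdengine/openfoam "$@"
}
```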
Automation (i.e. Running Scripts) via Docker
All of the previous examples have seen us using the container interactively, but we can also use it to run commands by tagging them onto the end of our command line, an Allrun script for example:
docker container run -ti --rm -v $PWD:/data -w /data \
cfdengine/openfoam ./Allrun
We keep the -ti switch (even though we aren’t technically running interactively) so that we can drop in & ctrl-c if things aren’t working as planned.
We also needed to make a couple of small changes to the Allrun script. We changed the shebang from #!/bin/sh to #!/bin/bash and added the following lines (near the top of the script) to ensure all of the OpenFOAM environment variables are set.
# Source OpenFOAM BASH profile
. /opt/openfoam6/etc/bashrc
# Export fix for OPENMPI in a container
export OMPI_MCA_btl_vader_single_copy_mechanism=none
These small changes allow us to use our usual automation/batch running techniques with OpenFOAM in a Docker container. See the example below (click for a larger version).
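Putting those changes together, the top of a container-ready Allrun might look like this (the solver & log names below are placeholders, not from the original script):

```shell
#!/bin/bash
# Source OpenFOAM BASH profile (path as in the Foundation's Ubuntu pack)
. /opt/openfoam6/etc/bashrc
# Export fix for OpenMPI in a container
export OMPI_MCA_btl_vader_single_copy_mechanism=none

# Placeholder case steps - replace with your own
blockMesh > log.blockMesh 2>&1
simpleFoam > log.simpleFoam 2>&1
</imports>
</imports>
```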
Building your own OpenFOAM Docker image
If you want/need something a little more custom than the official images then you might want to build your own container image.
There are several ways of building an image, but the most reproducible, version-controllable & transparent way of doing it is with a Dockerfile like this:
Simple OpenFOAM Dockerfile Example
Here we describe all of the steps we need to take to build our Docker image. In this case, we start from the official Ubuntu Bionic image, add some additional tools which we’ll need later (ssh, vi, wget etc.), create a new user called foam, install OpenFOAM (more-or-less following the standard Ubuntu install instructions from the Foundation), activate OpenFOAM for user foam, add a little environment variable to keep OpenMPI happy when running in a container & lastly set our container user to be foam. Simples.
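A sketch of such a Dockerfile, following those steps, might look something like this (treat it as a starting point under the assumptions above, not the exact file behind my image):

```Dockerfile
FROM ubuntu:bionic

# Extra tools we'll want later
RUN apt-get update && apt-get install -y ssh vim wget software-properties-common ;\
    rm -rf /var/lib/apt/lists/*

# Install OpenFOAM 6 from the Foundation's Ubuntu repository
RUN sh -c "wget -O - http://dl.openfoam.org/gpg.key | apt-key add -" &&\
    add-apt-repository http://dl.openfoam.org/ubuntu &&\
    apt-get update && apt-get install -y openfoam6 ;\
    rm -rf /var/lib/apt/lists/*

# Create the foam user, activate OpenFOAM for them & keep OpenMPI happy
RUN useradd -m foam ;\
    echo ". /opt/openfoam6/etc/bashrc" >> /home/foam/.bashrc ;\
    echo "export OMPI_MCA_btl_vader_single_copy_mechanism=none" >> /home/foam/.bashrc

USER foam
```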
The format of the Dockerfile is a little odd, with many of the commands clumped together with ;\ and && characters. This is just a quirk of how Docker builds images & is an attempt to make the built image slightly smaller (but don’t worry about that for now).
We can use our Dockerfile to build our very own OpenFOAM image. Simply navigate our terminal to the folder containing the Dockerfile & issue the following command:
docker image build -t myopenfoam:v6.0 .
The process is relatively quick (depends on your internet connection). We end up (fingers crossed) with a local OpenFOAM Docker image called “myopenfoam” which we’ve tagged as v6.0 - see the animation below (click for larger version).
We can launch a new container from this image using essentially the same command we learnt earlier.
docker container run -ti --rm \
-v $PWD:/data -w /data myopenfoam:v6.0
Questions
How do I get my data into & out of the container?
Don’t. Treat the container as immutable & disposable. By mounting part of our local filesystem in the container, our simulation data can live outside the container & won’t be accidentally lost (or hidden) within a container.
If you’re using Docker as a development environment (you want to make changes to the source code and recompile) then you can still use the mounted volume as a gateway to get your modifications into or out of the container.
I’ve made some changes to my container. Can I keep them or do I have to re-do them every time?
You can convert a container into an image by using the docker commit command (see the docs). Identify the container you’re interested in, then commit it so that you can start new instances of this custom container whenever you need it. However, this can make it hard to work out which images contain which changes, so it’s better to use a Dockerfile & rebuild wherever possible.
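The commit workflow looks something like this (the container & image names are illustrative):

```shell
# List all containers (running & stopped) to find the one you changed
docker container ls -a
# Snapshot it as a new image, then launch containers from that image
docker commit my_container myopenfoam:tweaked
docker container run -ti --rm myopenfoam:tweaked
```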
How do I run ParaView in Docker?
Short answer, don’t bother. ParaView has great packages for most major operating systems & will always work better natively than from a Container. If you’re keeping your simulation data on your local drive (& not in the container) then you’ll be able to access it directly with a native ParaView install.
You might have a problem if you need the ParaView reader that ships with OpenFOAM (i.e. you normally use paraFoam). In my experience the built-in OpenFOAM reader is more than up to the job. Create an empty file with a .foam extension in your case directory, i.e. touch open.foam, and read that into ParaView. Bingo.
How do I add my favourite text editor to the container?
Same answer as ParaView - use a native version. Your control files live on your drive (not in the container), so using a native text editor (or IDE) to edit your files is a better solution than running one from the container (vi excepted, ’cos sometimes you just need to edit from the command-line).
How do I add some other package to the container?
The easiest way is to modify your Dockerfile and rebuild. Particularly easy if the software you want to add has an Ubuntu deb package.
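For example, adding a plotting tool to the image is a single extra step in the Dockerfile (gnuplot here is just an illustration, any Ubuntu package works the same way):

```Dockerfile
# Add gnuplot to the image, then rebuild with: docker image build -t myopenfoam:v6.0 .
RUN apt-get update && apt-get install -y gnuplot ;\
    rm -rf /var/lib/apt/lists/*
```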
Quick Tip
Change the access to CPU & memory on Mac & Windows
Docker on Linux has access to all of the system resources when it comes to RAM & CPU. However, this isn’t the case on Mac & Windows, where you’ll need to change the settings via the Advanced tab in Docker’s preferences.
In summary
Docker has been a very useful tool for me over the last few years, maybe it can help you too? What do you think? Have I missed anything? Have you tried running OpenFOAM in Docker & found it a pain? What was difficult? What were the showstoppers? Or have you found your groove & containers have made life easier? Either way I’d be keen to know - drop me an email.