Using rust_light
If you can’t (or don’t want to) use the rust_code_server container image, you may instead use the rust_light image. It is a simplified version of rust_code_server that does not contain a code editor, only the command-line tooling needed to build and run this course’s Rust code.
This image is meant to be used in one of two ways:
- You can directly run a container from this image, with a bind mount from a local directory where the course-provided source code is located. You will then edit this source code on the host system using your favorite host-side text editing method, but switch to a shell running within the container when the time comes to build and run programs.
  - We recommend this approach if you are new to containers, as it is easiest to get working. Its main drawback is that we cannot easily replicate your code editing environment, so it may be hard for us to provide suggestions on e.g. how to configure your code editor for Rust syntax highlighting. You will also likely not manage to set up the very convenient rust-analyzer code editor plugin in this configuration.
- You can build another container image based on rust_light, featuring your favorite code editor and any other development utility you fancy. By running a container based on this extended image, you will be able to do all your work inside of the container, from code editing to building and running programs, resulting in a more cohesive user experience.
  - Although a source code bind mount is not necessary with this approach, we still recommend using one because it will let you easily save your work across container executions, transfer compiled binaries across machines (which may be necessary on HPC centers) and visualize the generated concentration pictures1.
In the remainder of this section, we will explain how to apply these two approaches in practice.
Direct use with host-side code editing
Container setup
First of all, you will need a container runtime and a copy of the exercises’ source code. For this you can mostly refer to the first two sections of the rust_code_server tutorial.
However, you can disregard the warning there about Apptainer and Singularity compatibility. It is possible to get these runtimes to work with rust_light, although it will take more work than with Docker or Podman.
Once you have done the above, given a shell at the location where the
exercises
source directory was extracted, you are ready to start a container.
Please click the tab matching your container runtime below in order to see how
this is done.
You can start a container using the following command, which is a variation of the one used in the rust_code_server tutorial:
docker run -it --rm \
-v "$(pwd)/exercises":/root/exercises:Z \
gitlab-registry.in2p3.fr/grasland/numerical-rust-cpu/rust_light:latest
However, if you run Docker CE natively on Linux (not via the Docker Desktop virtual machine as is typically done on Windows and macOS), then be aware that you will likely encounter file permission issues in the exercises/ directory later on.
Indeed, Docker runs containers based on the rust_light image as root2, and thus any file/directory created inside of exercises/ by the container is initially owned by root. You will need privilege escalation tools like sudo to edit or delete these newly created files.
You can fix the ownership of container-created files at any time, even while the container is active, by running the following command in a host shell, outside of the container:
sudo chown -R "$(id -u):$(id -g)" exercises/
Once this setup is up and running, the basic workflow is for you to edit files in the exercises/ directory in the way you would normally edit any local text file on the host, then use the container when you need to build and run code (any time a cargo command is involved). From the container’s perspective, the code that you are editing will be located in ~/exercises.
Computing cluster example
Let’s make this more concrete by giving examples for a typical computing cluster that uses the Apptainer container runtime and provides a mixture of frontal nodes (which you access via SSH) and worker nodes (where you submit jobs from frontal nodes using a batch scheduler like Slurm).
Code editing
After logging in to the frontal node via SSH, you can directly edit code in the rust_light_home/exercises/ directory using a console text editor like vi, emacs or nano, pointing it at the desired source files from the frontal node’s SSH command line.
If you are not comfortable with this sort of text editor and prefer to have a graphical editor on your laptop/desktop, then you can also use tools like Visual Studio Code’s Remote-SSH extension or sshfs in order to get a more local code editing feel.
Under the hood, these tools mostly work by downloading local copies of the source files that you are editing, letting you edit these copies, and updating the files on the server every time you make changes to the local files.
Basic test on frontal node
Once you are satisfied with your code edits, you will want to test your changes. Most HPC centers will tolerate that you perform basic checks on the frontal nodes, as long as you do not introduce a level of system load that significantly degrades service quality for other system users. For example, a simplified test computation that takes a few seconds to execute is often considered acceptable.
In our case, we will demonstrate this flavor of interactive testing by running the “Hello World” example program, which is only meant to test your setup and does nothing but print a welcome message. You can run it by firing up an Apptainer shell using the procedure described above, if you have not done so already, then running the following command inside of the container…
cd ~/exercises \
&& cargo run --example 00-hello
…and after cargo is done building all the external libraries that are not needed for this specific example but will be needed later in the course, you should get the expected welcome message:
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.25s
Running `target/debug/examples/00-hello`
Hello!
Great news: if you can see this message in your terminal, your Rust development
environment seems to be set up correctly.
While this is good for basic testing, more complex jobs will require you to use worker nodes. The next section will therefore introduce this topic.
Worker node introduction
The procedure to use worker nodes is unfortunately specific to each computing cluster, as it depends on which batch scheduler is installed and how it is configured. Please check your computing center’s documentation and ask the local system administration team any questions.
Here, for the sake of illustration, we will assume that they use the common Slurm batch scheduler, and have not disallowed the use of interactive jobs. In this case, the easiest way to get started with the use of worker nodes on a lightly loaded cluster is to run srun --time 30 --pty bash in order to request an interactive shell on a randomly allocated node with the default (usually minimal) amount of resources, for a short amount of time that suffices for basic tests (in this case 30 minutes).
This will let you quickly experiment with some commands and debug your understanding of worker nodes, before you move to non-interactive scripting workflows that scale better to larger jobs. If needed, you can adjust the requested time budget with the --time parameter above, at the expense of possibly waiting longer for resources to free up, as longer-running jobs are normally penalized by the batch system’s resource allocation algorithm.
Once you have an interactive shell on a worker node, if you try to run a rust_light container there using the apptainer run command provided above, you are likely to experience network access problems. That’s because in most computing centers, worker nodes are forbidden from accessing the Internet as a security measure. All Internet use must happen on frontal nodes.
You can address this by asking Apptainer to create a local copy of the rust_light image, using the following command on the frontal nodes…
apptainer pull docker://gitlab-registry.in2p3.fr/grasland/numerical-rust-cpu/rust_light:latest
…which will create a file in the current directory named rust_light_latest.sif. You will then be able to use this file on the worker nodes by replacing the docker:// URL that we have used so far with the path to this file (i.e. rust_light_latest.sif if it’s in your working directory) in all your apptainer run commands.
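If you expect to type such apptainer run commands often, you may want to wrap them in a small helper script. The following is only a sketch, under the assumption that rust_light_latest.sif and the rust_light_home directory both sit in the directory you run the script from, and that you want the same container options as the batch example of this tutorial; the script name cargo-in-container.sh is our own invention, not part of the course material:

```shell
# Create a hypothetical helper script that runs an arbitrary cargo
# command inside the offline rust_light image (paths are assumptions
# that you should adapt to your own directory layout).
cat > cargo-in-container.sh <<'EOF'
#!/bin/bash
set -euo pipefail
# Forward all script arguments to cargo inside the container
apptainer run --cleanenv --home "$PWD/rust_light_home" --no-mount cwd \
    rust_light_latest.sif \
    -- bash -c "cd ~/exercises && cargo $*"
EOF
chmod +x cargo-in-container.sh
# Example use on a worker node: ./cargo-in-container.sh run --example 00-hello
```

Because the heredoc is quoted, $PWD and $* are expanded when the script runs, not when it is created, so the script keeps working from whichever directory holds your image and home directory.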
Simple batch operation
While srun bash is very convenient for quick tests, it becomes awkward for more loaded clusters and larger resource allocations, as you need to wait for resources to free up, then quickly react before your allocated time runs out. In this situation, it is better to use true batch operation.
You may find out that cargo’s sensitivity to the working directory is inconvenient in this context. However, this is nothing that a pinch of shell programming can’t fix. Here is a full example of running the aforementioned “Hello World” example in a worker node in batch mode:
srun --time 1 \
-- apptainer run --cleanenv --home "$(pwd)/rust_light_home" --no-mount cwd rust_light_latest.sif \
-- bash -c "cd ~/exercises && cargo run --example 00-hello"
While this command may look intimidating from a distance, it is actually just a combination of the concepts that we have introduced above:
- On the first line, we schedule a job with srun, using the tightest possible time limit of 1 minute (remember, requesting shorter budgets ensures quicker access to resources on most batch systems) and default resource limits (which you will need to tune for larger jobs, but this is a system-specific process that we cannot explain in this generic tutorial).
- On the second line, we call apptainer run using the procedure described above, with the rust_light_latest.sif offline image that was generated via apptainer pull in the previous section and the home directory that we have set up previously (whose filesystem paths you may need to adjust depending on what your shell’s working directory is).
- Finally, instead of letting apptainer run start an interactive bash shell as it does by default, we ask Apptainer to call bash -c to make it run a short inline bash script. This script then proceeds to change to the right directory where the exercises’ source code is located, and calls cargo run to compile and run the specified example.
This should be enough to get you started. Please read your computing center’s documentation carefully for guidance on proper use of their batch system, and get in touch with the teacher or the computing center’s administration team (depending on whether the issue seems to lie in the course material or the computing center’s infrastructure) if you run into any issue while following this tutorial.
Editing code inside of the container
The previous approach is easy to get started with, but it makes you juggle between at least two system environments: the one from the container and the one from the host that the container is running on. This will result in an inconsistent and suboptimal user experience.
For example, commands that are available on the host will not be available inside of the container and vice versa. Or worse, some commands will be available on both sides, but with different semantics; for example, the host may have a different Rust toolchain version installed. This inconsistency may result in debugging trouble later on if you ever get confused between the two environments.
It is therefore more convenient to use a container not just for the purpose of compiling and running code, but also for the purpose of editing it. This will allow you to do pretty much all work from this course in a single consistent system environment, which will be easier.
The rust_code_server image aims to provide you with a pre-packaged way to do this, but its web-based code editor may not be suitable for your host, or not match your personal text editor preferences. In this case, the easiest approach will likely be for you to layer your favorite code editing environment on top of rust_light using standard container image building tools like docker build, docker commit or kaniko.
If you are willing to do this, we will assume that you are reasonably familiar with Linux containers and do not need a step-by-step guide on how this is done. That being said, you may want to keep the following considerations in mind:
- At the time of writing, rust_light is based on the ubuntu:22.04 image from the Docker Hub. You may therefore install (old-ish) packages from the Ubuntu repositories and PPAs using apt. But containerized Snap/Flatpak packages are best avoided, because running containers inside of containers is not supported in the default configuration of most container runtimes.
- Getting graphical applications to run inside of containers is tricky, system-specific (due to X11/Wayland nuance + use of GPU rendering in modern GUI toolkits) and overall best avoided. Prefer console-based text editors like vi, emacs and nano; or web server-based graphical code editors like JupyterLab and code_server.
  - To be compatible with VM-based container runtimes like Docker Desktop, you will want server-based editors to listen to the catch-all IP 0.0.0.0, relegating much-needed network filtering work to the end user’s container runtime via options like Docker and Podman’s --publish 127.0.0.1:<HOST>:<CONTAINER>. See the rust_code_server tutorial for an example of this strategy in action.
- Many code editors from the Ubuntu 22.04 repositories do not provide Rust and TOML syntax highlighting by default. For optimal comfort in this course, you will want to set these up, which may be tricky from a Dockerfile depending on how your code editor is configured. Along the way, you may also want to check whether your chosen code editor has rust-analyzer support, in which case I would also advise installing it for greater code editing comfort.
- By default, rust_light is configured to run as the root user, which may make you nervous. However, this is only a problem when using Docker. Other container runtimes normally run such images in a “fake root” environment, where the code actually runs as your local user masquerading as root3. Furthermore, by switching away from this configuration, you increase the odds that you will encounter file permission issues later on, as container users are not kept in sync with local system users unless you are using Apptainer/Singularity.
- Speaking of Apptainer/Singularity, if your image intends to support them for HPC center compatibility, you will want to read and understand how these runtimes deeply differ from Docker and Podman. It is particularly important to remember that these container runtimes are not readily compatible with modern software packages that install themselves into the current user’s home directory, and that it will likely take a fair amount of experimentation on your side to get such software to work with them.
- While bundling the exercises source code into the container to avoid bind mounts may sound like a good idea, it will make your life harder on HPC centers, where you commonly need to move compiled executables from the frontal nodes to the worker nodes in order to run them there. Bind mounts make this much easier (when combined with home directory synchronization), and also prevent you from accidentally discarding unsaved work by exiting the container’s shell. They are therefore recommended even in this configuration.
- Containers are ultimately binary packages, and like all binary packages they are not portable across CPU architectures. Building an Arm container on an x86_64 machine (or vice versa) is not for the faint of heart, and emulating x86_64 containers on Arm is not recommended in the context of this course, which is about acquiring a fine understanding of the performance of your CPU (as opposed to that of an emulator running on top of your CPU).
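As a starting point, here is a minimal sketch of what such an image extension could look like, assuming you use Docker or Podman and want a console editor like nano; the package choice and the image tag used in the build command are our own illustrative picks:

```shell
# Write a minimal Dockerfile layering a console text editor on top of
# the rust_light image. The editor choice (nano) is illustrative.
cat > Dockerfile <<'EOF'
FROM gitlab-registry.in2p3.fr/grasland/numerical-rust-cpu/rust_light:latest
# Install a console text editor from the Ubuntu 22.04 repositories
RUN apt-get update \
    && apt-get install -y --no-install-recommends nano \
    && rm -rf /var/lib/apt/lists/*
EOF
# Build and run it (requires a container runtime; commands shown for Docker):
#   docker build -t my_rust_light .
#   docker run -it --rm -v "$(pwd)/exercises":/root/exercises:Z my_rust_light
```

From there, you can grow the image with syntax highlighting configuration, rust-analyzer, or any other tooling, keeping the considerations above in mind.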
If the above seems daunting, it is likely that custom container building is not for you. You may want to use the rust_code_server or rust_light image directly, or even set up a local development environment instead. That being said, if you can invest the time into it, this could be the way for you to get the optimal code editing environment during this course, and easily transport it to any HPC center you have access to. In the end, the choice is yours. Happy container hacking!
1. Getting X11/Wayland software to work inside of Linux containers involves a fair amount of suffering. You will make your life easier by favoring code editors that run directly in the terminal (like vi and nano) or expose a web-based user interface via an HTTP server (like jupyterlab and code-server). For the same reason, concentration images from the data-to-pics utility are best watched using your host system’s standard picture viewer, instead of installing tooling for this purpose inside of the container. ↩
2. While running such containers as non-root with Docker is possible, it is a poorly supported and ill-documented configuration. This is unlike Podman, which is designed for easy execution of containers as regular users, leveraging “fake root” mechanisms like sub-UIDs/GIDs as necessary. ↩
3. Podman uses “fake root” mechanisms to make processes running inside of the container believe that they are running as root, when they are actually running as your regular host system user. In addition to being more secure than Docker’s approach of actually running containerized processes as root, this configuration also avoids file permission issues on both the host and the container side. Indeed, in this setup, containerized processes believe that they run as root and can write to any file/directory, so they will allow all writes. Whereas on the host system side, actual filesystem writes will be carried out by your regular host system user, so all newly created files will be owned by your user, not root. ↩