Using Singularity

These notes apply to Singularity version 3. The cluster currently has version 3.5.2 installed.

Singularity is a container mechanism designed for use on a compute cluster. It gives the user very flexible control of their environment (even down to which Linux distribution they want to use), but still allows use of the cluster storage, fast network, multiple nodes, etc. A container is like a virtual machine but is considerably more light-weight: it uses the kernel that is running on the machine on which it is executed rather than including its own. Singularity containers can be run essentially like ordinary programs, so they can be submitted as jobs on the cluster (to one or more nodes).

Singularity web page: https://sylabs.io/docs/

YouTube introduction by the author of Singularity: https://www.youtube.com/watch?v=DA87Ba2dpNM

You need a machine on which you have root access to create and set up Singularity images, or you may use the Sylabs Remote Builder - https://cloud.sylabs.io/builder. Singularity is natively available for Linux. On Windows and macOS you can use virtual machine software. Sylabs has some instructions here: https://sylabs.io/guides/3.0/user-guide/installation.html#install-on-windows-or-mac. Alternatively, you can install virtual machine software such as VirtualBox, install a Linux distribution as a guest OS and install Singularity within that.

For example, I installed VirtualBox on my Windows 10 laptop, then created a virtual machine running CentOS 7. (If VirtualBox is only giving you the choice of 32-bit virtual machines, you need to turn on virtualization support and VT-d support in the BIOS of your machine.)

We can also provide access to a Linux virtual machine with root permissions if you need it. Please contact the system administrators to get access to this VM.

Installing Singularity

For Singularity version 3, installation is a little involved because it requires the Go programming language. Probably best to follow the instructions here: https://sylabs.io/guides/3.5/admin-guide/installation.html# or here: https://sylabs.io/guides/3.5/user-guide/quick_start.html
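Very roughly, building from source follows the pattern below. This is only a sketch: the dependency list is a Debian/Ubuntu example and the release URL may have moved, so follow the linked guides for the exact steps for your distribution and version.

# build dependencies (Debian/Ubuntu example; see the admin guide for other distributions)
sudo apt-get install -y build-essential libssl-dev uuid-dev libgpgme-dev squashfs-tools libseccomp-dev pkg-config
# install Go first (https://golang.org/doc/install), then fetch and build Singularity
wget https://github.com/sylabs/singularity/releases/download/v3.5.2/singularity-3.5.2.tar.gz
tar -xzf singularity-3.5.2.tar.gz
cd singularity
./mconfig
make -C builddir
sudo make -C builddir install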

Creating a Container

(As root, or use sudo.)

singularity build container.sif library://ubuntu

This would build you a container with the latest version of Ubuntu available in the library (18.10 at the time of writing). Similarly, you could specify “centos” here (and get CentOS 8.4).

But note that this gets you an immutable container: you cannot update it. To get a container that you can update you must create it using the “--sandbox” flag. This creates the container as a directory on your system rather than as a single file. There is also another technique you can use to (virtually) make changes to your container: an overlay. This is an overlay filesystem which you add on top of the container. The contents of that overlay get changed, not the container itself. This technique is a little more complicated, but can be used to allow a non-root user to make changes (to the overlay).
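For example, one common way to use an overlay (a sketch only; the image size and file names are arbitrary) is to create an ext3 image file and attach it when you start the container:

# create a 500 MB ext3 image to hold the changes (requires e2fsprogs; answer yes if mkfs asks about the file not being a block device)
dd if=/dev/zero of=overlay.img bs=1M count=500
mkfs.ext3 overlay.img
# changes made in this shell are written to overlay.img, not to container.sif
singularity shell --overlay overlay.img container.sif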

To create a writable container:

singularity build --sandbox container library://centos

You can also use a “.def” file that specifies more actions to take during the build process:

singularity build --sandbox container container.def

The “.def” contains a recipe for building the container. You can create your own .def file to get different versions of the OS and different sets of packages installed initially. For more on .def files see this link: https://sylabs.io/guides/3.5/user-guide/definition_files.html.
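A minimal .def file might look something like this (a sketch only; the base image and package list are just an illustration):

Bootstrap: library
From: ubuntu:18.04

%post
    # commands run inside the container at build time
    apt-get update && apt-get install -y wget

%environment
    export LC_ALL=C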

The following command gets you into the container as root (if you are root when you run the command).

singularity shell container

This command will get you into the container in writable mode (i.e. allowing you to make permanent changes within the container).

singularity shell --writable container

Inside the container you will be whatever user you were outside. So, if you are root outside you will be root inside. (This is why you need a machine on which you have root access to build your container.) On the cluster you will be the same (non-root) user inside and outside of the container.

When you run the container, your home directory on the host machine will be mounted as your home directory within the container. This is a good thing when running your container: the data you want to process are likely in your home directory. But if you are installing programs into the container (as root) and the installation process makes changes to your home directory, those changes will be made to your host machine's home (probably /root), not inside the container. For instance, installing Miniconda offers to update your .bashrc file. If you let it do that it will update the host machine's version, which is probably not what you want. There is a --no-home option that can mitigate this: it does not mount your home directory into the container but instead mounts the current working directory (if the cwd is your home, then your home will still get mounted).
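For example (the paths here are hypothetical), you can skip mounting your home directory and bind a specific data directory into the container instead:

# don't mount $HOME; make /data/project on the host visible as /mnt inside the container
singularity shell --no-home --bind /data/project:/mnt container.sif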

Once inside your sandbox container (as root) you can make changes, install packages etc. When the container is ready to publish, or run on the cluster, you can create a .sif file from it like this:

singularity build container.sif container

(This step can take quite a long time if a lot has been installed into the sandbox.)

Then copy the .sif file to the cluster. Once there you can run a program inside the container by using the exec command. For instance:

singularity exec centos.sif cat /etc/redhat-release

would run the “cat /etc/redhat-release” command in the centos.sif container.

If you create your container using a .def file you can also specify a “runscript”. This is a command that is automatically executed when you run the container using the run command.
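For example, a %runscript section in the .def file might look like this (a sketch only); any arguments given to the run command are passed through to the script:

%runscript
    echo "Running with arguments: $@"
    exec python3 "$@"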

singularity run centos.sif

Using Docker Images

You can build a Singularity container from Docker images (stored on Docker Hub) using a command similar to the one above for building a container from the Singularity library.

singularity build --sandbox my_tensorflow_container docker://tensorflow/tensorflow:latest-gpu-py3-jupyter

As before, open your container with the “--writable” flag to make changes. If you plan to run things in the container that will need GPUs, include the “--nv” flag.

singularity shell --writable --nv my_tensorflow_container/

Add Something Useful

As an example, we will install JupyterLab along with some system monitoring tools.

apt-get update && apt-get upgrade -y
apt-get install python3.7 -y
python3 -m pip install --upgrade pip
pip install jupyterlab
# update nodejs & npm
apt install -y nodejs npm
pip install nbresuse
# install system monitoring extensions
jupyter labextension install jupyterlab-topbar-extension jupyterlab-system-monitor

Finalize and Test Your Container

singularity build my_tensorflow_container.sif my_tensorflow_container/

Copy the container to the cluster, and run the following to test your container.

srun -p gpu --pty bash -i
singularity shell --nv my_tensorflow_container.sif
python -c 'import tensorflow as tf; print(tf.__version__); print("Devices: ", tf.config.experimental.list_physical_devices())'
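Since JupyterLab was installed in the container, you could also try launching it from the host (outside the container shell). This is only an example; the IP/port settings are arbitrary and you will still need a connection to the compute node to reach the server:

singularity exec --nv my_tensorflow_container.sif jupyter lab --no-browser --ip=0.0.0.0 --port=8888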

You can write a script that you can “sbatch” to run your container on the compute nodes. Create a script, my_test_script.py:

import tensorflow as tf
print(tf.__version__)
print("Devices: ", tf.config.experimental.list_physical_devices())

Create a bash script, my_test_script.bash:

#!/bin/bash
#SBATCH -p gpu -w node94
#SBATCH -J my_script
#SBATCH -o /home2/ajgreen4/Scripts/Output/my_script-%j.out
#SBATCH -e /home2/ajgreen4/Scripts/Output/my_script-%j.err
#SBATCH --export=ALL
singularity exec --nv ~/images/tensorflow2-0-gpu-py3-jupyterNL.sif python ~/Scripts/my_test_script.py

Then run the bash script using sbatch:

sbatch my_test_script.bash

Using Remote Builder

If you don’t have root access to a Linux machine you can build a Singularity container in the cloud using Sylabs Remote Builder (https://cloud.sylabs.io/builder); you just need to write a build recipe like the one below. You will need to create a Sylabs account.

Go to the Remote Builder website and copy and paste the recipe below into the “Build a Recipe” box. Once the recipe has been checked (validated), click the Build button.

Additional recipe details can be found here: https://sylabs.io/guides/3.5/user-guide/definition_files.html.

Bootstrap: docker
From: tensorflow/tensorflow:latest-gpu-py3-jupyter

%help
    This is a demo container used to illustrate a def file that is
    gpu enabled with tensorflow and jupyterLab.

%post

    #Installing all dependencies

    apt-get update && apt-get upgrade -y
    apt-get install python3.7 -y
    python3 -m pip install --upgrade pip
    pip install jupyterlab
    # update nodejs & npm
    apt install -y nodejs npm
    pip install nbresuse
    # install system monitoring extensions
    pip install jupyterlab-nvdashboard
    jupyter labextension install jupyterlab-topbar-extension jupyterlab-system-monitor jupyterlab-nvdashboard
    apt-get clean    

Download the built container and copy it to the cluster. It can be run in the same way as described above for the container built locally.
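Alternatively, assuming you have signed in to your Sylabs account (with singularity remote login, using an access token from the website), you can pull the built image straight onto the cluster. The library path below is just a placeholder for wherever your build was published:

singularity remote login
singularity pull my_container.sif library://<your-user>/<collection>/<container>:latest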

Another Example - QIIME 2

We will install QIIME 2 as an example. (Run these commands as root.)

cd /home/YOU
mkdir singularity-images
cd singularity-images
singularity build --sandbox ubuntu library://ubuntu:18.04 
singularity shell --writable --no-home ubuntu

You will get a warning saying that /home/YOU/singularity-images wasn't mounted - that's OK. pwd should show that you are in /root. Install miniconda3 into /project/miniconda3 (i.e. change the default /root/miniconda3).

apt update
apt install wget
mkdir /project
cd /project
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
chmod u+x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
export PATH=/project/miniconda3/bin:$PATH
source /root/.bashrc
wget https://data.qiime2.org/distro/core/qiime2-2019.10-py36-linux-conda.yml
conda env create -n qiime2-2019.10 --file qiime2-2019.10-py36-linux-conda.yml

Copy the conda-initialization code that the installer added to the end of /root/.bashrc into a file named /project/start-conda. Make this file executable by all users.
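One way to do this, assuming the installer appended the standard ">>> conda initialize >>>" block to /root/.bashrc:

sed -n '/>>> conda initialize >>>/,/<<< conda initialize <<</p' /root/.bashrc > /project/start-conda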

chmod ugo+x /project/start-conda

Create a script, run-qiime, in /project to execute qiime. Set the script to be executable by group and others. To do this you will need an editor, so install one:

apt install nano

The two locale exports at the start of the run-qiime file come from error messages that are generated if you don't include them (the errors suggest them as the solution).

#!/bin/bash
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
/project/start-conda
export PATH=/project/miniconda3/bin:$PATH
source activate qiime2-2019.10
qiime --help

Then make the script executable:

chmod ugo+x run-qiime

Test the script from within the container. Then exit the container and run it again from outside:

singularity exec --writable ubuntu /project/run-qiime

(You need --writable or you will get some errors about “read-only filesystem”.)

Generate a .sif file from the ubuntu container:

singularity build qiime2.sif ubuntu

Copy the .sif file to the cluster, and run the exec command on it. You can write a script that you can “sbatch” to run your container on the compute nodes.
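For example, an sbatch script along the lines of the earlier TensorFlow one might look like this (the job name, output paths and .sif location are hypothetical placeholders):

#!/bin/bash
#SBATCH -J qiime2_test
#SBATCH -o qiime2_test-%j.out
#SBATCH -e qiime2_test-%j.err
#SBATCH --export=ALL
singularity exec ~/qiime2.sif /project/run-qiime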
