r-env-singularity on Puhti

All material (C) 2021 by CSC - IT Center for Science Ltd. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, http://creativecommons.org/licenses/by-sa/4.0/
Interactive use

Start an interactive session on a compute node with the sinteractive command. For example:

sinteractive -A <account> -t 00:30:00 -m 8000 -c 4
# reserves a session with a 30 min duration, 8 GB memory and four cores
Load the module (e.g. module load r-env-singularity/4.1.1) and launch the R console with the start-r command. The sinteractive session can be exited with exit.
Serial batch jobs

Write a batch job script (e.g. with nano), then submit it with sbatch (e.g. sbatch job.sh). See the r-env-singularity documentation for more details.

#!/bin/bash -l
#SBATCH --job-name=r_serial
#SBATCH --account=<project>
#SBATCH --output=output_%j.txt
#SBATCH --error=errors_%j.txt
#SBATCH --partition=test
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1000
# Load r-env-singularity
module load r-env-singularity
# Clean up .Renviron file in home directory
if test -f ~/.Renviron; then
sed -i '/TMPDIR/d' ~/.Renviron
fi
# Specify a temp folder path
echo "TMPDIR=/scratch/<project>" >> ~/.Renviron
# Run the R script
srun singularity_wrapper exec Rscript --no-save myscript.R
Notes:

- The R script is given to Rscript; the singularity_wrapper exec part is optional.
- Environment variables (such as TMPDIR above) can be passed to R by prefixing them with SINGULARITYENV_, for example:

export SINGULARITYENV_MYVARIABLE=x

- export does not work with RStudio Server; use ~/.Renviron instead.
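The .Renviron cleanup used in the batch scripts above can be tried safely on a scratch file first; the paths below are placeholders for illustration, not the real ~/.Renviron:

```shell
# Work on a scratch copy so a real ~/.Renviron is never touched
renviron=$(mktemp)
printf 'TMPDIR=/old/path\nOTHER_VAR=1\n' > "$renviron"

# Remove any previous TMPDIR line, as the batch scripts do
sed -i '/TMPDIR/d' "$renviron"

# Append the new temp folder path (placeholder project path)
echo "TMPDIR=/scratch/myproject" >> "$renviron"

cat "$renviron"
```

Other lines in the file (OTHER_VAR above) are left untouched; only TMPDIR entries are replaced.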
Array jobs

#!/bin/bash -l
#SBATCH --job-name=r_array
#SBATCH --account=<project>
#SBATCH --output=output_%j_%a.txt
#SBATCH --error=errors_%j_%a.txt
#SBATCH --partition=small
#SBATCH --time=00:05:00
#SBATCH --array=1-10
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1000
# Load r-env-singularity
module load r-env-singularity
# Clean up .Renviron file in home directory
if test -f ~/.Renviron; then
sed -i '/TMPDIR/d' ~/.Renviron
fi
# Specify a temp folder path
echo "TMPDIR=/scratch/<project>" >> ~/.Renviron
# Run the R script
srun singularity_wrapper exec Rscript --no-save myscript.R $SLURM_ARRAY_TASK_ID
Compared to the serial job, note the --array directive and the $SLURM_ARRAY_TASK_ID variable passed to the R script.
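The mechanics can be sketched outside Slurm: the scheduler runs the same script once per task, each time with a different $SLURM_ARRAY_TASK_ID, roughly equivalent to this toy loop (echo stands in for the Rscript invocation):

```shell
# Simulate what --array=1-3 does: the same command runs once per
# task ID, receiving the ID as an argument
for SLURM_ARRAY_TASK_ID in 1 2 3; do
    echo "processing chunk $SLURM_ARRAY_TASK_ID"
done
```

In a real array job the tasks run independently (possibly in parallel), not sequentially as in this sketch.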
Multicore jobs

A multicore job adds #SBATCH --cpus-per-task but is otherwise similar:

#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1000
# --ntasks and --nodes stay 1
--ntasks and --nodes are modified only when using MPI packages (e.g. snow). Note that parallel::detectCores() will always give 40 as its result (the maximum on a node), regardless of how many cores were reserved. To get the reserved number, use options(future.availableCores.methods = "Slurm") followed by future::availableCores().
This matters, for example, with rstan, whose documentation states: "For execution on a local, multicore CPU with excess RAM we recommend calling options(mc.cores = parallel::detectCores())." On Puhti, use options(mc.cores = future::availableCores()) instead.
Multithreaded (OpenMP) jobs

Set --cpus-per-task as before, and add a few lines to the batch job file:

# Match thread and core numbers
export SINGULARITYENV_OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Thread affinity control
export SINGULARITYENV_OMP_PLACES=cores
export SINGULARITYENV_OMP_PROC_BIND=close
MPI jobs

For MPI-based packages (e.g. snow, doMPI, pbdMPI), modify --ntasks (or --ntasks-per-node) instead:

#SBATCH --ntasks-per-node=2
#SBATCH --nodes=2
Note that snow uses one master and x slave processes (= workers). A job using snow is also launched using RMPISNOW rather than Rscript, while doMPI does not need a master task or a special launch command.
does not need a master task or a special launch commandprofvis
package
Installing your own packages

The module's pre-installed packages can be complemented with packages installed under /projappl/<project>. First create a folder for them:

mkdir project_rpackages_<rversion>

Then, in R, add the folder to your library path:

.libPaths(c("/projappl/<project>/project_rpackages_<rversion>", .libPaths()))
libpath <- .libPaths()[1]

# You can also use getRversion():
.libPaths(paste0("/projappl/<project>/project_rpackages_", gsub("\\.", "", getRversion())))

Add the libpath definition to your actual R script (so your packages are found), and use the lib.loc argument where needed.
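The version suffix in the folder name can also be derived in the shell; this mirrors the gsub() call above (the version string is hard-coded here purely for illustration):

```shell
# Strip the dots from an R version string to get the folder suffix,
# e.g. 4.1.1 -> 411
rversion="4.1.1"
suffix=$(echo "$rversion" | tr -d '.')
echo "project_rpackages_${suffix}"
```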
Fast local storage (NVMe)

#!/bin/bash -l
#SBATCH --job-name=r_serial_fastlocal
#SBATCH --account=<project>
#SBATCH --output=output_%j.txt
#SBATCH --error=errors_%j.txt
#SBATCH --partition=test
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=1000
#SBATCH --gres=nvme:10
# Load the module
module load r-env-singularity
# Clean up .Renviron file in home directory
if test -f ~/.Renviron; then
sed -i '/TMPDIR/d' ~/.Renviron
fi
# Specify NVME temp folder path
echo "TMPDIR=$TMPDIR" >> ~/.Renviron
# Run the R script
srun singularity_wrapper exec Rscript --no-save myscript.R
Note the --gres=nvme:10 directive (reserves 10 GB of fast local NVMe storage) and the line echo "TMPDIR=$TMPDIR" >> ~/.Renviron, which points TMPDIR to the local drive.
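Unlike the serial script, which writes a literal /scratch/<project> path, here $TMPDIR is expanded when the batch script runs, so the node-local path provided by Slurm is what ends up in the file. A small demonstration with a stand-in value and a scratch file:

```shell
# Stand-in for the node-local path Slurm would provide via $TMPDIR
TMPDIR=/local_scratch/demo

# Scratch file standing in for ~/.Renviron
renviron=$(mktemp)

# The variable is expanded here, at script run time
echo "TMPDIR=$TMPDIR" >> "$renviron"

cat "$renviron"
```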
In summary, using r-env-singularity on Puhti has many benefits.

A useful resource: the CRAN Task View on High-Performance and Parallel Computing with R.