Getting Started

The cluster uses Slurm as the scheduler. To learn more about Slurm, consult the manual, which is available online.

Simple Tasks With Slurm

The scheduler organizes resources through the partition system (roughly equivalent to PBS queues). To view the available partitions, use the sinfo command:

$ sinfo 
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
normal       up 1-00:00:00      4  alloc compute[01-04]
long         up 4-00:00:00      4  alloc compute[01-04]
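To restrict the listing to a single partition, sinfo accepts the -p (--partition) flag; using the normal partition from the listing above as an example:

$ sinfo -p normal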

Listing the Partition Jobs

We can view the list of currently queued and running jobs using the squeue command:

$ squeue 
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
               206    normal     test  exampleuser  R    2:07:58      1 compute01
               210    normal aiida-89  exampleuser  R    1:11:09      3 compute[02-04]
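By default squeue lists all users' jobs; to show only your own, pass your username with the -u (--user) flag:

$ squeue -u $USER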

Submitting jobs

Jobs are submitted with the sbatch command:

$ sbatch job.mpi

where job.mpi would look similar to this:

job.mpi
#!/bin/bash
 
#SBATCH -J test               # Job name
#SBATCH -o job.%j.out         # Name of stdout output file (%j expands to jobId)
#SBATCH -e job.%j.err         # Name of stderr output file (%j expands to jobId)
#SBATCH --partition=normal    # Queue
#SBATCH --nodes=1             # Total number of nodes requested
#SBATCH --ntasks-per-node=16  # Total number of mpi tasks requested
#SBATCH --cpus-per-task=1     # Number of CPUs (threads) per MPI task
#SBATCH --time=23:30:00       # Run time (hh:mm:ss) - 23.5 hours
#SBATCH -A physics            # Project name 
 
# Launch MPI-based executable
 
module load gnu12-with-static/12.3.0
module load qespresso/7.3/openblas
 
cd $HOME/scratch/workdir
 
mpirun -np 16  pw.x  < input  > output

You need to ensure you set the partition (--partition) and the accounting project (-A).
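On success, sbatch prints the job ID assigned to your job. Should you need to remove a submitted job, use scancel with that job ID (the ID below is taken from the squeue example above):

$ scancel 206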

Listing Your Accounting Projects

Every user must belong to an accounting project before they can run jobs. To see your project, run this command:

$  sacctmgr show user $USER format=User,DefaultAccount 
      User       Def Acct 
----------        ---------- 
   yourusername    physics 
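The command above shows only your default account; if you belong to more than one project, sacctmgr can list all of your account associations as well (WithAssoc is a standard sacctmgr option):

$ sacctmgr show user $USER WithAssoc format=User,Account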

Use this account name with the -A option in your submission script.

Interactive Jobs With Slurm

It's possible that batch jobs are not suited to a particular task. For that reason, Slurm also provides a way to get an interactive terminal session on a compute node, which can be used, for example, for code compilation. This is the command to start an interactive job session:

$ srun -A physics  --nodes=1 --ntasks-per-node=16  --time=02:00:00 --partition=normal   --pty bash -i
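As an alternative to srun --pty, the salloc command reserves the same resources and drops you into a shell that holds the allocation; exiting the shell releases it. The resource flags are the same as above (physics and normal are the example account and partition used in this guide):

$ salloc -A physics --nodes=1 --ntasks-per-node=16 --time=02:00:00 --partition=normal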