====== Overview ====== The cluster consists of a set of CPU-based machines with a single login node and attached storage. The compute nodes are uniform, each with 16 cores and 32 GB of RAM. Parallel calculations can take advantage of the fast, low-latency network. A batch queue system has been set up using the Slurm scheduler, and all users are required to execute their jobs through it. Users are discouraged from keeping large files in their $HOME directory; instead, a 38 TB file system is provided under $SCRATCH for that purpose.
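As a minimal sketch of how a job might be submitted through Slurm, the script below requests all 16 cores of a single node and runs from $SCRATCH. The job name, time limit, and application (`./my_parallel_program`) are placeholders; partition, account, and site-specific defaults are not covered here and should be taken from the cluster's own documentation.

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue (placeholder)
#SBATCH --nodes=1                 # one compute node
#SBATCH --ntasks-per-node=16      # all 16 cores on the node
#SBATCH --mem=30G                 # stay within the 32 GB available per node
#SBATCH --time=01:00:00           # wall-clock limit (placeholder)
#SBATCH --output=%x-%j.out        # log file named after job name and job ID

# Work in $SCRATCH rather than $HOME, as recommended above
cd "$SCRATCH"

# Placeholder command; replace with the actual application
srun ./my_parallel_program
```

The script would be submitted with ''sbatch job.sh'' and its status checked with ''squeue -u $USER''.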