Slurm
Job Submission
Slurm jobs can be submitted through the Job Composer app in the Slurm tab. This provides a graphical alternative to submitting jobs from the command line or from environments such as JupyterLab.
Job script examples are available under Templates.
Requeueing is disabled in the Job Composer app. If a node is rebooted, the job must be restarted manually. You can explicitly enable requeueing by adding `--requeue` to your job script, provided the job supports safe restarts (e.g., it uses checkpoints and does not restart from scratch).
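A job script that opts into requeueing might look like the following sketch; the job name, checkpoint file, and application binary are placeholders, and the checkpoint logic is application-specific:

```bash
#!/bin/bash
#SBATCH --job-name=my_job     # hypothetical job name
#SBATCH --requeue             # allow Slurm to requeue the job after a node reboot
#SBATCH --time=08:00:00

# Resume from the latest checkpoint if one exists (illustrative only;
# real checkpoint handling depends on your application).
if [ -f checkpoint.dat ]; then
    echo "Resuming from checkpoint"
fi

srun ./my_program             # hypothetical application binary
```

Only enable `--requeue` if a restarted run picks up where it left off; otherwise a requeued job silently repeats completed work.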
GRES
Available GPU models for `--gres=gpu:<GPU_MODEL>:<NUM_GPUS>`:
| GPU model | VRAM |
| --- | --- |
| nvidia_geforce_rtx_2080_ti | 11 GB |
| nvidia_geforce_rtx_3070_ti | 8 GB |
| nvidia_geforce_rtx_3080 | 10 GB |
| nvidia_rtx_a4500 | 20 GB |
| nvidia_rtx_a5000 | 24 GB |
| nvidia_rtx_a6000 | 48 GB |
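For example, using model names from the table above (the commands themselves are illustrative):

```bash
# Interactive: allocate one RTX 3080 for the run
srun --gres=gpu:nvidia_geforce_rtx_3080:1 nvidia-smi

# In a batch script: request two A6000 GPUs
#SBATCH --gres=gpu:nvidia_rtx_a6000:2
```

Omitting the model (e.g. `--gres=gpu:1`) requests any available GPU.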
Partitions
Each user is assigned to exactly one partition based on their Unix group (`users`/`intbio`/`struct_biotech`).
Below are the default parameters and limits for each Slurm partition:
users (a.k.a. guests)
- Nodes: 9–12
- Allowed groups: users, students
- Max CPUs per node: 8
- Max memory per node: 16G
- Max nodes: 1
- Default time: 08:00:00
- Max time: 08:00:00
intbio
- Nodes: 1–6, 9–14
- Allowed groups: intbio
- Default time: 08:00:00
- Max time: 30-00:00:00
struct_biotech
- Nodes: 7–8
- Allowed groups: struct_biotech
- Default time: 08:00:00
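Requests that exceed a partition's limits are rejected, so job scripts should stay within these values. For instance, a member of `intbio` might request a five-day run (the time value is illustrative; the partition is normally assigned automatically based on your group):

```bash
#SBATCH --partition=intbio
#SBATCH --time=5-00:00:00   # within the 30-day maximum for intbio
```

Jobs that omit `--time` receive the partition's default time (08:00:00 in all cases above).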
Resource Visibility
Unlike Newton, Darwin enforces strict resource isolation via Slurm cgroups, so limits are not advisory: jobs cannot use CPU, memory, or GPU resources beyond what they requested. GPU devices outside the allocation are not visible at all (e.g., via nvidia-smi).
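A quick way to confirm this isolation is to list GPUs from inside an allocation; assuming one GPU was requested, only that single device should appear:

```bash
# Allocate one GPU and list visible devices from within the job
srun --gres=gpu:1 nvidia-smi -L
```

Running `nvidia-smi -L` on the same node outside the allocation would show all installed GPUs, while inside the job only the allocated device is listed.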
Darwin Docs