====== Cluster Resources ======
  
In our slurm configuration, both **cores** and **memory** are consumable resources, meaning that slurm keeps track of how many cores and how much memory have been allocated to jobs on each node. A node is considered full if either all of its cores are allocated or all of its memory is allocated.
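
As a quick way to see this bookkeeping (the node name below is just a placeholder), you can ask slurm how much of each node is currently allocated:

<code bash>
# Allocated/idle/other/total cores (%C) and configured memory in MB (%m) per node
sinfo -N -o "%N %C %m"

# Full detail for a single node, including allocated vs. total cores and memory
scontrol show node node001
</code>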
  
By default, each job is allocated 1 core and 8GB of memory per core (so 8GB total, since the default is 1 core).
  
You can change your allocation from the default using the **-c** (**--cpus-per-task**), **--exclusive**, **--mem**, and **--mem-per-cpu** options.
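
For example, a minimal batch script (the job and program names are placeholders) that asks for 4 cores and 32GB of memory in total might look like:

<code bash>
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --cpus-per-task=4   # same as -c 4
#SBATCH --mem=32G           # total memory for the job; --mem-per-cpu=8G would give the same total here

srun ./my_program
</code>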

If you use the **--exclusive** flag, slurm will allocate all cores on one node to your job, but unless otherwise specified it will still allocate only the default of 8GB per core. That may be less than the total memory of the node slurm gives you (e.g. on a node with 32 cores and 384GB of RAM, slurm would allocate only 256GB of that RAM to your job). So, if you want a whole node, you should probably use both **--exclusive** and **--mem=0** (which slurm treats as a request for all the memory on a node).
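
A sketch of such a whole-node request (again, the program name is a placeholder):

<code bash>
#!/bin/bash
#SBATCH --exclusive   # all cores on one node
#SBATCH --mem=0       # slurm treats 0 as "all memory on the node"

srun ./my_program
</code>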
  
In the man pages, generally, **CPU == core** because of this configuration choice. For example, the **-c** option mentioned above has the full name **--cpus-per-task**, but it really means cores per task in our configuration.
  
  