partitions [2025/03/20 13:02] (current) root updated GPU offerings - kmd

**Partition** == **Queue**

SLURM allows the definition of different partitions. Our cluster has 4 different partitions:
  * **standard**
    * The default queue. Contains the 31 standard compute nodes (at least 128GB RAM, at least 16 cores). No time limit.
  * **gpu**
    * Contains the GPU machines.
  * **bigmem**
    * Contains big memory machines only (512GB or 768GB, 32 cores). No time limit.
  * **short**
    * Contains all nodes, including gpu and bigmem, but has a 5 hour time limit, and jobs in this queue have a lower priority than jobs in the other queues.

**Standard**, **gpu**, and **bigmem** have no time limit.
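
A job is directed to one of these partitions with the standard ''--partition'' option of ''sbatch''. The sketch below is illustrative only (the script name, time request, and workload are assumptions, not taken from this page); it targets the **short** queue and requests a walltime within its 5 hour limit:

```shell
#!/bin/bash
#SBATCH --partition=short    # one of: standard, gpu, bigmem, short
#SBATCH --time=02:00:00      # example request; must stay under short's 5 hour limit
#SBATCH --ntasks=1

# Illustrative payload; replace with your actual workload.
srun hostname
```

Submit with ''sbatch job.sh''. Omitting ''--partition'' sends the job to the default **standard** queue; ''sinfo'' lists all partitions together with their time limits and node states.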