Differences

This shows you the differences between two versions of the page.

scheduling [2016/09/28 17:26]
root
scheduling [2020/03/11 17:13] (current)
root
Line 1:
 ====== Scheduling ======
  
-The cluster is running SLURM's default scheduler which is very (maybe too) simple.
+The cluster is running SLURM's scheduler with:
  
-  * first-in first-out queue.
+  * "Fair share" job priorities.
   * Simple round-robin node selection.
  
-The scheduler does not check which nodes are busy and try to avoid them.
+The "Fair Share" algorithm assigns priority to submitted jobs which is inversely dependent on the amount of cluster CPU time consumed by the submitting user in recent days. These priorities apply only to jobs waiting in the queue; they do not affect jobs which are already running.
-
-This has an advantage in that it tends to leave some nodes empty for people who want a whole node.
-
-You can use the **-w** option to select a specific node. Actually it asks for "at least" the nodes in the node list you specify. So a command like:
-
-<code>
-sbatch -n 20 -w node2 my_script
-</code>
-
-would get you some cores on node2 and some on another node (since there are only 16 cores total on node2). If there were no cores free on node2, the job would be queued until some became available.
-
-Note that using the **-w** option with multiple nodes is not a way of queueing jobs on just those nodes: it will actually allocate cores across all the nodes you specify and run the job on just the first of them, e.g.
-
-<code>
-sbatch -w node[2-4] my_script
-</code>
-
-would allocate one core on each of nodes 2, 3 and 4 and run my_script on node2. You would then use srun from within your script to run task steps within this allocation of cores. Don't use this to limit the nodes you want your jobs to run on.
-
-You can use the **-x** option to exclude (avoid) specific nodes. A list of node names looks like this:
-
-<code>
-node[1-4,7,11]
-</code>
-
-Read as "nodes 1 to 4, 7 and 11", i.e. 1, 2, 3, 4, 7 and 11.
-
-You can use **-c 16** to request all 16 cores on a (standard) node.
-
-You can use the **--exclusive** option to ask for exclusive access to all the nodes your job is allocated. This is especially useful if you have a program which attempts to use all the cores it finds. Please only use it if you need it.
  
 +The scheduler does not check which nodes are busy or try to avoid them. This has an advantage in that it tends to leave some nodes empty for people who need a whole node.
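
Since the scheduler will not steer jobs away from busy nodes, it can be worth checking current node usage yourself before picking nodes with **-w** or **-x**, for example (assuming the standard SLURM commands are available):

<code>
# allocated/idle/other/total CPUs on each node
sinfo -N -o "%N %C"

# jobs currently running on a particular node
squeue -w node2
</code>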
  
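To see how the "Fair Share" priorities described above affect your own jobs, the standard SLURM reporting commands can be used (a sketch; the exact output depends on the site configuration):

<code>
# priority factors (including fair share) of your pending jobs
sprio -u $USER

# your recent usage and fair-share factor
sshare -u $USER

# pending jobs, highest priority first
squeue -t PD --sort=-p
</code>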