Etiquette

The BRC Cluster is run largely without hard restrictions or limits, and that has worked well so far. The intent is that you can get a large amount of memory when you need it, or a large number of cores, or run jobs for a long time. We would like to keep running this way, so please follow some guidelines…

  • Do not bog down the head node!
  • You can run light tasks on the head node:
    • Editing of scripts.
    • Data preparation (unless it is computationally intensive).
    • Tests of your code (maybe on cut-down data sets).
    • Compilation.
  • Do not run long-running (multiple day) processes on the head node.
    • From time to time the head node will be rebooted to update the OS kernel. A reboot of the head node will not affect jobs running on the compute nodes, but will, of course, kill off any processes running on the head node itself.
    • (This isn't really an etiquette issue, just a warning. A required reboot of the head node will not be delayed for processes running on the head node.)
  • Try not to queue hundreds of jobs that will take up the entire cluster.
    • Use an “array job” with a throttle, which lets you cap how many of your tasks run at once (see the sketch after this list). This is the preferred technique.
    • Consider queuing the jobs on a small number of nodes (use -x to exclude some nodes).
    • Consider queuing a subset of your jobs at one time.
    • Write fancier scripts to control your jobs.
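    • A minimal sketch of a throttled array job (the script contents, file names, and resource values are illustrative placeholders, not BRC conventions):

          #!/bin/bash
          #SBATCH --array=1-200%20     # 200 tasks, at most 20 running at once
          #SBATCH --cpus-per-task=1
          #SBATCH --mem=4G
          #SBATCH --time=02:00:00

          # Each task picks its own input file via the array index.
          ./my_program "input_${SLURM_ARRAY_TASK_ID}.dat"

      Submit it with “sbatch array_job.sh”; the “%20” throttle is what keeps the job from occupying the whole cluster at once.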
  • Try not to take up a large proportion of the cluster.
    • If your jobs are really short then it's OK.
    • If they take a really long time, it definitely isn't OK.
    • And there's a grey area in the middle.
      • As a guideline, think twice before taking up more than 90 cores for a long time (multiple days).
      • Justification:
        • Usually there are 10-15 people running jobs on the cluster.
        • The cluster has 1000 cores (roughly).
        • So, the “fair share” per person is 60-100 cores.
  • Do not use the GPU and bigmem partitions (queues) unless you actually need those particular resources.
    • It is OK to use the nodes in these partitions by submitting jobs to the short partition (which contains all nodes). The short partition has a 5 hour limit on job run time.
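    • For example, a submission to the short partition might look like this (the script name and resource values are placeholders):

          sbatch -p short -t 05:00:00 --cpus-per-task=1 --mem=4G my_job.sh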
  • Don't expect to unzip (or zip) a large number of files more quickly by sending the unzip commands to multiple nodes. All that does is swamp the file server you are using, slowing things down for you and everyone else.
    • It will probably be just as fast to run the unzip commands sequentially, or at most two at a time.
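    • A sketch of running the unzips two at a time on a single node (assumes GNU xargs; the -0/-print0 pairing handles file names containing spaces):

          printf '%s\0' *.zip | xargs -0 -n 1 -P 2 unzip -q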
  • Don't leave interactive jobs running on a node when you are not actually interacting with them.
    • e.g. A shell started with “srun --pty bash -i”.
    • This “uses up” a core on the node and may prevent the node you are running on from being given to a user who needs a node in exclusive mode.
    • Similarly don't deliberately submit one job to each of the nodes that are currently free.
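    • One way to avoid leaving a session idle is to give the interactive allocation an explicit time limit, so the scheduler reclaims it automatically, e.g.:

          srun --time=02:00:00 --pty bash -i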
  • On the new cluster you must specify how many cores you want to use, and how much memory your job needs.
    • Try not to over-specify, i.e. don't ask for 50GB if your job only needs 5GB.
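    • A sketch of matching the request to actual need in a batch script (the values are placeholders; base them on your job's measured usage plus a small margin):

          #SBATCH --cpus-per-task=4    # cores the job actually uses
          #SBATCH --mem=5G             # measured peak memory plus a margin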
  • Use space on /scratch if you can.
    • If you are downloading data from an external database (e.g. NCBI) that can just be downloaded again if necessary, then put it on /scratch.
    • This reduces pressure on the amount of space on the other disk volumes, and on the backup system.
  • Check whether there is sufficient free space on the disk volume on which you are working before downloading or generating large amounts of data.
    • Filling up a disk volume will mean that your jobs will not complete properly (they will not be able to write output to disk), and may have the same effect on other users' jobs.
    • You can use the “df” command for this, e.g. df -H /home5.
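    • For example (“~/my_project” is a placeholder for your own directory):

          df -H /home5             # free space on the volume
          du -sh ~/my_project      # size of your own directory tree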
  • Consider using the HPC Center BRC queue.
    • 400 cores (with priority to BRC members).