Difference between revisions of "Using the PBS / Torque queueing environment"
From Centre for Bioinformatics and Computational Biology
Revision as of 12:11, 3 May 2018
The main commands for interacting with the Torque environment are:
> qstat
View queued jobs.
> qsub
Submit a job to the scheduler.
> qdel
Delete one of your jobs from the queue.
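Put together, a typical session looks like the following sketch (the script name and job ID are illustrative; these commands only work on the cluster head node):

```shell
# Submit a job script; on success qsub prints the new job's ID
qsub run_bowtie.sh

# List your own jobs and their states (Q = queued, R = running)
qstat -u $USER

# Remove a job from the queue by its ID if it is no longer needed
qdel 12345
```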
Job script parameters
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:
#PBS -l nodes=1:ppn=14
sets the size of the job in number of processors:
nodes=N sets the number of nodes needed.
ppn=N sets the number of cores per node.
#PBS -l walltime=8:00:00
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.
Example job scripts
A program using 14 cores on a single node:
#!/bin/bash
#PBS -l nodes=1:ppn=14
#PBS -l walltime=8:00:00
#PBS -q normal
#PBS -o /path/to/stdout.log
#PBS -e /path/to/stderr.log
#PBS -m ae
#PBS -M your.email@address

module load bowtie2-2.3.4.1

bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:
> qsub run_bowtie.sh
Interactive jobs
If you need an interactive terminal session on one of the servers (e.g. to compile code, set up jobs, or test jobs), use qsub's interactive mode, for example:
> qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00
The different queues available
Single node jobs
| Queue Name | Max user jobs running | Max user cores running per job | Max memory | Max walltime | Description |
|---|---|---|---|---|---|
| short | 6 | 28 | 128 GB | 00:05:00 | Short queue with 5 minute time limit |
| short | 28 | 2 | 64 GB | 01:00:00 | Short queue with 1 hour time limit |
| medium | 28 | 2 | 64 GB | 08:00:00 | Medium queue with 8 hour time limit |
| long | 28 | 1 | 64 GB | 48:00:00 | Long queue with 48 hour time limit |
| verylong | 14 | 1 | 64 GB | 96:00:00 | Very long queue with 96 hour time limit |
| bigmem | 24 | 2 | 256 GB | 48:00:00 | Large-memory queue (256 GB) with 48 hour time limit |
| massivemem | 24 | 1 | 1024 GB | 48:00:00 | Massive-memory queue (1024 GB) with 48 hour time limit |
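As an illustration of reading the table above, a sketch of a job script targeting the bigmem queue, staying within its limits of 2 cores per job and a 48 hour walltime (all paths and the program name are placeholders):

```shell
#!/bin/bash
#PBS -q bigmem
#PBS -l nodes=1:ppn=2
#PBS -l walltime=48:00:00
#PBS -o /path/to/stdout.log
#PBS -e /path/to/stderr.log

# Placeholder command; replace with your memory-hungry application
/path/to/your/program
```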
Multi-node jobs
| Queue Name | Max cores per node | Max nodes | Max memory | Max walltime | Description |
|---|---|---|---|---|---|
| wide | 28 | 2 | 128 GB | 24:00:00 | Queue to enable MPI-type jobs with up to 56 cores with 24 hour time limit |
| verywide | 28 | 4 | 128 GB | 08:00:00 | Queue to enable MPI-type jobs with up to 112 cores with 8 hour time limit |
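As a sketch, an MPI-type job on the wide queue (2 nodes × 28 cores = 56 cores) might look like the following; the module name and program path are placeholders and depend on the MPI installation available on the cluster:

```shell
#!/bin/bash
#PBS -q wide
#PBS -l nodes=2:ppn=28
#PBS -l walltime=24:00:00

# Placeholder module and program names; adapt to your MPI setup
module load openmpi
mpirun -np 56 /path/to/mpi_program
```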