Using the PBS / Torque queueing environment
The main commands for interacting with the Torque environment are:
- qstat: view queued jobs.
- qsub: submit a job to the scheduler.
- qdel: delete one of your jobs from the queue.
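For example, a typical session might look like this (the script name and job ID are illustrative):

> qstat
> qsub myjob.sh
> qdel 12345

Here qstat lists the current queue, qsub prints the ID assigned to the newly submitted job, and qdel removes the job with that ID.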
Job script parameters
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command (a command-line example follows this list). The essential options for the cluster include:
#PBS -l nodes=1:ppn=14
sets the size of the job as a number of processors:
- nodes=N sets the number of nodes needed.
- ppn=N sets the number of cores per node.
#PBS -l walltime=8:00:00
sets the total expected wall clock time as hours:minutes:seconds. Note the wall clock limits for each queue, listed in the table below.
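The same resource requests can be given directly on the qsub command line instead of as #PBS comments; command-line options take precedence if both are given. For example (the script name is illustrative):

> qsub -l nodes=1:ppn=14 -l walltime=8:00:00 myjob.sh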
Example job scripts
A program using 14 cores on a single node:
#!/bin/bash
# Resource requests: one node, 14 cores, 8 hour wall clock limit
#PBS -l nodes=1:ppn=14
#PBS -l walltime=8:00:00
# Submit to the normal queue
#PBS -q normal
# Write stdout and stderr to these files, and keep both (-k oe)
#PBS -o /path/to/stdout.log
#PBS -e /path/to/stderr.log
#PBS -k oe
# Email the given address when the job aborts or ends
#PBS -m ae
#PBS -M your.email@address

# Load the bowtie2 module; use the exact name/version shown by "module avail"
module load bowtie2

bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:
> qsub run_bowtie.sh
If you receive an email reporting exit status "0", the job generally completed successfully; a non-zero exit status indicates that the job, or a command within it, failed.
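While the job is waiting or running, you can check on it with qstat. For example (the job ID is hypothetical):

> qstat -u $USER
> qstat -f 12345

The first command lists all of your jobs with their states (Q = queued, R = running, C = completed); the second prints full details for a single job, including the resources it has consumed so far.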
- If you need an interactive terminal session on one of the servers (e.g. to compile code, set up jobs, or test jobs), use qsub's interactive mode, for example:
> qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00
The different queues available
| Queue name | Max user jobs running | Max cores per job | Max memory | Max walltime | Description |
|------------|-----------------------|-------------------|------------|--------------|-------------|
| short      | 6                     | 28                | 128 GB     | 00:05:00     | Short queue with 5 minute time limit |
| normal     | 4                     | 28                | 128 GB     | 08:00:00     | Medium queue with 8 hour time limit |
| long       | 1                     | 28                | 128 GB     | 168:00:00    | Long queue with one week time limit |
| bigmem     | 1                     | 24                | 750 GB     | 72:00:00     | High memory queue with 3 day time limit |
| mpi        | 1                     | 112               | 128 GB     | 72:00:00     | Queue for MPI parallel jobs with 3 day time limit |
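A job is directed to a particular queue with the -q option, as in the example script above. As a minimal sketch, a job for the bigmem queue might also request an explicit amount of memory; the figures below are illustrative and must fit within the queue limits:

# Request the high memory queue, 500 GB of memory and a 2 day limit
#PBS -q bigmem
#PBS -l mem=500gb
#PBS -l walltime=48:00:00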
- If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.
- Both MPICH and Open MPI are installed. Select the one you need with the "module load" command.
- The node list for MPI can be accessed as $PBS_NODEFILE, as used in the sketch below.
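As a minimal sketch, an MPI job script might look like the following. The module name (openmpi) and the program path are assumptions and will differ on your installation:

#!/bin/bash
# Request 4 nodes with 28 cores each (112 ranks, the mpi queue maximum)
#PBS -l nodes=4:ppn=28
#PBS -l walltime=24:00:00
#PBS -q mpi

# Module name is an assumption; check "module avail" for the exact name
module load openmpi

# Torque starts jobs in your home directory; move to the submission directory
cd $PBS_O_WORKDIR

# Launch one rank per allocated core, using the node list Torque provides
mpirun -np 112 -machinefile $PBS_NODEFILE /path/to/your_mpi_program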