<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://wiki.bi.up.ac.za/wiki/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://wiki.bi.up.ac.za/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Johann</id>
		<title>Centre for Bioinformatics and Computational Biology - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://wiki.bi.up.ac.za/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Johann"/>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Special:Contributions/Johann"/>
		<updated>2026-04-09T16:56:32Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.23.13</generator>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2023-05-10T14:16:46Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* nLustre quotas can be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -uqh &amp;lt;username&amp;gt; /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
Your quota on the home directories, ie: /home/&amp;lt;username&amp;gt; is 500 GB.&lt;br /&gt;
Please note that you are not allowed to store, and process your data from this directory. Its sole intent is for small documents and non-processed files.&lt;br /&gt;
&lt;br /&gt;
Your quota on the Lustre file system is 2 TB, with the following provisos:&lt;br /&gt;
&lt;br /&gt;
You have a total of 2 TB, but within /nlustre/users/&amp;lt;username&amp;gt;/scratch, you have unlimited storage for a certain period.&lt;br /&gt;
The scratch directories get purged periodically on certain dates, 3 times per year:&lt;br /&gt;
&lt;br /&gt;
April 30, 0h00.&lt;br /&gt;
August 31, 0h00.&lt;br /&gt;
December 31, 0h00.&lt;br /&gt;
&lt;br /&gt;
What remains under your directory structure, ie: /nlustre/users/&amp;lt;username&amp;gt;, excluding scratch, must be within your 2 TB quota.&lt;br /&gt;
Further, we will not backup anything under /nlustre/users/&amp;lt;username&amp;gt;/scratch.&lt;br /&gt;
&lt;br /&gt;
We will send only one reminder of the purge date when it approaches. Thereafter, one warning if you are out of quota. &lt;br /&gt;
Failure to comply, will get your account locked out.&lt;br /&gt;
&lt;br /&gt;
Its the user's responsibility to manage its data, and to keep things ordered, and within quota.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2023-02-08T05:42:33Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [http://linuxcommand.org/tlcl.php The Linux Command Line ]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas_and_Charges&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1EQQ9lTBi-Pyr6PKgq4eTak-0CU5y52Zg7jJ3LiNyfZQ/edit#gid=0 schedule] is currently available only to bioinformatics students, due to COVID social distancing restrictions.&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/home Admin Wiki]&lt;br /&gt;
== The Black Mamba ==&lt;br /&gt;
* [[About]]&lt;br /&gt;
* [[Joining the discussion]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2023-02-08T05:41:44Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [http://linuxcommand.org/tlcl.php The Linux Command Line ]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1EQQ9lTBi-Pyr6PKgq4eTak-0CU5y52Zg7jJ3LiNyfZQ/edit#gid=0 schedule] is currently available only to bioinformatics students, due to COVID social distancing restrictions.&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/home Admin Wiki]&lt;br /&gt;
== The Black Mamba ==&lt;br /&gt;
* [[About]]&lt;br /&gt;
* [[Joining the discussion]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2023-02-08T05:41:04Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [http://linuxcommand.org/tlcl.php The Linux Command Line ]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas_and_Charges&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1EQQ9lTBi-Pyr6PKgq4eTak-0CU5y52Zg7jJ3LiNyfZQ/edit#gid=0 schedule] is currently available only to bioinformatics students, due to COVID social distancing restrictions.&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/home Admin Wiki]&lt;br /&gt;
== The Black Mamba ==&lt;br /&gt;
* [[About]]&lt;br /&gt;
* [[Joining the discussion]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2023-02-08T05:39:26Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* nLustre quotas can be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -uqh &amp;lt;username&amp;gt; /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
Your quota on the home directories, ie: /home/&amp;lt;username&amp;gt; is 500 GB.&lt;br /&gt;
Please note that you are not allowed to store, and process your data from this directory. Its sole intent is for small documents and non-processed files.&lt;br /&gt;
&lt;br /&gt;
Your quota on the Lustre file system is 2 TB, with the followong provisos:&lt;br /&gt;
&lt;br /&gt;
You have a total of 2 TB, but within /nlustre/users/&amp;lt;username&amp;gt;/scratch, you have unlimited storage for a certain period.&lt;br /&gt;
The scratch directories get purged periodically on certain dates, 3 times per year:&lt;br /&gt;
&lt;br /&gt;
April 30, 0h00.&lt;br /&gt;
August 31, 0h00.&lt;br /&gt;
December 31, 0h00.&lt;br /&gt;
&lt;br /&gt;
What remains under your directory structure, ie: /nlustre/users/&amp;lt;username&amp;gt;, excluding scratch, must be within your 2 TB quota.&lt;br /&gt;
Further, we will not backup anything under /nlustre/users/&amp;lt;username&amp;gt;/scratch.&lt;br /&gt;
&lt;br /&gt;
We will send only one reminder of the purge date when it approaches. Thereafter, one warning if you are out of quota. &lt;br /&gt;
Failure to comply, will get your account locked out.&lt;br /&gt;
&lt;br /&gt;
Its the user's responsibility to manage its data, and to keep things ordered, and within quota.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2022-08-19T09:40:56Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [http://linuxcommand.org/tlcl.php The Linux Command Line ]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas_and_Charges&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas and Charges]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1mNpNbf6D752dAP3mVNYhEeVvXAcDzigRQVVCbCbwq5Q/edit#gid=0 schedule] is currently available only to bioinformatics students, due to COVID social distancing restrictions.&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/home Admin Wiki]&lt;br /&gt;
== The Black Mamba ==&lt;br /&gt;
* [[About]]&lt;br /&gt;
* [[Joining the discussion]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2022-08-19T08:37:27Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [http://linuxcommand.org/tlcl.php The Linux Command Line ]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas_and_Charges&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas and Charges]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1mNpNbf6D752dAP3mVNYhEeVvXAcDzigRQVVCbCbwq5Q/edit#gid=0 schedule] is currently available only to bioinformatics students, due to COVID social distancing restrictions.&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/home.html Admin Wiki]&lt;br /&gt;
== The Black Mamba ==&lt;br /&gt;
* [[About]]&lt;br /&gt;
* [[Joining the discussion]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2022-08-19T08:18:19Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [http://linuxcommand.org/tlcl.php The Linux Command Line ]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas_and_Charges&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas and Charges]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1mNpNbf6D752dAP3mVNYhEeVvXAcDzigRQVVCbCbwq5Q/edit#gid=0 schedule] is currently available only to bioinformatics students, due to COVID social distancing restrictions.&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/adminwiki.html Admin Wiki]&lt;br /&gt;
== The Black Mamba ==&lt;br /&gt;
* [[About]]&lt;br /&gt;
* [[Joining the discussion]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Logging_in_to_a_terminal_session</id>
		<title>Logging in to a terminal session</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Logging_in_to_a_terminal_session"/>
				<updated>2021-09-15T10:39:09Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* All our servers run Linux&lt;br /&gt;
** PLEASE NOTE, YOU'LL NEED A BASIC KNOWLEDGE OF LINUX TO INTERACT WITH OUR SERVER ENVIRONMENT.&lt;br /&gt;
** If you need an introduction to Linux, we would recommend: [http://linuxcommand.org The Linux Command Line].&lt;br /&gt;
* You would usually log into our head (login node). It is called wonko.bi.up.ac.za.&lt;br /&gt;
* Logging in directly to our compute servers is disabled. You need to run your jobs using the [http://wiki.bi.up.ac.za/wiki/index.php/Running_jobs_on_our_servers queueing] system. &lt;br /&gt;
* If you have a highly specific need to directly log in to one of the compute servers, please discuss it with our system administrator  (johann.swart at up.ac.za).&lt;br /&gt;
* An example from a Linux or Mac terminal session:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; ssh username@wonko.bi.up.ac.za&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
or if graphics forwarding is needed:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; ssh -X user@wonko.bi.up.ac.za&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* For Mac, [https://www.xquartz.org/ XQuartz] needs to be installed for graphics to work.&lt;br /&gt;
* On newer versions of MacOS, you need to use:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; ssh -Y user@wonko.bi.up.ac.za&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* If you are logging in from a Windows machine, you can use a terminal emulator such as [http://www.putty.org putty], [http://www.bitvise.com BitVise] or [https://ttssh2.osdn.jp TeraTerm].&lt;br /&gt;
** The hostname would be wonko.bi.up.ac.za, the user name would be the user name provided to you, and the authentication method would be password.&lt;br /&gt;
* If you need to use graphics from a Windows client, you can download [https://sourceforge.net/projects/xming XMing].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use</id>
		<title>Guidelines and Terms of use</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use"/>
				<updated>2021-09-15T10:35:40Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Please familiarise yourself thoroughly with the following:'''&lt;br /&gt;
* A basic knowledge of Linux is a requirement to interact with our server environment.&lt;br /&gt;
&lt;br /&gt;
* All users should use the PBS Torque queue manager for submitting jobs.&lt;br /&gt;
&lt;br /&gt;
* Please do '''not''' run jobs directly on the main headnode wonko.bi.up.ac.za.&lt;br /&gt;
&lt;br /&gt;
* Computing resources are managed by the queuing system.&lt;br /&gt;
&lt;br /&gt;
* Storage resources however, have user quotas, and are billed for, according to our billing policy (See [[Storage Quotas and Charges]]).&lt;br /&gt;
&lt;br /&gt;
* Both users, and their supervisors should familiarise themselves with the quota system, and the costs incurred.&lt;br /&gt;
&lt;br /&gt;
* Home directories - every user gets allocated 500 GB on the file server at /home/&amp;amp;lt;username&amp;amp;gt;. The purpose of this is to serve as a landing spot when you log in, and for storing documents and similar files. It is expressly '''not intended''' for research data.&lt;br /&gt;
&lt;br /&gt;
* Data directories - every user also gets allocated by default 1 TB on our fast Lustre storage at /nlustre/users/&amp;amp;lt;username&amp;amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* This is where your large files must go, and where you should process from, and store towards during processing.&lt;br /&gt;
&lt;br /&gt;
* This storage may '''not be used''' for backup of any other systems, PC's, laptops or data. You may also not store personal data such as movies, music files and photos here. Its intended for bona fide research activities on Bioinformatics servers only.&lt;br /&gt;
&lt;br /&gt;
* The home directories get backed up daily with a retention period of 2 weeks for multiple versions.&lt;br /&gt;
&lt;br /&gt;
* Due to its size, the Lustre directories get replicated to our replication storage once per week.&lt;br /&gt;
&lt;br /&gt;
* You must request installation of utilities and software applications from the systems admin, who will evaluate the best solution for you, and perform the installation system-wide. '''You may not install or attempt to install software of any kind on our servers,''' even under your own account (trying to use utilities like brew or conda will simply not work).&lt;br /&gt;
&lt;br /&gt;
* DISCLAIMER: Take note that whilst we do backups to the best of our ability of all servers and data, '''the onus of securing and ownership of your data remain with you, the user, and your supervisor. We do not accept responsibility for the loss of users' data or intellectual property.''' We do not have control over the stability and availability of reliable power supply to the facility. Hence unplanned outages may occur at any time, and this may cause data loss, even on our backup systems.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Logging_in_to_a_terminal_session</id>
		<title>Logging in to a terminal session</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Logging_in_to_a_terminal_session"/>
				<updated>2021-09-15T10:33:02Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* All our servers run Linux&lt;br /&gt;
** PLEASE NOTE, YOU'LL NEED A BASIC KNOWLEDGE OF LINUX TO INTERACT WITH OUR SERVER ENVIRONMENT&lt;br /&gt;
** If you need an introduction to Linux, we would recommend: [http://linuxcommand.org The Linux Command Line].&lt;br /&gt;
* You would usually log into our head (login node). It is called wonko.bi.up.ac.za.&lt;br /&gt;
* Logging in directly to our compute servers is disabled. You need to run your jobs using the [http://wiki.bi.up.ac.za/wiki/index.php/Running_jobs_on_our_servers queueing] system. &lt;br /&gt;
* If you have a highly specific need to directly log in to one of the compute servers, please discuss it with our system administrator  (johann.swart at up.ac.za).&lt;br /&gt;
* An example from a Linux or Mac terminal session:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; ssh username@wonko.bi.up.ac.za&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
or if graphics forwarding is needed:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; ssh -X user@wonko.bi.up.ac.za&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* For Mac, [https://www.xquartz.org/ XQuartz] needs to be installed for graphics to work.&lt;br /&gt;
* On newer versions of MacOS, you need to use:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; ssh -Y user@wonko.bi.up.ac.za&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* If you are logging in from a Windows machine, you can use a terminal emulator such as [http://www.putty.org putty], [http://www.bitvise.com BitVise] or [https://ttssh2.osdn.jp TeraTerm].&lt;br /&gt;
** The hostname would be wonko.bi.up.ac.za, the user name would be the user name provided to you, and the authentication method would be password.&lt;br /&gt;
* If you need to use graphics from a Windows client, you can download [https://sourceforge.net/projects/xming XMing].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2021-06-14T00:07:43Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout (do not specify a filename here, just the path)&lt;br /&gt;
#PBS -e /path/to/stderr (ditto)&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the cluster nodes (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q interactive&lt;br /&gt;
&amp;gt; This will give you a session on one of the cluster nodes (not bigmem), with a default walltime of 1 hour, 1 CPU,&lt;br /&gt;
&amp;gt; and 16GB of memory. You can specify walltime up to 8 hours, memory up to 32 GB.&lt;br /&gt;
&amp;gt; Please note the interactive mode is only for editing and testing scripts and code, not for running jobs. This should be done&lt;br /&gt;
&amp;gt; with regular job submissions to the queue.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|interactive&lt;br /&gt;
|4&lt;br /&gt;
|2 (default 1)&lt;br /&gt;
|32 (default 16 GB)&lt;br /&gt;
|8:00:00 (default 1 hr)&lt;br /&gt;
|Interactive queue&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2021-02-02T04:51:27Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* Interactive jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the cluster nodes (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q interactive&lt;br /&gt;
&amp;gt; This will give you a session on one of the cluster nodes (not bigmem), with a default walltime of 1 hour, 1 CPU,&lt;br /&gt;
&amp;gt; and 16GB of memory. You can specify walltime up to 8 hours, memory up to 32 GB.&lt;br /&gt;
&amp;gt; Please note the interactive mode is only for editing and testing scripts and code, not for running jobs. This should be done&lt;br /&gt;
&amp;gt; with regular job submissions to the queue.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|interactive&lt;br /&gt;
|4&lt;br /&gt;
|2 (default 1)&lt;br /&gt;
|32 (default 16 GB)&lt;br /&gt;
|8:00:00 (default 1 hr)&lt;br /&gt;
|Interactive queue&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2021-02-02T04:13:42Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* Interactive jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the cluster nodes (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q interactive&lt;br /&gt;
&amp;gt; This will give you a session on one of the cluster nodes (not bigmem), with a default walltime of 1 hour, 1 CPU,&lt;br /&gt;
&amp;gt; and 16GB of memory. You can specify walltime up to 8 hours, memory up to 64 GB.&lt;br /&gt;
&amp;gt; Please note the interactive mode is only for editing and testing scripts and code, not for running jobs. This should be done&lt;br /&gt;
&amp;gt; with regular job submissions to the queue.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|interactive&lt;br /&gt;
|4&lt;br /&gt;
|2 (default 1)&lt;br /&gt;
|32 (default 16 GB)&lt;br /&gt;
|8:00:00 (default 1 hr)&lt;br /&gt;
|Interactive queue&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2021-02-02T04:04:24Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* The different queues available */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the cluster nodes (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q interactive&lt;br /&gt;
&amp;gt; This will give you a session on one of the cluster nodes (not bigmem), with a default walltime of 1 hour, 1 CPU, and 16GB of memory. You can specify walltime up to 8 hours, memory up to 64 GB.&lt;br /&gt;
&amp;gt; Please note the interactive mode is only for editing and testing scripts and code, not for running jobs. This should be done with regular job submissions to the queue.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|interactive&lt;br /&gt;
|4&lt;br /&gt;
|2 (default 1)&lt;br /&gt;
|32 (default 16 GB)&lt;br /&gt;
|8:00:00 (default 1 hr)&lt;br /&gt;
|Interactive queue&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2021-02-02T03:57:46Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* Interactive jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the cluster nodes (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q interactive&lt;br /&gt;
&amp;gt; This will give you a session on one of the cluster nodes (not bigmem), with a default walltime of 1 hour, 1 CPU, and 16GB of memory. You can specify walltime up to 8 hours, memory up to 64 GB.&lt;br /&gt;
&amp;gt; Please note the interactive mode is only for editing and testing scripts and code, not for running jobs. This should be done with regular job submissions to the queue.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2021-02-02T03:57:23Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* Interactive jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the cluster nodes (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q interactive&lt;br /&gt;
&amp;gt; This will give you a session on one of the cluster nodes (not bigmem), with a default walltime of 1 hour, 1 CPU, and 16GB of memory. You can specify walltime up to 8 hours, memory up to 64 GB.&lt;br /&gt;
&amp;gt; Please note the interactive mode is only for editing and testing scripts and code, not for running jobs. This should be done with regular job submissions to the queue.&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Migrating_your_data_safely</id>
		<title>Migrating your data safely</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Migrating_your_data_safely"/>
				<updated>2021-01-13T20:10:00Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;br /&gt;
Moving large data sets can be problematic. It cannot be done with drag-and-drop operations, or even with the ‘mv’ command due to drops in network connectivity, for example.&lt;br /&gt;
&lt;br /&gt;
'''A much better and tried-and-tested workflow is as follows:'''&lt;br /&gt;
&lt;br /&gt;
* First, do a ‘cp’ with proper construction of the command line.&lt;br /&gt;
* Then follow that with one or preferably two ‘rsync’ operations.&lt;br /&gt;
* Only then can the source be deleted safely.&lt;br /&gt;
&lt;br /&gt;
The reason this works is because ‘cp’ is usually much faster than ‘rsync’, &amp;lt;br/&amp;gt;&lt;br /&gt;
and it gets the bulk of the data across. But it can fail for certain files, for many reasons (unbeknownst to you), &amp;lt;br/&amp;gt;&lt;br /&gt;
and leave a copy operation half way done.&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is where rsync does a good job, since it checks the source and destination for file version, date, and size, &amp;lt;br/&amp;gt;&lt;br /&gt;
and then only copies those files that are not up to date, or do not exist. When a subsequent second ‘rsync' is run, it will report no files copied,&amp;lt;br/&amp;gt;&lt;br /&gt;
and this is when one knows for sure the source and targets are identical. You can then safely proceed to delete the source.&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
&lt;br /&gt;
First, log onto one of our general-use servers such as Zoidberg (do not do this from our Wonko headnode please) :-&lt;br /&gt;
   # ssh &amp;lt;username&amp;gt;@zoidberg.bi.up.ac.za&lt;br /&gt;
&lt;br /&gt;
Please note that the command line switches need to be exactly as below :-&lt;br /&gt;
&lt;br /&gt;
   # tmux&lt;br /&gt;
   # cp -r -v --preserve=all /home/&amp;lt;username&amp;gt;/some_directory /nlustre/users/&amp;lt;username&amp;gt;&lt;br /&gt;
   # rsync -raH --progress /home/&amp;lt;username&amp;gt;/some_directory/* /nlustre/users/&amp;lt;username&amp;gt;/some_directory     ##### Note path specs are different&lt;br /&gt;
then again (just hit up-arrow) :-&lt;br /&gt;
   # rsync -raH --progress /home/&amp;lt;username&amp;gt;/some_directory/* /nlustre/users/&amp;lt;username&amp;gt;/some_directory&lt;br /&gt;
&lt;br /&gt;
'''Note: Always use full explicit paths when doing this kind of operation! Also - be careful when pasting code from browsers.'''&lt;br /&gt;
&lt;br /&gt;
You are done when rsync reports no further files copied. It won't say that explicitly, but you'll notice the total file sizes are the same, and no further file copy stats will be reported.&lt;br /&gt;
&lt;br /&gt;
Any or all of these operations can take many hours, even days - hence the use of ‘tmux’ to preserve the session.&lt;br /&gt;
If you are not familiar with tmux, a good tutorial can be found here:&lt;br /&gt;
https://gist.github.com/MohamedAlaa/2961058&lt;br /&gt;
&lt;br /&gt;
You do not need to learn all of tmux to use it - just the 3 or so basic commands will do.&lt;br /&gt;
&lt;br /&gt;
Then, finally, you can remove your source directories:-&lt;br /&gt;
&lt;br /&gt;
   # cd /home/&amp;lt;username&amp;gt;&lt;br /&gt;
   # rm -rf ./some_directory   '''##### PROCEED WITH EXTREME CAUTION! #####'''&lt;br /&gt;
&lt;br /&gt;
'''Double check your command line when using 'rm -rf’ before you hit enter.''' &lt;br /&gt;
Its effects are immediate, and irreversible, so mistakes will be costly.&lt;br /&gt;
&lt;br /&gt;
If all went well, you’ll end up with an exact copy of your source files, with all attributes, permissions and time stamps preserved, at the new destination.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use</id>
		<title>Guidelines and Terms of use</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use"/>
				<updated>2021-01-13T20:03:35Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Please familiarise yourself thoroughly with the following:'''&lt;br /&gt;
&lt;br /&gt;
* All users should use the PBS Torque queue manager for submitting jobs.&lt;br /&gt;
&lt;br /&gt;
* Please do '''not''' run jobs directly on the main headnode wonko.bi.up.ac.za.&lt;br /&gt;
&lt;br /&gt;
* Computing resources are managed by the queuing system.&lt;br /&gt;
&lt;br /&gt;
* Storage resources however, have user quotas, and are billed for, according to our billing policy (See [[Storage Quotas and Charges]]).&lt;br /&gt;
&lt;br /&gt;
* Both users, and their supervisors should familiarise themselves with the quota system, and the costs incurred.&lt;br /&gt;
&lt;br /&gt;
* Home directories - every user gets allocated 500 GB on the file server at /home/&amp;amp;lt;username&amp;amp;gt;. The purpose of this is to serve as a landing spot when you log in, and for storing documents and similar files. It is expressly '''not intended''' for research data.&lt;br /&gt;
&lt;br /&gt;
* Data directories - every user also gets allocated by default 1 TB on our fast Lustre storage at /nlustre/users/&amp;amp;lt;username&amp;amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* This is where your large files must go, and where you should process from, and store towards during processing.&lt;br /&gt;
&lt;br /&gt;
* This storage may '''not be used''' for backup of any other systems, PC's, laptops or data. You may also not store personal data such as movies, music files and photos here. Its intended for bona fide research activities on Bioinformatics servers only.&lt;br /&gt;
&lt;br /&gt;
* The home directories get backed up daily with a retention period of 2 weeks for multiple versions.&lt;br /&gt;
&lt;br /&gt;
* Due to its size, the Lustre directories get replicated to our replication storage once per week.&lt;br /&gt;
&lt;br /&gt;
* You must request installation of utilities and software applications from the systems admin, who will evaluate the best solution for you, and perform the installation system-wide. '''You may not install or attempt to install software of any kind on our servers,''' even under your own account (trying to use utilities like brew or conda will simply not work).&lt;br /&gt;
&lt;br /&gt;
* DISCLAIMER: Take note that whilst we do backups to the best of our ability of all servers and data, '''the onus of securing and ownership of your data remain with you, the user, and your supervisor. We do not accept responsibility for the loss of users' data or intellectual property.''' We do not have control over the stability and availability of reliable power supply to the facility. Hence unplanned outages may occur at any time, and this may cause data loss, even on our backup systems.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use</id>
		<title>Guidelines and Terms of use</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use"/>
				<updated>2021-01-13T20:01:52Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Please familiarise yourself thoroughly with the following:'''&lt;br /&gt;
&lt;br /&gt;
* All users should use the PBS Torque queue manager for submitting jobs.&lt;br /&gt;
&lt;br /&gt;
* Please do '''not''' run jobs directly on the main headnode wonko.bi.up.ac.za.&lt;br /&gt;
&lt;br /&gt;
* Computing resources are managed by the queuing system.&lt;br /&gt;
&lt;br /&gt;
* Storage resources however, have user quotas, and are billed for, according to our billing policy (See [[Storage Quotas and Charges]]).&lt;br /&gt;
&lt;br /&gt;
* Both users, and their supervisors should familiarise themselves with the quota system, and the costs incurred.&lt;br /&gt;
&lt;br /&gt;
* Home directories - every user gets allocated 500 GB on the file server at /home/&amp;amp;lt;username&amp;amp;gt;. The purpose of this is to serve as a landing spot when you log in, and for storing documents and similar files. It is expressly '''not intended''' for research data.&lt;br /&gt;
&lt;br /&gt;
* Data directories - every user also gets allocated by default 1 TB on our fast Lustre storage at /nlustre/users/&amp;amp;lt;username&amp;amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* This is where your large files must go, and where you should process from, and store towards during processing.&lt;br /&gt;
&lt;br /&gt;
* This storage may '''not be used''' for backup of any other systems, PC's, laptops or data. You may also not store personal data such as movies, music files and photos here. Its intended for bona fide research activities on Bioinformatics servers only.&lt;br /&gt;
&lt;br /&gt;
* The home directories get backed up daily with a retention period of 2 weeks for multiple versions.&lt;br /&gt;
&lt;br /&gt;
* Due to its size, the Lustre directories get replicated to our replication storage once per week.&lt;br /&gt;
&lt;br /&gt;
* You must request installation of utilities and software applications from the systems admin, who will evaluate the best solution for you, and perform the installation system-wide. '''You may not install or attempt to install software of any kind on our servers,''' even under your own account (trying to use utilities like brew or conda will simply not work).&lt;br /&gt;
&lt;br /&gt;
* DISCLAIMER: Take note that whilst we do backups to the best of our ability of all servers and data, '''the onus of securing and ownership of your data remain with you. We do not accept responsibility for the loss of users' data or intellectual property.''' We do not have control over the stability and availability of reliable power supply to the facility. Hence unplanned outages may occur at any time, and this may cause data loss, even on our backup systems.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use</id>
		<title>Guidelines and Terms of use</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use"/>
				<updated>2020-09-08T18:08:14Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Please familiarise yourself thoroughly with the following:'''&lt;br /&gt;
&lt;br /&gt;
* All users should use the PBS Torque queue manager for submitting jobs.&lt;br /&gt;
&lt;br /&gt;
* Please do '''not''' run jobs directly on the main headnode wonko.bi.up.ac.za.&lt;br /&gt;
&lt;br /&gt;
* Computing resources are managed by the queuing system.&lt;br /&gt;
&lt;br /&gt;
* Storage resources however, have user quotas, and are billed for, according to our billing policy (See [[Storage Quotas and Charges]]).&lt;br /&gt;
&lt;br /&gt;
* Both users, and their supervisors should familiarise themselves with the quota system, and the costs incurred.&lt;br /&gt;
&lt;br /&gt;
* Home directories - every user gets allocated 500 GB on the file server at /home/&amp;amp;lt;username&amp;amp;gt;. The purpose of this is to serve as a landing spot when you log in, and for storing documents and similar files. It is expressly '''not intended''' for research data.&lt;br /&gt;
&lt;br /&gt;
* Data directories - every user also gets allocated by default 1 TB on our fast Lustre storage at /nlustre/users/&amp;amp;lt;username&amp;amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* This is where your large files must go, and where you should process from, and store towards during processing.&lt;br /&gt;
&lt;br /&gt;
* This storage may '''not be used''' for backup of any other systems, PC's, laptops or data. You may also not store personal data such as movies, music files and photos here. Its intended for bona fide research activities on Bioinformatics servers only.&lt;br /&gt;
&lt;br /&gt;
* The home directories get backed up daily with a retention period of 2 weeks for multiple versions.&lt;br /&gt;
&lt;br /&gt;
* Due to its size, the Lustre directories get replicated to our replication storage once per week.&lt;br /&gt;
&lt;br /&gt;
* You must request installation of utilities and software applications from the systems admin, who will evaluate the best solution for you, and perform the installation system-wide. '''You may not install or attempt to install software of any kind on our servers,''' even under your own account (trying to use utilities like brew or conda will simply not work).&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use</id>
		<title>Guidelines and Terms of use</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Guidelines_and_Terms_of_use"/>
				<updated>2020-09-08T11:22:20Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Please familiarise yourself thoroughly with the following:'''&lt;br /&gt;
&lt;br /&gt;
* All users should use the PBS Torque queue manager for submitting jobs.&lt;br /&gt;
&lt;br /&gt;
* Please do '''not''' run jobs directly on the main headnode wonko.bi.up.ac.za.&lt;br /&gt;
&lt;br /&gt;
* Computing resources are managed by the queuing system.&lt;br /&gt;
&lt;br /&gt;
* Storage resources however, have user quotas, and are billed for, according to our billing policy (See [[Storage Quotas and Charges]]).&lt;br /&gt;
&lt;br /&gt;
* Both users, and their supervisors should familiarise themselves with the quota system, and the costs incurred.&lt;br /&gt;
&lt;br /&gt;
* Home directories - every user gets allocated 500 GB on the file server at /home/&amp;amp;lt;username&amp;amp;gt;. The purpose of this is to serve as a landing spot when you log in, and for storing documents and similar files. It is expressly '''not intended''' for research data.&lt;br /&gt;
&lt;br /&gt;
* Data directories - every user also gets allocated by default 1 TB on our fast Lustre storage at /nlustre/users/&amp;amp;lt;username&amp;amp;gt;. &lt;br /&gt;
&lt;br /&gt;
* This is where your large files must go, and where you should process from, and store towards during processing.&lt;br /&gt;
&lt;br /&gt;
* This storage may '''not be used''' for backup of any other systems, PC's, laptops or data. You may also not store personal data such as movies, music files and photos here. Its intended for bona fide research activities on Bioinformatics servers only.&lt;br /&gt;
&lt;br /&gt;
* The home directories get backed up daily with a retention period of 2 weeks for multiple versions.&lt;br /&gt;
&lt;br /&gt;
* Due to its size, the Lustre directories get replicated to our replication storage once per week.&lt;br /&gt;
&lt;br /&gt;
* You must request installation of utilities and software applications from the systems admin, who will evaluate the best solution for you, and perform the installation system-wide. You may not install or attempt to install software of any kind on our servers, even under your own account (trying to use utilities like brew or conda will simply not work).&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-07-11T17:56:56Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* The different queues available */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* PLEASE NOTE: The number of cores is sometimes referred to as number of threads in your software application. You should always set max number of threads/cores to the limits specified here (per queue).&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Hardware_resources</id>
		<title>Hardware resources</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Hardware_resources"/>
				<updated>2020-05-20T05:02:41Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* Compute servers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Head node ===&lt;br /&gt;
* The head node is wonko.bi.up.ac.za&lt;br /&gt;
&lt;br /&gt;
=== Compute servers===&lt;br /&gt;
* There are 10 x 28-core compute nodes with 128 GB RAM each (wonko1 - wonko10). These nodes constitute the short, normal and long queues.&lt;br /&gt;
* There is a 96-core SMP server with 3 TB of RAM (alf). This large multicore server takes care of the bigmem queue on its own.&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* Please also see [[Storage Quotas and Charges]]&lt;br /&gt;
* Home directories are on a 70 TB NFS4 filesystem over Infiniband (/home)&lt;br /&gt;
* User data directories are on a 1.2 PB Lustre filesystem over Infiniband (/nlustre/users)&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Hardware_resources</id>
		<title>Hardware resources</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Hardware_resources"/>
				<updated>2020-05-20T04:59:39Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* Compute servers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Head node ===&lt;br /&gt;
* The head node is wonko.bi.up.ac.za&lt;br /&gt;
&lt;br /&gt;
=== Compute servers===&lt;br /&gt;
* There are 10 x 28-core compute nodes with 128 GB RAM each (wonko1 - wonko10). These nodes constitute the short, normal and long queues.&lt;br /&gt;
* There is a 96-core SMP server with 3 TB of RAM (alf). This SMP server takes care of the bigmem queue on its own.&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
* Please also see [[Storage Quotas and Charges]]&lt;br /&gt;
* Home directories are on a 70 TB NFS4 filesystem over Infiniband (/home)&lt;br /&gt;
* User data directories are on a 1.2 PB Lustre filesystem over Infiniband (/nlustre/users)&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Obtaining_an_account</id>
		<title>Obtaining an account</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Obtaining_an_account"/>
				<updated>2020-03-18T01:32:11Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two main types of accounts:&lt;br /&gt;
* A Linux server login account (terminal-based). This is the default account type. You'll definitely need one of these.&lt;br /&gt;
* A Galaxy web-based account (more information about Galaxy is available [https://galaxyproject.org here]). If you don't know what Galaxy is, you'll probably not need one of these.&lt;br /&gt;
To apply for an account, please fill in the [https://docs.google.com/forms/d/1O7ld9hlnbQOhFXQFAqPwv2WjAN7GHN1Nu37Yiw4ssp8/viewform?edit_requested=true Account Application Form].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Obtaining_an_account</id>
		<title>Obtaining an account</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Obtaining_an_account"/>
				<updated>2020-03-18T01:31:37Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There are two main types of accounts:&lt;br /&gt;
* A Linux server login account (terminal-based). This is the default account type. You'll definitely need one of these.&lt;br /&gt;
* A Galaxy web-based account (more information about Galaxy is available [https://galaxyproject.org here]). If you don't know what Galaxy is, you'll probably not need one of these.&lt;br /&gt;
To apply for an account, please fill in the [https://docs.google.com/forms/d/1O7ld9hlnbQOhFXQFAqPwv2WjAN7GHN1Nu37Yiw4ssp8/viewform?edit_requested=true Account Application Form].&lt;br /&gt;
[https://docs.google.com/forms/d/1O7ld9hlnbQOhFXQFAqPwv2WjAN7GHN1Nu37Yiw4ssp8/viewform?edit_requested=true]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T05:55:33Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* The different queues available */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:30:00&lt;br /&gt;
|Short queue with 30 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T05:49:03Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* The different queues available */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:05:00&lt;br /&gt;
|Short queue with 5 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T05:00:49Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* The different queues available */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|112&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:05:00&lt;br /&gt;
|Short queue with 5 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T04:59:42Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: /* The different queues available */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:05:00&lt;br /&gt;
|Short queue with 5 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T04:44:41Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:05:00&lt;br /&gt;
|Short queue with 5 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|4&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|30:00:00&lt;br /&gt;
|Medium queue with 30 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|8&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|900:00:00&lt;br /&gt;
|Long queue with 37.5 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T02:12:43Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:05:00&lt;br /&gt;
|Short queue with 5 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|4&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|08:00:00&lt;br /&gt;
|Medium queue with 8 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|168:00:00&lt;br /&gt;
|Long queue with one week time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment</id>
		<title>Using the PBS / Torque queueing environment</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Using_the_PBS_/_Torque_queueing_environment"/>
				<updated>2020-02-06T02:12:10Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The main commands for interacting with the Torque environment are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qstat&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
View queued jobs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Submit a job to the scheduler.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qdel&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
Delete one of your jobs from queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Job script parameters ===&lt;br /&gt;
&lt;br /&gt;
Parameters for any job submission are specified as #PBS comments in the job script file or as options to the qsub command. The essential options for the cluster include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the size of the job in number of processors:&lt;br /&gt;
&lt;br /&gt;
nodes=N sets the number of nodes needed.&lt;br /&gt;
&lt;br /&gt;
ppn=N sets the number of cores per node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
sets the total expected wall clock time in hours:minutes:seconds. Note the wall clock limits for each queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Example job scripts ===&lt;br /&gt;
&lt;br /&gt;
A program using 14 cores on a single node:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#PBS -l nodes=1:ppn=14&lt;br /&gt;
#PBS -l walltime=8:00:00&lt;br /&gt;
#PBS -q normal&lt;br /&gt;
#PBS -o /path/to/stdout.log&lt;br /&gt;
#PBS -e /path/to/stderr.log&lt;br /&gt;
#PBS -k oe&lt;br /&gt;
#PBS -m ae&lt;br /&gt;
#PBS -M your.email@address&lt;br /&gt;
&lt;br /&gt;
module load bowtie2-2.3.4.1&lt;br /&gt;
bowtie2 -x /path/to/genome -p 14 -1 /path/to/forwardreads.fastq -2 /path/to/reversereads.fastq -S /path/to/outputfile.sam&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assuming the above job script is saved as the text file run_bowtie.sh, the command to submit it to the Torque scheduler is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub run_bowtie.sh&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you receive an email with exit status &amp;quot;0&amp;quot;, that would usually indicate that the job completed successfully.&lt;br /&gt;
&lt;br /&gt;
=== Interactive jobs ===&lt;br /&gt;
&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The different queues available ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Queue Name&lt;br /&gt;
!Max user jobs running &lt;br /&gt;
!Max user cores &amp;lt;br /&amp;gt; running per job&lt;br /&gt;
!Max memory&lt;br /&gt;
!Max walltime&lt;br /&gt;
!Description &lt;br /&gt;
|-&lt;br /&gt;
|short&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|00:05:00&lt;br /&gt;
|Short queue with 5 minute time limit&lt;br /&gt;
|-&lt;br /&gt;
|normal&lt;br /&gt;
|4&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|08:00:00&lt;br /&gt;
|Medium queue with 8 hour time limit&lt;br /&gt;
|-&lt;br /&gt;
|long&lt;br /&gt;
|6&lt;br /&gt;
|28&lt;br /&gt;
|128 GB&lt;br /&gt;
|168:00:00&lt;br /&gt;
|Long queue with one week time limit&lt;br /&gt;
|-&lt;br /&gt;
|bigmem&lt;br /&gt;
|3&lt;br /&gt;
|24&lt;br /&gt;
|750 GB&lt;br /&gt;
|72:00:00&lt;br /&gt;
|High memory queue with 30 day time limit&lt;br /&gt;
|-&lt;br /&gt;
|mpi&lt;br /&gt;
|1&lt;br /&gt;
|112&lt;br /&gt;
|128 GB&lt;br /&gt;
|720:00:00&lt;br /&gt;
|Queue for mpi parallel jobs with 3 day time limit&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;br /&gt;
** Both MPICH and openMPI are installed. Please select the relevant environment using the &amp;quot;module load&amp;quot; functionality.&lt;br /&gt;
** The node list for MPI can be accessed as $PBS_NODEFILE.&lt;br /&gt;
&lt;br /&gt;
=== Additional information ===&lt;br /&gt;
&lt;br /&gt;
* A useful Torque tutorial can be found at [https://kb.iu.edu/d/avmy this link].&lt;br /&gt;
* Detailed reference documentation for queue submission is available [http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/commands/qsub.htm here].&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/About</id>
		<title>About</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/About"/>
				<updated>2019-10-31T20:07:40Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Black Mamba is a discussion group hosted by the Centre for Bioinformatics and Computational Biology.&lt;br /&gt;
&lt;br /&gt;
The aim of the discussion group is to expose people to a wide variety of different topics ranging from biology to mathematics and computer sciences. Post-graduate studies often require individuals to focus on only certain topics relevant to their field of study. This creates a situation where many people have very specialised skill sets. With a trend towards cross disciplinary research it has become more important than ever that scientists have an understanding of not only their field of research but also surrounding fields.&lt;br /&gt;
&lt;br /&gt;
The group does not function on a student-teacher dynamic. Each week is chaired by the most knowledgeable person in the topic being discussed. This person's role is to influence and steer discussion, not necessarily teach the group.&lt;br /&gt;
&lt;br /&gt;
Open discussion is actively encouraged. With no one person knowing all there is to know about any topic it becomes important that everyone in the group actively interacts to ensure that topics are explored in as wide a scope as possible.&lt;br /&gt;
&lt;br /&gt;
'''Topics that have been discussed by the group:'''&lt;br /&gt;
* Git&lt;br /&gt;
* Databases and SQL&lt;br /&gt;
* Python basics&lt;br /&gt;
* Machine Learning&lt;br /&gt;
* Pipeline creation (nextflow and snakemake)&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2019-10-31T20:02:54Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pricing ===&lt;br /&gt;
&lt;br /&gt;
The price of a basic account is R 2,000 per year.&lt;br /&gt;
&lt;br /&gt;
'''PLEASE NOTE - NONE OF THE CHARGES MENTIONED HERE ARE RELATED TO ACTUAL OPERATIONAL COSTS.'''&amp;lt;br /&amp;gt;&lt;br /&gt;
'''THESE ARE MERELY TOKEN AMOUNTS WE NEED TO RAISE FROM USERS ACCORDING TO AN AGREEMENT MADE WITH TUKS EXECUTIVE.'''&amp;lt;br /&amp;gt;&lt;br /&gt;
'''AS AN EXAMPLE - JUST THE ELECTRICITY COSTS FOR THE SERVER ROOM ALONE AMOUNT TO MORE THAN R 80 000 PER MONTH.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* A web-based system for checking the different user and group storage quotas and charges is available at [http://storageadmin.bi.up.ac.za http://storageadmin.bi.up.ac.za].&lt;br /&gt;
* nLustre quotas can also be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -u username /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
A basic account (R 2,000 per year) provides:&lt;br /&gt;
&lt;br /&gt;
* 512 GB of storage space in /home/''user''&lt;br /&gt;
* 1 TB of storage space in /nlustre/''user''&lt;br /&gt;
* 512 GB of storage space in Galaxy&lt;br /&gt;
* Unlimited computational usage&lt;br /&gt;
&lt;br /&gt;
=== Upgrading storage space ===&lt;br /&gt;
* Storage space may be upgraded at R 1,000 per TB per year in the following steps:&lt;br /&gt;
** 5 TB total (add 4)&lt;br /&gt;
** 10 TB&lt;br /&gt;
** 20 TB or more&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2019-10-31T20:01:43Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pricing ===&lt;br /&gt;
&lt;br /&gt;
The price of a basic account is R 2,000 per year.&lt;br /&gt;
&lt;br /&gt;
'''PLEASE NOTE - NONE OF THE CHARGES MENTIONED HERE ARE RELATED TO ACTUAL OPERATIONAL COSTS.'''&lt;br /&gt;
'''THESE ARE MERELY TOKEN AMOUNTS WE NEED TO RAISE FROM USERS ACCORDING TO AN AGREEMENT MADE WITH TUKS EXECUTIVE.'''&lt;br /&gt;
'''AS AN EXAMPLE - JUST THE ELECTRICITY COSTS FOR THE SERVER ROOM ALONE AMOUNT TO MORE THAN R 80 000 PER MONTH.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* A web-based system for checking the different user and group storage quotas and charges is available at [http://storageadmin.bi.up.ac.za http://storageadmin.bi.up.ac.za].&lt;br /&gt;
* nLustre quotas can also be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -u username /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
A basic account (R 2,000 per year) provides:&lt;br /&gt;
&lt;br /&gt;
* 512 GB of storage space in /home/''user''&lt;br /&gt;
* 1 TB of storage space in /nlustre/''user''&lt;br /&gt;
* 512 GB of storage space in Galaxy&lt;br /&gt;
* Unlimited computational usage&lt;br /&gt;
&lt;br /&gt;
=== Upgrading storage space ===&lt;br /&gt;
* Storage space may be upgraded at R 1,000 per TB per year in the following steps:&lt;br /&gt;
** 5 TB total (add 4)&lt;br /&gt;
** 10 TB&lt;br /&gt;
** 20 TB or more&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2019-10-31T20:00:59Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pricing ===&lt;br /&gt;
&lt;br /&gt;
The price of a basic account is R 2,000 per year.&lt;br /&gt;
&lt;br /&gt;
'''PLEASE NOTE - NONE OF THE CHARGES MENTIONED HERE ARE RELATED TO ACTUAL OPERATIONAL COSTS. &lt;br /&gt;
THESE ARE MERELY TOKEN AMOUNTS WE NEED TO RAISE FROM USERS ACCORDING TO AN AGREEMENT MADE WITH TUKS EXECUTIVE. &lt;br /&gt;
AS AN EXAMPLE - JUST THE ELECTRICITY COSTS FOR THE SERVER ROOM ALONE AMOUNT TO MORE THAN R 80 000 PER MONTH.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* A web-based system for checking the different user and group storage quotas and charges is available at [http://storageadmin.bi.up.ac.za http://storageadmin.bi.up.ac.za].&lt;br /&gt;
* nLustre quotas can also be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -u username /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
A basic account (R 2,000 per year) provides:&lt;br /&gt;
&lt;br /&gt;
* 512 GB of storage space in /home/''user''&lt;br /&gt;
* 1 TB of storage space in /nlustre/''user''&lt;br /&gt;
* 512 GB of storage space in Galaxy&lt;br /&gt;
* Unlimited computational usage&lt;br /&gt;
&lt;br /&gt;
=== Upgrading storage space ===&lt;br /&gt;
* Storage space may be upgraded at R 1,000 per TB per year in the following steps:&lt;br /&gt;
** 5 TB total (add 4)&lt;br /&gt;
** 10 TB&lt;br /&gt;
** 20 TB or more&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2019-10-31T19:59:39Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pricing ===&lt;br /&gt;
&lt;br /&gt;
The price of a basic account is R 2,000 per year.&lt;br /&gt;
&lt;br /&gt;
** PLEASE NOTE - NONE OF THE CHARGES MENTIONED HERE ARE RELATED TO ACTUAL OPERATIONAL COSTS. &lt;br /&gt;
THESE ARE MERELY TOKEN AMOUNTS WE NEED TO RAISE FROM USERS ACCORDING TO AN AGREEMENT MADE WITH TUKS EXECUTIVE. &lt;br /&gt;
AS AN EXAMPLE - JUST THE ELECTRICITY COSTS FOR THE SERVER ROOM ALONE AMOUNT TO MORE THAN R 80 000 PER MONTH. **&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* A web-based system for checking the different user and group storage quotas and charges is available at [http://storageadmin.bi.up.ac.za http://storageadmin.bi.up.ac.za].&lt;br /&gt;
* nLustre quotas can also be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -u username /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
A basic account (R 2,000 per year) provides:&lt;br /&gt;
&lt;br /&gt;
* 512 GB of storage space in /home/''user''&lt;br /&gt;
* 1 TB of storage space in /nlustre/''user''&lt;br /&gt;
* 512 GB of storage space in Galaxy&lt;br /&gt;
* Unlimited computational usage&lt;br /&gt;
&lt;br /&gt;
=== Upgrading storage space ===&lt;br /&gt;
* Storage space may be upgraded at R 1,000 per TB per year in the following steps:&lt;br /&gt;
** 5 TB total (add 4)&lt;br /&gt;
** 10 TB&lt;br /&gt;
** 20 TB or more&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2019-10-31T19:58:07Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pricing ===&lt;br /&gt;
&lt;br /&gt;
The price of a basic account is R 2,000 per year.&lt;br /&gt;
&lt;br /&gt;
=== PLEASE NOTE - NONE OF THE CHARGES MENTIONED HERE ARE RELATED TO ACTUAL OPERATIONAL COSTS. ===&lt;br /&gt;
=== THESE ARE MERELY TOKEN AMOUNTS WE NEED TO RAISE FROM USERS ACCORDING TO AN AGREEMENT MADE WITH TUKS EXECUTIVE. ===&lt;br /&gt;
=== AS AN EXAMPLE - JUST THE ELECTRICITY COSTS FOR THE SERVER ROOM ALONE AMOUNT TO MORE THAN R 80 000 PER MONTH. ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* A web-based system for checking the different user and group storage quotas and charges is available at [http://storageadmin.bi.up.ac.za http://storageadmin.bi.up.ac.za].&lt;br /&gt;
* nLustre quotas can also be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -u username /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
A basic account (R 2,000 per year) provides:&lt;br /&gt;
&lt;br /&gt;
* 512 GB of storage space in /home/''user''&lt;br /&gt;
* 1 TB of storage space in /nlustre/''user''&lt;br /&gt;
* 512 GB of storage space in Galaxy&lt;br /&gt;
* Unlimited computational usage&lt;br /&gt;
&lt;br /&gt;
=== Upgrading storage space ===&lt;br /&gt;
* Storage space may be upgraded at R 1,000 per TB per year in the following steps:&lt;br /&gt;
** 5 TB total (add 4)&lt;br /&gt;
** 10 TB&lt;br /&gt;
** 20 TB or more&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges</id>
		<title>Storage Quotas and Charges</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Storage_Quotas_and_Charges"/>
				<updated>2019-10-31T19:55:20Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Pricing ===&lt;br /&gt;
&lt;br /&gt;
The price of a basic account is R 2,000 per year.&lt;br /&gt;
&lt;br /&gt;
PLEASE NOTE - NONE OF THE CHARGES MENTIONED HERE ARE RELATED TO ACTUAL OPERATIONAL COSTS. &lt;br /&gt;
THESE ARE MERELY TOKEN AMOUNTS WE NEED TO RAISE FROM USERS ACCORDING TO AN AGREEMENT MADE WITH TUKS EXECUTIVE.&lt;br /&gt;
AS AN EXAMPLE - JUST THE ELECTRICITY COSTS FOR THE SERVER ROOM ALONE AMOUNT TO MORE THAN R 80 000 PER MONTH.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Checking storage quotas ===&lt;br /&gt;
&lt;br /&gt;
* A web-based system for checking the different user and group storage quotas and charges is available at [http://storageadmin.bi.up.ac.za http://storageadmin.bi.up.ac.za].&lt;br /&gt;
* nLustre quotas can also be checked with:&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; lfs quota -u username /nlustre&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Storage ===&lt;br /&gt;
&lt;br /&gt;
A basic account (R 2,000 per year) provides:&lt;br /&gt;
&lt;br /&gt;
* 512 GB of storage space in /home/''user''&lt;br /&gt;
* 1 TB of storage space in /nlustre/''user''&lt;br /&gt;
* 512 GB of storage space in Galaxy&lt;br /&gt;
* Unlimited computational usage&lt;br /&gt;
&lt;br /&gt;
=== Upgrading storage space ===&lt;br /&gt;
* Storage space may be upgraded at R 1,000 per TB per year in the following steps:&lt;br /&gt;
** 5 TB total (add 4)&lt;br /&gt;
** 10 TB&lt;br /&gt;
** 20 TB or more&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Running_jobs_on_our_servers</id>
		<title>Running jobs on our servers</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Running_jobs_on_our_servers"/>
				<updated>2019-05-19T14:57:42Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* You cannot run jobs directly on the head node or compute servers, you should use the [http://wiki.bi.up.ac.za/wiki/index.php?title=Using_the_PBS_/_Torque_queueing_environment Torque / PBS queue environment] to run your jobs.&lt;br /&gt;
** If for some exceptional reason you need to run a job directly on one of the servers (e.g. a job using a Linux GUI environment, please discuss your needs with our system administrator (johann.swart at up.ac.za).&lt;br /&gt;
* If you need an interactive terminal session on one of the servers (e.g. to compile code, setup jobs, test jobs), you can do this by using the qsub interactive mode, for example:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;source lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
&amp;gt; qsub -I -q queue_name -l nodes=1:ppn=1 -l walltime=01:00:00&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you need to run MPI jobs, please advise the system administrator so that the necessary security access can be enabled for your login.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab</id>
		<title>Safety and security at the Lab</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab"/>
				<updated>2019-01-23T19:04:23Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We go to great lengths and expense to ensure the safety of our lab, equipment and inhabitants. All of this is rendered useless if someone leaves an access door propped open with a book, or something.&lt;br /&gt;
&lt;br /&gt;
* '''YOU ARE NOT ALLOWED TO LEAVE ANY DOORS OPEN AT ANY TIME OF THE DAY. PLEASE ENSURE AT ALL TIMES THAT ALL DOORS ARE MAG LOCKED.'''&lt;br /&gt;
* '''THE SEMINAR ROOM MUST BE LOCKED IF NOT IN USE.'''&lt;br /&gt;
* '''IF YOU ARE THE LAST ONE TO LEAVE AT THE END OF THE DAY, PLEASE MAKE SURE EVERYTHING IS LOCKED.'''&lt;br /&gt;
&lt;br /&gt;
Security is a trivial matter for most people, until their personal safety, or their own possessions are involved. We’ve had multiple security events in the past.&lt;br /&gt;
Do not take this lightly.&lt;br /&gt;
&lt;br /&gt;
Please, your cooperation will be greatly appreciated.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab</id>
		<title>Safety and security at the Lab</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab"/>
				<updated>2019-01-23T19:03:33Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We go to great lengths and expense to ensure the safety of our lab, equipment and inhabitants. All of this is rendered useless if someone leaves an access door propped open with a book, or something.&lt;br /&gt;
&lt;br /&gt;
'''YOU ARE NOT ALLOWED TO LEAVE ANY DOORS OPEN AT ANY TIME OF THE DAY. PLEASE ENSURE AT ALL TIMES THAT ALL DOORS ARE MAG LOCKED.'''&lt;br /&gt;
'''THE SEMINAR ROOM MUST BE LOCKED IF NOT IN USE.'''&lt;br /&gt;
'''IF YOU ARE THE LAST ONE TO LEAVE AT THE END OF THE DAY, PLEASE MAKE SURE EVERYTHING IS LOCKED.'''&lt;br /&gt;
&lt;br /&gt;
Security is a trivial matter for most people, until their personal safety, or their own possessions are involved. We’ve had multiple security events in the past.&lt;br /&gt;
Do not take this lightly.&lt;br /&gt;
&lt;br /&gt;
Please, your cooperation will be greatly appreciated.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab</id>
		<title>Safety and security at the Lab</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab"/>
				<updated>2019-01-23T19:02:50Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We go to great lengths and expense to ensure the safety of our lab, equipment and inhabitants. All of this is rendered useless if someone leaves an access door propped open with a book, or something.&lt;br /&gt;
&lt;br /&gt;
'''YOU ARE NOT ALLOWED TO LEAVE ANY DOORS OPEN AT ANY TIME OF THE DAY. PLEASE ENSURE AT ALL TIMES THAT ALL DOORS ARE MAG LOCKED.&lt;br /&gt;
THE SEMINAR ROOM MUST BE LOCKED IF NOT IN USE.&lt;br /&gt;
IF YOU ARE THE LAST ONE TO LEAVE AT THE END OF THE DAY, PLEASE MAKE SURE EVERYTHING IS LOCKED.'''&lt;br /&gt;
&lt;br /&gt;
Security is a trivial matter for most people, until their personal safety, or their own possessions are involved. We’ve had multiple security events in the past.&lt;br /&gt;
Do not take this lightly.&lt;br /&gt;
&lt;br /&gt;
Please, your cooperation will be greatly appreciated.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab</id>
		<title>Safety and security at the Lab</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Safety_and_security_at_the_Lab"/>
				<updated>2019-01-23T19:01:58Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: Created page with &amp;quot;We go to great lengths and expense to ensure the safety of our lab, equipment and inhabitants. All of this is rendered useless if someone leaves an access door propped open wi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We go to great lengths and expense to ensure the safety of our lab, equipment and inhabitants. All of this is rendered useless if someone leaves an access door propped open with a book, or something.&lt;br /&gt;
&lt;br /&gt;
'''YOU ARE NOT ALLOWED TO LEAVE ANY DOORS OPEN AT ANY TIME OF THE DAY. PLEASE ENSURE AT ALL TIMES THAT ALL DOORS ARE MAG LOCKED. THE SEMINAR ROOM MUST BE LOCKED IF NOT IN USE. IF YOU ARE THE LAST ONE TO LEAVE AT THE END OF THE DAY, PLEASE MAKE SURE EVERYTHING IS LOCKED.'''&lt;br /&gt;
&lt;br /&gt;
Security is a trivial matter for most people, until their personal safety, or their own possessions are involved. We’ve had multiple security events in the past.&lt;br /&gt;
Do not take this lightly.&lt;br /&gt;
&lt;br /&gt;
Please, your cooperation will be greatly appreciated.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/Main_Page"/>
				<updated>2019-01-23T19:01:07Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the Centre for Bioinformatics and Computational Biology!&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting started ==&lt;br /&gt;
* [[Obtaining an account]]&lt;br /&gt;
* [[Logging in to a terminal session]]&lt;br /&gt;
* [[Running jobs on our servers]]&lt;br /&gt;
* [[Using the PBS / Torque queueing environment]]&lt;br /&gt;
* [[Software resources]]&lt;br /&gt;
* [[Hardware resources]]&lt;br /&gt;
* [http://wiki.bi.up.ac.za/wiki/index.php?title=Storage_Quotas_and_Charges&amp;amp;action=edit&amp;amp;redlink=1 Storage Quotas and Charges]&lt;br /&gt;
* [[Transferring large quantities of data between institutions]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* The compute infrastructure load can be seen in the [http://wonko.bi.up.ac.za/ganglia/?c=unspecified&amp;amp;m=load_one&amp;amp;r=hour&amp;amp;s=by%20name&amp;amp;hc=4&amp;amp;mc=2 ganglia monitor]&lt;br /&gt;
* The Bioinformatics post-graduate lecture [https://docs.google.com/spreadsheets/d/1cPluvAnTzm9Wx-RFeisVo_OPFDMWoXWFdmySA4wEitg/edit?usp=sharing schedule] is available for any interested students from other departments to attend. Lectures are on Tuesdays and Thursdays at 10:00 in FABI Square 3-26&lt;br /&gt;
* [[Migrating your data safely]]&lt;br /&gt;
* [[File and directory permissions and ownership]]&lt;br /&gt;
* [[Guidelines and Terms of use]]&lt;br /&gt;
* [[Safety and security at the Lab]]&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/File_and_directory_permissions_and_ownership</id>
		<title>File and directory permissions and ownership</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/File_and_directory_permissions_and_ownership"/>
				<updated>2018-12-15T10:05:27Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;90% of problems with scripts and file handling are due to incorrect file permission and ownership settings. Every system user should familiarise themselves with this important aspect of Linux.&lt;br /&gt;
&lt;br /&gt;
Before a new script can be run, its execute permission must be set by the user. This is by default unset for safety, and causes problems every day.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When a new file tst.sh is created, it will typically have the following default permissions:&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ ls -al tst.sh&lt;br /&gt;
&lt;br /&gt;
-rw-r--r-- 1 user users 3 Dec 15 11:58 tst.sh&lt;br /&gt;
&lt;br /&gt;
This file cannot be executed or submitted to the queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
By changing user permissions and setting the execute flag:&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ chmod u+x tst.sh&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ ls -al tst.sh&lt;br /&gt;
&lt;br /&gt;
-rwxr--r-- 1 user users 3 Dec 15 11:58 tst.sh&lt;br /&gt;
&lt;br /&gt;
this script can now be run or submiited.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Permission Groups -:'''&lt;br /&gt;
&lt;br /&gt;
Each file and directory has three user based permission groups:&lt;br /&gt;
&lt;br /&gt;
'''owner -''' The Owner permissions apply only the owner of the file or directory, they will not impact the actions of other users.&lt;br /&gt;
&lt;br /&gt;
'''group -''' The Group permissions apply only to the group that has been assigned to the file or directory, they will not effect the actions of other users.&lt;br /&gt;
&lt;br /&gt;
'''all users (world, others) -''' The All Users permissions apply to all other users on the system, this is the permission group that you want to watch the most. There's usually no reason to have this set, and for safety we recommend leaving this off.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Permission Types -: '''&lt;br /&gt;
&lt;br /&gt;
Each file or directory has three basic permission types:&lt;br /&gt;
&lt;br /&gt;
'''read -''' The Read permission refers to a user's capability to read the contents of the file.&lt;br /&gt;
&lt;br /&gt;
'''write -''' The Write permissions refer to a user's capability to write or modify a file or directory.&lt;br /&gt;
&lt;br /&gt;
'''execute -''' The Execute permission affects a user's capability to execute a file or view the contents of a directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Viewing the Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
You can view the permissions by checking the file or directory permissions by reviewing the output of the &amp;quot;ls -al&amp;quot; command while in the terminal and while working in the directory which contains the file or folder.&lt;br /&gt;
&lt;br /&gt;
The permission in the command line is displayed as: _ rwxrwxrwx 1 &amp;lt;owner&amp;gt;:&amp;lt;group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''User rights/Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
The first character that I marked with an underscore is the special permission flag that can vary.&lt;br /&gt;
The following set of three characters (rwx) is for the owner permissions.&lt;br /&gt;
The second set of three characters (rwx) is for the Group permissions.&lt;br /&gt;
The third set of three characters (rwx) is for the All Users permissions.&lt;br /&gt;
Following that grouping since the integer/number displays the number of hardlinks to the file.&lt;br /&gt;
The last piece is the Owner and Group assignment formatted as Owner:Group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Modifying the Permissions-:'''&lt;br /&gt;
&lt;br /&gt;
When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Explicitly Defining Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
To explicity define permissions you will need to reference the Permission Group and Permission Types.&lt;br /&gt;
&lt;br /&gt;
The Permission Groups used are:&lt;br /&gt;
u - Owner&lt;br /&gt;
g - Group&lt;br /&gt;
o - Others&lt;br /&gt;
a - All users&lt;br /&gt;
The potential Assignment Operators are + (plus) and - (minus); these are used to tell the system whether to add or remove the specific permissions.&lt;br /&gt;
&lt;br /&gt;
The Permission Types that are used are:&lt;br /&gt;
r - Read&lt;br /&gt;
w - Write&lt;br /&gt;
x - Execute&lt;br /&gt;
&lt;br /&gt;
So for an example, lets say I have a file named file1 that currently has the permissions set to rw_rw_rw_, which means that the owner, group and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.&lt;br /&gt;
&lt;br /&gt;
To make this modification you would invoke the command: chmod a-rw file1&lt;br /&gt;
To add the permissions above you would invoke the command: chmod a+rw file1&lt;br /&gt;
&lt;br /&gt;
As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.&lt;br /&gt;
&lt;br /&gt;
Using Binary References to Set permissions&lt;br /&gt;
Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three octal integers/numbers.&lt;br /&gt;
&lt;br /&gt;
A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other user have no rights to the file.&lt;br /&gt;
&lt;br /&gt;
The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.&lt;br /&gt;
&lt;br /&gt;
r = 4&lt;br /&gt;
w = 2&lt;br /&gt;
x = 1&lt;br /&gt;
You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.&lt;br /&gt;
&lt;br /&gt;
So to set a file to permissions on file1 to read rwxr_ _ _ _ _, you would enter chmod 740 file1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Owners and Groups -:'''&lt;br /&gt;
&lt;br /&gt;
File and directory ownership is usually set by the system defaults, or by the systems administrator, and will be of the form &amp;lt;username&amp;gt;:&amp;lt;primary group&amp;gt;, where the group will typically be &amp;quot;users&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory. There's usually no need to change these, just take note that they are properly set.&lt;br /&gt;
&lt;br /&gt;
You use the chown command to change owner and group assignments, the syntax is simple chown owner:group filename, so to change the owner of file1 to user1 and the group to family you would enter chown user1:family file1.&lt;br /&gt;
&lt;br /&gt;
'''One more thing - you should be very careful when copy/pasting code from web pages. They usually do not translate all characters accurately or correctly from web to terminal and can lead to unexpected behaviour and bugs.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above just touches the tip of the iceberg on this topic, and you really should consult one of the many websites dealing with the subject in detail.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/File_and_directory_permissions_and_ownership</id>
		<title>File and directory permissions and ownership</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/File_and_directory_permissions_and_ownership"/>
				<updated>2018-12-15T10:04:59Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;90% of problems with scripts and file handling are due to incorrect file permission and ownership settings. Every system user should familiarise themselves with this important aspect of Linux.&lt;br /&gt;
&lt;br /&gt;
Before a new script can be run, its execute permission must be set by the user. This is by default unset for safety, and causes problems every day.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When a new file tst.sh is created, it will typically have the following default permissions:&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ ls -al tst.sh&lt;br /&gt;
&lt;br /&gt;
-rw-r--r-- 1 user users 3 Dec 15 11:58 tst.sh&lt;br /&gt;
&lt;br /&gt;
This file cannot be executed or submitted to the queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
By changing user permissions and setting the execute flag:&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ chmod u+x tst.sh&lt;br /&gt;
[user@wonko:~]$ ls -al tst.sh&lt;br /&gt;
&lt;br /&gt;
-rwxr--r-- 1 user users 3 Dec 15 11:58 tst.sh&lt;br /&gt;
&lt;br /&gt;
this script can now be run or submiited.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Permission Groups -:'''&lt;br /&gt;
&lt;br /&gt;
Each file and directory has three user based permission groups:&lt;br /&gt;
&lt;br /&gt;
'''owner -''' The Owner permissions apply only the owner of the file or directory, they will not impact the actions of other users.&lt;br /&gt;
&lt;br /&gt;
'''group -''' The Group permissions apply only to the group that has been assigned to the file or directory, they will not effect the actions of other users.&lt;br /&gt;
&lt;br /&gt;
'''all users (world, others) -''' The All Users permissions apply to all other users on the system, this is the permission group that you want to watch the most. There's usually no reason to have this set, and for safety we recommend leaving this off.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Permission Types -: '''&lt;br /&gt;
&lt;br /&gt;
Each file or directory has three basic permission types:&lt;br /&gt;
&lt;br /&gt;
'''read -''' The Read permission refers to a user's capability to read the contents of the file.&lt;br /&gt;
&lt;br /&gt;
'''write -''' The Write permissions refer to a user's capability to write or modify a file or directory.&lt;br /&gt;
&lt;br /&gt;
'''execute -''' The Execute permission affects a user's capability to execute a file or view the contents of a directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Viewing the Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
You can view the permissions by checking the file or directory permissions by reviewing the output of the &amp;quot;ls -al&amp;quot; command while in the terminal and while working in the directory which contains the file or folder.&lt;br /&gt;
&lt;br /&gt;
The permission in the command line is displayed as: _ rwxrwxrwx 1 &amp;lt;owner&amp;gt;:&amp;lt;group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''User rights/Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
The first character that I marked with an underscore is the special permission flag that can vary.&lt;br /&gt;
The following set of three characters (rwx) is for the owner permissions.&lt;br /&gt;
The second set of three characters (rwx) is for the Group permissions.&lt;br /&gt;
The third set of three characters (rwx) is for the All Users permissions.&lt;br /&gt;
Following that grouping since the integer/number displays the number of hardlinks to the file.&lt;br /&gt;
The last piece is the Owner and Group assignment formatted as Owner:Group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Modifying the Permissions-:'''&lt;br /&gt;
&lt;br /&gt;
When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Explicitly Defining Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
To explicity define permissions you will need to reference the Permission Group and Permission Types.&lt;br /&gt;
&lt;br /&gt;
The Permission Groups used are:&lt;br /&gt;
u - Owner&lt;br /&gt;
g - Group&lt;br /&gt;
o - Others&lt;br /&gt;
a - All users&lt;br /&gt;
The potential Assignment Operators are + (plus) and - (minus); these are used to tell the system whether to add or remove the specific permissions.&lt;br /&gt;
&lt;br /&gt;
The Permission Types that are used are:&lt;br /&gt;
r - Read&lt;br /&gt;
w - Write&lt;br /&gt;
x - Execute&lt;br /&gt;
&lt;br /&gt;
So for an example, lets say I have a file named file1 that currently has the permissions set to rw_rw_rw_, which means that the owner, group and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.&lt;br /&gt;
&lt;br /&gt;
To make this modification you would invoke the command: chmod a-rw file1&lt;br /&gt;
To add the permissions above you would invoke the command: chmod a+rw file1&lt;br /&gt;
&lt;br /&gt;
As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.&lt;br /&gt;
&lt;br /&gt;
Using Binary References to Set permissions&lt;br /&gt;
Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three octal integers/numbers.&lt;br /&gt;
&lt;br /&gt;
A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other user have no rights to the file.&lt;br /&gt;
&lt;br /&gt;
The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.&lt;br /&gt;
&lt;br /&gt;
r = 4&lt;br /&gt;
w = 2&lt;br /&gt;
x = 1&lt;br /&gt;
You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.&lt;br /&gt;
&lt;br /&gt;
So to set a file to permissions on file1 to read rwxr_ _ _ _ _, you would enter chmod 740 file1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Owners and Groups -:'''&lt;br /&gt;
&lt;br /&gt;
File and directory ownership is usually set by the system defaults, or by the systems administrator, and will be of the form &amp;lt;username&amp;gt;:&amp;lt;primary group&amp;gt;, where the group will typically be &amp;quot;users&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory. There's usually no need to change these, just take note that they are properly set.&lt;br /&gt;
&lt;br /&gt;
You use the chown command to change owner and group assignments, the syntax is simple chown owner:group filename, so to change the owner of file1 to user1 and the group to family you would enter chown user1:family file1.&lt;br /&gt;
&lt;br /&gt;
'''One more thing - you should be very careful when copy/pasting code from web pages. They usually do not translate all characters accurately or correctly from web to terminal and can lead to unexpected behaviour and bugs.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above just touches the tip of the iceberg on this topic, and you really should consult one of the many websites dealing with the subject in detail.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	<entry>
		<id>http://wiki.bi.up.ac.za/wiki/index.php/File_and_directory_permissions_and_ownership</id>
		<title>File and directory permissions and ownership</title>
		<link rel="alternate" type="text/html" href="http://wiki.bi.up.ac.za/wiki/index.php/File_and_directory_permissions_and_ownership"/>
				<updated>2018-12-15T10:04:24Z</updated>
		
		<summary type="html">&lt;p&gt;Johann: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;90% of problems with scripts and file handling are due to incorrect file permission and ownership settings. Every system user should familiarise themselves with this important aspect of Linux.&lt;br /&gt;
&lt;br /&gt;
Before a new script can be run, its execute permission must be set by the user. This is by default unset for safety, and causes problems every day.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When a new file tst.sh is created, it will typically have the following default permissions:&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ ls -al tst.sh&lt;br /&gt;
-rw-r--r-- 1 user users 3 Dec 15 11:58 tst.sh&lt;br /&gt;
&lt;br /&gt;
This file cannot be executed or submitted to the queue.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
By changing user permissions and setting the execute flag:&lt;br /&gt;
&lt;br /&gt;
[user@wonko:~]$ chmod u+x tst.sh&lt;br /&gt;
[user@wonko:~]$ ls -al tst.sh&lt;br /&gt;
-rwxr--r-- 1 user users 3 Dec 15 11:58 tst.sh&lt;br /&gt;
&lt;br /&gt;
this script can now be run or submiited.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Permission Groups -:'''&lt;br /&gt;
&lt;br /&gt;
Each file and directory has three user based permission groups:&lt;br /&gt;
&lt;br /&gt;
'''owner -''' The Owner permissions apply only the owner of the file or directory, they will not impact the actions of other users.&lt;br /&gt;
&lt;br /&gt;
'''group -''' The Group permissions apply only to the group that has been assigned to the file or directory, they will not effect the actions of other users.&lt;br /&gt;
&lt;br /&gt;
'''all users (world, others) -''' The All Users permissions apply to all other users on the system, this is the permission group that you want to watch the most. There's usually no reason to have this set, and for safety we recommend leaving this off.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Permission Types -: '''&lt;br /&gt;
&lt;br /&gt;
Each file or directory has three basic permission types:&lt;br /&gt;
&lt;br /&gt;
'''read -''' The Read permission refers to a user's capability to read the contents of the file.&lt;br /&gt;
&lt;br /&gt;
'''write -''' The Write permissions refer to a user's capability to write or modify a file or directory.&lt;br /&gt;
&lt;br /&gt;
'''execute -''' The Execute permission affects a user's capability to execute a file or view the contents of a directory.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Viewing the Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
You can view the permissions by checking the file or directory permissions by reviewing the output of the &amp;quot;ls -al&amp;quot; command while in the terminal and while working in the directory which contains the file or folder.&lt;br /&gt;
&lt;br /&gt;
The permission in the command line is displayed as: _ rwxrwxrwx 1 &amp;lt;owner&amp;gt;:&amp;lt;group&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''User rights/Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
The first character that I marked with an underscore is the special permission flag that can vary.&lt;br /&gt;
The following set of three characters (rwx) is for the owner permissions.&lt;br /&gt;
The second set of three characters (rwx) is for the Group permissions.&lt;br /&gt;
The third set of three characters (rwx) is for the All Users permissions.&lt;br /&gt;
Following that grouping since the integer/number displays the number of hardlinks to the file.&lt;br /&gt;
The last piece is the Owner and Group assignment formatted as Owner:Group.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Modifying the Permissions-:'''&lt;br /&gt;
&lt;br /&gt;
When in the command line, the permissions are edited by using the command chmod. You can assign the permissions explicitly or by using a binary reference as described below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Explicitly Defining Permissions -:'''&lt;br /&gt;
&lt;br /&gt;
To explicity define permissions you will need to reference the Permission Group and Permission Types.&lt;br /&gt;
&lt;br /&gt;
The Permission Groups used are:&lt;br /&gt;
u - Owner&lt;br /&gt;
g - Group&lt;br /&gt;
o - Others&lt;br /&gt;
a - All users&lt;br /&gt;
The potential Assignment Operators are + (plus) and - (minus); these are used to tell the system whether to add or remove the specific permissions.&lt;br /&gt;
&lt;br /&gt;
The Permission Types that are used are:&lt;br /&gt;
r - Read&lt;br /&gt;
w - Write&lt;br /&gt;
x - Execute&lt;br /&gt;
&lt;br /&gt;
So for an example, lets say I have a file named file1 that currently has the permissions set to rw_rw_rw_, which means that the owner, group and all users have read and write permission. Now we want to remove the read and write permissions from the all users group.&lt;br /&gt;
&lt;br /&gt;
To make this modification you would invoke the command: chmod a-rw file1&lt;br /&gt;
To add the permissions above you would invoke the command: chmod a+rw file1&lt;br /&gt;
&lt;br /&gt;
As you can see, if you want to grant those permissions you would change the minus character to a plus to add those permissions.&lt;br /&gt;
&lt;br /&gt;
Using Binary References to Set permissions&lt;br /&gt;
Now that you understand the permissions groups and types this one should feel natural. To set the permission using binary references you must first understand that the input is done by entering three octal integers/numbers.&lt;br /&gt;
&lt;br /&gt;
A sample permission string would be chmod 640 file1, which means that the owner has read and write permissions, the group has read permissions, and all other user have no rights to the file.&lt;br /&gt;
&lt;br /&gt;
The first number represents the Owner permission; the second represents the Group permissions; and the last number represents the permissions for all other users. The numbers are a binary representation of the rwx string.&lt;br /&gt;
&lt;br /&gt;
r = 4&lt;br /&gt;
w = 2&lt;br /&gt;
x = 1&lt;br /&gt;
You add the numbers to get the integer/number representing the permissions you wish to set. You will need to include the binary permissions for each of the three permission groups.&lt;br /&gt;
&lt;br /&gt;
So to set a file to permissions on file1 to read rwxr_ _ _ _ _, you would enter chmod 740 file1.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Owners and Groups -:'''&lt;br /&gt;
&lt;br /&gt;
File and directory ownership is usually set by the system defaults, or by the systems administrator, and will be of the form &amp;lt;username&amp;gt;:&amp;lt;primary group&amp;gt;, where the group will typically be &amp;quot;users&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
I have made several references to Owners and Groups above, but have not yet told you how to assign or change the Owner and Group assigned to a file or directory. There's usually no need to change these, just take note that they are properly set.&lt;br /&gt;
&lt;br /&gt;
You use the chown command to change owner and group assignments, the syntax is simple chown owner:group filename, so to change the owner of file1 to user1 and the group to family you would enter chown user1:family file1.&lt;br /&gt;
&lt;br /&gt;
'''One more thing - you should be very careful when copy/pasting code from web pages. They usually do not translate all characters accurately or correctly from web to terminal and can lead to unexpected behaviour and bugs.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The above just touches the tip of the iceberg on this topic, and you really should consult one of the many websites dealing with the subject in detail.&lt;/div&gt;</summary>
		<author><name>Johann</name></author>	</entry>

	</feed>