Check out the Frequently Asked Questions!

Shaheen III#

If you are familiar with HPC clusters and need a quick reference on how to interact with KSL computational resources, you will find the relevant information here in concise form. For details, please explore the other sections of the documentation, starting from System Architecture.

Login#

To log in, you need to SSH into a login node. An SSH client should be installed on your workstation/laptop. If you use macOS or Linux, open the Terminal application and paste the command below, replacing username with your Shaheen III username. On Windows, you will need an application with an SSH client installed; please follow the instructions in this video tutorial. When logging in to Shaheen III, replace the hostname with shaheen.hpc.kaust.edu.sa when following the steps prescribed in the tutorial.

Logging into Shaheen III#

Shaheen III has a total of 5 login nodes. login1 is reserved for system administration; login2 to login5 are available for users to log in to Shaheen III.

Note

In the near future, the hostname is going to change to a more intuitive one, as was the case with Shaheen II (shaheen.hpc.kaust.edu.sa). This is expected to happen after Shaheen II is decommissioned.

The following is an example of logging in to Shaheen III through the login2 hostname:

SSH command to login2 on Shaheen III#
ssh -X username@login2.hpc.kaust.edu.sa
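
If you log in frequently, an entry in the SSH client configuration file on your workstation can shorten the command. The sketch below is an optional convenience; the alias shaheen3 is illustrative, and username should be replaced with your own.

Example ~/.ssh/config entry on your workstation (optional)#
Host shaheen3
    HostName login2.hpc.kaust.edu.sa
    # Replace username with your Shaheen III username
    User username
    # ForwardX11 is equivalent to the -X flag above
    ForwardX11 yes

With this entry in place, ssh shaheen3 connects you to login2.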

Submitting your first jobscript#

All KSL systems use SLURM for scheduling jobs for batch processing.
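
Besides sbatch, a few standard SLURM commands cover most day-to-day interaction with the scheduler; the <jobid> below is a placeholder for the numeric ID that sbatch reports.

Commonly used SLURM commands#
sbatch jobscript.slurm    # submit a batch job
squeue -u $USER           # list your pending and running jobs
scancel <jobid>           # cancel a job
sinfo                     # show partition and node availability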

Shaheen III example jobscripts#

On Shaheen III, the example jobscripts below must be submitted from the /scratch/$USER directory. This is imperative because the /home directory is not mounted on the compute nodes, and the /project directory is read-only on the compute nodes.
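
For example, you can prepare a submission directory under /scratch before submitting; the directory name first_job below is only an example.

Prepare a submission directory on /scratch#
cd /scratch/$USER
mkdir -p first_job
cd first_job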

Note

Compute nodes in workq on Shaheen III are allocated in exclusive mode. For a detailed description of the available partitions, please refer to Shaheen III.

The following is a sample jobscript, cpu_shaheen3.slurm, that prints the hostname of one AMD Genoa compute node of Shaheen III in workq.

Change directory to /scratch/$USER, copy the jobscript below, and paste it into a file named e.g. cpu_shaheen3.slurm#
#!/bin/bash
#SBATCH --time=00:10:00          # maximum walltime of 10 minutes
#SBATCH --partition=workq
#SBATCH --nodes=1
#SBATCH --ntasks=192             # one task per physical core of an AMD Genoa node
#SBATCH --hint=nomultithread     # do not place tasks on hardware threads

# Launch one hostname process per allocated task
srun -n ${SLURM_NTASKS} --hint=nomultithread /bin/hostname

The above jobscript can now be submitted using the sbatch command.

sbatch cpu_shaheen3.slurm
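
After submission, sbatch prints the job ID. By default, SLURM writes the job's output to a file named slurm-<jobid>.out in the directory the job was submitted from. A minimal way to follow the job, with <jobid> as a placeholder:

Monitor the job and inspect its output#
squeue -u $USER           # check whether the job is pending or running
cat slurm-<jobid>.out     # view the output once the job has finished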

KSL has written a convenient utility called the Jobscript Generator. Use this template to create a jobscript, then copy-paste it into a file in your SSH terminal on the Shaheen III or Ibex login nodes.

If you get an error regarding account specification, please email KSL support with your username, the error message, and the jobscript.