Ibex#
To log in, you need to ssh into a login node. An ssh client should be installed on your workstation/laptop.
For users with macOS or Linux operating systems, please open the Terminal application and paste one of the commands below, replacing username with your KAUST username.
For Windows users, you will need an application with an ssh client installed. Please follow the instructions in this video tutorial. Refer to Shaheen III for instructions on how to log in to Shaheen III.
Logging into Ibex#
ssh -X username@ilogin.ibex.kaust.edu.sa
ssh -X username@glogin.ibex.kaust.edu.sa
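To avoid typing the full hostname every time, you can optionally add an entry to your ~/.ssh/config file. The alias name below is arbitrary, and username is a placeholder for your KAUST username:

```shell
# ~/.ssh/config -- optional convenience entry (alias name is an example)
Host ibex
    HostName ilogin.ibex.kaust.edu.sa
    User username          # replace with your KAUST username
    ForwardX11 yes         # equivalent to the -X flag
```

After saving this, `ssh ibex` connects to the Ibex login node with X11 forwarding enabled.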
Submitting your first jobscript#
All KSL systems use SLURM for scheduling jobs for batch processing.

Ibex example jobscripts#
The jobscript below submits a job to SLURM for running an example workload on Ibex CPU compute nodes. Note that Ibex nodes are shared, so you must specify the resources you require in terms of cores or CPUs and/or memory, and wall time.
#!/bin/bash
#SBATCH --time=00:10:00    # wall time limit (hh:mm:ss)
#SBATCH --nodes=1          # number of nodes
#SBATCH --ntasks=4         # number of tasks (e.g. MPI ranks)

srun -n ${SLURM_NTASKS} /bin/hostname
The above jobscript can now be submitted using the sbatch
command.
sbatch cpu_ibex.slurm
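On success, sbatch prints the ID of the submitted job. The commands below are standard SLURM tools (not specific to Ibex) for monitoring it; the job ID 12345 is a placeholder:

```shell
# List your pending and running jobs
squeue -u $USER

# Show the full details of a specific job (replace 12345 with your job ID)
scontrol show job 12345

# By default, job output is written to slurm-<jobid>.out in the submission directory
cat slurm-12345.out
```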
For submitting a job with GPUs, the jobscript must define the number of GPUs required and on how many nodes. The example below requests two NVIDIA V100 GPUs on a single node with a total of 8 CPUs and a total of 100GB of memory.
#!/bin/bash
#SBATCH --time=00:10:00      # wall time limit (hh:mm:ss)
#SBATCH --gpus=2             # total number of GPUs
#SBATCH --gpus-per-node=2    # GPUs per node
#SBATCH --ntasks=1           # number of tasks
#SBATCH --cpus-per-task=8    # CPU cores per task
#SBATCH --mem=100G           # total memory per node

module load cuda
srun -n ${SLURM_NTASKS} -c ${SLURM_CPUS_PER_TASK} nvidia-smi
The above jobscript can now be submitted using the sbatch
command.
sbatch gpu_ibex.slurm
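For quick testing and debugging, SLURM also supports interactive allocations via srun instead of a batch jobscript. A sketch of requesting a single GPU interactively, with resource flags chosen to mirror the batch example above:

```shell
# Request 1 GPU, 4 CPU cores, and 32G of memory for 30 minutes,
# with an interactive shell on the allocated node
srun --time=00:30:00 --gpus=1 --cpus-per-task=4 --mem=32G --pty bash

# Once the allocation starts, verify the GPU is visible
nvidia-smi
```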
KSL has written a convenient utility called the Jobscript Generator. Use this tool to create a jobscript, then copy-paste it into a file in your SSH terminal on the Shaheen III or Ibex login nodes.
If you get an error regarding account specification, please contact KSL support with your username, the error message, and the jobscript.
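Account-specification errors usually mean the job must be charged to a specific SLURM account (project). Standard SLURM tooling can list the accounts associated with your user, which can then be set in the jobscript; the account name below is a placeholder:

```shell
# List the SLURM accounts your user is associated with
sacctmgr show associations user=$USER format=account,user

# Then set the account in your jobscript, e.g.:
#SBATCH --account=my-project    # placeholder account name
```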