Legion Quick Start


This is a quick start guide to Legion for users already familiar with operating in an HPC environment.

Accessing the Legion cluster

Before accessing the Legion cluster, it is necessary to apply for an account. Once you have received notification that your account has been created, you may log in via SSH to:


Your username and password are the same as those for your central UCL user ID. Legion is only accessible from within UCL's network; to reach it from outside, first log in via Socrates or a departmental machine, or install the IS VPN service.

More details on connecting to Legion are provided on the Accessing RC Systems page.
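As a sketch, connecting from inside the UCL network looks like the following. The hostname below is a placeholder, not the real address: use the login address given in your account-creation notification.

```shell
# Log in to Legion with your central UCL credentials.
# <legion_login_hostname> is a placeholder; substitute the address
# from your account notification.
ssh <your_UCL_id>@<legion_login_hostname>
```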


Managing data on the Legion cluster

Users on Legion have access to three pools of storage.

They have a home directory (quota 50 gigabytes), which is mounted read-only on the compute nodes and therefore cannot be written to by running jobs.

They have a "scratch" area, which has no hard quota but does have the limitation that users may not store more than 200 gigabytes for longer than 14 days (otherwise they will no longer be able to submit jobs to the queue). This limitation is in place to allow users to store very large amounts of data on scratch, but only for a short time. There is a link to this area called "Scratch" within the user's home directory.

Finally, users have access to temporary local storage on the nodes (environment variable $TMPDIR), which is cleared at the end of the job.

There is a dedicated transfer node with 10 gigabit network links to and from Legion available at:


For more details on the fairly complicated data management structures within Legion, see the Managing Data on RC Systems page.
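As an illustrative sketch, moving data in and out via the transfer node might look like the following. The hostname is a placeholder: use the transfer node address listed above.

```shell
# Copy a local directory into your Scratch area via the dedicated
# transfer node. <transfer_node_address> is a placeholder hostname.
scp -r ./my_dataset <your_UCL_id>@<transfer_node_address>:~/Scratch/

# Copy results back out again after a job has finished.
scp -r <your_UCL_id>@<transfer_node_address>:~/Scratch/output ./results/
```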


Legion user environment

Legion runs an operating system based on Red Hat Enterprise Linux 7 with the Son of Grid Engine batch scheduler. UCL-supported and provided packages are made available to users through the use of the modules system.

module avail - lists available modules

module load <module name> - loads a module

module unload <module name> - removes a module

The module system handles dependency and conflict information.
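A typical interactive session with the module system might look like this. The module name shown is illustrative; run `module avail` to see what is actually installed on Legion.

```shell
module avail                      # list all available modules
module load <compiler_module>     # load a module (placeholder name)
module list                       # show currently loaded modules
module unload <compiler_module>   # remove the module again
```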

You can find out more about the modules system on Legion on the RC Systems user environment page.


Compiling your code

Legion provides Intel and GNU compilers, and OpenMPI and Intel MPI, through the modules system, with the usual compiler wrappers. For a full list of the development tools available, see the development tools and compilers sections of the modules system.

You can find out more about compiling code on Legion on the Legion Compiling page.


Job scheduling policy and projects

A fair-share resource allocation model has been implemented on Legion. See resource allocation for more information and context.


Submission scripts

Jobs submitted to the scheduler (with "qsub") are shell scripts with directives preceded by #$.

#!/bin/bash -l

1. Force bash as the executing shell.

#$ -S /bin/bash

2. Request ten minutes of wallclock time (format hours:minutes:seconds).

#$ -l h_rt=0:10:0

3. Request 1 gigabyte of RAM per process.

#$ -l mem=1G

4. Set the name of the job.

#$ -N MadScience_1_16

5. Select the MPI parallel environment and 16 processes.

#$ -pe mpi 16

6. Select the project that this job will run under (only needed if you have access to paid resources).

#$ -P <your_project_id>

7. Set the working directory to somewhere in your scratch space.

#$ -wd /home/<your_UCL_id>/Scratch/output

You can then follow these directives with the commands your script would execute. Legion supports a wide variety of job types and we would strongly recommend you study the example scripts.
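Putting the directives above together, a minimal submission script might look like the following sketch. The job name, paths, and the program run at the end are placeholders; the project directive is omitted since it only applies to paid resources.

```shell
#!/bin/bash -l
# 1. Force bash as the executing shell.
#$ -S /bin/bash
# 2. Request ten minutes of wallclock time (hours:minutes:seconds).
#$ -l h_rt=0:10:0
# 3. Request 1 gigabyte of RAM per process.
#$ -l mem=1G
# 4. Set the name of the job.
#$ -N MadScience_1_16
# 5. Select the MPI parallel environment and 16 processes.
#$ -pe mpi 16
# 6. Set the working directory to somewhere in your Scratch space.
#$ -wd /home/<your_UCL_id>/Scratch/output

# The commands to execute follow the directives, e.g. (placeholder):
./my_program
```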

Jobs can be controlled with "qsub" (submit job), "qstat" (list jobs) and "qdel" (delete job). See the Introduction to batch processing page for more details.
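For example, assuming a script named myscript.sh (the job ID shown is illustrative):

```shell
qsub myscript.sh   # submit the script; the scheduler prints the job ID
qstat              # list your jobs and their current states
qdel 12345         # delete the job with ID 12345
```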


Testing jobs using User Test Nodes

As well as batch access to the system, a small number of nodes are available for interactive access through the scheduler. These can be requested through the "qrsh" command. You need to provide qrsh with the same options you would include in your job submission script, so

qrsh -pe mpi 8 -l mem=512M,h_rt=2:0:0

is functionally equivalent to:

#$ -S /bin/bash
#$ -pe mpi 8
#$ -l mem=512M
#$ -l h_rt=2:0:0

Except, of course, that the result of qrsh is an interactive shell. For more details of how to access the user test nodes, see the Testing jobs using User Test Nodes page.


More information

How the scheduler works

Example submission scripts

Acknowledging the use of Legion in publications

Contact and support


Known issues

Reporting Problems
