Contact & Support
To report a problem or request support for Emerald, email:

Notice: The Emerald cluster is no longer operational. Users who need a GPU HPC cluster should apply for access to the JADE or CSD3 clusters.

Other GPU clusters

There are two nationally-accessible EPSRC Tier 2 HPC centres with GPUs.

Access is generally managed through calls to an EPSRC Resource Allocation Panel.

There may also be pump-priming/proof of concept access available.

General information about machines with external access is available at HPC-UK.


CSD3 (University of Cambridge):

  • Suitable for workloads spanning multiple compute nodes using GPUs and MPI
  • NVIDIA Tesla P100 GPUs
  • Access to CSD3


About Emerald

Emerald is a large GPU system comprising 372 NVIDIA Tesla GPUs, hosted at STFC's e-Science department in partnership with the Science and Engineering South (SES) consortium. As part of this consortium, UCL has a 23% share of the system. Emerald has a sustained capability of 114 TF and, on installation in March 2012, was one of the largest GPU systems in Europe.

User information, including user guides, technical specifications and a list of available software, is available on the SES website:

Emerald Service Closure

The Emerald GPU service has been in operation since March 2012. Access to the system has been supplied as a service to UCL users by STFC, who support the machine as part of the Centre for Innovation activity operated through the Science and Engineering South (SES) consortium.

Under this model, UCL has worked with STFC's Research Infrastructure Group at RAL to review and renew access to Emerald on an annual basis. Since March 2015, Emerald's hardware has been unsupported by its vendor (HP); STFC have continued to cover failing hardware using spare nodes retained from a resizing of the system in 2015, supporting UCL and STFC workloads. This arrangement allowed STFC and UCL to extend the life of the Emerald system through 2015-2016 and 2016-2017 without HP support.

Although this approach has been successful, and service levels have been maintained through these extension periods, it was agreed in the latest renewal that STFC could not guarantee current levels of service past 2017, due to the diminishing stock of spare nodes and the age of the underlying infrastructure.

As a result, Emerald will not be extended past its current service end date of 31st July 2017, and access will be withdrawn after that point.

For current users of the system, access will continue as normal up to 31st July 2017, and access to user data stored at RAL will remain available after this date.

New access requests to use Emerald will not be accepted after 1st May 2017, to ensure that new users have sufficient time to set up and complete any work on the system before the service closure date. Any requests for access beyond the 1st May deadline will be referred to the Computational Resource Allocation Group (CRAG), and users will need to clearly demonstrate that the proposed work can be completed in the remaining time available.

Although Emerald is being withdrawn from service, the recent EPSRC Tier 2 Research Infrastructure funding round has invested in two large GPU resources due to enter service in late spring 2017: one operated by the University of Oxford (JADE) and one by the University of Cambridge (Peta-5). Access mechanisms for these clusters are still being developed, and we will provide more information on how to access them in due course.

Signing up for Emerald

You first need to sign up for a Research Computing systems account. If your account request is approved, the confirmation email will contain instructions on how to gain access to Emerald.