High Performance Computing refers to the practice of aggregating commodity computers to provide a parallel processing environment for solving complex computational problems.
The High Performance Computing Service comprises:
- A tier-3 institutional service, ‘Maxwell’, hosted in the Edward Wright Data Centre.
- User support on Maxwell, including software support and troubleshooting.
- We also provide consultancy on access to regional (tier 2) and national (tier 1) facilities, including the EPCC’s Cirrus and Archer systems. These facilities offer the advantages of:
- Greater capacity and capability.
- Diversity of architectures (GPU, ARM, Xeon Phi).
- Training in computational science and software development.
- Maxwell is a Linux-based high performance computer cluster featuring:
- Approximately 1240 cores
- 18 standard nodes, each with 40 cores and 5 GB RAM per core
- 11 high-memory nodes, each with 40 cores and 10 GB RAM per core
- 6 consumer GPU nodes, each with dual GeForce RTX 2080 Ti GPUs
- 1 GPU node with dual enterprise GPUs (Tesla V100 PCIe, 32 GB)
- 1 very high memory node, with 80 cores and 3 TB RAM.
- Support and consultation from experienced support staff.
- A wide range of free and commercial software.
This service is available to:
- All registered members of staff, including honorary members of staff.
- Research postgraduate students when authorised by their supervisor.
- External users (e.g. consultants, visitors) when collaborating with University staff.
- The service is available 24 hours a day, seven days a week, 365 days a year, excluding scheduled downtimes.
- At least a week’s prior warning of scheduled downtimes will be given, except in exceptional circumstances.
- Jobs are managed by a queuing system: broadly first come, first served, but with priority adjusted according to each user’s use of the cluster over the previous week.
- Support for the service is available through email during normal working hours.
- The user home directory is backed up. The scratch partition is NOT backed up.
- Users are currently restricted to 200 cores at any one time.
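The fair-share behaviour of the queue can be illustrated with a small sketch. This is not the actual scheduler code; the function name, penalty parameter, and scoring rule are hypothetical, chosen only to show how heavy usage in the previous week can lower a user's place in an otherwise first-come-first-served queue.

```python
# Illustrative fair-share ordering (hypothetical, not the real scheduler).
# Jobs run in submission order, but each user's effective priority is
# penalised in proportion to the core-hours they consumed last week.

def fair_share_order(jobs, weekly_usage, penalty_per_core_hour=0.01):
    """jobs: list of (user, submit_time) tuples.
    weekly_usage: mapping of user -> core-hours used in the previous week.
    Returns jobs sorted by adjusted score (lowest score runs first)."""
    def score(job):
        user, submit_time = job
        return submit_time + penalty_per_core_hour * weekly_usage.get(user, 0.0)
    return sorted(jobs, key=score)

jobs = [("alice", 0), ("bob", 1), ("carol", 2)]
usage = {"alice": 500.0, "bob": 0.0, "carol": 10.0}
# alice submitted first but used 500 core-hours last week,
# so bob and carol jump ahead of her in the queue.
print(fair_share_order(jobs, usage))
```

Here alice's heavy recent usage pushes her job behind bob's and carol's, even though she submitted first.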
- www.abdn.ac.uk/staffnet/working-here/hpc.php - login required to access documentation
- To access the Tier 1 and Tier 2 facilities please come and talk to the Research Infrastructure team – email email@example.com
- Access Tier 2 facility Cirrus at EPCC – http://www.cirrus.ac.uk/access/
- EPSRC Tier 2 Facilities - https://www.epsrc.ac.uk/research/facilities/hpc/tier2/
- Access Tier 1 facility Archer at EPCC - http://www.archer.ac.uk/access/
- If you have problems using the cluster, please email firstname.lastname@example.org
- Please email email@example.com to request an initial discussion and an account.
- Users are expected to comply with the University of Aberdeen’s Conditions for using IT Services.
There is no charge to hold an account on Maxwell or to try out the service. Limited free use (up to 1000 core hours) is available for the following:
- Small pilot projects and unfunded PGR projects
- Those wishing to test or familiarise themselves with the service
- Those who wish to assess the suitability of the service prior to applying for funding
- Training and documentation use
Where the HPC service is used as part of a research project, you should ensure the costs are incorporated into your grant proposals so that your use of the service can be charged to the appropriate grants. The standard costs for using the HPC service are listed below and are also included in IT Services’ research costing tool.
- Standard use of the HPC cluster costs 10p per core-hour of CPU time (any node type).
- Standard support, including training and documentation, is included at no additional charge.
- Additional support, including installation and troubleshooting of bespoke applications, is charged at £400 per day in half-day increments.
- Additional storage beyond the standard allocation is charged at £500 per terabyte per year.
- Use of specialist nodes is available on request.
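For grant costing, the published rates above can be combined into a rough estimate. The sketch below is illustrative only, assuming the listed rates (10p per core-hour, £400 per day of additional support billed in half-day increments, £500 per terabyte per year for extra storage); the function and its parameters are hypothetical, not an IT Services tool.

```python
# Hypothetical cost estimator based on the published rates above.
import math

CORE_HOUR_GBP = 0.10        # 10p per core-hour
SUPPORT_DAY_GBP = 400.0     # additional support, per day
STORAGE_TB_YEAR_GBP = 500.0 # extra storage, per TB per year

def estimate_cost(core_hours, support_days=0.0, extra_storage_tb=0.0, years=1.0):
    """Estimate total charge in GBP for a project's HPC usage."""
    # Additional support is billed in half-day increments, rounded up.
    billed_support_days = math.ceil(support_days * 2) / 2
    return (core_hours * CORE_HOUR_GBP
            + billed_support_days * SUPPORT_DAY_GBP
            + extra_storage_tb * STORAGE_TB_YEAR_GBP * years)

# Example: 200 cores for 48 hours, plus 1.25 days of additional support
# (billed as 1.5 days).
print(estimate_cost(core_hours=200 * 48, support_days=1.25))
```

Note that the free-use allowance of up to 1000 core-hours (worth £100 at the standard rate) is not modelled here.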