Compute, storage and cloud resources currently available via SUPR:
Compute resources:

| Resource | Centre | Short Description |
| --- | --- | --- |
| Beda | C3SE | |
| Glenn | C3SE | |
| Abisko | HPC2N | |
| Akka | HPC2N | |
| Kebnekaise | HPC2N | See the notes below the table. |
| LUMI-C | LUMI Sweden | |
| LUMI-G | LUMI Sweden | |
| Alarik | Lunarc | |
| Platon | Lunarc | |
| Kappa | NSC | |
| Mozart | NSC | |
| Tetralith | NSC | See the description below the table. |
| Triolith | NSC | |
| Beskow | PDC | |
| Lindgren | PDC | |
| Allium | UPPMAX | A test computer. |
| Kalkyl | UPPMAX | |
| Milou | UPPMAX | Dedicated UPPNEX cluster. |
| Tintin | UPPMAX | |

Kebnekaise (HPC2N): Kebnekaise is a heterogeneous computing resource consisting of several node types, including large memory, GPU and KNL nodes. Notes:
- Access to the Large Memory nodes is handled by the 'Kebnekaise Large Memory' resource.
- Requests for GPU nodes and KNL nodes must be explicitly specified in the user's proposal. GPU nodes and KNL nodes are charged differently than ordinary compute nodes.

Tetralith (NSC): Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment. This means that most things are very familiar to Triolith users. You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are selected using "module".
All Tetralith compute nodes have 32 CPU cores. There will be 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node will have a local SSD disk where applications can store temporary files (approximately 200 GB per node).
All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR InfiniBand network in Triolith (e.g. with a fat-tree topology).
The Tetralith installation will take place in two phases. The first phase consists of 644 nodes and has a capacity that exceeds the current computing capacity of Triolith. The first phase was made available to users on August 23, 2018. Triolith was turned off on September 21, 2018. After this, the second phase of the Tetralith installation will begin. NSC plans to have the entire Tetralith in operation no later than December 31 (i.e. in time for the next round of SNAC Large projects).
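To make the workflow concrete, a Tetralith job submission might look roughly like the sketch below, since jobs are submitted with Slurm and software is selected with the module system. This is only an illustrative sketch: the account, module name and program path are placeholders, not actual Tetralith settings.

```bash
#!/bin/bash
# Sketch of a Slurm batch script for one full Tetralith node (32 cores).
# NOTE: the account, module name and program path below are placeholders.
#SBATCH --job-name=example
#SBATCH --account=snic2018-x-yy      # placeholder project ID
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32         # one task per core on a 32-core node
#SBATCH --time=01:00:00

# Software is selected through the module system, as on Triolith.
module load somemodule/1.0           # placeholder module name

# Assumption: the node-local SSD (about 200 GB) is reachable via $SNIC_TMP;
# check NSC's documentation for the exact variable name.
cd "$SNIC_TMP"

srun /path/to/my_program             # placeholder program
```

Submit the script with sbatch, or use the "interactive" command mentioned above to get an interactive session on a compute node.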
Storage resources:

| Resource | Centre | Short Description |
| --- | --- | --- |
| Cephyr NOBACKUP | C3SE | Test-Cephyr |
| Cstor backup | C3SE | |
| Storage | LUMI Sweden | |
| Centre Storage | NSC | Project storage for SNIC (Small, Medium and Large) and LiU Local projects. Centre Storage @ NSC is designed for fast access from compute resources at NSC; its purpose is to provide storage for active data for projects allocated time on compute resources at NSC. |
| DCS | NSC | Large (>50 TiB) storage allocations; see the description below the table. |
| klemming | PDC | Klemming storage at PDC. |
| Swestore/dCache | SNIC Storage | This is a description of the resource as written in the resource object. |
| Swestore/iRODS | SNIC Storage | The iRODS storage resource offers the possibility to add metadata to your data as well as PIDs (Persistent Identifiers). For more information, please check the User Documentation. A usage sketch follows below the table. |
| Grus | UPPMAX | |

DCS (NSC): NSC offers large (>50 TiB) storage allocations on the new high-performance Centre Storage/DCS system. Importantly, these large storage (DCS) allocations are for projects requiring active storage, NOT archiving. Alternative archiving resources are available through SNIC (see e.g. http://docs.snic.se/wiki/SweStore). DCS applications should demonstrate how data stored on the new Centre Storage will be used, e.g. data processing/reduction, data mining, visualization, analytics, etc.
Proposals will be evaluated at least twice per year, usually to coincide with the processing of SNAC Large compute allocations.
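As an illustration of the metadata support mentioned for Swestore/iRODS, the standard iRODS icommands client can attach attribute/value/unit triples to stored data objects. The sketch below is hypothetical: the zone and collection paths are placeholders, and the actual client configuration is described in the Swestore User Documentation.

```bash
# Hypothetical example using the standard iRODS icommands client.
# The zone and collection names below are placeholders, not real Swestore paths.

# Upload a file into a project collection.
iput results.tar.gz /exampleZone/projects/myproject/results.tar.gz

# Attach metadata (attribute, value, optional unit) to the uploaded data object.
imeta add -d /exampleZone/projects/myproject/results.tar.gz instrument "cryo-EM"
imeta add -d /exampleZone/projects/myproject/results.tar.gz sample_temperature 77 K

# List the metadata to verify.
imeta ls -d /exampleZone/projects/myproject/results.tar.gz
```

PID handling and further metadata workflows are covered in the User Documentation referenced above.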
Resources funded outside of NAISS but nationally available, in some cases under special conditions. See conditions for access under each resource.
| Resource | Centre | Short Description |
| --- | --- | --- |
| Berzelius Compute | NSC | See the description below the table. |
| Berzelius Compute "LEGACY CPU" | NSC | See the description below the table. |

Berzelius Compute / Berzelius Compute "LEGACY CPU" (NSC): Berzelius is an NVIDIA SuperPOD consisting of 60 DGX-A100 nodes, sporting a total of 480 NVIDIA A100 GPUs.
The SuperPOD uses the Slurm resource manager and job scheduler. All DGX-A100 nodes have 8x NVIDIA A100 GPUs (40 GB), 128 CPU cores (2x AMD EPYC 7742), 1 TB of RAM and 15 TB of NVMe SSD local disk. High-performance central storage is available using 4x DDN AI400X appliances, serving 1 PB of storage space to all nodes of the cluster. Each DGX-A100 GPU has a dedicated Mellanox HDR InfiniBand HBA; that is, there are 8 Mellanox HDR HBAs per DGX-A100 node, connected in a full-bisection-bandwidth fat-tree topology.
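As an illustration of how such a Slurm-managed DGX-A100 system is typically used, a GPU job request might look like the sketch below. The account, GPU-request flag and application are assumptions; consult the Berzelius documentation for the site-specific conventions.

```bash
#!/bin/bash
# Sketch of a Slurm batch script for a GPU job on a DGX-A100 based system.
# NOTE: the account and the exact GPU-request flag are placeholders/assumptions.
#SBATCH --job-name=train
#SBATCH --account=berzelius-20xx-yy   # placeholder project ID
#SBATCH --gpus=2                      # assumption: GPUs requested via --gpus (older setups use --gres=gpu:2)
#SBATCH --cpus-per-gpu=16             # 128 cores / 8 GPUs = 16 cores per GPU
#SBATCH --time=04:00:00

# Show which GPUs Slurm has assigned to the job.
nvidia-smi

# Placeholder application; the node-local NVMe disk (about 15 TB) is a good
# location for caching training data.
srun python train.py
```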
Resources financed by individual universities or in regional collaborations between universities. Access is often limited to employees of the universities where the resources are located. See conditions for access under each resource.