Resources

Compute, storage and cloud resources currently available via SUPR are listed below. Each entry gives the resource name, the hosting centre in parentheses and a short description where one is available.

NAISS Resources

Nationally available resources funded and operated by NAISS.

Compute

Beda (C3SE)
Glenn (C3SE)
Abisko (HPC2N)
Akka (HPC2N)
Kebnekaise (HPC2N)
Kebnekaise is a heterogeneous computing resource consisting of several types of nodes, including large memory, GPU and KNL nodes (see the notes below).

Notes:

  1. Access to the Large Memory nodes is handled by the 'Kebnekaise Large Memory' resource.
  2. Requests for GPU nodes and KNL nodes must be explicitly specified in the user's proposal. Note also that GPU nodes and KNL nodes are charged differently than ordinary compute nodes.
LUMI-C (LUMI Sweden)
LUMI-G (LUMI Sweden)
Alarik (Lunarc)
Platon (Lunarc)
Kappa (NSC)
Mozart (NSC)
Tetralith (NSC)
Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment, so most things are very familiar to Triolith users. You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs, ThinLinc is available on the login nodes, and applications are selected using "module" (a minimal job script sketch is shown after this resource list).

All Tetralith compute nodes have 32 CPU cores. In total there will be 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node will have a local SSD disk where applications can store temporary files (approximately 200 GB per node). All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR Infiniband network in Triolith (e.g. with a fat-tree topology).

The Tetralith installation will take place in two phases. The first phase consists of 644 nodes and has a capacity that exceeds the current computing capacity of Triolith; it was made available to users on August 23, 2018. Triolith was turned off on September 21, 2018, after which the second phase of the Tetralith installation will begin. NSC plans to have the entire Tetralith in operation no later than December 31 (i.e. in time for the next round of SNAC Large projects).
Triolith (NSC)
Beskow (PDC)
Lindgren (PDC)
Allium (UPPMAX)
A test computer.
Kalkyl (UPPMAX)
Milou (UPPMAX)
Dedicated UPPNEX cluster.
Tintin (UPPMAX)
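As referenced in the Tetralith description above, jobs on NSC clusters are submitted through Slurm and software is loaded with the module system. The following is a minimal sketch only; the project ID, module name and executable are placeholders, and the actual values are given by SUPR and NSC's documentation.

    #!/bin/bash
    # Minimal Slurm batch script sketch for one 32-core "thin" node.
    # The project ID, module name and executable below are placeholders.
    #SBATCH --job-name=example
    #SBATCH --account=naiss-xxxx-yy-zz    # replace with your project ID from SUPR
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=32          # one task per core
    #SBATCH --time=01:00:00

    module load somemodule/1.0            # placeholder; list real modules with "module avail"
    srun ./my_application                 # replace with your own executable

Submit the script with "sbatch jobscript.sh" and check its status with "squeue -u $USER".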

Storage

Cephyr NOBACKUP (C3SE)
Test-Cephyr.
Cstor backup (C3SE)
Storage (LUMI Sweden)
Centre Storage (NSC)
Project storage for SNIC (Small, Medium and Large) and LiU Local projects. Centre Storage @ NSC is designed for fast access from compute resources at NSC. The purpose is to provide storage for active data for projects allocated time on compute resources at NSC.
DCS (NSC)
NSC offers large (>50 TiB) storage allocations on its new high-performance Centre Storage/DCS system. Importantly, these large storage (DCS) allocations are for projects requiring active storage, NOT archiving; alternative archiving resources are available through SNIC (see e.g. http://docs.snic.se/wiki/SweStore). DCS applications should demonstrate how the data stored on the new Centre Storage will be used, e.g. data processing/reduction, data mining, visualization or analytics. Proposals are evaluated at least twice per year, usually to coincide with the processing of SNAC Large compute allocations.
klemming (PDC)
Klemming storage at PDC.
Swestore/dCache (SNIC Storage)
Swestore/iRODS (SNIC Storage)
The iRODS storage resource offers the possibility to add metadata to your data as well as PIDs (Persistent Identifiers). For more information, please check the User Documentation. (A brief metadata example is sketched after this list.)
Grus (UPPMAX)
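As a sketch of how metadata can be attached to data in iRODS, assuming the standard iRODS icommands client; the zone, path and attribute names are placeholders, and the actual values are given in the Swestore user documentation.

    # Upload a file, then attach and list an attribute/value pair on it.
    # The iRODS path and attribute name are placeholders.
    iput results.dat /someZone/projects/myproject/results.dat
    imeta add -d /someZone/projects/myproject/results.dat experiment run42
    imeta ls -d /someZone/projects/myproject/results.dat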

Cloud

Cloud (SSC)
Swedish Science Cloud provides Infrastructure as a Service (IaaS).

Swedish Science Cloud (SSC) is a large-scale, geographically distributed OpenStack cloud Infrastructure as a Service (IaaS) provided by NAISS and intended for Swedish academic research.

It is available free of charge to researchers at Swedish higher education institutions through open application procedures.

The SSC resources are not meant to be a replacement for NAISS supercomputing resources (HPC clusters). Rather, they should be seen as a complement, offering advanced functionality to users who need more flexible access to resources (for example, more control over the operating systems and software environments), want to develop software as a service, or want to explore recent technology such as "Big Data" frameworks (e.g. Apache Hadoop/Spark) or IoT applications.
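As an illustration only, starting a virtual machine on an OpenStack-based IaaS such as SSC typically looks like the sketch below, using the standard OpenStack command-line client. The flavor, image, key pair and network names are placeholders; the names actually available in SSC may differ.

    # After authenticating (e.g. by sourcing an RC file from the dashboard),
    # inspect what is available and boot a small instance. All names are placeholders.
    openstack flavor list
    openstack image list
    openstack server create \
        --flavor ssc.small \
        --image ubuntu-22.04 \
        --key-name my-keypair \
        --network my-network \
        my-first-instance
    openstack server show my-first-instance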

Other National Resources

Resources funded outside NAISS but nationally available, in some cases under special conditions. See the conditions for access under each resource.

Compute

Berzelius Compute (NSC)
Berzelius is an NVIDIA SuperPOD consisting of 60 DGX-A100 nodes with a total of 480 NVIDIA A100 GPUs. The SuperPOD uses the Slurm resource manager and job scheduler (a GPU job script sketch is shown after this resource list). Each DGX-A100 node has 8x NVIDIA A100 GPUs (40 GB), 128 CPU cores (2x AMD Epyc 7742), 1 TB of RAM and 15 TB of local NVMe SSD disk. High-performance central storage is provided by 4x DDN AI400X units, serving 1 PB of storage space to all nodes of the cluster. Each DGX-A100 GPU has a dedicated Mellanox HDR InfiniBand HBA, i.e. 8 Mellanox HDR HBAs per DGX-A100 node, connected in a full-bisection-bandwidth fat-tree topology.
Berzelius Compute "LEGACY CPU" (NSC)
Same system description as Berzelius Compute above.
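As a minimal sketch of a GPU job on a Slurm-managed system such as Berzelius, assuming a placeholder project ID, module name and script; the actual allocation flags and software setup are described in NSC's Berzelius documentation.

    #!/bin/bash
    # Sketch of a single-node job requesting 2 of a node's 8 A100 GPUs.
    # The project ID, module name and script below are placeholders.
    #SBATCH --account=berzelius-xxxx-yy   # replace with your project ID from SUPR
    #SBATCH --gpus=2
    #SBATCH --time=02:00:00

    module load somemodule/1.0            # placeholder; list real modules with "module avail"
    python train.py                       # replace with your own application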

Storage

Berzelius Storage (NSC)
High performance central storage is available using 4x AI400X DDN, serving a total of 1 PB of storage to all nodes of the cluster via a dedicated InfiniBand interconnect. Aggregate IO performance is 192 GB/s from the central storage and the dedicated data interconnect bandwidth per node is 25 GB/s. NSC centre storage (as available on Tetralith) is not accessible on Berzelius.
Omero Storage (SciLifeLab Omero Storage)
This resource is used on the Disposer test server to test integration with SUPR for Omero.

Local and Regional Resources

Resources financed by individual universities or in regional collaborations between universities. Access is often limited to employees of the universities where the resources are located. See conditions for access under each resource.

Compute

Sigma (NSC)
Sigma, sigma.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment, so most things are very familiar to Gamma users. During 2023 the operating system will be upgraded to Rocky Linux 9; see https://www.nsc.liu.se/support/systems/sigma-os-upgrade/ for more information. You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs, ThinLinc is available on the login node, and applications are still selected using "module" (see the interactive-session sketch after this resource list).

All Sigma compute nodes have 32 CPU cores. There are 104 "thin" nodes with 96 GiB of primary memory (RAM) and 4 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GB per node). All Sigma nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR Infiniband network in Gamma (e.g. still a fat-tree topology).

Sigma has a capacity that exceeds the computing capacity of Gamma and was made available to users on August 23, 2018.
Milou (UPPMAX)
Dedicated UPPNEX cluster.
Mosler-topolino (UPPMAX)
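The Sigma description above mentions the "interactive" command alongside sbatch. As a rough sketch, an interactive session on an NSC cluster can be requested as follows; the project ID and module name are placeholders, and the exact options are documented by NSC.

    # Request an interactive shell on one compute node for 60 minutes.
    # The project ID is a placeholder; find yours in SUPR.
    interactive -N 1 -t 60 -A naiss-xxxx-yy-zz

    # Once the session starts, load software and run as usual:
    module load somemodule/1.0            # placeholder module name
    python my_analysis.py                 # replace with your own application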

Storage

Cstor backup (C3SE)
Nobackup (HPC2N)
EGA-SE (NBIS)
Centre Storage (NSC)
Project storage for SNIC (Small, Medium and Large) and LiU Local projects. Centre Storage @ NSC is designed for fast access from compute resources at NSC. The purpose is to provide storage for active data for projects allocated time on compute resources at NSC.
Gulo - Nobackup (UPPMAX)
Lustre filesystem.
Pica - proj (UPPMAX)
Backed-up project storage.