HPC and storage hardware

The current Norwegian academic e-infrastructure consists of four HPC systems and two storage systems.

HPC systems

Each HPC facility consists of a compute resource (a number of compute nodes, each with a number of processors and internal shared memory, plus an interconnect that connects the nodes), a central storage resource that is accessible by all the nodes, and a secondary storage resource for backup (and in a few cases also for archiving). All facilities run variants of the UNIX operating system (Linux, AIX, etc.).
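
To make this layout concrete, the sketch below shows how a program observes it: one process per core, with processes on the same node sharing memory and processes on different nodes communicating over the interconnect. It assumes an MPI installation and the mpi4py Python bindings, neither of which is prescribed by this documentation.

    # hello_mpi.py -- show how processes map onto the node/interconnect layout.
    # Sketch only; assumes an MPI library and the mpi4py Python bindings.
    from mpi4py import MPI
    import socket

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # unique id of this process across the whole job
    size = comm.Get_size()   # total number of processes in the job

    # Processes on the same node share its internal memory; processes on
    # different nodes communicate only by messages over the interconnect.
    print(f"rank {rank} of {size} on node {socket.gethostname()}")

Launched with, for example, mpirun -n 64 python hello_mpi.py, the ranks are spread across the allocated compute nodes and each reports where it runs.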

Storage systems

The NIRD storage system provides data storage capacity for research projects with a data-centric architecture. It is therefore also used as storage capacity for the HPC systems, for the national data archive, and for other services requiring a storage backend. It consists of two geographically separated storage systems. Leading storage technology, combined with the powerful network backbone underneath, allows the two systems to be geo-replicated asynchronously, ensuring high availability and security of the data.

Unlike its predecessor NorStore, the NIRD storage facility is strongly integrated with the HPC systems, facilitating computation on large datasets. Furthermore, the NIRD storage offers high-performance storage for post-processing, visualization, GPU computing, and other services on the NIRD Service Platform.

Technical documentation

  • Betzy
  • Fram
  • Saga
  • Stallo
  • Vilje
  • NIRD Toolkit

Betzy

The supercomputer is named after Mary Ann Elizabeth (Betzy) Stephansen, the first Norwegian woman with a Ph.D. in mathematics.

The most powerful supercomputer in Norway

Betzy is a BullSequana XH2000, provided by Atos, and will give Norwegian researchers more than five times the capacity they had previously, with a theoretical peak performance of 6.2 PetaFlops. The supercomputer, which will be placed at NTNU in Trondheim, will be available to users during the second half of 2020.

Technical specifications:

  • The system comprises 1344 compute nodes, each equipped with two 64-core next-generation AMD EPYC™ processors (code name "Rome"), for a total of 172032 cores installed on a total footprint of only 14.78 m². The total compute power will be close to 6 PFlops.
  • The system will consume 952 kW of power, and 95% of the heat will be captured to water.
  • The compute nodes will be interconnected with the new generation of Mellanox HDR technology.
  • The data management solution will rely on DDN storage with a Lustre parallel file system of more than 2.5 PB.
     
System: BullSequana XH2000
Max floating-point performance (double precision): 6.2 PetaFlops
Number of nodes: 1344
CPU type: AMD® EPYC™ "Rome" 2.2 GHz
CPU cores in total: 172032
CPU cores per node: 128
Memory in total: 336 TiB
Memory per node: 256 GiB
Total disk capacity: 2.5 PB
Interconnect: InfiniBand HDR 100, Dragonfly+ topology
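
As a quick consistency check (a sketch for the reader, not part of the system documentation), the per-node and total figures in the table agree:

    # Cross-check the Betzy figures quoted above.
    nodes = 1344
    cores_per_node = 2 * 64            # two 64-core AMD EPYC "Rome" CPUs
    mem_per_node_gib = 256

    assert nodes * cores_per_node == 172032          # CPU cores in total
    assert nodes * mem_per_node_gib // 1024 == 336   # memory in total, TiB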

Fram

Named after the Norwegian Arctic expedition ship Fram, the Linux cluster hosted at UiT is a shared resource for research computing with a theoretical peak performance of 1.1 PFLOP/s. It entered production on 1 November 2017 (the 2017.2 computing period).

Fram is a distributed-memory system consisting of 1004 dual-socket and 2 quad-socket nodes, interconnected with a high-bandwidth, low-latency InfiniBand network. The interconnect is organized in an island topology, with 9216 cores in each island. Each standard compute node has two 16-core Intel Broadwell chips (2.1 GHz) and 64 GiB of memory. In addition, 8 large-memory nodes with 512 GiB of RAM and 2 huge-memory quad-socket nodes with 6 TiB of memory are provided. The total number of compute cores is 32256.

Technical details

System: Lenovo NeXtScale nx360
Number of cores: 32256
Number of nodes: 1006
CPU types: Intel E5-2683v4 2.1 GHz (standard); Intel E7-4850v4 2.1 GHz (hugemem)
Max floating-point performance (double precision): 1.1 PetaFlop/s
Total memory: 78 TiB
Total disk capacity: 2.5 PB

 

Saga

The supercomputer is named after the goddess in Norse mythology associated with wisdom; Saga is also a term for the Icelandic epic prose literature. The supercomputer, placed at NTNU in Trondheim, is designed to run workloads from Abel and Stallo. It was made available to users right before the start of the 2019.2 computing period.

Saga is provided by Hewlett Packard Enterprise and has a computational capacity of approximately 85 million CPU hours a year and a life expectancy of four years, until 2023.


Technical details

Main components

  • 200 standard compute nodes, each with 40 cores and 192 GiB of memory
  • 28 medium-memory compute nodes, each with 40 cores and 384 GiB of memory
  • 8 big-memory nodes, each with 64 cores and 3 TiB of memory
  • 8 GPU nodes, each with 4 NVIDIA GPUs, 2 CPUs with 24 cores, and 384 GiB of memory
  • 8 login and service nodes with 256 cores in total
  • 1 PB high-metadata-performance BeeGFS scratch file system

Key figures

Processor cores: 10080
GPU units: 32
Internal memory: 75 TiB
Internal disk: 91 TB NVMe
Central disk: 1 PB
Theoretical performance (Rpeak): 645 TFLOPS
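
The processor-core figure can be reproduced from the component list above if the two CPUs in each GPU node are read as providing 24 cores in total and the login/service cores are included; the following sketch makes that interpretation explicit (an interpretation, not an official statement):

    # Cross-check Saga's 10080 processor cores from the component list.
    standard   = 200 * 40   # standard compute nodes
    medium_mem = 28 * 40    # medium-memory compute nodes
    big_mem    = 8 * 64     # big-memory nodes
    gpu        = 8 * 24     # GPU nodes: 2 CPUs with 24 cores in total per node
    login      = 256        # login and service nodes

    assert standard + medium_mem + big_mem + gpu + login == 10080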

 

Stallo

The Linux cluster Stallo is a compute cluster at UiT The Arctic University of Norway. It was installed on 1 December 2007 and included in NOTUR on 1 January 2008. The supercomputer was upgraded in 2013.

Stallo is intended for distributed-memory MPI applications with low communication requirements between the processors, shared-memory OpenMP applications using up to eight processor cores, parallel applications with moderate memory requirements (2-4 GB per core), and embarrassingly parallel applications.
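
As an illustration of the embarrassingly parallel pattern mentioned above, the minimal Python sketch below runs independent tasks on up to eight cores using the standard multiprocessing module; it is illustrative only, not a Stallo-specific recipe.

    # Embarrassingly parallel pattern: independent tasks, no inter-process
    # communication, modest memory per worker.
    from multiprocessing import Pool

    def simulate(seed):
        # Stand-in for one independent unit of work (e.g. one parameter set).
        return sum((seed * i) % 97 for i in range(100_000))

    if __name__ == "__main__":
        with Pool(processes=8) as pool:        # up to eight cores, as in the text
            results = pool.map(simulate, range(64))  # 64 independent tasks
        print(len(results), "tasks completed")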

Technical details

System: HP BL460c Gen8
Number of cores: 14116
Number of nodes: 518
CPU type: Intel E5-2670
Peak performance: 104 TeraFlop/s
Total memory: 12.8 TB
Total disk capacity: 2.1 PB

 

Vilje

Vilje is a cluster system procured in 2012 by NTNU, in cooperation with the Norwegian Meteorological Institute and UNINETT Sigma. Vilje is used for numerical weather prediction in operational forecasting by met.no, as well as for research on a broad range of topics at NTNU and at other Norwegian universities, colleges, and research institutes. The name Vilje is taken from Norse mythology.

Vilje is a distributed-memory system consisting of 1440 nodes interconnected with a high-bandwidth, low-latency switched network (FDR InfiniBand). Each node has two 8-core Intel Sandy Bridge processors (2.6 GHz) and 32 GB of memory. The total number of cores is 23040.

The system is well suited (and intended) for large-scale parallel MPI applications. Access to Vilje is in principle granted only to projects with parallel applications that use a relatively large number of processors (≥ 128).
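
As an illustration of the large-scale MPI pattern Vilje targets, the sketch below distributes a computation over many ranks and combines the partial results with a collective operation. It assumes the mpi4py bindings; the same pattern applies in C or Fortran.

    # Distribute work over many ranks and combine results with a collective.
    # Sketch only; run with e.g. mpirun -n 128 python reduce_demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank computes a partial sum over its own slice of 0..N-1.
    N = 1_000_000
    local = sum(range(rank, N, size))

    # Allreduce combines the partial sums across the interconnect.
    total = comm.allreduce(local, op=MPI.SUM)
    if rank == 0:
        print("total =", total, "computed on", size, "ranks")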

Technical details

System: SGI Altix 8600
Number of cores: 22464
Number of nodes: 1404
CPU type: Intel Sandy Bridge
Total memory: 44 TB
Total disk capacity: xx

 

NIRD Toolkit

The NIRD Toolkit runs on a computing platform located in Tromsø and Trondheim.

Technical details

Workers: 8 in total
vCores: 512 (64 per worker)
RAM: 2 TB (256 GB per worker)
Network: 40 Gbps interconnect to storage and among the workers
Storage: the total NIRD storage capacity is accessible from the platform
GPUs: 8 NVIDIA V100 GPUs (2 per worker in 4 of the workers)


NIRD Toolkit service

Decommissioned systems

Follow the link above to find information about older national e-infrastructure systems that are no longer in production.

Pictures of national e-infrastructure systems

  • Hexagon
    A supercomputer for massively parallel jobs
  • Stallo
    A supercomputer for smaller, less parallel jobs
  • Abel
    A supercomputer for smaller sequential jobs
  • Vilje
    A supercomputer for large and medium parallel jobs
  • NIRD
    A storage facility for large research data sets
  • Saga
    A supercomputer for smaller sequential jobs
  • Betzy
    A supercomputer for massively parallel jobs