Procurement Project Colocation
This project has been completed and the contract has been awarded to Lefdal Mine Datacenter.
Here is an overview of our new data centre facilities
You can find additional information in our press release about the procurement of data centre capacity (in Norwegian).
Background
The national e-infrastructure for large-scale data and computational science has historically been hosted at the four partner universities that collaborate with Sigma2.
Sigma2's landscape
- Non-sensitive data: three HPC clusters across two sites, plus two data centres in Trondheim and Tromsø
- Sensitive data: Sigma2 owns the storage and a large part of the HPC resources in TSD – Oslo
- Uninett backbone providing connectivity between the data centres
- From 2021, the infrastructure will be connected to the pre-exascale HPC system in Finland
- Operational and investment funding from RCN and the four oldest universities
NRIS (Norwegian research infrastructure services) is an important foundation for the successful collaboration on e-infrastructure in Norway. NRIS consists of employees at the four partner universities and Sigma2, a collaboration that pools competencies, resources and services. All OS and software operations and monitoring are conducted by NRIS, with full access to the vendors' hardware monitoring. See the illustration below.
Project objectives and desired outcome
Beyond the procurement itself, the objective of the project is to identify the requirements for a co-hosting facility at a professional, secure location that allows the national e-infrastructure to be operated sustainably and cost-effectively. The time-related objective is to serve the requirements of the next generation of national storage systems (NIRD), with delivery of equipment in Q4 2021, and the expected HPC system in Q4 2022.
Project scope and exclusions
The project targets infrastructure within Sigma2's remit as shared national resources: the future national storage systems (NIRD and potential successors), backup of those storage systems, and the expected future national high-performance computing systems. Hardware operated locally at the universities for local users, as well as existing systems, is not included in the scope of procuring a co-hosting location.
Project considerations
There is a desire to motivate vendors to provide a sustainable and environmentally friendly housing service while maintaining the cost-effectiveness and security required by the national e-infrastructure. The project may favour services that use less energy for heat removal from the infrastructure, sell the surplus heat in return for a lower cost of service, or reuse the heat for societal benefit by enabling new business or innovation opportunities. How these considerations are weighted will be proposed by the project group in the offer selection matrix, following directions from the Sigma2 board and with weights reflecting the importance of each consideration, and will be finally approved by the Sigma2 board.
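To illustrate how such a weighted selection matrix could work in practice, the sketch below computes a weighted total score per offer. The criteria, weights and scores shown are hypothetical placeholders only; the actual criteria and weights are defined by the project group and approved by the Sigma2 board.

```python
# Illustrative sketch only: hypothetical criteria, weights and scores,
# not the actual selection matrix used in the procurement.
def score_offer(scores: dict, weights: dict) -> float:
    """Weighted sum of per-criterion scores (e.g. 0-10) for one offer."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

weights = {"cost": 0.4, "security": 0.3, "energy_reuse": 0.2, "sustainability": 0.1}
offer_a = {"cost": 7, "security": 9, "energy_reuse": 8, "sustainability": 6}
offer_b = {"cost": 9, "security": 7, "energy_reuse": 5, "sustainability": 7}

print(f"Offer A: {score_offer(offer_a, weights):.2f}")
print(f"Offer B: {score_offer(offer_b, weights):.2f}")
```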
Contact information
Frequently asked questions (FAQ)
A: No; so far, only new systems will be installed at a new location.
A: Yes, HPC and NIRD will be traditional procurements through Doffin. The colocation will not go through Doffin but will be by invitation.
A: HPC will be liquid-cooled: direct cooling in high-density racks of up to 100 kW, or immersion cooling in tanks (also up to 100 kW).
A: We expect air cooling for NIRD, with liquid-driven chilled-door racks as an alternative.
A: We simply do not know yet, as the HPC procurement process will start in late 2021, but the colocation needs to provide up to 4 MW for the HPC system.
A: In the first year of production, we expect around 100 kW for tiered storage (active) and another 100 kW for the data lake (passive); a rough annual-energy estimate based on these figures is sketched after the FAQ.
A: All efforts to reduce energy usage are important, whether by using no energy for cooling, reusing waste heat, or a combination of these, as this benefits the environment, innovation and public spending. These parameters will be considered during the procurement process.
A: High-speed redundant fibre networks at 100 Gbit/s, and access to the Uninett research network, from which most of the users connect.
A: For the HPC compute nodes only, no UPS or generator support is needed. For all other equipment, redundancy equivalent to Uptime Institute Tier III is required.
A: Yes, possibly, for the HPC compute nodes, in combination with automatic transfer switches (ATS).
A: For the HPC compute nodes only, redundant pumps in the colocation's own liquid cooling loops. For all other equipment, redundancy equivalent to Uptime Institute Tier III is required.
A: No, the Tier I requirements are related to cooling, to avoid equipment overheating in case of pump failures. Redundant electricity from two providers with ATS is a plus but not required.
A: The NIRD 2020 storage system has not been chosen yet, and the power consumption requirements for NIRD 2020 are based on expectations and are therefore not exact.
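As a rough, back-of-the-envelope illustration of the power figures quoted in this FAQ (up to 4 MW for the future HPC system and about 100 kW each for tiered storage and the data lake in the first year), the sketch below converts these loads into annual energy. The utilisation factor and cooling overhead are illustrative assumptions, not figures from the procurement.

```python
# Rough annual-energy sketch. The IT loads come from the FAQ above; the
# utilisation factor and cooling overhead are illustrative assumptions.
HOURS_PER_YEAR = 8760

hpc_it_kw = 4000        # up to 4 MW for the future HPC system
nird_it_kw = 100 + 100  # tiered storage (active) + data lake (passive), first year

utilisation = 0.7       # assumed average load relative to peak
cooling_overhead = 0.1  # assumed extra energy for heat removal (lower is better)

it_energy_mwh = (hpc_it_kw + nird_it_kw) * utilisation * HOURS_PER_YEAR / 1000
total_energy_mwh = it_energy_mwh * (1 + cooling_overhead)

print(f"IT energy per year:       {it_energy_mwh:,.0f} MWh")
print(f"Facility energy per year: {total_energy_mwh:,.0f} MWh")
```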