Services roadmaps

On this page you will find roadmaps where you can follow the development of our services.

Now

  • High-Level Training Program Development

    We will continuously develop user training activities to ensure that end users have the knowledge and skills required to use Sigma2 national services effectively. This will contribute to building national AI skills and to the planning of an AI development campus. The work includes preparing instructional materials, conducting sessions, and gathering user feedback for continuous improvement.

  • Access to Quantum Computing

    Sigma2 will provide Norwegian researchers with easy access to EuroHPC quantum computers, including LUMI-Q, as well as local quantum emulators for testing and learning. Access will be available through the same interfaces you already use (Waldur and Sigma2 cloud services), with clear documentation, examples, and training materials. You will be able to request access to quantum resources directly through the Sigma2 service portal, run quantum algorithms on LUMI-Q or experiment with them first on local emulators, combine classical HPC and AI workflows with quantum steps in a seamless way, and make use of educational notebooks, small demos, and ready-to-use templates that help you get started quickly. This work ensures that quantum computing becomes a natural extension of Sigma2’s services: easy to access, easy to learn, and fully integrated into your existing research workflow.
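
    As a first taste of the "emulator first" workflow, here is a minimal sketch using the open-source Qiskit SDK and its Aer emulator; the libraries and interfaces that Sigma2 will actually offer are still to be decided, so treat every name below as an assumption.

      # Minimal sketch, assuming Qiskit + Aer; the actual tooling is TBD.
      from qiskit import QuantumCircuit, transpile
      from qiskit_aer import AerSimulator

      # Build a 2-qubit Bell-state circuit.
      circuit = QuantumCircuit(2, 2)
      circuit.h(0)                      # Hadamard on qubit 0
      circuit.cx(0, 1)                  # entangle qubits 0 and 1
      circuit.measure([0, 1], [0, 1])

      # Run on a local emulator before requesting time on real hardware.
      emulator = AerSimulator()
      job = emulator.run(transpile(circuit, emulator), shots=1024)
      print(job.result().get_counts())  # e.g. {'00': 510, '11': 514}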

  • Introduce interactive compute partition on HPC

    We are restructuring how we handle lightweight, interactive workloads. By moving development and lightweight testing tasks off the login nodes and into a dedicated (possibly quota-free) partition, we ensure system stability while giving users a responsive environment for their day-to-day computing needs.

  • Support the Data Rescue initiative

    We will assist the Norwegian research community in safeguarding datasets currently stored abroad, which may be at risk of deletion or loss.

  • Introducing heatmaps for NIRD projects in MAS

    To improve visibility and decision-making at the project level, we will introduce heatmaps in MAS that visually represent storage usage patterns. This feature will empower project leaders to better understand, manage, and optimize their data storage resources.
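
    As a rough illustration of the kind of view this could give, a storage-usage heatmap can be rendered from per-project, per-month usage figures; the data, project IDs, and plotting library below are purely hypothetical.

      # Illustrative sketch: per-project monthly storage usage as a heatmap.
      import numpy as np
      import matplotlib.pyplot as plt

      projects = ["NS1234K", "NS5678K", "NS9012K"]      # hypothetical IDs
      months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
      usage_tib = np.array([                            # TiB used per month
          [12, 14, 14, 30, 31, 33],
          [80, 80, 79, 78, 78, 77],
          [ 5,  9, 20, 41, 60, 88],
      ])

      fig, ax = plt.subplots()
      im = ax.imshow(usage_tib, cmap="YlOrRd", aspect="auto")
      ax.set_xticks(range(len(months)), labels=months)
      ax.set_yticks(range(len(projects)), labels=projects)
      fig.colorbar(im, label="Storage used (TiB)")
      plt.show()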

  • Bringing NAIC Services into the National Cloud

    The Norwegian Artificial Intelligence Cloud (NAIC) will move from pilot activities into stable, long-term national services delivered through Sigma2’s cloud infrastructure. For users, this means that NAIC services such as AI resource orchestration, learning and training resources, monitoring, and user support will become easier to access and more predictable to use as part of the national research infrastructure. Compute, storage, and identity services will be provided through Sigma2’s cloud, allowing researchers to run AI workloads without having to manage underlying infrastructure themselves. By integrating NAIC into the national cloud, users gain a consistent entry point, clearer usage conditions, and professional operational support, enabling both research and applied AI projects to scale smoothly from experimentation to production.

  • One Unified Portal for Cloud and AI Services

    We will introduce a single, unified online portal where researchers can access Sigma2’s cloud and AI services in one place. Through this portal, users will be able to request resources, manage projects, follow their usage, and launch ready-to-use research tools such as Jupyter notebooks, RStudio, and Galaxy without complex setup. The portal brings together services from the national research cloud and national AI initiatives, providing a consistent and easy-to-use experience across infrastructures. By offering one clear entry point with transparent access and familiar tools, this initiative lowers the barrier to using advanced computing resources and makes it easier for both new and experienced researchers to get started.

  • GPU support in our administrative system (MAS)

    We are upgrading our administrative system (MAS) to recognize GPU resources as a distinct resource type alongside CPU. This initiative ensures that GPU consumption - whether on national machines or external systems like LUMI - is allocated, tracked, and reported with the same visibility as traditional CPU resources.

  • Integrate our administrative system (MAS) with Puhuri

    Puhuri is the portal used to administer LUMI projects. We are working on integrating it more tightly with our existing administrative system (MAS). This will ensure a more seamless experience for Sigma2 users who combine LUMI with the national resources.

  • Implement data decommissioning policy

    While a formal data decommissioning policy is in place, consistent execution and follow-up have been lacking. A revision is needed to incorporate automated maintenance procedures that ensure reliable adherence to retention schedules.
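
    A minimal sketch of what such an automated retention check could look like; the record format and the retention rule below are assumptions made only for illustration.

      # Hypothetical sketch: flag datasets whose retention period has
      # expired, so decommissioning is scheduled rather than ad hoc.
      from datetime import date, timedelta

      RETENTION = timedelta(days=365)   # assumed grace period after project end

      projects = [                      # stand-in for records from MAS
          {"id": "NS1234K", "ended": date(2023, 6, 30)},
          {"id": "NS5678K", "ended": date(2025, 1, 15)},
      ]

      for p in projects:
          if date.today() > p["ended"] + RETENTION:
              print(f"{p['id']}: retention expired, schedule decommissioning")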

  • NIRD Research Data Archive development

    Use cases and functionalities that were not handled as part of the baseline product development during the Archive2021 project will be addressed in the NIRD Research Data Archive product development process. API development will be prioritized to allow metadata harvesting by metadata registries. Functionalities such as customized data ingestion, customized search, and reporting will also be addressed.
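
    If, for example, the API follows the widely used OAI-PMH harvesting protocol (an assumption on our part; the final API design is not fixed), a metadata registry could harvest records like this:

      # Illustrative only: OAI-PMH harvest of Dublin Core metadata.
      # The endpoint URL is hypothetical.
      import requests

      ENDPOINT = "https://archive.example.org/oai"
      resp = requests.get(ENDPOINT, params={
          "verb": "ListRecords",
          "metadataPrefix": "oai_dc",   # Dublin Core records
      })
      resp.raise_for_status()
      print(resp.text[:500])            # XML payload with dataset records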

  • Enable S3 access from HPC systems
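
    Details are not settled yet; purely as an illustration, object access from a compute node could look like the boto3 sketch below, where the endpoint, bucket, and credential handling are all assumptions.

      # Hypothetical sketch: read an object from NIRD S3 inside an HPC job.
      # Endpoint and bucket are invented; credentials would come from the
      # usual boto3 mechanisms (environment, config files, etc.).
      import boto3

      s3 = boto3.client("s3", endpoint_url="https://s3.nird.example.org")
      s3.download_file("my-bucket", "inputs/sample.nc", "/tmp/sample.nc")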

  • Enhancing the FAIRness of the NIRD Research Data Archive

    We will improve the interoperability and findability of the NIRD Research Data Archive to enhance its overall FAIRness, making research data easier to discover, access, and integrate across systems. The Research Data Archive will continue to evolve with new features and stronger support for the FAIR principles. Development will be co-designed with researchers and key stakeholders to ensure it meets community needs.

  • Enhance integration between NIRD and the HPC clusters

    Data cannot and should not be stored permanently on the HPC clusters. We will implement a fast, reliable, and intuitive data staging solution to simplify and accelerate data movement between the HPC clusters and NIRD mid- and long-term storage.

  • Data-centric model strategies

    To fully utilize advances in infrastructure and new system capabilities, the data-centric model must be consistently maintained and periodically updated. Regular updates keep the model responsive to evolving technology, allowing users to harness new functionality, and keep it aligned with the latest operational improvements. The new model shall take into consideration the current and evolving service landscape, from Tier-2 (at the local universities) through Tier-1 (Betzy, Saga, Olivia) to Tier-0 (LUMI). The role of cloud storage in this picture will also be considered.

  • Open, confidential and restricted data services

    The data landscape is becoming more complex with the explosion of new technologies and research frontiers; AI/ML is emblematic of this shift. Research now produces a variety of data with differing requirements for access and protection. We will design and adapt services for data that require confidentiality but are not subject to the regulations governing sensitive personal health data.

Next

  • Enable storage solution for large language models

    We will investigate and implement solutions to support a Git-like filesystem and integrate it with the NIRD Central Data Library service to support large language model repositories.

  • Streamlined S3 account management

    We will simplify and centralize the S3 account management processes on NIRD. This strategic change will strengthen security protocols, establish uniform control, and drive down both administrative and operational overhead.

  • Streamlined quota management

    We will simplify and automate storage quota management, improving transparency and ensuring better control over resource usage on NIRD services (e.g. Data Peak and Data Lake).

  • Achieving the ISO 27001 certification

    We will strengthen our information security practices and build customer trust by obtaining ISO 27001 certification, an internationally recognized standard for Information Security Management Systems.

  • Introduce a common contribution model for all storage services

    We need a common storage contribution model covering all the services we offer, including HPC storage. The contribution model will be aligned with the data-centric model.

  • Establishing a New National Cloud Region for Research

    We will establish a new Sigma2-owned cloud region at the Lefdal Mine Datacenter to strengthen national capacity for research computing and AI services. This new region will provide modern, scalable cloud resources that improve availability, resilience, and performance, while reducing reliance on commercial cloud providers for publicly funded research. It will form a core foundation for future services, including the renewal of the current Service Platform and support for national AI initiatives, and will enable external-facing services such as dashboards, APIs, and interactive tools. By investing in national cloud capacity, Sigma2 ensures more predictable costs, better integration with existing research infrastructure, and a robust platform that can support growing demand from both academic and applied research communities.

  • Research Data Archive for confidential data

    The guiding principle is to make Norwegian scientific data as open as possible and as restricted as necessary. Finding mechanisms to archive, publish, and share large volumes of unstructured scientific data that carry requirements for higher security and access control remains a challenge. In consultation with domain experts and national providers, we will investigate and develop a comprehensive architectural design that supports both open research data and controlled access to sensitive and restricted research assets, ensuring privacy and compliance are baked into the design.

  • Provide a quantum computing service

    Sigma2 is expected to gain access to quantum computing resources in 2026 through the LUMI consortium (VLQ). This initiative concerns facilitating access to these resources.

  • More automatic administration of HPC quotas

    We are implementing a set of automated administrative protocols in our administrative system (MAS) to improve the user experience. By shifting routine tasks such as account management and quota monitoring to automated systems, we ensure that users are actively notified of relevant events and that resources remain accessible to active projects, while maintaining strict security and efficiency standards.

  • Modernising the NIRD Service Platform

    We will further develop and modernise the NIRD Service Platform to make it more flexible, cloud-like, and easier to use for researchers. This work focuses on improving reliability, scalability, and automation by aligning the platform more closely with modern cloud practices and integrating it into Sigma2’s unified service portal. Users will benefit from clearer access paths, better accounting and usage visibility, and a more self-service–oriented experience through the service marketplace. As part of this effort, existing services will be transitioned to a more robust and future-proof infrastructure, ensuring continuity while enabling gradual improvement and innovation. The result will be a more stable, transparent, and user-friendly platform that supports evolving research workflows and prepares the Service Platform for long-term sustainability.

  • Shared Access to AI Models Across National Services

    We will provide a shared service where researchers can easily find, reuse, and access commonly used AI and machine-learning models across Sigma2’s cloud and AI services. Instead of downloading the same models repeatedly for each project or location, users will be able to work with models that are already available close to where computation happens. This makes AI workflows faster, more efficient, and easier to reproduce, while reducing unnecessary data transfers. The service will support popular open models as well as project-specific ones, and will be integrated with national AI and cloud services so that models can be used consistently across experiments, training runs, and production workflows.
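
    As a sketch of the intended effect, pointing a standard tool at a shared, read-mostly model store avoids per-user re-downloads; the library and cache path below are assumptions for illustration.

      # Illustrative sketch: load a model from a shared on-site store
      # instead of downloading it per user. The path is hypothetical.
      from transformers import AutoModel, AutoTokenizer

      SHARED_CACHE = "/cluster/shared/ai-models"   # hypothetical shared store

      tok = AutoTokenizer.from_pretrained("bert-base-uncased",
                                          cache_dir=SHARED_CACHE)
      model = AutoModel.from_pretrained("bert-base-uncased",
                                        cache_dir=SHARED_CACHE)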

  • Consistent Research Tools Across Cloud, HPC, and Storage Services

    We will ensure that researchers can access the same set of core tools and environments across Sigma2’s services, whether they are working on cloud resources, the Service Platform, national storage, or supercomputers. This means that common analysis tools, workflows, and interfaces will behave consistently across systems, reducing the need to relearn tools or adapt scripts when moving between environments. By packaging and maintaining these tools in a coordinated way, researchers benefit from more predictable environments, easier collaboration, and smoother transitions between interactive analysis and large-scale computation. The result is a more coherent research experience, where tools follow the researcher across infrastructures instead of being tied to a single system.

  • Implement Data Insight service

    We will implement solutions that offer comprehensive insights to service owners, operators, and end users of the Sigma2 infrastructure. We aim to generate periodic and on-demand reports on usage patterns, metadata, data types, and more.
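
    For illustration only (the records and fields below are hypothetical), a periodic usage report might aggregate raw access logs per project and data type:

      # Hypothetical sketch: aggregate raw usage records into the kind of
      # per-project, per-data-type summary the service could produce.
      import pandas as pd

      records = pd.DataFrame({
          "project":    ["NS1234K", "NS1234K", "NS5678K"],
          "data_type":  ["netCDF", "CSV", "netCDF"],
          "bytes_read": [4.2e9, 1.1e8, 9.6e10],
      })
      report = records.groupby(["project", "data_type"])["bytes_read"].sum()
      print(report)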

  • Introduction of Research Objects

    We will address data management needs arising from steadily increasing data volumes in close collaboration with user communities and key stakeholders. We will focus on adopting Research Objects (ROs) as a foundation for improved, future-oriented data management. ROs will first be introduced on NIRD for selected pilot projects and later expanded to all projects.
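
    Research Objects are commonly exchanged as RO-Crates; assuming that flavour purely for illustration, packaging a dataset with the open-source ro-crate-py library could look like this:

      # Illustrative sketch using ro-crate-py; whether NIRD adopts
      # RO-Crate specifically is an assumption here.
      from rocrate.rocrate import ROCrate

      crate = ROCrate()
      crate.add_file("results.csv", properties={
          "name": "Model output",
          "encodingFormat": "text/csv",
      })
      crate.write("my-research-object")   # data + ro-crate-metadata.json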

Later

  • LUMI-AI Federation with Sigma2 Cloud Services

    We will connect Sigma2’s cloud services with LUMI-AI, Europe’s large-scale AI computing platform, so that Norwegian researchers can access and use LUMI-AI resources through the same national portals they already know. This integration will make it possible to request AI capacity, run workloads, and follow usage in a unified way, while securely combining cloud-based preparation and analysis with large-scale AI computation. By federating national cloud services with LUMI-AI, researchers gain access to AI-scale computing beyond what is available domestically, without needing to learn new systems. This strengthens Norway’s participation in EuroHPC, supports advanced AI research, and provides a seamless, secure user experience for hybrid AI and HPC workflows.

  • Increase NIRD performance

    The performance of NIRD must closely align with the performance requirements imposed by the Olivia machine. In addition, we anticipate increased demand as cloud resources scale up.

  • Implementing backup for NIRD Data Lake

    We will establish an on-demand backup service for a limited number of NIRD Data Lake tenants to strengthen data resilience and business continuity, ensuring critical information can be quickly recovered in case of loss, corruption, or system failure.

  • Introduction of data management solution for the Central Data Library

    We will enhance the NIRD Central Data Library (CDL) service by integrating a robust data management solution that improves data accessibility, consistency, and lifecycle governance. This effort will extend the service’s capabilities, streamline data operations, and provide a foundation for future scalability and advanced analytics.

  • Implementing data deduplication

    We will step up efforts to identify and remove duplicated data to improve storage efficiency, performance, and data quality across NIRD and other storage systems.
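
    The classic first step, sketched below for illustration only, is content hashing: files with identical checksums are duplicate candidates. Deduplication at NIRD scale would work on chunks and run incrementally, but the principle is the same.

      # Sketch of duplicate detection by content hash.
      import hashlib
      from collections import defaultdict
      from pathlib import Path

      def sha256(path: Path) -> str:
          h = hashlib.sha256()
          with path.open("rb") as f:
              for block in iter(lambda: f.read(1 << 20), b""):
                  h.update(block)
          return h.hexdigest()

      by_hash = defaultdict(list)
      for p in Path("/data/project").rglob("*"):   # hypothetical tree
          if p.is_file():
              by_hash[sha256(p)].append(p)

      duplicates = {h: ps for h, ps in by_hash.items() if len(ps) > 1}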

  • Secure Computing Across Trusted Research Environments and Supercomputers

    We will further strengthen secure computing for sensitive data by working closely with existing Trusted Research Environments (TREs) and national partners to make more high-performance computing capacity available when local resources are not sufficient. For researchers, this means that if data is hosted in a TRE such as SAFE and the available compute capacity there is limited, it will be possible to securely submit approved workloads to larger national systems, such as the Colossus supercomputer within TSD, without moving data or compromising security. In the longer term, this work also prepares the ground for accessing additional secure compute capacity, including European systems, when national resources reach their limits. By connecting TREs, cloud services, and HPC in a coordinated and trusted way, this initiative ensures that researchers working with sensitive data can scale their analyses, run more demanding workloads, and complete projects faster while fully respecting data protection requirements, governance models, and collaborative responsibilities across the national e-infrastructure.

  • Introduce interactive HPC service

    This is a further development of the interactive compute partition, allowing subscriptions to the partition at the organizational level, e.g. via Feide.

  • Central Service Orchestrator and Intelligent Job Scheduling

    We will introduce a central service that automatically helps researchers run their work on the most suitable computing resources, without requiring them to decide where or how jobs should run. From a single portal, users will be able to launch interactive tools and analyses, while the system takes care of selecting the appropriate platform, whether that is cloud resources, the Service Platform, or national supercomputers. For example, a researcher working in an interactive notebook can choose to scale up their work, and the service will seamlessly move the workload to a suitable supercomputer when needed. This removes manual steps, reduces complexity, and allows researchers to focus on their science rather than infrastructure choices. By providing a single, intelligent entry point to multiple computing environments, this service makes advanced computing easier to use, more efficient, and more accessible to a wider range of research communities.
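
    To make the idea concrete, the routing decision could be as simple as the toy sketch below; the platform names and thresholds are invented for illustration.

      # Toy sketch of the routing logic a central orchestrator might apply.
      def choose_platform(gpus: int, cpu_hours: float, interactive: bool) -> str:
          if interactive and gpus == 0 and cpu_hours < 10:
              return "cloud"             # light interactive work
          if gpus > 0 or cpu_hours >= 1000:
              return "hpc"               # large or accelerated workloads
          return "service-platform"      # everything in between

      print(choose_platform(gpus=0, cpu_hours=2, interactive=True))  # cloud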

  • Sustainable governance

    We will review and propose adjustments to the sustainability, support model, and governance of the new Research Data Archive. The aim is to explore opportunities to better utilize national competencies and resources, with the potential to contribute to a more robust and efficient long-term service.

  • Discuss collaborations and synergies with DataverseNO

    We will establish a framework for tighter collaboration with DataverseNO, a platform popular among researchers and curators for archiving small datasets. Users will be guided and supported in scaling from one service to the other.