Flexible sizing
Scale your machine learning training with a flexible on-demand GPU cluster. Choose the exact capacity you need, from 2 to 127 nodes, so you never overcommit and pay only for what you use.
Boost your AI projects with on-demand access to a scalable GPU cluster
Train your models with NVIDIA H100 Tensor Core GPUs and Spectrum-X interconnects, ensuring seamless, high-performance distributed AI training with zero interruptions.
Use the cluster for as long as you need, from one week to a few months. You decide when to start and stop, without the burden of long-term contracts. Ideal for temporary or bursty AI workloads.
DC5, in PAR2 region, is one of Europe's greenest data centers, powered entirely by renewable wind and hydro energy (GO-certified) and cooled with ultra-efficient free and adiabatic cooling. With a PUE of 1.16 (vs. the 1.55 industry average), it slashes energy use by 30-50% compared to traditional data centers.
Reserve a cluster sized to your needs, from 16 to 504 GPUs, to secure your access to efficient NVIDIA H100 Tensor Core GPUs.
NVIDIA HGX H100 with NVLink and the Spectrum-X network eliminates the key communication bottleneck between GPUs and is one of the top solutions on the market for running distributed training.
NVIDIA Spectrum-X, the latest networking technology developed by NVIDIA, enables us to build multi-tenant clusters hosted in the same adiabatic data center.
The emerging class of trillion-parameter AI models requires months to train, even on supercomputers. A GPU Cluster On-Demand enables users to compress this time and complete training within hours, thanks to high-speed, seamless communication between every GPU in the cluster.
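The time compression above can be sketched with simple arithmetic. This is an illustrative back-of-the-envelope model, not a benchmark: the 90% scaling efficiency is an assumption (near-linear scaling is what interconnects like NVLink and Spectrum-X aim to preserve), and the 2,000 GPU-day job size is hypothetical.

```python
# Illustrative sketch: how wall-clock training time shrinks as GPUs are added,
# assuming near-linear scaling. The 0.9 efficiency factor is an assumption.

def training_days(total_gpu_days: float, n_gpus: int, efficiency: float = 0.9) -> float:
    """Wall-clock days to finish a job requiring `total_gpu_days` of compute."""
    return total_gpu_days / (n_gpus * efficiency)

# A hypothetical job worth 2,000 GPU-days, at the cluster's size range:
print(round(training_days(2000, 16), 1))   # ≈ 138.9 days on 16 GPUs
print(round(training_days(2000, 504), 1))  # ≈ 4.4 days on 504 GPUs
```

The same total compute completes in days instead of months once the whole cluster works on it, which is the point of the high-speed GPU-to-GPU fabric.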
Model weights have grown so large in recent LLMs that clusters of GPUs, connected via high-speed interconnects, are required just to load a model into memory for serving.
The raison d'être of GPU Cluster On-Demand is to eliminate this bottleneck and enable efficient inference of these gigantic models.
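A quick sizing sketch shows why a single GPU is not enough. The 80 GB of HBM per H100 is a published spec; the fp16 precision (2 bytes per parameter) and ~20% memory overhead for activations and KV cache are assumptions for illustration.

```python
import math

# Back-of-the-envelope sizing: how many 80 GB H100 GPUs are needed
# just to hold a model's weights for inference.
# Assumptions: fp16 weights (2 bytes/param), ~20% overhead for runtime state.

def min_gpus_for_weights(n_params: float,
                         bytes_per_param: int = 2,
                         gpu_mem_gb: int = 80,
                         overhead: float = 0.2) -> int:
    """Minimum GPU count whose aggregate memory fits the weights."""
    weights_gb = n_params * bytes_per_param / 1e9
    usable_gb = gpu_mem_gb * (1 - overhead)
    return math.ceil(weights_gb / usable_gb)

print(min_gpus_for_weights(70e9))   # 70B params: 140 GB of weights → 3 GPUs
print(min_gpus_for_weights(1e12))   # 1T params: 2 TB of weights → 32 GPUs
```

Even before considering throughput, a trillion-parameter model simply cannot be loaded without dozens of interconnected GPUs, hence the need for a cluster-scale fabric.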
€2.73/hour (~€1,992.90/month)
Accelerate your model training and inference with the most high-end AI chip on the market!
€1.4/hour (~€1,022/month)
Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than L4 and cheaper than H100 PCIe.
€0.75/hour (~€548/month)
Optimize the costs of your AI infrastructure with a versatile entry-level GPU.
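The monthly figures above follow from the hourly rates under a 730-hour month (365 × 24 / 12), an assumption about the billing convention rather than a stated policy:

```python
# Sketch of how the "~€/month" figures follow from the hourly rates,
# assuming a 730-hour month (365 * 24 / 12) of continuous use.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate_eur: float) -> float:
    """Approximate monthly cost for an instance running 24/7."""
    return hourly_rate_eur * HOURS_PER_MONTH

print(monthly_cost(2.73))  # ≈ 1992.9 → ~€1,992.90/month
print(monthly_cost(1.40))  # ≈ 1022.0 → ~€1,022/month
print(monthly_cost(0.75))  # ≈ 547.5  → ~€548/month
```

Since billing is hourly, workloads that run only part of the month cost proportionally less.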