GPU Cluster On Demand

Rent a GPU cluster for a few weeks to speed up training jobs that need the aggregate GPU memory of multiple nodes, and to enable inference of the largest models

From 16 to 504 GPUs to support your development

Reserve a cluster sized to your needs, from 16 to 504 GPUs, to secure access to high-performance NVIDIA H100 Tensor Core GPUs.

Fast networking and GPU-to-GPU communication for distributed training

NVIDIA HGX H100 with NVLink and Spectrum-X networking relieves the key communication bottleneck between GPUs and is one of the top solutions on the market for running distributed training.
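
To give a concrete flavour of the traffic these interconnects carry, here is a minimal sketch (illustrative only, not part of the product) of an NCCL all-reduce across the GPUs of a cluster using PyTorch; the payload size and launch command are assumptions.

    # all_reduce_check.py - minimal sketch, assuming PyTorch with CUDA and NCCL available.
    # Launch on each node with, for example:
    #   torchrun --nnodes=<nodes> --nproc_per_node=<gpus per node> all_reduce_check.py
    import os
    import torch
    import torch.distributed as dist

    def main():
        dist.init_process_group(backend="nccl")        # NCCL rides on NVLink and the cluster fabric
        local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
        torch.cuda.set_device(local_rank)

        # 1 GiB of float32 values, a stand-in for gradients exchanged during training
        payload = torch.ones(256 * 1024 * 1024, device="cuda")
        torch.cuda.synchronize()

        dist.all_reduce(payload, op=dist.ReduceOp.SUM) # the collective that dominates data-parallel training
        torch.cuda.synchronize()

        if dist.get_rank() == 0:
            print(f"all-reduce completed across {dist.get_world_size()} GPUs")
        dist.destroy_process_group()

    if __name__ == "__main__":
        main()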

Private and secure environment

Spectrum-X, the latest networking technology developed by NVIDIA, enables us to build multi-tenant clusters hosted in the same adiabatic data center.

Use cases

The emerging class of trillion-parameter AI models takes months to train, even on supercomputers. A GPU Cluster On Demand lets you compress this time and complete training within hours, thanks to high-speed, seamless communication between every GPU in the cluster.
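
For illustration, a multi-node data-parallel training loop of the kind such a cluster runs might look like the sketch below, using PyTorch DistributedDataParallel; the model, data, and launch parameters are placeholders, not a prescribed setup.

    # ddp_train.py - illustrative sketch of multi-node data-parallel training with PyTorch DDP.
    # The model, dataset and hyperparameters below are placeholders, not a real workload.
    # Launch on every node with, for example:
    #   torchrun --nnodes=<N> --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 ddp_train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(4096, 4096).cuda()     # stand-in for a real model
        model = DDP(model, device_ids=[local_rank])    # gradients are all-reduced over the fabric
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(10):                         # stand-in for a real training loop
            x = torch.randn(32, 4096, device="cuda")
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()                            # backward triggers inter-GPU gradient sync
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()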

The weights of recent LLMs have grown so large that clusters of GPUs, connected via a high-speed interconnect, are required to load a model into memory for serving.
The raison d'être of GPU Cluster On Demand is to eliminate this bottleneck and enable efficient inference of these gigantic models.
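
One common pattern for this, shown here as a sketch only, is to shard a checkpoint across every visible GPU at load time with the Hugging Face transformers and accelerate libraries; the checkpoint name below is a placeholder.

    # sharded_inference.py - sketch of loading a model too large for one GPU across several GPUs.
    # Assumes the transformers and accelerate libraries are installed; the checkpoint name is a placeholder.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "your-org/your-very-large-model"      # placeholder checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        device_map="auto",            # shards layers across every visible GPU
        torch_dtype=torch.float16,    # halve the memory footprint versus float32
    )

    prompt = "The largest models need more than one GPU because"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))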

Clusters too big a step? Maybe start with a GPU Instance

H100 PCIe GPU Instance

€2.52/hour (~€1,387/month)

Accelerate your model training and inference with the highest-end AI chip on the market!

Learn more

L40S GPU Instance

€1.40/hour (~€1,022/month)

Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than the L4 and cheaper than the H100 PCIe.

Learn more

L4 GPU Instance

€0.75/hour (~€548/month)

Optimize the costs of your AI infrastructure with a versatile entry-level GPU.

Learn more