GPU-powered infrastructure
Our comprehensive lineup of NVIDIA GPUs, including the P100, H100, L4, L40S, and GH200, covers a wide range of computing needs. Harness the speed and efficiency of Graphics Processing Units (GPUs) for parallelized workloads, whether on individual instances or on supercomputers.
Available GPU Instances
H100 PCIe GPU
€2.52/hour (~€1387/month)
Accelerate your model training and inference with the most advanced AI chip on the market!
RENDER GPU
€1.24/hour (~€891/month)
Dedicated Tesla P100s for all your Machine Learning & Artificial Intelligence needs.
L4 GPU
Available in Q1 2024
Optimize the costs of your AI infrastructure with a versatile entry-level GPU.
L40S GPU
Available in H1 2024
Expand capacity for AI or run a mix of workloads, including visual computing, with the universal L40S GPU.
Available GPU-powered infrastructure
Nabu 2023
127 NVIDIA DGX H100
Build the next Foundation Model with Nabu 2023, one of the fastest and most energy-efficient supercomputers in the world.
Jero 2023
2 NVIDIA DGX H100
Fine-tune Transformer models and deploy them on Jero 2023, the 2-DGX AI supercomputer that can scale up to 16 nodes (see the sketch after this list).
Grace Hopper
Available in 2024
The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace CPU and the H100 Tensor Core GPU for an order-of-magnitude performance leap for large-scale AI and HPC.
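For the fine-tuning workflow mentioned above for Jero 2023, here is a minimal sketch using the Hugging Face Transformers Trainer. It assumes the transformers and datasets libraries are installed on the instance; the model (distilbert-base-uncased), dataset (imdb), and hyperparameters are illustrative placeholders, not platform defaults.

```python
# Minimal fine-tuning sketch. Model, dataset, and hyperparameters are
# illustrative placeholders, not defaults of any specific instance.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")                      # example text-classification dataset
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Pad to a fixed length so the default collator can batch the examples.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="checkpoints",
    per_device_train_batch_size=32,  # per-GPU batch size
    num_train_epochs=1,
    fp16=True,                       # mixed precision engages the Tensor Cores
)

Trainer(model=model, args=args, train_dataset=tokenized["train"]).train()
```

On a single node the Trainer uses every visible GPU automatically; spreading the job across several DGX nodes would typically be done by launching the same script with torchrun or a similar distributed launcher.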
Choose the right machine
Offers | Render GPU Instance | L4 GPU Instance | L40S GPU Instance | H100 PCIe GPU Instance | Jero & Nabu 2023 | GH200 Grace Hopper™ |
--- | --- | --- | --- | --- | --- | --- |
NVIDIA GPU | P100 16GB PCIe 3 | L4 24GB PCIe 4 | L40S 48GB PCIe 4 | H100 80GB PCIe 5 | H100 80GB Tensor Core SXM5 | NVIDIA GH200 Grace Hopper™ Superchip |
NVIDIA architecture | Pascal (2016) | Ada Lovelace (2022) | Ada Lovelace (2022) | Hopper (2022) | Hopper (2022) | GH200 Grace Hopper™ architecture |
Type | Instances | Instances | Instances | Instances | Supercomputer | Instance to supercomputer |
Performance (FP16 Tensor Core training) | No Tensor Cores (FP16 not stable) | Up to 242 TFLOPS | Up to 362 TFLOPS | Up to 1513 TFLOPS | Up to 2010 PFLOPS | Up to 989 peak TFLOPS per GH200 |
Specifications | - 10 vCPUs (Skylake) - 42GB RAM DDR3 - 400GB NVMe - Boot on Block - 1 Gbps | under construction | under construction | - 24 vCPUs (Zen4) - 240GB RAM DDR5 - 3TB NVMe scratch - Boot on Block - 10 Gbps | - Up to 14,224 CPU cores (Zen4) - 254TB RAM - DDN low-latency storage - 400 Gbps | - GH200 Superchip with 72 Arm Neoverse V2 cores, 480GB of LPDDR5X DRAM and 96GB of HBM3 GPU memory, fully merged for up to 576GB of globally usable memory - 1.92TB of scratch storage - Up to 25 GBps of networking performance |
Price | €1.24/hour (~€891/month) | coming soon | coming soon | €1.9/hour (~€1387/month) | depending on your project | coming soon |
Format & Features | | Multi-GPU (under construction) | Multi-GPU (under construction) | - Multi-GPU (up to 2) - Multi-Instance GPU (MIG) | - Up to 127 DGX servers - Customizable for your project | Single chip, up to DGX GH200 architecture (for larger setups, contact us) |
Use cases | - Best price/perf ratio for computer vision - 3D graphics - Image/video encoding/decoding (~4K) | - Medium DL model training - Best price/perf ratio for inference of S/M/L DL models - Small LLM fine-tuning (PEFT) - 3D graphics - Image/video encoding/decoding (8K) | - Large DL model training - Large DL model inference - Medium LLM fine-tuning (PEFT) & inference - 3D graphics - Image/video processing (8K encoding/decoding) | - Extra-large DL model training - Extra-large DL model inference - Large LLM fine-tuning (PEFT) and inference | - Extra-large DL model training - Extra-large DL model inference - Large LLM training - HPC | - Extra-large LLM and DL model inference - HPC |
What they are not made for | LLMs | LLM training | | 3D graphics | 3D graphics | 3D graphics, training |
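The training figures in the comparison above refer to FP16 Tensor Core throughput, which in practice is reached through mixed-precision training. Below is a minimal PyTorch sketch, assuming a CUDA-enabled PyTorch installation on the instance; the tiny model, batch shapes, and optimizer settings are placeholders rather than a benchmark configuration.

```python
# Minimal mixed-precision training step (FP16 autocast + gradient scaling).
# Model, shapes, and optimizer settings are illustrative placeholders.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(64, 1024, device=device)           # dummy batch
targets = torch.randint(0, 10, (64,), device=device)    # dummy labels

for _ in range(10):                                      # a few training steps
    optimizer.zero_grad(set_to_none=True)
    # Run the forward pass in FP16 so the GPU's Tensor Cores are used.
    with torch.autocast(device_type=device, dtype=torch.float16, enabled=(device == "cuda")):
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()                        # loss scaling avoids FP16 underflow
    scaler.step(optimizer)
    scaler.update()
```

On Hopper-generation GPUs the same pattern also works with torch.bfloat16, which removes the need for gradient scaling.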