
Tailored for your needs

Whether you need to develop Foundation Models or run multiple large-scale training jobs, our custom-built GPU clusters let you define the exact hardware and resources (GPUs, storage, network) tailored to your machine learning workload.

Hosted in Europe

Keep control of your AI development under strict European data regulations. Your data stays secure and compliant on Scaleway's infrastructure, safe from extraterritorial access throughout the machine learning lifecycle.

Proven by industry leaders

Moshi, Kyutai's revolutionary AI voice assistant, and Mixtral, a highly efficient mixture-of-experts model built by Mistral AI, were both trained on Nabu 2023, the first Custom-built Cluster. At its release, Mixtral outperformed existing closed- and open-weight models across most benchmarks.

Leaders of the AI industry are using these Clusters

Mistral AI

As Arthur Mensch put it on the Master Stage at ai-PULSE 2023: "We're currently working on Scaleway SuperPod, which is performing exceptionally well." Mistral AI used Nabu to build Mixtral, a highly efficient mixture-of-experts model that, at its release, outperformed existing closed- and open-weight models across most benchmarks, offering superior performance with fewer active parameters. The collaboration with Scaleway enabled Mistral to scale its training efficiently, allowing Mixtral to achieve groundbreaking results in record time.

Examples of Custom-built Clusters we can build for you

Custom-built Cluster name | Number of GPUs                     | Max FP8 Tensor Core throughput
Nabuchodonosor 2023       | 1,016 H100 Tensor Core GPUs (SXM5) | Up to 4,021.3 PFLOPS
Jeroboam 2023             | 16 H100 Tensor Core GPUs (SXM5)    | Up to 63.2 PFLOPS
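The headline throughput figures follow directly from per-GPU peak performance. Below is a minimal sketch of that arithmetic, assuming NVIDIA's commonly quoted peak of about 3,958 TFLOPS of FP8 Tensor Core throughput (with sparsity) per H100 SXM5; small rounding differences against the table are expected.

```python
# Rough arithmetic behind the table's headline numbers.
# Assumes NVIDIA's commonly quoted H100 SXM5 peak of ~3,958 TFLOPS
# FP8 Tensor Core throughput (with sparsity); sustained throughput
# in real training is considerably lower.
H100_FP8_TFLOPS = 3958  # per-GPU peak, with sparsity

clusters = {
    "Nabuchodonosor 2023": 1016,  # number of H100 GPUs
    "Jeroboam 2023": 16,
}

for name, gpus in clusters.items():
    pflops = gpus * H100_FP8_TFLOPS / 1000  # TFLOPS -> PFLOPS
    print(f"{name}: {gpus} GPUs -> up to {pflops:,.1f} PFLOPS FP8")

# Nabuchodonosor 2023: 1016 GPUs -> up to 4,021.3 PFLOPS FP8
# Jeroboam 2023: 16 GPUs -> up to 63.3 PFLOPS FP8
```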

Nabu 2023

  • CPU: Dual Intel® Xeon® Platinum 8480C processors (112 cores per node)

  • Total CPU cores: 14,224 cores

  • GPU: 1,016 NVIDIA H100 Tensor Core GPUs (SXM5)

  • Total GPU memory: 81,280 GB

  • Processor frequency: up to 3.80 GHz

  • Total RAM: 254 TB

  • Storage type: 1.8 PB of DDN a3i low-latency storage

  • Storage throughput (aggregate): 2.7 TB/s read and 1.95 TB/s write

  • Inter-GPU bandwidth: InfiniBand 400 Gb/s

Jero 2023

  • CPU: Dual Intel® Xeon® Platinum 8480C processors (112 cores per node)

  • Total CPU cores: 224 cores

  • GPU: 16 NVIDIA H100 Tensor Core GPUs (SXM5)

  • Total GPU memory: 1,280 GB

  • Processor frequency: up to 3.80 GHz

  • Total RAM: 4 TB

  • Storage type: 64 TB of DDN a3i low-latency storage

  • Inter-GPU bandwidth: InfiniBand 400 Gb/s
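Both spec sheets are straightforward multiples of a single NVIDIA DGX H100 node (8× H100 80 GB, dual 56-core Xeon Platinum 8480C, 2 TB of RAM). Here is a minimal sketch of that derivation; the node counts (127 and 2) are inferred from the GPU totals above, not stated by Scaleway.

```python
# Derive cluster totals from per-node NVIDIA DGX H100 specs.
# Per-node figures are NVIDIA's published DGX H100 configuration;
# node counts are inferred from the GPU totals in the spec lists.
GPUS_PER_NODE = 8          # H100 SXM5 GPUs per DGX H100
GPU_MEM_GB = 80            # HBM3 per GPU
CPU_CORES_PER_NODE = 112   # dual Intel Xeon Platinum 8480C (2 x 56)
RAM_TB_PER_NODE = 2        # system memory per DGX H100

for name, nodes in [("Nabu 2023", 127), ("Jero 2023", 2)]:
    gpus = nodes * GPUS_PER_NODE
    print(f"{name}: {gpus:,} GPUs, "
          f"{gpus * GPU_MEM_GB:,} GB GPU memory, "
          f"{nodes * CPU_CORES_PER_NODE:,} CPU cores, "
          f"{nodes * RAM_TB_PER_NODE} TB RAM")

# Nabu 2023: 1,016 GPUs, 81,280 GB GPU memory, 14,224 CPU cores, 254 TB RAM
# Jero 2023: 16 GPUs, 1,280 GB GPU memory, 224 CPU cores, 4 TB RAM
```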

Built with the most high-end technologies for AI

NVIDIA H100 Tensor Core GPUs, the best engines for AI

Our Custom-built Clusters Nabu and Jero 2023 are built from NVIDIA DGX H100 systems with NVIDIA H100 80 GB Tensor Core GPUs (SXM5). They achieve lightning-fast multi-node scaling for AI thanks to their latest-generation GPUs:

  • Hopper architecture
  • A chip with 80 billion transistors on an 814 mm² die
  • 4th-generation Tensor Cores, up to 6x faster than the A100's Tensor Cores
  • A Transformer Engine delivering up to 30x faster AI inference on language models compared to the prior-generation A100
  • 2nd-generation secure MIG, with up to 7 isolated tenants per GPU
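As an illustration (not Scaleway tooling), here is a minimal PyTorch sketch that confirms a Hopper-class GPU is present and runs a BF16 matmul, the kind of operation the 4th-generation Tensor Cores accelerate:

```python
import torch

# Check for a Hopper-class GPU (compute capability 9.x) and run a
# BF16 matmul that dispatches to the Tensor Cores. Illustrative only.
assert torch.cuda.is_available(), "no CUDA device visible"
major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: sm_{major}{minor}")
if major >= 9:
    print("Hopper architecture detected (H100 class)")

# BF16 GEMM: executed on Tensor Cores on H100
a = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
b = torch.randn(8192, 8192, device="cuda", dtype=torch.bfloat16)
c = a @ b
torch.cuda.synchronize()
print("matmul output:", tuple(c.shape), c.dtype)
```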

NVIDIA ConnectX-7 and Quantum-2 networks for seamless scalability

Thanks to the InfiniBand NDR interconnect (400 Gb/s per link), each 8-GPU compute node offers 3.2 Tb/s (8 × 400 Gb/s) of bandwidth to every other node on a fully non-blocking network architecture.

GPUDirect RDMA technology accelerates direct GPU-to-GPU communication across all nodes of the cluster over InfiniBand, enabling:

  • 15% faster deep learning recommendation workloads,
  • 17% faster NLP,
  • 15% faster fluid dynamics simulations,
  • 36% lower power consumption.

DDN Storage made for HPC and co-developed with NVIDIA for artificial intelligence

The Custom-built Clusters benefit from DDN a3i storage optimized for ultra-fast computing, delivering:

  • over 2.7 TB/s read throughput
  • over 1.9 TB/s write throughput
  • a write speed of over 15 GB/s per DGX system

This throughput makes regular checkpointing practical, protecting long training runs.
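That read/write headroom is what makes frequent checkpointing cheap. Here is a minimal PyTorch sketch of a checkpointing loop; the mount point /mnt/ddn and the 500-step interval are illustrative assumptions, not Scaleway defaults:

```python
import os
import torch

# Illustrative training-loop checkpointing; /mnt/ddn is a hypothetical
# mount point for the cluster's DDN a3i filesystem.
CKPT_DIR = "/mnt/ddn/checkpoints"
CKPT_EVERY = 500  # steps; tune to your failure-recovery budget
os.makedirs(CKPT_DIR, exist_ok=True)

model = torch.nn.Linear(1024, 1024)  # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(1, 2001):
    # ... forward / backward / optimizer.step() would go here ...
    if step % CKPT_EVERY == 0:
        torch.save(
            {
                "step": step,
                "model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
            },
            f"{CKPT_DIR}/ckpt_{step:07d}.pt",
        )
```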

SLURM for comprehensive management

Benefit from comprehensive cluster management with Slurm, an open-source cluster management and job scheduling system for Linux clusters.
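In practice, a training task launched with srun can discover its place in the job from the environment variables Slurm sets. Here is a minimal PyTorch sketch; the hard-coded master address node-001 is a hypothetical hostname (real jobs usually derive it from SLURM_NODELIST):

```python
import os
import torch
import torch.distributed as dist

# Join a multi-node process group using the environment Slurm sets
# for each task launched with `srun`. Illustrative only.
rank = int(os.environ["SLURM_PROCID"])        # global rank of this task
world_size = int(os.environ["SLURM_NTASKS"])  # total tasks in the job
local_rank = int(os.environ["SLURM_LOCALID"]) # rank within this node

# MASTER_ADDR is usually derived from SLURM_NODELIST
# (e.g. via `scontrol show hostnames`); hard-coded here for brevity.
os.environ.setdefault("MASTER_ADDR", "node-001")
os.environ.setdefault("MASTER_PORT", "29500")

torch.cuda.set_device(local_rank)
dist.init_process_group("nccl", rank=rank, world_size=world_size)
print(f"rank {rank}/{world_size} up (local rank {local_rank})")
dist.destroy_process_group()
```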

Numerous AI applications and use cases

Generative AI

Generates new content such as images, text, audio, or code. It autonomously produces novel and coherent outputs, expanding the realm of AI-generated content beyond replication or prediction.
With models and algorithms specialized in:

  • Image generation
  • Text generation with Transformer models, also called LLMs (Large Language Models), such as GPT-2 (see the sketch after this list)
  • Code generation
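As a concrete example of the text-generation bullet above, here is a minimal sketch using the Hugging Face transformers pipeline with the publicly available GPT-2 checkpoint (an illustration of the model family, not Scaleway's stack):

```python
# Minimal text generation with GPT-2 via Hugging Face transformers.
# Requires `pip install transformers torch`; runs on CPU or GPU.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Large language models trained on GPU clusters can",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```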