AI solutions
Scale your AI projects from A to Z with a sovereign and sustainable European Cloud Provider
Focus on building AI, not managing infrastructure
Scaling your AI workloads is a constant challenge
Your models are growing in complexity, but managing infrastructure shouldn't be a bottleneck. As workloads expand, your infrastructure needs to keep up without compromising performance or results.
Infrastructure management slows down your innovation
You're spending too much time setting up clusters, managing GPUs, and monitoring resources—time better spent fine-tuning models and advancing AI capabilities.
Unpredictable costs are draining your resources
Over-provisioning for peak performance or dealing with unexpected surges drives up costs, eating into your budget for innovation and scaling.
From Infrastructure-as-a-Service to Managed solutions, we've got you covered
Why choose Scaleway for your AI projects?
Boost Innovation Sustainably: Up to 50% Less Power
DC5 (PAR2) is one of Europe's greenest data centers: with a PUE of 1.16 (vs. the 1.55 industry average), it cuts energy use by 30-50% compared to traditional data centers.
Keep sensitive data in Europe
Scaleway stores all of its data in Europe; it is therefore not subject to any extraterritorial legislation and is fully compliant with the principles of the GDPR.
Benefit from a complete Cloud Ecosystem
We offer the full range of Cloud services: data collection, model creation, infrastructure development, delivery to end customers, and everything in between.
Clusters
When you need scalable resources for training or developing large models, our clusters provide the flexibility to adapt to your demands with or without long-term commitments. Choose between on-demand access for short-term needs or a custom-built solution for sustained, risk-free support.
On-Demand Cluster
Rent an On-Demand Cluster for a week, with no commitment, to unlock your team's ability to train or build large models efficiently. Explore your options and find the perfect setup before committing.
Custom-built Clusters
Design the solution you need to support your development for the years ahead. Choose the GPU, the storage, and the interconnect solution, and we do the rest: focus on OPEX while we handle CAPEX.
GPU Instances
Need occasional access to powerful GPU Instances for training or inference? Our range of NVIDIA GPU Instances gives you the flexibility to scale up as needed, perfect for specific workloads without investing in permanent infrastructure.
H100 PCIe GPU Instance
€2.73/hour (~€1,993/month)
Accelerate your model training and inference with the highest-end AI chip on the market!
Render GPU Instance
€1.24/hour (~€891/month)
Dedicated Tesla P100s for all your Machine Learning & Artificial Intelligence needs.
L4 GPU Instance
€0.75/hour (~€548/month)
Optimize the costs of your AI infrastructure with a versatile entry-level GPU.
L40S GPU Instance
€1.40/hour (~€1,022/month)
Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than L4 and cheaper than H100 PCIe.
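The monthly estimates above appear to follow from the hourly rate multiplied by a full month of continuous use. Below is a minimal sketch of that conversion, assuming a ~730-hour month; the exact number of hours behind the published estimates, and their rounding, are assumptions here.

```python
# Rough monthly-cost estimate from hourly GPU Instance prices.
# Assumption: ~730 hours per month (24 h x ~30.4 days) of continuous use;
# real invoices depend on actual usage and the provider's own rounding.
HOURS_PER_MONTH = 730

hourly_rates_eur = {
    "H100 PCIe": 2.73,
    "Render": 1.24,
    "L4": 0.75,
    "L40S": 1.40,
}

for instance, rate in hourly_rates_eur.items():
    monthly = rate * HOURS_PER_MONTH
    print(f"{instance}: ~{monthly:,.0f} EUR/month at full utilisation")
```

Running an Instance only when you need it scales this figure down linearly, which is the point of hourly billing.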
Model-as-a-Service
Deploy models without the hassle of managing infrastructure. Access pre-configured, serverless endpoints featuring the most popular AI models, billed per 1M tokens, or opt for hourly-billed dedicated infrastructure for more security and better cost predictability.
Managed Inference
Serve Generative AI models and answer prompts from European end consumers securely, on dedicated infrastructure billed per hour.
Generative APIs
Access pre-configured, serverless endpoints featuring the most popular AI models, all hosted in secure European data centers and priced per 1M tokens.
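To make the per-token model concrete, here is a minimal sketch of querying a serverless Generative APIs endpoint from Python. It assumes an OpenAI-compatible chat-completions interface; the base URL, model name, and environment variable below are illustrative placeholders to adapt to your own setup.

```python
# Minimal sketch: calling a serverless Generative APIs endpoint.
# Assumptions: an OpenAI-compatible chat-completions interface; the base
# URL, model name and environment variable are placeholders, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.scaleway.ai/v1",  # placeholder endpoint URL
    api_key=os.environ["SCW_SECRET_KEY"],   # your Scaleway API secret key
)

response = client.chat.completions.create(
    model="mistral-nemo-instruct-2407",     # placeholder model identifier
    messages=[{"role": "user", "content": "Explain what a PUE of 1.16 means in one sentence."}],
    max_tokens=150,
)

print(response.choices[0].message.content)
# Token counts are what the per-1M-token pricing is applied to.
print(response.usage.total_tokens, "tokens used by this call")
```

The Managed Inference option follows the same idea, except the client points at your dedicated deployment and billing is per hour of infrastructure rather than per token.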
Successful projects powered by Scaleway's infrastructure
Moshi from Kyutai
Moshi, Kyutai's revolutionary AI voice assistant, brings unprecedented vocal capabilities. Trained on Scaleway's high-performance Cluster and served with our L4 GPU Instances, Moshi excels at conveying emotions and accents with 300x codec compression. This setup enabled Moshi to process 70 different emotions and accents with ultra-low latency, allowing for seamless, human-like conversations. Thanks to this high-performance environment, Kyutai was able to achieve this breakthrough.
Mixtral from Mistral AI
Mistral AI used Nabu, Scaleway's AI supercomputer, to build its Mixtral model, a highly efficient mixture-of-experts model. At its release, Mixtral outperformed existing closed- and open-weight models across most benchmarks, offering superior performance with fewer active parameters and making it a major innovation in the field of AI. The collaboration with Scaleway enabled Mistral to scale its training efficiently, allowing Mixtral to achieve groundbreaking results in record time.
HPC, quantum computing and AI for medicine
Qubit Pharmaceutical uses Scaleway's GPU power to accelerate research into new medicines, using a combination of high-performance computing (HPC), quantum computing, and AI. This combination allows research teams to obtain the same test results with 3-5 times fewer staff and 20 times fewer tests than traditional methods, demonstrating how compute power can turbo-boost healthcare when applied correctly.
Benchmarking from Hugging Face
“We've benchmarked a bit of Nabu 2023, Scaleway AI Supercomputer and achieved very appropriate results compared to other CSPs. Results that could be greatly improved with a bit more tuning! We can't wait to benchmark the full 127 DGX nodes and see what performance we achieve.”
Guillaume Salou, ML Infra Lead at Hugging Face