
OpenAI-compatible APIs

Easily integrate with existing tools like OpenAI libraries and LangChain SDKs. Our APIs are designed to work out-of-the-box with your existing workflows, including adapters for Retrieval-Augmented Generation (RAG).

Cost-effective usage

Optimize your budget with a pay-per-use model, billed per million tokens. No upfront infrastructure costs or long-term commitments—just flexible pricing ideal for varying workloads or exploratory projects.
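Per-million-token billing makes cost estimates a one-line calculation. A minimal sketch, using prices from the table below (the helper name and structure are illustrative, not part of any Scaleway SDK):

```python
# Prices in EUR per million tokens (input, output), taken from the pricing table.
PRICE_PER_MILLION = {
    "llama-3.1-8b-instruct": (0.20, 0.20),
    "llama-3.3-70b-instruct": (0.90, 0.90),
    "mistral-nemo-instruct-2407": (0.20, 0.20),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in EUR for a single request."""
    price_in, price_out = PRICE_PER_MILLION[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens on the 70B model.
cost = estimate_cost("llama-3.3-70b-instruct", 10_000, 2_000)
print(f"Estimated cost: €{cost:.6f}")
```

With no upfront commitment, this is the entire cost model: tokens in, tokens out, multiplied by the listed rates.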

Quick model testing

Start serving and testing AI models in just a few minutes. Our streamlined onboarding process and serverless architecture let you deploy endpoints instantly, enabling rapid iteration and minimal setup time.

Everything you need to create apps with Generative AI

Models' prices

Enjoy a free tier: every new customer gets 1,000,000 free tokens and starts paying only from the 1,000,001st token.

| Model | Type | Input tokens | Output tokens |
|---|---|---|---|
| llama-3.1-8b-instruct | Text generation | €0.20/million tokens | €0.20/million tokens |
| llama-3.1-70b-instruct | Text generation | €0.90/million tokens | €0.90/million tokens |
| llama-3.3-70b-instruct | Text generation | €0.90/million tokens | €0.90/million tokens |
| mistral-nemo-instruct-2407 | Text generation | €0.20/million tokens | €0.20/million tokens |
| qwen2.5-coder-32b-instruct | Code generation | €0.90/million tokens | €0.90/million tokens |
| pixtral-12b-2409 | Image analysis | €0.20/million tokens | €0.20/million tokens |
| bge-multilingual-gemma2 | Embedding | €0.20/million tokens | N/A |
| deepseek-r1-distill-llama-70b | Text generation | €0.90/million tokens | €0.90/million tokens |
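The embedding model in the table is reached through the OpenAI-compatible embeddings endpoint. A minimal sketch, assuming the `openai` package is installed and a `SCW_API_KEY` environment variable is set (the API call is skipped when no key is configured):

```python
import os

# Request payload for the OpenAI-compatible embeddings endpoint.
payload = {
    "model": "bge-multilingual-gemma2",
    "input": ["Sovereign AI in Europe", "Les données restent en Europe"],
}

if os.environ.get("SCW_API_KEY"):
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI(
        api_key=os.environ["SCW_API_KEY"],
        base_url="https://api.scaleway.ai/v1",
    )
    resp = client.embeddings.create(**payload)
    # One embedding vector per input string; print the vector dimension.
    print(len(resp.data[0].embedding))
```

Note that only input tokens are billed for embeddings, which is why the output-token column reads N/A.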

Exceptional developer experience meets best-in-class AI

Competitive pricing

Scaleway offers a playground that lets you quickly experiment with different AI models at competitive prices. Once satisfied with the responses, simply export the payload and replicate it at scale!

Check prices

Open weight FTW

Scaleway supports the distribution of cutting-edge open-weight models, whose performance in reasoning and features now rivals that of proprietary models like GPTx or Claude.

Find supported models

Low latency

End users in Europe benefit from a time to first token below 200 ms, ideal for interactive dialogue and agentic workflows, even at long context lengths.

Send your first API request
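Time to first token is easy to measure yourself with streaming. A minimal sketch, assuming the `openai` package and a `SCW_API_KEY` environment variable (the timing helper is illustrative, not part of any SDK; the API call is skipped when no key is configured):

```python
import os
import time

def time_to_first_chunk(stream, clock=time.monotonic):
    """Return (latency_seconds, iterator) where latency covers only the first item."""
    start = clock()
    it = iter(stream)
    first = next(it)
    latency = clock() - start

    def chained():
        yield first
        yield from it

    return latency, chained()

if os.environ.get("SCW_API_KEY"):
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI(
        api_key=os.environ["SCW_API_KEY"],
        base_url="https://api.scaleway.ai/v1",
    )
    stream = client.chat.completions.create(
        model="llama-3.1-8b-instruct",
        messages=[{"role": "user", "content": "Say hello"}],
        stream=True,
    )
    latency, chunks = time_to_first_chunk(stream)
    print(f"time to first chunk: {latency:.3f}s")
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
```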

Structured outputs

Our built-in JSON mode and JSON schema support distill the diverse, unstructured outputs of LLMs into actionable, reliable, machine-readable structured data.

How to use structured outputs
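A minimal sketch of JSON-schema structured output, following the OpenAI `response_format` convention (the schema and field names are illustrative; the exact shape Scaleway accepts should be checked against its documentation, and the API call is skipped when no `SCW_API_KEY` is configured):

```python
import json
import os

# Hypothetical schema for extracting contact details from free-form text.
contact_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["name", "email"],
}

# response_format in the OpenAI JSON-schema style.
response_format = {
    "type": "json_schema",
    "json_schema": {"name": "contact", "schema": contact_schema},
}

if os.environ.get("SCW_API_KEY"):
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI(
        api_key=os.environ["SCW_API_KEY"],
        base_url="https://api.scaleway.ai/v1",
    )
    completion = client.chat.completions.create(
        model="llama-3.3-70b-instruct",
        messages=[{"role": "user",
                   "content": "Extract contact info: 'Reach Ada at ada@example.com'"}],
        response_format=response_format,
    )
    # The model's reply is guaranteed to parse as JSON matching the schema.
    data = json.loads(completion.choices[0].message.content)
    print(data["email"])
```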

Native function calling

Generative AI models served at Scaleway can connect to external tools through Serverless Functions. Integrate LLMs with custom functions or APIs, and easily build applications able to interface with external systems.

How to use function calling
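A minimal function-calling sketch in the OpenAI `tools` style (the `get_weather` tool is a hypothetical stub standing in for a real Serverless Function; the API call is skipped when no `SCW_API_KEY` is configured):

```python
import json
import os

# Hypothetical tool definition the model may choose to invoke.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    """Stub implementation; in practice this could be a Serverless Function."""
    return json.dumps({"city": city, "temp_c": 18})

if os.environ.get("SCW_API_KEY"):
    from openai import OpenAI  # requires `pip install openai`

    client = OpenAI(
        api_key=os.environ["SCW_API_KEY"],
        base_url="https://api.scaleway.ai/v1",
    )
    completion = client.chat.completions.create(
        model="llama-3.3-70b-instruct",
        messages=[{"role": "user", "content": "What is the weather in Paris?"}],
        tools=tools,
    )
    # If the model decided to call the tool, execute it with its arguments.
    call = completion.choices[0].message.tool_calls[0]
    if call.function.name == "get_weather":
        args = json.loads(call.function.arguments)
        print(get_weather(**args))
```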

Secured for production

Scaleway's inference stack runs on highly secure, reliable infrastructure in Europe, designed to power both your prototypes and your production workloads. For use cases requiring guaranteed throughput, Managed Inference complements Generative APIs with dedicated infrastructure.

Read our security measures

Towards a sovereign AI where your data remains yours, and only in Europe.

Designed as a drop-in replacement for the OpenAI APIs

# Import the OpenAI client and standard library modules
from openai import OpenAI
import os

# Initialize the OpenAI client against the Scaleway endpoint
client = OpenAI(
    api_key=os.environ.get("SCW_API_KEY"),
    base_url="https://api.scaleway.ai/v1",
)

# Create a chat completion request
completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Sing me a song about Xavier Niel",
        }
    ],
    model="mistral-nemo-instruct-2407",
)

# Print the generated message
print(completion.choices[0].message.content)

Get started with tutorials

Frequently asked questions

What is Scaleway Generative APIs?

Generative APIs is Scaleway's fully managed service that makes frontier AI models from leading research labs available via a simple API call.

How can I get access to Scaleway Generative APIs?

Access to this service is open to all Scaleway customers. You can begin using it right away via the playground in the Scaleway console or via the API; see the quickstart guide in the documentation.
If you need support, don't hesitate to reach out through the dedicated Slack community channel #ai.

What is the pricing of Scaleway Generative APIs?

This service is free while in beta. Once generally available, Generative APIs will use "pay-as-you-go" (or "pay per token") pricing: your consumption is billed per million input/output tokens.

Where are Scaleway's inference servers located?

We currently host all models in a secure data center located in Paris, France. This may change in the future.

Can I use the OpenAI libraries and APIs?

Scaleway lets you seamlessly migrate applications already using OpenAI. You can use any of the official OpenAI libraries, for example the OpenAI Python client library or the Azure OpenAI SDK, to interact with Scaleway Generative APIs. See the documentation for the supported APIs and parameters.

What is the difference with Scaleway Managed Inference?
  • Scaleway Generative APIs is a serverless service. This is most likely the easiest way to get started: we have set up the hardware, so you only pay per token/file and never wait for boot-ups.

  • Scaleway Managed Inference, on the other hand, is meant to deploy curated models or your own models, with the quantization and instance types of your choice. You get predictable throughput as well as custom security: isolation in your private network, access control…

Both AI services offer text and multi-modal (image understanding) models, OpenAI compatibility and important capabilities like structured outputs.

This page has ended, but the opportunities with AI are boundless.