
Using Chat API

Reviewed on 03 September 2024 | Published on 03 September 2024

Scaleway Generative APIs are designed as a drop-in replacement for the OpenAI APIs. If you have an LLM-driven application that uses one of OpenAI’s client libraries, you can easily configure it to point to Scaleway Chat API, and get your existing applications running using open-weight instruct models hosted at Scaleway.

Create chat completion

Creates a model response for the given chat conversation.

Request sample:

curl --request POST \
  --url https://api.scaleway.ai/v1/chat/completions \
  --header "Authorization: Bearer ${SCW_SECRET_KEY}" \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "llama-3.1-8b-instruct",
    "messages": [
      {
        "role": "system",
        "content": "<string>"
      },
      {
        "role": "user",
        "content": "<string>"
      }
    ],
    "max_tokens": <integer>,
    "temperature": <float>,
    "top_p": <float>,
    "presence_penalty": <float>,
    "stop": "<string>",
    "stream": <boolean>
  }'

Headers

Find the required headers on this page.

Body

Required parameters

Param      | Type             | Description
messages*  | array of objects | A list of messages comprising the conversation so far.
model*     | string           | The name of the model to query.

Our Chat API is OpenAI-compatible. Refer to OpenAI’s API reference for more detailed usage information.

Supported parameters

  • temperature
  • top_p
  • max_tokens
  • stream
  • presence_penalty
  • logprobs
  • stop
  • seed
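
Of these, `stream` changes the shape of the response: with `"stream": true`, the API sends the completion incrementally as server-sent events rather than as a single JSON body. A minimal sketch of decoding such a stream, assuming the OpenAI-style `data: {...}` framing terminated by a `data: [DONE]` sentinel:

```python
import json

def iter_stream_content(lines):
    """Yield incremental text pieces from an SSE chat-completion stream."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data: "):
            continue                      # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break                         # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Example with a canned stream (real responses arrive line by line over HTTP):
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_content(sample)))  # → Hello
```

In practice you would iterate over the HTTP response body line by line instead of a canned list; the OpenAI client libraries handle this decoding for you when `stream=True` is passed.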

Unsupported parameters

  • response_format
  • frequency_penalty
  • n
  • top_logprobs
  • tools
  • tool_choice
  • logit_bias
  • user

If you have a use case requiring one of these unsupported parameters, please contact us via Slack in the #ai channel.

Note

Go further with Python code examples to query text models using Scaleway’s Chat API.
