
Understanding the Mistral-small-24b-instruct-2501 model

Reviewed on 04 March 2025 | Published on 04 March 2025

Model overview

| Attribute | Details |
|---|---|
| Provider | Mistral |
| Compatible Instances | L40S, H100, H100-2 (FP8) |
| Context size | 32K tokens |

Model name

mistral/mistral-small-24b-instruct-2501:fp8

Compatible Instances

| Instance type | Max context length |
|---|---|
| L40S | 20k (FP8) |
| H100 | 32k (FP8) |
| H100-2 | 32k (FP8) |

Model introduction

Mistral Small 24B Instruct is a state-of-the-art 24-billion-parameter transformer model built by Mistral. It is open-weight and distributed under the Apache 2.0 license.

Why is it useful?

  • Mistral Small 24B offers a large context window of up to 32k tokens and provides both conversational and reasoning capabilities.
  • This model supports multiple languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
  • It supersedes Mistral Nemo Instruct, although its token throughput is slightly lower.

How to use it

Sending inference requests

To perform inference tasks with your Mistral model deployed at Scaleway, use the following command:

curl -s \
-H "Authorization: Bearer <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/chat/completions" \
--data '{"model":"mistral/mistral-small-24b-instruct-2501:fp8", "messages":[{"role": "user","content": "Tell me about Scaleway."}], "top_p": 1, "temperature": 0.7, "stream": false}'

Make sure to replace <IAM API key> and <Deployment UUID> with your actual IAM API key and the Deployment UUID you are targeting.
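
The example above requests a complete response in a single reply ("stream": false). To receive tokens as they are generated instead, set "stream" to true. This is a sketch assuming the endpoint follows the OpenAI-compatible server-sent-events convention for streaming, with chunks delivered as data: lines:

curl -s \
-H "Authorization: Bearer <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/chat/completions" \
--data '{"model":"mistral/mistral-small-24b-instruct-2501:fp8", "messages":[{"role": "user","content": "Tell me about Scaleway."}], "top_p": 1, "temperature": 0.7, "stream": true}'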

Note

Ensure that the messages array is properly formatted with roles (system, user, assistant) and content.
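
For example, a multi-turn request body carrying all three roles could look like the following (the message contents are illustrative):

{
  "model": "mistral/mistral-small-24b-instruct-2501:fp8",
  "messages": [
    {"role": "system", "content": "You are a concise technical assistant."},
    {"role": "user", "content": "What is Scaleway?"},
    {"role": "assistant", "content": "Scaleway is a European cloud provider."},
    {"role": "user", "content": "Which GPU Instances does it offer?"}
  ]
}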

Receiving Managed Inference responses

Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server. Process the output data according to your application’s needs. The response contains the output generated by the LLM based on the input provided in the request.
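
As a minimal sketch, assuming the endpoint returns the OpenAI-compatible chat completion schema, you can extract the generated text from the first entry of the choices array with jq:

curl -s \
-H "Authorization: Bearer <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/chat/completions" \
--data '{"model":"mistral/mistral-small-24b-instruct-2501:fp8", "messages":[{"role": "user","content": "Tell me about Scaleway."}], "stream": false}' \
| jq -r '.choices[0].message.content'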

Note

Despite efforts to ensure accuracy, generated text may contain inaccuracies or hallucinations. Always verify generated content independently.
