
Understanding the Pixtral-12b-2409 model

Reviewed on 23 September 2024 · Published on 23 September 2024

Model overview

Attribute               Details
Provider                Mistral
Compatible Instances    L40S, H100, H100-2 (bf16)
Context size            128k tokens

Model name

mistral/pixtral-12b-2409:bf16

Compatible Instances

Instance type    Max context length
L40S             50k (BF16)
H100             128k (BF16)
H100-2           128k (BF16)

Model introduction

Pixtral is a vision language model with a novel architecture: a 12B-parameter multimodal decoder paired with a 400M-parameter vision encoder. It can analyze images and offer insights from visual content alongside text. This multimodal functionality creates new opportunities for applications that need both visual and textual comprehension.

Pixtral is open-weight and distributed under the Apache 2.0 license.

Why is it useful?

  • Pixtral allows you to process real-world and high-resolution images, unlocking capabilities such as transcribing handwritten files or payment receipts, extracting information from graphs, captioning images, etc.
  • It offers a large context window of up to 128k tokens, which is particularly useful for RAG applications.
  • Pixtral supports variable image sizes and types: PNG (.png), JPEG (.jpeg and .jpg), WEBP (.webp), as well as non-animated, single-frame GIF (.gif).
Note

Pixtral 12B can understand and analyze images, not generate them. You will use it through the /v1/chat/completions endpoint.
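Before uploading files, it can help to check them against the formats listed above. The following is an illustrative sketch (the `is_supported_image` helper is hypothetical, not part of any Scaleway or Mistral SDK):

```python
# Illustrative helper: check a file extension against the image
# formats Pixtral accepts, before building a request.
from pathlib import Path

SUPPORTED_EXTENSIONS = {".png", ".jpeg", ".jpg", ".webp", ".gif"}

def is_supported_image(filename: str) -> bool:
    """Return True if the file extension is one Pixtral can analyze."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

print(is_supported_image("receipt.JPG"))   # True
print(is_supported_image("diagram.svg"))   # False
```

Note that this checks the extension only; it does not verify that a .gif file is actually single-frame.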

How to use it

Sending Inference requests

Tip

Unlike previous Mistral models, Pixtral can take an image_url in the content array.

To perform inference tasks with your Pixtral model deployed at Scaleway, use the following command:

curl -s \
  -H "Authorization: Bearer <IAM API key>" \
  -H "Content-Type: application/json" \
  --request POST \
  --url "https://<Deployment UUID>.ifr.fr-par.scw.cloud/v1/chat/completions" \
  --data '{
    "model": "mistral/pixtral-12b-2409:bf16",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in detail please."},
          {"type": "image_url", "image_url": {"url": "https://picsum.photos/id/32/512/512"}},
          {"type": "text", "text": "and this one as well."},
          {"type": "image_url", "image_url": {"url": "https://www.wolframcloud.com/obj/resourcesystem/images/a0e/a0ee3983-46c6-4c92-b85d-059044639928/6af8cfb971db031b.png"}}
        ]
      }
    ],
    "top_p": 1,
    "temperature": 0.7,
    "stream": false
  }'

Make sure to replace <IAM API key> and <Deployment UUID> with your actual IAM API key and the Deployment UUID you are targeting.
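The same request can be made from Python. The sketch below uses the third-party `requests` library and assumes the endpoint returns an OpenAI-style response body (`choices[0].message.content`); the URL and key placeholders must be replaced with your own values, and the `build_payload` and `describe_images` helpers are illustrative, not part of any SDK:

```python
# Sketch: sending the chat/completions request from Python with the
# "requests" library. Placeholders must be replaced before running.
import requests

API_URL = "https://<Deployment UUID>.ifr.fr-par.scw.cloud/v1/chat/completions"
API_KEY = "<IAM API key>"  # your IAM API key

def build_payload(prompt: str, image_urls: list[str]) -> dict:
    """Assemble a payload mixing one text block with image_url blocks."""
    content = [{"type": "text", "text": prompt}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {
        "model": "mistral/pixtral-12b-2409:bf16",
        "messages": [{"role": "user", "content": content}],
        "top_p": 1,
        "temperature": 0.7,
        "stream": False,
    }

def describe_images(prompt: str, image_urls: list[str]) -> str:
    """POST the payload and return the generated text (OpenAI-style response assumed)."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=build_payload(prompt, image_urls),
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

Passing the payload via `json=` lets `requests` serialize it and set the `Content-Type: application/json` header for you.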

Tip

The model name allows Scaleway to put your prompts in the expected format.

Note

Ensure that the messages array is properly formatted with roles (system, user, assistant) and content.

Passing images to Pixtral

  1. Image URLs: If the image is available online, you can simply include its URL in your request, as demonstrated above. This approach is simple and requires no encoding.

  2. Base64-encoded image: Base64 encoding is a standard way to transform binary data, such as images, into a text format, making it easier to transmit over the internet.

The following Python code sample shows you how to encode an image in base64 format and pass it to your request payload.

import base64
from io import BytesIO
from PIL import Image

def encode_image(img):
    # Serialize the image to an in-memory JPEG, then base64-encode it
    buffered = BytesIO()
    img.save(buffered, format="JPEG")
    encoded_string = base64.b64encode(buffered.getvalue()).decode("utf-8")
    return encoded_string

img = Image.open("path_to_your_image.jpg")
base64_img = encode_image(img)

payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_img}"},
                },
            ],
        }
    ],
    ... # other parameters
}

Receiving Managed Inference responses

Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server. Process the output data according to your application's needs. The response will contain the output generated by the visual language model based on the input provided in the request.

Note

Despite efforts to ensure accuracy, generated text may contain inaccuracies or hallucinations. Always verify generated content independently.

Frequently Asked Questions

What types of images are supported by Pixtral?

  • Bitmap (or raster) image formats, which store images as grids of individual pixels, are supported: PNG, JPEG, WEBP, and non-animated, single-frame GIF.
  • Vector and layered image formats (SVG, PSD) are not supported.

Are other files supported?

Only bitmaps can be analyzed by Pixtral; PDFs and videos are not supported.

Is there a limit to the size of each image?

Image size is limited:

  • Directly by the maximum context window. Since each image token covers a square of 16x16 pixels, a single 1024x1024 image consumes at most 4096 tokens (i.e. (1024*1024)/(16*16)).
  • Indirectly by model accuracy: resolutions above 1024x1024 do not increase output accuracy. Images wider or taller than 1024 pixels are automatically downscaled to fit within 1024x1024. The aspect ratio is preserved (images are not cropped, only downscaled).
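The token arithmetic above can be sketched as follows. This is an illustrative calculation based on the 16x16-pixel tokens and 1024x1024 downscaling described here, not the model's exact tokenizer:

```python
# Sketch: approximate token cost of one image, assuming 16x16-pixel
# tokens and aspect-preserving downscaling to fit within 1024x1024.
import math

MAX_SIDE = 1024   # images are downscaled to fit within 1024x1024
TILE = 16         # each image token covers a 16x16-pixel square

def image_tokens(width: int, height: int) -> int:
    """Approximate token count for one image after downscaling."""
    scale = min(1.0, MAX_SIDE / max(width, height))  # aspect ratio preserved
    w, h = width * scale, height * scale
    return math.ceil(w / TILE) * math.ceil(h / TILE)

print(image_tokens(1024, 1024))  # 4096
print(image_tokens(512, 512))    # 1024
print(image_tokens(2048, 1024))  # downscaled to 1024x512 -> 2048
```

This kind of estimate helps budget the 128k-token context window when a request carries several large images.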

What is the maximum amount of images per conversation?

One conversation can handle up to 12 images per request; the 13th will return a 413 error.
