
Understanding the Qwen2.5-Coder-32B-Instruct model

Reviewed on 08 December 2024 · Published on 08 December 2024

Model overview

Attribute              Details
Provider               Qwen
License                Apache 2.0
Compatible Instances   H100, H100-2 (INT8)
Context Length         up to 128k tokens

Model names

qwen/qwen2.5-coder-32b-instruct:int8

Compatible Instances

Instance type   Max context length
H100            128k (INT8)
H100-2          128k (INT8)

Model introduction

Qwen2.5-Coder is an intelligent programming assistant familiar with more than 40 programming languages. With Qwen2.5-Coder deployed at Scaleway, your company can benefit from code generation, AI-assisted code repair, and code reasoning.

Why is it useful?

  • Qwen2.5-Coder achieved the best performance on multiple popular code generation benchmarks (EvalPlus, LiveCodeBench, BigCodeBench), outperforming many open-source models and offering performance competitive with GPT-4o.
  • The model is versatile: alongside strong and comprehensive coding abilities, it also has solid general and mathematical skills.

How to use it

Sending Managed Inference requests

To perform inference tasks with Qwen2.5-Coder deployed at Scaleway, use the following command:

curl -s \
-H "Authorization: Bearer <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/chat/completions" \
--data '{
  "model": "qwen/qwen2.5-coder-32b-instruct:int8",
  "messages": [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful code assistant."},
    {"role": "user", "content": "Write a quick sort algorithm."}
  ],
  "max_tokens": 1000,
  "temperature": 0.8,
  "stream": false
}'
Tip

The model name allows Scaleway to put your prompts in the expected format.

Note

Ensure that the messages array is properly formatted with roles (system, user, assistant) and content.
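The same request can be sent from Python. Below is a minimal sketch using the OpenAI Python client, assuming the endpoint's OpenAI-compatible chat completions API shown in the curl example; the placeholder values are the same ones used above.

# Minimal sketch: calling the deployment with the OpenAI Python client.
# Assumes the endpoint is OpenAI-compatible (it serves /v1/chat/completions);
# replace the placeholders with your own deployment UUID and IAM API key.
from openai import OpenAI

client = OpenAI(
    base_url="https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1",  # deployment endpoint
    api_key="<IAM API key>",  # your IAM API key
)

response = client.chat.completions.create(
    model="qwen/qwen2.5-coder-32b-instruct:int8",
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful code assistant."},
        {"role": "user", "content": "Write a quick sort algorithm."},
    ],
    max_tokens=1000,
    temperature=0.8,
)

print(response.choices[0].message.content)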

Receiving Inference responses

Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the Managed Inference server. Process the output data according to your application's needs. The response contains the output generated by the LLM based on the input provided in the request.
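If you call the endpoint directly, as in the curl example, the response body follows the standard chat completions schema. Below is a minimal sketch of extracting the generated text with the requests library; the URL and API key placeholders are illustrative, as above.

# Minimal sketch: sending the request with the `requests` library and reading
# the generated text from the chat-completions-style response body.
import requests

url = "https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer <IAM API key>",
    "Content-Type": "application/json",
}
payload = {
    "model": "qwen/qwen2.5-coder-32b-instruct:int8",
    "messages": [{"role": "user", "content": "Write a quick sort algorithm."}],
    "max_tokens": 1000,
}

resp = requests.post(url, headers=headers, json=payload)
resp.raise_for_status()
data = resp.json()

# The generated text lives in choices[0].message.content;
# token accounting, if returned, is under the "usage" key.
print(data["choices"][0]["message"]["content"])
print(data.get("usage"))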

Note

Generated text may contain inaccuracies or hallucinations. Always verify the generated content independently.
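The examples above request a complete response ("stream": false). If you set the stream flag shown in the curl example to true, tokens arrive incrementally; a minimal sketch with the OpenAI client (reusing the client object from the earlier example, and assuming the endpoint honors the flag):

# Minimal sketch: streaming tokens as they are generated, assuming the
# endpoint honors the "stream" flag from the curl example above.
stream = client.chat.completions.create(
    model="qwen/qwen2.5-coder-32b-instruct:int8",
    messages=[{"role": "user", "content": "Write a quick sort algorithm."}],
    max_tokens=1000,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)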
