Support for function calling in Scaleway Managed Inference
What is function calling?
Function calling allows a large language model (LLM) to interact with external tools or APIs. Rather than executing functions itself, the LLM identifies the appropriate function for a user request, extracts the required parameters, and returns them as structured data, typically in JSON format, for your application to execute. While malformed outputs can occur, custom parsers or frameworks such as LlamaIndex and LangChain can help ensure valid results.
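To illustrate, a single tool call returned by the model in the OpenAI-compatible format has roughly the following shape; the `get_flight_schedule` function and its arguments are hypothetical examples:

```python
# Illustrative shape of a single tool call returned by the model
# (function name and arguments are hypothetical)
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_flight_schedule",
        # Arguments arrive as a JSON-encoded string for your code to parse
        "arguments": '{"departure": "CDG", "destination": "AMS", "date": "2025-03-01"}',
    },
}
```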
How to implement function calling in Scaleway Managed Inference?
This tutorial guides you through building a simple flight schedule assistant that understands natural language queries about flights and returns structured information.
Which models support function calling?
The following models in Scaleway’s Managed Inference library support function calling through the OpenAI-compatible chat completions API (a client setup sketch follows this list):
- meta/llama-3.1-8b-instruct
- meta/llama-3.1-70b-instruct
- meta/llama-3.3-70b-instruct
- mistral/mistral-7b-instruct-v0.3
- mistral/mistral-nemo-instruct-2407
- mistral/pixtral-12b-2409
- nvidia/llama-3.1-nemotron-70b-instruct
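Before trying the examples below, point the official OpenAI Python client at your Managed Inference deployment. This is a minimal sketch: the base URL and API key are placeholders to replace with your deployment's endpoint URL and a valid Scaleway IAM API key.

```python
from openai import OpenAI

# Placeholders: use your Managed Inference deployment endpoint
# and a valid Scaleway IAM API key.
client = OpenAI(
    base_url="https://<your-deployment-endpoint>/v1",
    api_key="<your-scaleway-api-key>",
)
```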
Understanding function calling
Function calling consists of three main components:
- Tool definitions: JSON schemas that describe available functions and their parameters (see the example after this list)
- Tool selection: Automatic or manual selection of appropriate functions based on user queries
- Tool execution: Processing function calls and handling their responses
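For instance, a tool definition for this tutorial's flight schedule assistant might look like the sketch below. The `get_flight_schedule` function name and its parameters are hypothetical; the schema structure follows the OpenAI function calling format.

```python
# A minimal sketch of a tool definition (hypothetical function and parameters)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_flight_schedule",
            "description": "Get flight schedules between two airports on a given date",
            "parameters": {
                "type": "object",
                "properties": {
                    "departure": {
                        "type": "string",
                        "description": "IATA code of the departure airport",
                    },
                    "destination": {
                        "type": "string",
                        "description": "IATA code of the destination airport",
                    },
                    "date": {
                        "type": "string",
                        "description": "Flight date in YYYY-MM-DD format",
                    },
                },
                "required": ["departure", "destination", "date"],
            },
        },
    }
]
```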
The workflow typically follows these steps, each of which is illustrated in the end-to-end sketch after this list:
1. Define the available tools using JSON schema.
2. Send the system prompt and user query along with the tool definitions.
3. Process the model's function selection.
4. Execute the selected functions.
5. Return the results to the model for a final response.
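Putting these steps together, the sketch below runs one full round trip, assuming the `client` and `tools` objects defined earlier and a hypothetical local `get_flight_schedule` implementation:

```python
import json

def get_flight_schedule(departure: str, destination: str, date: str) -> dict:
    # Hypothetical stand-in; a real assistant would query a flight data API.
    return {"flights": [{"flight": "AF1234", "departure_time": "09:00"}]}

messages = [
    {"role": "system", "content": "You are a flight schedule assistant."},
    {"role": "user", "content": "Which flights go from CDG to AMS on 2025-03-01?"},
]

# Steps 1-2: send the query together with the tool definitions.
response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # any tool-capable model listed above
    messages=messages,
    tools=tools,
)

# Steps 3-4: execute the function the model selected.
# This assumes the model chose to call a tool; production code should
# check that tool_calls is not empty before indexing into it.
tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)
result = get_flight_schedule(**arguments)

# Step 5: return the result so the model can produce a final answer.
messages.append(response.choices[0].message)
messages.append(
    {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result),
    }
)
final = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=messages,
)
print(final.choices[0].message.content)
```

In practice, you would wrap the tool-execution step in a loop, since the model may request several tool calls in succession before producing its final answer.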
Further resources
For more information about function calling and advanced implementations, refer to these resources: