
Understanding the BGE-Multilingual-Gemma2 embedding model

Reviewed on 30 October 2024 · Published on 30 October 2024

Model overview

Attribute              Details
Provider               baai
Compatible Instances   L4 (FP32)
Context size           4096 tokens

Model name

baai/bge-multilingual-gemma2:fp32

Compatible Instances

Instance type    Max context length
L4               4096 (FP32)

Model introduction

BGE stands for BAAI General Embedding. This particular model is an LLM-based embedding model, trained on a diverse range of languages and tasks from the lightweight google/gemma-2-9b. As such, it is distributed under the Gemma terms of use.

Why is it useful?

  • BGE-Multilingual-Gemma2 ranks at the top of the MTEB leaderboard, holding first place in French and Polish and seventh place in English, at the time of writing this page (Q4 2024).
  • As its name suggests, the model’s training data spans a broad range of languages, including English, Chinese, Polish, French, and more.
  • It encodes text into 3584-dimensional vectors, providing a very detailed representation of sentence semantics.
  • In its L4/FP32 configuration, BGE-Multilingual-Gemma2 boasts a high context length of 4096 tokens, which is particularly useful for ingesting data and building RAG applications.
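To illustrate how the 3584-dimensional vectors mentioned above can be used, here is a minimal sketch of cosine similarity, the standard way to compare two embedding vectors. The three-component vectors below are toy values for illustration only; real embeddings from this model have 3584 components.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compute the cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration; real model output is 3584-dimensional.
v1 = [0.1, 0.2, 0.3]
v2 = [0.1, 0.2, 0.3]
v3 = [-0.3, 0.1, -0.2]

print(cosine_similarity(v1, v2))  # identical vectors score 1.0
print(cosine_similarity(v1, v3))  # dissimilar vectors score lower
```

Scores close to 1.0 indicate semantically similar texts; this is the basis of retrieval in RAG pipelines.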

How to use it

Sending Managed Inference requests

To perform inference tasks with your embedding model deployed at Scaleway, use the following command:

curl https://<Deployment UUID>.ifr.fr-par.scaleway.com/v1/embeddings \
  -H "Authorization: Bearer <IAM API key>" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Embeddings can represent text in a numerical format.",
    "model": "baai/bge-multilingual-gemma2:fp32"
  }'

Make sure to replace <IAM API key> and <Deployment UUID> with your actual IAM API key and the Deployment UUID you are targeting.
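The same request can be sent from Python. Below is a minimal sketch using only the standard library; it mirrors the curl command above, with the same `<Deployment UUID>` and `<IAM API key>` placeholders to fill in.

```python
import json
import urllib.request

def build_embeddings_request(deployment_uuid: str, api_key: str, text: str) -> urllib.request.Request:
    """Build the HTTP POST request for the /v1/embeddings endpoint."""
    url = f"https://{deployment_uuid}.ifr.fr-par.scaleway.com/v1/embeddings"
    payload = {
        "input": text,
        "model": "baai/bge-multilingual-gemma2:fp32",
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Fill in your actual deployment UUID and IAM API key before sending:
# req = build_embeddings_request("<Deployment UUID>", "<IAM API key>",
#                                "Embeddings can represent text in a numerical format.")
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```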

Receiving Inference responses

Upon sending the HTTP request to the public or private endpoint exposed by the server, you will receive a response from the Managed Inference server. Process the output data according to your application’s needs. The response contains the embeddings generated by the model from the input provided in the request.
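The `/v1/embeddings` endpoint appears to follow the OpenAI-compatible embeddings response schema; assuming that shape, the vector can be pulled out of the `data` field as sketched below. The sample values are illustrative, not real model output (the real vector has 3584 components).

```python
import json

# Illustrative sample of an OpenAI-style embeddings response (vector truncated).
sample_response = json.loads("""
{
  "object": "list",
  "data": [
    {"object": "embedding", "index": 0, "embedding": [0.01, -0.02, 0.03]}
  ],
  "model": "baai/bge-multilingual-gemma2:fp32",
  "usage": {"prompt_tokens": 9, "total_tokens": 9}
}
""")

def extract_embedding(response: dict, index: int = 0) -> list[float]:
    """Return the embedding vector for the input at the given index."""
    return response["data"][index]["embedding"]

vector = extract_embedding(sample_response)
print(len(vector))  # the real model returns 3584-dimensional vectors
```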
