
Understanding the Llama-2-70b-chat model

Model overview

Attribute              Details
Provider               Meta
Model Name             llama-2-70b-chat
Compatible Instances   H100 (FP8), H100-2 (FP16)
Context size           4,096 tokens

Model names

meta/llama-2-70b-chat:fp8
meta/llama-2-70b-chat:fp16

Compatible Instances

  • H100 (FP8)
  • H100-2 (FP16)

Model introduction

The Llama-2-70b-chat model, developed by Meta, is designed for various chat applications and customer service platforms. Trained on diverse conversational data, it generates human-like responses and engages in meaningful dialogues. Its versatility makes it suitable for businesses seeking to enhance their customer interactions.

Why you will love it

Llama-2-70b-chat offers seamless integration with chat applications and customer service platforms, facilitating smooth communication between businesses and their customers. Its robust performance in natural language understanding, enhanced by superior common sense reasoning, enriches user experiences and boosts customer satisfaction.

How to use it

Sending LLM Inference requests

To perform inference tasks with your Llama-2 model deployed at Scaleway, use the following command:

curl -s \
-H "X-Auth-Token: <IAM API key>" \
-H "Content-Type: application/json" \
--request POST \
--url "https://<Deployment UUID>.ifr.fr-par.scw.cloud/" \
--data '{"text_input": "[INST]How can I use large language models to improve my customer service? [/INST]", "max_tokens": 200, "temperature": 0.2, "random_seed": 1, "top_p": 0.9}' | jq -r .text_output

Make sure to replace <IAM API key> and <Deployment UUID> with your actual IAM API key and the Deployment UUID you are targeting.
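
For reference, here is a minimal Python sketch of the same request, assuming the endpoint and payload shown in the curl example above. The requests library and the placeholder handling are illustrative, not part of the Scaleway documentation.

import requests  # third-party: pip install requests

IAM_API_KEY = "<IAM API key>"          # replace with your IAM API key
DEPLOYMENT_UUID = "<Deployment UUID>"  # replace with your Deployment UUID

response = requests.post(
    f"https://{DEPLOYMENT_UUID}.ifr.fr-par.scw.cloud/",
    headers={
        "X-Auth-Token": IAM_API_KEY,
        "Content-Type": "application/json",
    },
    json={
        "text_input": "[INST]How can I use large language models to improve my customer service? [/INST]",
        "max_tokens": 200,
        "temperature": 0.2,
        "random_seed": 1,
        "top_p": 0.9,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["text_output"])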

Note

Ensure that the text_input data is properly formatted according to the model’s input requirements.

Prompt engineering

Here is an example of the prompt format used to define system and instruction prompts, configuring the model as a virtual assistant that delivers only constructive and respectful responses.

<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
There's a llama in my garden, what should I do?
[/INST]
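
As a sketch, this template can also be assembled programmatically before being passed as text_input. The helper below is illustrative; the function name build_llama2_prompt is an assumption, not part of any Scaleway API.

def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and a user message in the Llama-2 chat tags shown above."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n"
        f"{user_message}\n"
        "[/INST]"
    )

text_input = build_llama2_prompt(
    "You are a helpful, respectful and honest assistant.",
    "There's a llama in my garden, what should I do?",
)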

Receiving Inference responses

Upon sending the HTTP request to the public or private endpoints exposed by the server, you will receive inference responses from the managed LLM Inference server. Process the output data according to your application's needs. The response will contain the output generated by the model based on the input provided in the request.
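
As a minimal sketch, the snippet below extracts the generated text from a response body. It assumes only the text_output field implied by the jq filter in the curl example above; any other response structure is treated as an error.

import json

def extract_text_output(response_body: str) -> str:
    """Return the generated text from an inference response body."""
    body = json.loads(response_body)
    if "text_output" not in body:
        # Fail loudly on anything other than the expected shape.
        raise ValueError(f"unexpected response shape: {body}")
    return body["text_output"]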

Note

Despite efforts to ensure accuracy, generated text may contain inaccuracies or hallucinations. Always verify generated content independently.
