
How to create a deployment

Reviewed on 23 March 2024 • Published on 06 March 2024
  1. Go to the AI & Data section of the Scaleway console, and select LLM Inference from the side menu to access the LLM Inference dashboard.
  2. Click Create deployment to launch the deployment creation wizard.
  3. Provide the necessary information:
    • Select the desired model for your deployment from the available options:
      • Llama-2-70b-chat
      • Llama-2-7b-chat
      • Mixtral-8x7B-Instruct-v0.1
      • WizardLM-70b-V1.0
      Note

      Some models may require acceptance of an end-user license agreement. If prompted, review the terms and conditions and accept the license before continuing.

    • Choose the geographical region for the deployment.
    • Specify the GPU Instance type to be used with your deployment.
  4. Enter a name for the deployment, and optionally add tags.
  5. Configure the network settings for the deployment:
    • Enable Private Network for secure communication that is restricted to resources within your Private Networks. Choose an existing Private Network from the drop-down list, if applicable.
    • Enable Public Network to access resources via the public internet. Token protection is enabled by default (a token-authenticated request sketch follows this procedure).
    Important
    • It is not possible to change network settings through the Scaleway console after the deployment has been created.
    • Enabling both private and public networks will result in two distinct endpoints (public and private) for your deployment.
    • Deployments must have at least one endpoint, either public or private.
  6. Click Create deployment to launch the deployment process. Once the deployment is ready, it will be listed among your deployments.
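If you prefer to script deployment creation rather than use the console wizard above, the same parameters can be sent to the Scaleway API. The sketch below is a minimal, hypothetical example: the request path, body field names, and the model and node-type identifiers are assumptions that mirror the wizard fields in this guide, not the authoritative schema. Check the Scaleway API reference before relying on any of them.

```python
import requests

# Hypothetical API path: consult the Scaleway API reference for the real one.
API_URL = "https://api.scaleway.com/llm-inference/v1beta1/regions/fr-par/deployments"
SECRET_KEY = "<your-scaleway-secret-key>"   # placeholder IAM secret key
PROJECT_ID = "<your-project-id>"            # placeholder project ID

payload = {
    "name": "my-llm-deployment",                  # step 4: deployment name
    "project_id": PROJECT_ID,
    "tags": ["demo"],                             # step 4: optional tags
    "model_name": "mixtral-8x7b-instruct-v0.1",   # step 3: model (hypothetical identifier)
    "node_type": "<gpu-instance-type>",           # step 3: GPU Instance type (placeholder)
    "endpoints": [
        # Step 5: at least one endpoint is required; this declares a public,
        # token-protected one. The field name "is_public" is an assumption.
        {"is_public": True}
    ],
}

# The Scaleway API authenticates requests with the X-Auth-Token header.
resp = requests.post(API_URL, headers={"X-Auth-Token": SECRET_KEY}, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```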
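Once the deployment is running, requests to a token-protected public endpoint must carry a valid token. The sketch below assumes the deployment exposes an OpenAI-compatible chat completions route; the endpoint URL, route, and model identifier are placeholders to adapt to your own deployment.

```python
import requests

# Placeholders: copy the real endpoint URL and token from your deployment's
# overview page in the Scaleway console.
ENDPOINT_URL = "https://<deployment-endpoint>/v1/chat/completions"
API_TOKEN = "<your-deployment-token>"

response = requests.post(
    ENDPOINT_URL,
    headers={
        # Token protection: requests without a valid token are rejected.
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "model": "mixtral-8x7b-instruct-v0.1",  # hypothetical identifier; match your deployment
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Keep the token out of source control; reading it from an environment variable or a secrets manager is the usual approach.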
See also
How to monitor a deployment