
How to import custom models into Managed Inference

Reviewed on 27 March 2025 | Published on 27 March 2025

Scaleway provides a selection of common models for deployment from the Scaleway console. If you need a specific model, you can import it into Managed Inference directly from Hugging Face or a Scaleway Object Storage bucket.

Note

This feature is currently in beta and may evolve.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console.
  • Owner status or IAM permissions to perform actions in your Organization.

  1. Click Managed Inference in the AI & Data section of the side menu in the Scaleway console to access the dashboard.
  2. Click Deploy a model to launch the model deployment wizard.
  3. In the Choose a model section, select Custom model. If you have no model yet, click Import a model to start the model import wizard.
  4. Choose an upload source:
    • Hugging Face: Pull the model from Hugging Face.
    • Object Storage: This feature is coming soon.
  5. Enter your Hugging Face access token, which must have READ access to the repository.
    Note

    Learn how to generate a Hugging Face access token.

  6. Enter the name of the Hugging Face repository to pull the model from.
    Note

    Ensure you have access to gated models if applicable. Refer to the Hugging Face documentation for details.

  7. Choose a name for your model. The name must be unique within your Organization and Project and cannot be changed later.
  8. Click Verify import to check your Hugging Face credentials and ensure model compatibility.
    Tip

    For detailed information about supported models, visit our Supported models in Managed Inference documentation.

  9. Review the summary of your import, which includes:
    • Context size by node type.
    • Quantization options.
    • Estimated cost.

    Once you have checked the summary, click Begin import to finalize the process.
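Before running the import wizard, you may want to confirm from your own machine that your token and repository name are usable. The following is a minimal sketch, assuming the public Hugging Face Hub REST API (the `/api/models/{repo_id}` metadata endpoint) and using the common `hf_` token prefix as a purely local heuristic; it is not part of the Scaleway import flow itself:

```python
import json
import re
import urllib.error
import urllib.request

HF_API = "https://huggingface.co/api"
# Hugging Face repositories are addressed as "owner/name"
REPO_RE = re.compile(r"^[\w.-]+/[\w.-]+$")


def looks_like_hf_token(token: str) -> bool:
    """Cheap local sanity check: user access tokens usually start with 'hf_'."""
    return bool(re.fullmatch(r"hf_[A-Za-z0-9]+", token))


def can_read_repo(repo_id: str, token: str) -> bool:
    """Return True if the token can read the repository's metadata.

    Gated or private repositories answer 401/403 when the token
    lacks READ access (this call requires network access).
    """
    if not REPO_RE.match(repo_id):
        raise ValueError(f"expected an 'owner/name' repo id, got {repo_id!r}")
    req = urllib.request.Request(
        f"{HF_API}/models/{repo_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            json.load(resp)  # readable repos return JSON metadata
        return True
    except urllib.error.HTTPError as exc:
        if exc.code in (401, 403):
            return False
        raise
```

If `can_read_repo` returns False for a gated model, request access on the model's Hugging Face page before retrying the import.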

Your imported model will now appear in the model library. You can proceed to deploy your model on Managed Inference.

See also
  • How to deploy a model
  • How to monitor a deployment