
Adding AI to VS Code using Continue and Generative APIs

Reviewed on 14 February 2025 · Published on 14 February 2025

AI-powered coding is transforming software development by automating repetitive tasks, generating code, improving code quality, and even detecting and fixing bugs. By integrating AI-driven tools, developers can significantly boost productivity and streamline their workflows. This guide walks you through integrating AI-powered code models into VS Code using Continue and Scaleway’s Generative APIs.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • A valid API key for API authentication
  • Installed Visual Studio Code on your local machine

Install Continue in VS Code

You can install Continue directly from the Visual Studio Marketplace or via the command line:

code --install-extension continue.continue
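
If you want to confirm the installation, you can list the installed extensions and check that continue.continue appears in the output:

code --list-extensions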

Configure Continue to use Scaleway’s Generative APIs

Configure Continue through the graphical interface

To link Continue with Scaleway’s Generative APIs, you can use Continue’s built-in menus in VS Code.

  • Click Continue in the menu on the left.
  • In the prompt section, click the Select model dropdown, then Add Chat model.
  • Select Scaleway as the provider.
  • Select the model you want to use (we recommend Qwen 2.5 Coder 32B to get started).
  • Enter your Scaleway secret key.
    Tip

    To start with, we recommend using a Scaleway secret key that has access to your default Scaleway Project.

These actions will automatically edit your config.json file. To edit it manually, see Configure Continue through a configuration file.

Note

Embeddings and autocomplete models are not yet supported through the graphical interface. To enable them, edit the configuration manually, as described in Configure Continue through a configuration file.

Configure Continue through a configuration file

To link Continue with Scaleway’s Generative APIs, you can set up a configuration file:

  • Create a config.json file inside your .continue directory.
  • Add the following configuration to enable Scaleway’s Generative API:
    {
      "models": [
        {
          "model": "qwen2.5-coder-32b-instruct",
          "title": "Qwen2.5 Coder",
          "provider": "scaleway",
          "apiKey": "###SCW_SECRET_KEY###"
        }
      ],
      "embeddingsProvider": {
        "model": "bge-multilingual-gemma2",
        "provider": "scaleway",
        "apiKey": "###SCW_SECRET_KEY###"
      },
      "tabAutocompleteModel": {
        "model": "qwen2.5-coder-32b",
        "title": "Qwen2.5 Coder Autocomplete",
        "provider": "scaleway",
        "apiKey": "###SCW_SECRET_KEY###"
      }
    }
  • Save the file at the correct location:
    • Linux/macOS: ~/.continue/config.json
    • Windows: %USERPROFILE%\.continue\config.json
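
On Linux/macOS, you can create the directory and open the file in VS Code directly from a terminal (a convenience only, assuming the code command-line launcher is available on your PATH):

mkdir -p ~/.continue
code ~/.continue/config.json

On Windows, create the file under %USERPROFILE%\.continue instead.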
Tip

For more details on configuring config.json, refer to the official Continue documentation. If you want to limit access to a specific Scaleway Project, add the field "apiBase": "https://api.scaleway.ai/###PROJECT_ID###/v1/" to each model entry (i.e. models, embeddingsProvider, and tabAutocompleteModel), since the default URL https://api.scaleway.ai/v1/ can only be used with the default Project.
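
For example, a chat model entry scoped to a specific Project could look as follows (###PROJECT_ID### and ###SCW_SECRET_KEY### are placeholders for your own values):

{
  "models": [
    {
      "model": "qwen2.5-coder-32b-instruct",
      "title": "Qwen2.5 Coder",
      "provider": "scaleway",
      "apiBase": "https://api.scaleway.ai/###PROJECT_ID###/v1/",
      "apiKey": "###SCW_SECRET_KEY###"
    }
  ]
}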

Activate Continue in VS Code

After configuring the API, open VS Code and activate Continue:

  • Open the Command Palette (Ctrl+Shift+P on Windows/Linux, Cmd+Shift+P on Mac)
  • Type "Continue" and select the appropriate command to enable AI-powered assistance.
Important

Enabling tab completion may lead to higher token consumption as the model generates predictions for every keystroke. Be mindful of your API usage and adjust settings accordingly to avoid unexpected costs. For more information, refer to the official Continue documentation.
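
If Continue does not respond as expected after activation, you can check your secret key and model access outside VS Code with a direct request. This is a minimal sketch, assuming Scaleway’s Generative APIs expose an OpenAI-compatible chat completions endpoint under the base URL shown above:

curl https://api.scaleway.ai/v1/chat/completions \
  -H "Authorization: Bearer ###SCW_SECRET_KEY###" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5-coder-32b-instruct", "messages": [{"role": "user", "content": "Say hello"}]}'

A JSON response containing a chat completion indicates that the key and model are configured correctly.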

Going further

You can add additional parameters to configure your model behaviour by editing config.json. For instance, you can add the following systemMessage value to customize the content of the messages sent to the LLM with "role":"system" and/or "role":"developer", and obtain less verbose answers:

{
  "models": [
    {
      "model": "...",
      "systemMessage": "You are an expert software developer. You give concise responses."
    }
  ]
}
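
Depending on your Continue version, you may also be able to tune sampling parameters per model, for example through a completionOptions block. The field names below are an assumption based on Continue’s configuration schema, so check the official Continue documentation before relying on them:

{
  "models": [
    {
      "model": "...",
      "completionOptions": {
        "temperature": 0.2,
        "maxTokens": 1024
      }
    }
  ]
}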