Adding AI to IntelliJ IDEA using Continue and Generative APIs
AI-driven coding is revolutionizing software development by automating repetitive tasks, generating code snippets, improving code quality, and identifying potential bugs. By integrating AI-powered tools, developers can significantly enhance productivity and optimize their workflows. This guide will help you integrate AI-powered code models into JetBrains IntelliJ IDEA using Continue and Scaleway’s Generative APIs.
Before you start
To complete the actions presented below, you must have:
- A Scaleway account logged into the console
- Owner status or IAM permissions allowing you to perform actions in the intended Organization
- A valid API key for API authentication (you can verify it works with the optional sketch after this list)
- IntelliJ IDEA installed on your local machine
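Before going further, you can optionally confirm that your secret key works against Scaleway's Generative APIs. The short Python sketch below assumes the openai package is installed; it uses the endpoint and one of the models referenced later in this guide, so replace the placeholder with your own key.
from openai import OpenAI

# Scaleway's Generative APIs expose an OpenAI-compatible endpoint,
# so the standard OpenAI client can be pointed at it directly.
client = OpenAI(
    base_url="https://api.scaleway.ai/v1/",
    api_key="###SCW_SECRET_KEY###",  # replace with your Scaleway secret key
)

response = client.chat.completions.create(
    model="qwen2.5-coder-32b-instruct",
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
    max_tokens=10,
)
print(response.choices[0].message.content)  # expect a short confirmation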
You can install Continue from the JetBrains marketplace:
- Open IntelliJ IDEA and go to Preferences/Settings (Ctrl+Alt+S on Windows/Linux, Cmd+, on macOS).
- Navigate to Plugins, then click Marketplace.
- Search for Continue and click Install.
- Restart IntelliJ IDEA after installation.
Configure Continue to use Scaleway’s Generative APIs
Configure Continue through the graphical interface
To link Continue with Scaleway's Generative APIs, you can use built-in menus from Continue in IntelliJ IDEA.
- Click Continue in the menu on the right.
- In the prompt section, click the Select model dropdown, then click Add Chat model.
- Select Scaleway as the provider.
- Select the model you want to use (we recommend Qwen 2.5 Coder 32b to get started with chat and autocompletion only).
- Enter your Scaleway secret key.
These actions will automatically edit your config.yaml file. To edit it manually, see Configure Continue through configuration file.
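If you want to check the result, the entry added by the interface should resemble the minimal sketch below (illustrative only: the exact fields Continue writes may differ, and the full manual configuration is covered in the next section).
models:
  - name: Qwen 2.5 Coder 32b - Scaleway
    provider: scaleway            # provider selected in the Add Chat model dialog
    model: qwen2.5-coder-32b-instruct
    apiKey: ###SCW_SECRET_KEY###
    roles:
      - chat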
Configure Continue through configuration file
To link Continue with Scaleway’s Generative APIs, you need to configure the settings file:
- Open your config.yaml settings file:
  - If you have already configured a Local Assistant, click Local Assistant, then click the wheel icon to open your existing config.yaml.
  - Otherwise, create a config.yaml file inside your .continue directory.
- Add the following configuration to enable Scaleway's Generative APIs. This configuration uses three different models, one for each task:
  - devstral-small-2505 for agentic workflows through a chat interface
  - qwen2.5-coder-32b-instruct for autocompletion when editing a file
  - bge-multilingual-gemma2 for embedding and retrieving code context

name: Continue Config
version: 0.0.1
models:
  - name: Devstral - Scaleway
    provider: openai
    model: devstral-small-2505
    apiBase: https://api.scaleway.ai/v1/
    apiKey: ###SCW_SECRET_KEY###
    defaultCompletionOptions:
      maxTokens: 8000
      contextLength: 50000
    roles:
      - chat
      - apply
      - embed
      - edit
    capabilities:
      - tool_use
  - name: Autocomplete - Scaleway
    provider: openai
    model: qwen2.5-coder-32b-instruct
    apiBase: https://api.scaleway.ai/v1/
    apiKey: ###SCW_SECRET_KEY###
    defaultCompletionOptions:
      maxTokens: 8000
      contextLength: 50000
    roles:
      - autocomplete
  - name: Embeddings Model - Scaleway
    provider: openai
    model: bge-multilingual-gemma2
    apiBase: https://api.scaleway.ai/v1/
    apiKey: ###SCW_SECRET_KEY###
    roles:
      - embed
    embedOptions:
      maxChunkSize: 256
      maxBatchSize: 32
context:
  - provider: problems
  - provider: tree
  - provider: url
  - provider: search
  - provider: folder
  - provider: codebase
  - provider: web
    params:
      n: 3
  - provider: open
    params:
      onlyPinned: true
  - provider: docs
  - provider: terminal
  - provider: code
  - provider: diff
  - provider: currentFile
- Save the file at the correct location:
  - Linux/macOS: ~/.continue/config.yaml
  - Windows: %USERPROFILE%\.continue\config.yaml
- In Local Assistant, click on Reload config or restart IntelliJ IDEA.
Alternatively, a config.json file can be used with the following format. Note that this format is deprecated, and we recommend using config.yaml instead.
{
  "models": [
    {
      "model": "devstral-small-2505",
      "title": "Devstral - Scaleway",
      "provider": "openai",
      "apiBase": "https://api.scaleway.ai/v1/",
      "apiKey": "###SCW_SECRET_KEY###"
    }
  ],
  "embeddingsProvider": {
    "model": "bge-multilingual-gemma2",
    "provider": "openai",
    "apiBase": "https://api.scaleway.ai/v1/",
    "apiKey": "###SCW_SECRET_KEY###"
  },
  "tabAutocompleteModel": {
    "model": "qwen2.5-coder-32b-instruct",
    "title": "Autocomplete - Scaleway",
    "provider": "openai",
    "apiBase": "https://api.scaleway.ai/v1/",
    "apiKey": "###SCW_SECRET_KEY###"
  }
}
Activate Continue in IntelliJ IDEA
After configuring the API, activate Continue in IntelliJ IDEA:
- Open the Command Search (press Shift twice quickly on Windows/Linux/macOS).
- Type "Continue" and select the appropriate command to enable AI-powered assistance.
Going further
You can add more parameters to configure your model's behavior by editing config.yaml.
For instance, you can add the following chatOptions.baseSystemMessage value to modify the LLM "role":"system" and/or "role":"developer" messages and get less verbose answers:

models:
  - model: ...
    chatOptions:
      baseSystemMessage: "You are an expert developer. Only write concise answers."