
How to configure alerts for a job

Reviewed on 10 February 2025 • Published on 10 February 2025

This page shows you how to configure alerts for Scaleway Serverless Jobs using Scaleway Cockpit and Grafana.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • Scaleway resources you can monitor
  • Created Grafana credentials with the Editor role
  • Enabled the alert manager
  • Created at least one contact point
  • Selected the Scaleway Alerting alert manager in Grafana
  1. Log in to Grafana using your credentials.

  2. Click the Toggle menu then click Alerting.

  3. Click Alert rules and + New alert rule.

  4. Scroll down to the Define query and alert condition section and click Switch to data source-managed alert rule.

    Important

    This allows you to configure alert rules managed by the data source of your choice, instead of using Grafana’s managed alert rules.

  5. Type in a name for your alert.

  6. Select the data source you want to configure alerts for. This documentation uses the Scaleway Metrics data source as an example.

  7. In the Metrics browser drop-down, select the metric you want to configure an alert for. Refer to the examples below for details on each alert for Serverless Jobs.

AnyJobError

  • Pending period: 5s
  • Summary: Job run {{ $labels.resource_id }} is in error.
  • Query and alert condition: (serverless_job_run:state_failed == 1) OR (serverless_job_run:state_internal_error == 1)
  • Description: Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} finished in error. Check the console to find the error message.

JobError

  • Pending period: 5s
  • Summary: Job run {{ $labels.resource_id }} is in error.
  • Query and alert condition: (serverless_job_run:state_failed{resource_name="your-job-name-here"} == 1) OR (serverless_job_run:state_internal_error{resource_name="your-job-name-here"} == 1)
  • Description: Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} finished in error. Check the console to find the error message.

AnyJobHighCPUUsage

  • Pending period: 10s
  • Summary: High CPU usage for job run {{ $labels.resource_id }}.
  • Query and alert condition: serverless_job_run:cpu_usage_seconds_total:rate30s / serverless_job_run:cpu_limit * 100 > 90
  • Description: Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} has been using more than {{ printf "%.0f" $value }}% of its available CPU for 10 seconds.

JobHighCPUUsage

  • Pending period: 10s
  • Summary: High CPU usage for job run {{ $labels.resource_id }}.
  • Query and alert condition: serverless_job_run:cpu_usage_seconds_total:rate30s{resource_name="your-job-name-here"} / serverless_job_run:cpu_limit{resource_name="your-job-name-here"} * 100 > 90
  • Description: Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} has been using more than {{ printf "%.0f" $value }}% of its available CPU for 10 seconds.

AnyJobHighMemoryUsage

  • Pending period: 10s
  • Summary: High memory usage for job run {{ $labels.resource_id }}.
  • Query and alert condition: (serverless_job_run:memory_usage_bytes / serverless_job_run:memory_limit_bytes) * 100 > 80
  • Description: Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} has been using more than {{ printf "%.0f" $value }}% of its available RAM for 10 seconds.

JobHighMemoryUsage

  • Pending period: 10s
  • Summary: High memory usage for job run {{ $labels.resource_id }}.
  • Query and alert condition: (serverless_job_run:memory_usage_bytes{resource_name="your-job-name-here"} / serverless_job_run:memory_limit_bytes{resource_name="your-job-name-here"}) * 100 > 80
  • Description: Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} has been using more than {{ printf "%.0f" $value }}% of its available RAM for 10 seconds.
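
For reference, an alert such as JobError can also be expressed as a Prometheus-style alerting rule. The sketch below is illustrative only: it reuses the query, pending period, summary, and description from the entry above, "my-job-definition" is a placeholder job name, the severity label is an assumed convention, and the exact format Cockpit uses to store rules created through Grafana may differ.

    groups:
      - name: serverless-jobs-alerts
        rules:
          - alert: JobError
            # Query and alert condition, filtered to a single job definition (placeholder name).
            expr: |
              (serverless_job_run:state_failed{resource_name="my-job-definition"} == 1)
              OR (serverless_job_run:state_internal_error{resource_name="my-job-definition"} == 1)
            # Pending period: the condition must hold for 5 seconds before the alert fires.
            for: 5s
            labels:
              severity: critical  # optional label (assumption), useful for routing in the alert manager
            annotations:
              summary: 'Job run {{ $labels.resource_id }} is in error.'
              description: 'Job run {{ $labels.resource_id }} from the job definition {{ $labels.resource_name }} finished in error. Check the console to find the error message.'
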
  8. Select labels that apply to the metric you selected in the previous step, to target your desired resources and fine-tune your alert (see the example query after this list).
  9. Select one or more values for your labels.
  10. Click Use query to generate your alert based on the conditions you have defined.
  11. Select a folder to store your rule, or create a new one. Folders allow you to easily manage your different rules.
  12. Select an evaluation group to add your rule to. Rules within the same group are evaluated sequentially over the same time interval.
  13. In the Set alert evaluation behavior field, configure how long the alert must be in breach of the condition(s) you have defined before it triggers.
    Note

    For example, if you want to be alerted after your alert has been in breach of the condition for 2 minutes without interruption, type 2 and select minutes in the drop-down.

  14. Optionally, add a summary and a description.
  15. Click Save rule in the top right corner of your screen to save your alert. Once your alert meets the conditions you have configured, you will receive an email informing you that your alert has been triggered.
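
As an illustration of steps 8 to 10: selecting the resource_name label with a placeholder job name on the memory metrics, then adding a threshold, produces a query equivalent to the JobHighMemoryUsage example above.

    # Percentage of the memory limit currently used by each run of the job (placeholder name),
    # compared against an 80% threshold.
    (
      serverless_job_run:memory_usage_bytes{resource_name="my-job-definition"}
      /
      serverless_job_run:memory_limit_bytes{resource_name="my-job-definition"}
    ) * 100 > 80

The comparison on the right-hand side (> 80) is the alert condition; the pending period you set in step 13 controls how long it must remain true before the alert fires.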