
How to monitor your Kubernetes Kapsule cluster with Cockpit using Alloy

You can now send data plane logs from your Kapsule or Kosmos clusters to Cockpit, providing centralized, real-time access to application and system logs. Reduce complexity and manual work thanks to this integration, powered by an Alloy deployment via Easy Deploy.

This feature allows you to:

  • Enhance observability: View logs from all your Kubernetes containers in one place.
  • Simplify troubleshooting: Quickly drill down into specific Pods or containers without needing to configure a separate logging stack.
Important

This feature incurs costs based on the volume of logs ingested. Refer to the Cockpit FAQ for more details and best practices to avoid unexpected bills.

Before you start

To complete the actions presented below, you must have:

  • A running Kapsule or Kosmos cluster
  • An API Key with IAM permissions to:
    • edit your cluster (KubernetesFullAccess or KubernetesSystemMastersGroupAccess)
    • write on Cockpit (ObservabilityFullAccess)
  • A token with permissions to push to and query logs from Cockpit

IAM permissions required

You will need one of the following IAM permission sets to access your Cockpit's Grafana:

  • ObservabilityFullAccess: Full access to Cockpit
  • ObservabilityGrafanaEditor: Editor access to Cockpit's Grafana, read-only access to all other Cockpit features
  • ObservabilityReadOnly: Read access to Cockpit

Architecture and limitations

Control plane vs. data plane

  • Control plane: Fully managed by Scaleway. Users can already monitor control plane components (e.g., kube-apiserver, CCM, CSI) via Cockpit.
  • Data plane: Runs in your Scaleway Project (customer-managed instances, kubelet, containerd, customer Pods, etc.). You have full access to the data plane, including the ability to SSH into nodes.
Feature        | Control plane                                               | Data plane
Responsibility | Fully managed by Scaleway                                   | Managed by the customer (runs in your Scaleway Project)
Components     | kube-apiserver, CCM, CSI, etc.                              | kubelet, containerd, customer Pods, and system components such as kubelet.service
Access         | Users can monitor components via Cockpit (see how-to guide) | Full access to data, including SSH into nodes, log management, and custom configurations
Billing        | Included in cluster costs                                   | Billed based on log ingestion volume (see pricing below)

Because the data plane is entirely under your control, logs from any components running on these nodes are considered your own data. Consequently, shipping these logs to Cockpit is billed based on data ingestion.

How it works

The system leverages Alloy (an open source telemetry collector) running on your Kapsule or Kosmos cluster. Alloy forwards logs to the Loki endpoint of your Cockpit instance:

  1. Alloy can collect logs from:
    • Container stdout/stderr (Pods)
    • systemd journal (e.g., kubelet.service)
  2. The app automatically creates a custom datasource called kubernetes-logs and a Cockpit token with push logs permission.
  3. Log data is transmitted to Cockpit (Loki).
  4. Cockpit stores and indexes these logs.

Step-by-step: Enabling container logs in Cockpit

You can use Scaleway’s Easy Deploy to add an Alloy deployment to your cluster:

  1. Log in to the Scaleway console and go to your Kubernetes cluster.
  2. Navigate to the Easy Deploy tab.
  3. Select Alloy to Cockpit from the library.
  4. Deploy the application. Alloy will install on your cluster with default settings that:
    • Collect container logs for the kube-system namespace (by default).
    • Collect all systemd journal logs (e.g., kubelet).
    • Forward logs securely to Cockpit.
    Note

    You can edit the default deployment configuration to filter logs by source (under alloy.configMap.content in the YAML file). For example:

    cockpit_alloy_kubernetes_pods "namespace1" "namespace2"
    cockpit_alloy_journal "kubelet" "sshd"
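
In context, these directives replace the optional template lines near the end of alloy.configMap.content. A sketch of an edited configuration, assuming hypothetical namespaces app-frontend and app-backend and the kubelet and sshd journal units:

```
/* Optional: you may update the following lines: */
{{{- cockpit_alloy_kubernetes_pods "app-frontend" "app-backend" }}}
{{{- cockpit_alloy_journal "kubelet" "sshd" }}}
```

With these arguments, Alloy collects container logs only from the listed namespaces and journal logs only from the listed systemd units, which also helps keep ingestion costs down.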

Example Alloy configuration

Below is a simplified snippet of the configuration that Easy Deploy generates by default:

alloy:
    configMap:
        content: |
            /* Default loki writer. DO NOT REMOVE OR UPDATE. */
            loki.write "default" {
              endpoint {
                url = sys.env("COCKPIT_LOKI_PUSH_URL")

                authorization {
                  type        = "Bearer"
                  credentials = sys.env("COCKPIT_TOKEN")
                }
              }
            }

            /* Default loki process. DO NOT REMOVE */
            loki.process "default" {
              forward_to = [loki.write.default.receiver]

              // NOTE: You may update or remove this block if necessary.
              stage.limit {
                  rate  = 500 // The maximum rate of lines per second that the stage forwards.
                  burst = 500 // The maximum number of burst lines that the stage forwards.
              }

              // NOTE: You may update or remove this block if necessary.
              stage.drop {
                  longer_than         = "4KB"
                  drop_counter_reason = "too long"
              }

              // NOTE: You may update or remove this block if necessary.
              stage.drop {
                  older_than          = "48h"
                  drop_counter_reason = "too old"
              }
            }

            /* Optional: you may update the following lines: */
            {{{- cockpit_alloy_kubernetes_pods "kube-system" }}}
            {{{- cockpit_alloy_journal }}}

            /* Optional: put your custom Alloy configuration here: */
            /* ... */
    envFrom:
        - secretRef:
            name: easydeploy-alloy-credentials
    mounts:
        dockercontainers: true
        extra:
            - mountPath: /var/lib/alloy
              name: easydeploy-alloy
        varlog: true
    storagePath: /var/lib/alloy
controller:
    volumes:
        extra:
            - hostPath:
                path: /var/easydeploy-alloy
                type: DirectoryOrCreate
              name: easydeploy-alloy
extraObjects:
    - apiVersion: v1
      kind: Secret
      metadata:
        name: easydeploy-alloy-credentials
      stringData:
        COCKPIT_LOKI_PUSH_URL: '{{{ cockpit_loki_push_url }}}'
        COCKPIT_TOKEN: '{{{ cockpit_bearer_token }}}'
Note

Template values like {{{ cockpit_bearer_token }}} (bearer token) and {{{ cockpit_loki_push_url }}} (Loki push URL) are set automatically. Do not modify them.

Visualizing logs in Cockpit

Once Alloy is running:

  1. Go to the Cockpit section of the Scaleway console, then click Open dashboards.
  2. Log in to Grafana.
  3. In Grafana's menu, navigate to Dashboards and select Kubernetes Cluster Pod Logs to view logs collected from Pods in your clusters.
  4. Filter Pod logs by:
    • Datasource, which is automatically created upon deployment and visible in the Cockpit console
    • Cluster Name (e.g., my-kapsule-cluster)
    • namespace, pod, or container labels to isolate specific workloads
    • Time range to limit how far back in history you want to query
  5. Alternatively, in Grafana's menu, navigate to Dashboards and select Kubernetes Cluster Node Logs to view system logs collected from nodes in your clusters.
  6. Filter node logs by:
    • Datasource, which is automatically created upon deployment and visible in the Cockpit console
    • Cluster Name (e.g., my-kapsule-cluster)
    • Node or Syslog Identifier labels to isolate specific nodes or services
    • Time range to limit how far back in history you want to query
  7. Analyze logs in real time or over a historical range to troubleshoot issues, monitor errors, or track performance.
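
Beyond the prebuilt dashboards, you can also query the kubernetes-logs datasource directly from Grafana's Explore view. A minimal LogQL sketch, assuming the namespace and container labels used by the dashboards above (the container name coredns is only an illustration):

```
{namespace="kube-system", container="coredns"} |= "error"
```

This returns log lines from the matching streams that contain the string "error".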

Usage and pricing

Sending logs to Cockpit is billed based on the total volume of logs ingested. Learn more about how you are billed for using Cockpit with Scaleway data in the Cockpit FAQ.

Key points include:

  • Logging rate: The more logs you produce (e.g., high-traffic workloads or verbose logging), the higher the cost.
  • Filtering: Limit ingestion to critical namespaces or system components to keep volume down.
Note

You can edit the "default" loki.process block in the deployment configuration to adjust the volume of logs ingested:

/* Default loki process. DO NOT REMOVE */
loki.process "default" {
  forward_to = [loki.write.default.receiver]

  // NOTE: You may update or remove this block if necessary.
  stage.limit {
      rate  = 500 // The maximum rate of lines per second that the stage forwards.
      burst = 500 // The maximum number of burst lines that the stage forwards.
  }

  // NOTE: You may update or remove this block if necessary.
  stage.drop {
      longer_than         = "4KB"
      drop_counter_reason = "too long"
  }

  // NOTE: You may update or remove this block if necessary.
  stage.drop {
      older_than          = "48h"
      drop_counter_reason = "too old"
  }
}
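
In addition to the stages above, Alloy's loki.process supports a stage.match block that can drop whole log streams before they are shipped, which is another way to reduce ingestion volume. A hedged sketch, assuming a hypothetical noisy namespace named dev (check the stage.match reference for your Alloy version before relying on it):

```
// NOTE: Illustrative only; "dev" is a hypothetical namespace.
stage.match {
    selector            = "{namespace=\"dev\"}"
    action              = "drop"
    drop_counter_reason = "dev namespace"
}
```

Placed inside the loki.process "default" block, this drops every line whose stream matches the selector before it reaches Cockpit.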
Tip

Monitor the log ingestion rate in the dedicated dashboards provided in Cockpit to avoid unexpected costs.

Security considerations

  • Authentication: The Alloy client uses a Cockpit Bearer Token to authenticate. Keep this token secret; do not store it in publicly accessible repos.
  • Encryption: Communication between Alloy and Cockpit (HTTPS) encrypts logs in transit.
  • Access Control: Ensure only trusted team members can deploy Easy Deploy applications or modify cluster-level configurations.

Troubleshooting

  • No logs appearing in Cockpit:

    • Verify that the Alloy Pod is running.
      kubectl get pods -n <alloy-namespace>
    • Inspect Alloy logs for errors.
      kubectl logs <alloy-pod-name> -n <alloy-namespace>
  • High log ingestion cost:

    • Review your deployment configuration to filter out verbose logs or unneeded namespaces.
    • Check log ingestion rate in the dedicated dashboards for unusual spikes.
