How to scrape kube-state-metrics using Observe-Agent

:bar_chart: Scraping kube-state-metrics with the Observe Agent

By default, the Observe Agent does not include a job to scrape kube-state-metrics.

To monitor the state of your Kubernetes objects — like pods, deployments, and nodes — you can configure the Observe Agent to scrape kube-state-metrics. This gives you detailed visibility into cluster health and pod status reasons right in Observe.

Here’s how to set it up.
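
One assumption throughout this guide: kube-state-metrics is already deployed in your cluster and exposed through a Service. A quick way to confirm that (the label below matches a default kube-state-metrics install; adjust it if yours is labeled differently):

kubectl get svc -A -l app.kubernetes.io/name=kube-state-metrics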


:receipt: values.yaml

Below is a full example of the values.yaml you can use to enable and configure Prometheus scraping for kube-state-metrics:

# Enable Prometheus scraping for the application
application:
  prometheusScrape:
    enabled: true
    independentDeployment: true

# Configure the Observe Agent
agent:
  config:
    prometheusScraper:
      # Define processors
      processors:
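        # Tag every metric from this pipeline with debug_source=kube-state-metrics for easy filtering in Observe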
        attributes/debug_source_kube-state_metrics:
          actions:
            - action: insert
              key: debug_source
              value: kube-state-metrics

      # Define Prometheus receivers
      receivers:
        prometheus/kubestate:
          config:
            scrape_configs:
              - job_name: kubestate-svc
                kubernetes_sd_configs:
                  - role: service
                relabel_configs:
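                  # Only keep services annotated with prometheus.io/scrape: "true"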
                  - action: keep
                    source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
                    regex: "true"
                  - action: replace
                    source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
                    target_label: __metrics_path__
                    regex: (.+)
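                  # Rewrite the target address to use the port from the prometheus.io/port annotation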
                  - action: replace
                    source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
                    target_label: __address__
                    regex: (.+?)(?::\d+)?;(\d+)
                    replacement: $1:$2
                metric_relabel_configs:
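                  # Keep only the pod restart/status metrics named in the regex; all other kube-state-metrics series are dropped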
                  - action: keep
                    source_labels: [__name__]
                    regex: kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_container_status_last_terminated_reason|kube_pod_status_reason

      # Define service pipelines
      service:
        pipelines:
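          # Dedicated pipeline: scraped kube-state metrics pass through the agent's
          # standard processors plus the debug_source tag, then go to Observe via remote write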
          metrics/kubestate:
            receivers: [prometheus/kubestate]
            processors:
              - memory_limiter
              - resource/drop_service_name
              - k8sattributes
              - batch
              - resource/observe_common
              - attributes/debug_source_kube-state_metrics
            exporters:
              - prometheusremotewrite/observe
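
A note on the metric_relabel_configs block above: the keep rule means only the four metric names in the regex are forwarded to Observe, and every other series kube-state-metrics exposes is dropped at scrape time. If you want broader coverage later, extend the regex with extra alternatives. For example (the two added names below are standard kube-state-metrics series, but swap in whichever ones you actually need):

metric_relabel_configs:
  - action: keep
    source_labels: [__name__]
    regex: kube_pod_container_status_restarts_total|kube_pod_container_status_waiting_reason|kube_pod_container_status_last_terminated_reason|kube_pod_status_reason|kube_deployment_status_replicas_available|kube_node_status_condition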

:rocket: Apply the Configuration

After saving your values.yaml, apply the configuration with Helm:

helm upgrade observe-agent observe/agent -n observe -f values.yaml

This command upgrades (or redeploys) the Observe Agent in the observe namespace with your new kube-state-metrics scraping configuration.
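
If Helm complains that it can't find the chart, refresh your local chart cache first. This assumes the Observe chart repository is already added under the name observe, as referenced in the upgrade command above:

helm repo update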

Verify that it’s active:

kubectl get pods -n observe
helm get values observe-agent -n observe
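
You can also check the scraper's logs for the new kubestate-svc job. The label selector here is an assumption based on a typical install; use whatever labels your agent pods actually carry:

kubectl logs -n observe -l app.kubernetes.io/name=observe-agent --tail=100 | grep -i kubestate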

:white_check_mark: Validate the Scrape

To confirm that the metrics are flowing:

  1. Open your Observe dashboard or metrics backend.
  2. Search for the following metrics:
  • kube_pod_container_status_restarts_total
  • kube_pod_container_status_waiting_reason
  • kube_pod_container_status_last_terminated_reason
  • kube_pod_status_reason

If these appear — you’re successfully scraping kube-state-metrics! :tada:
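
If they don't show up, it's worth confirming that kube-state-metrics itself is serving them before digging into the agent config. In one terminal, port-forward the service (the name and namespace here are assumptions based on a default install, and 8080 matches the port annotation in the tip below), then curl it from another:

kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080
curl -s localhost:8080/metrics | grep kube_pod_status_reason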


:bulb: Pro Tip

Make sure your kube-state-metrics service is annotated correctly so that it’s automatically discovered by the scraper:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"

Once that’s set, the Observe Agent will automatically detect and begin scraping it.
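
If you'd rather not edit the Service manifest, you can add the same annotations in place with kubectl. The service name and namespace here are assumptions based on a default install:

kubectl annotate service kube-state-metrics -n kube-system prometheus.io/scrape="true" prometheus.io/port="8080" --overwrite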