Migrate to another Helm chart version

Use the following information to migrate from one Helm chart version to another.

Migrate from version 2.0 to 3.0

The 3.0 release of the Kubernetes Monitoring Helm chart no longer uses the Alloy Helm chart as a subchart dependency. Instead, the chart uses the new Alloy Operator to deploy Alloy instances. This allows for a more flexible and powerful deployment of Alloy, and it lets your chosen features appropriately configure those Alloy instances.

Migrate from version 1.x to 2.0

The version 2.0 release of the Kubernetes Monitoring Helm chart includes major changes from the 1.x version. The configuration has been reorganized around features rather than data types (such as metrics, logs, and so on). In version 1, many features were enabled by default, such as Cluster metrics, Pod logs, and Cluster events.

In version 2, all features are turned off by default. This means your values file better reflects your desired feature set.

To migrate to version 2, use the following sections to map destinations, collectors, Cluster events, Cluster metrics, annotation autodiscovery, Application Observability, Beyla, Pod logs, Prometheus Operator, and integrations. A migration tool is available at https://grafana.github.io/k8s-monitoring-helm-migrator/.

Destinations mapping

The definition of where data is delivered has changed from externalServices, an object with a fixed set of four service types, to destinations, an array of any number of destinations. In version 1, the externalServices object had four types of destinations:

  • prometheus: Where all metrics are delivered. This could refer to a true Prometheus server or an OTLP destination that handles metrics.
  • loki: Where all logs are delivered. This could refer to a true Loki server or an OTLP destination that handles logs.
  • tempo: Where all traces are delivered. This could refer to a true Tempo server or an OTLP destination that handles traces.
  • pyroscope: Where all profiles are delivered.

In version 1, each service essentially referred to the destination for a data type. In version 2, each destination refers to the protocol used to deliver the data. Refer to Destinations for more information.

The following table shows an example of how to map from v1 externalServices to v2 destinations:

| Service | v1.x setting | v2.0 setting |
|---|---|---|
| Prometheus | externalServices.prometheus | destinations: [{type: "prometheus"}] |
| Prometheus (OTLP) | externalServices.prometheus | destinations: [{type: "otlp", metrics: {enabled: true}}] |
| Loki | externalServices.loki | destinations: [{type: "loki"}] |
| Loki (OTLP) | externalServices.loki | destinations: [{type: "otlp", logs: {enabled: true}}] |
| Tempo | externalServices.tempo | destinations: [{type: "otlp"}] |
| Pyroscope | externalServices.pyroscope | destinations: [{type: "pyroscope"}] |

Complete the following to map destinations:

  1. Create a destination for each external service you are using.
  2. Provide a name and a type for the destination.
  3. Provide the URL for the destination.

    Note

    This is the full data-writing (push) URL, not only the hostname.

  4. Map the other settings from the original service to the new destination as shown in the following table:
    | Original service | New destination |
    |---|---|
    | authMode | auth.type |
    | Auth definitions (e.g. basicAuth) | auth |
    | externalLabels | extraLabels |
    | writeRelabelRules | metricProcessingRules |
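
For example, a v1 Prometheus service using basic auth might map to a v2 destination as follows. This is a minimal sketch: the URL, credentials, and labels are placeholders, and your v1 fields may differ.

```yaml
# v1.x: one of the four fixed externalServices types (placeholder values)
externalServices:
  prometheus:
    host: https://prometheus.example.com
    authMode: basic
    basicAuth:
      username: "12345"
      password: "its_a_secret"
    externalLabels:
      cluster: my-cluster

# v2.0: an equivalent entry in the destinations array
# (note the full push URL, not only the hostname)
destinations:
  - name: metricsService
    type: prometheus
    url: https://prometheus.example.com/api/prom/push
    auth:
      type: basic
      username: "12345"
      password: "its_a_secret"
    extraLabels:
      cluster: my-cluster
```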

Collector mapping

Alloy collectors (instances) have been split further than in version 1 to allow for more flexibility in configuration and predictability in resource requirements. Each feature lets you choose which collector it uses, but the defaults have been chosen carefully. You should only need to change them if you have specific requirements.

| Responsibility | v1.x Alloy | v2.0 Alloy | Notes |
|---|---|---|---|
| Metrics | alloy | alloy-metrics | |
| Logs | alloy-logs | alloy-logs | |
| Cluster events | alloy-events | alloy-singleton | Also applies to anything that must be deployed only to a single instance. |
| Application receivers | alloy | alloy-receiver | |
| Profiles | alloy-profiles | alloy-profiles | |

Complete the following to map collectors:

  1. Rename alloy to alloy-metrics.
  2. Rename alloy-events to alloy-singleton.
  3. Move any open receiver ports to the alloy-receiver instance.
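
For example, a v1 values file that customized these instances maps over as follows (the replica count is illustrative):

```yaml
# v1.x
alloy:
  controller:
    replicas: 2
alloy-events:
  enabled: true

# v2.0
alloy-metrics:
  enabled: true
  controller:
    replicas: 2
alloy-singleton:
  enabled: true
```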

Cluster events mapping

Gathering of Cluster events has been moved into its own feature called clusterEvents.

| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Cluster events | logs.cluster_events | clusterEvents |

If you are using Cluster events (logs.cluster_events.enabled):

  1. Enable clusterEvents and alloy-singleton in your values file:

    ```yaml
    clusterEvents:
      enabled: true
    alloy-singleton:
      enabled: true
    ```

  2. Move logs.cluster_events to clusterEvents.

  3. Rename extraStageBlocks to extraProcessingStages.
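
For example, a v1 Cluster events configuration with an extra processing stage might translate as follows (the stage.drop block is a hypothetical example):

```yaml
# v1.x
logs:
  cluster_events:
    enabled: true
    extraStageBlocks: |-
      stage.drop {
        expression = "Unhealthy"
      }

# v2.0
clusterEvents:
  enabled: true
  extraProcessingStages: |-
    stage.drop {
      expression = "Unhealthy"
    }
alloy-singleton:
  enabled: true
```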

Cluster metrics mapping

Cluster metrics refers to any metric data source that scrapes metrics about the cluster itself. This includes the following data sources:

  • Cluster metrics (Kubelet, API Server, and so on)
  • Node metrics (Node Exporter and Windows Exporter)
  • kube-state-metrics
  • Energy metrics via Kepler
  • Cost metrics via OpenCost

These have all been combined into a single feature called clusterMetrics.

| Feature | v1.x setting | v2.0 setting | Notes |
|---|---|---|---|
| Kubelet metrics | metrics.kubelet | clusterMetrics.kubelet | |
| cAdvisor metrics | metrics.cadvisor | clusterMetrics.cadvisor | |
| kube-state-metrics metrics | metrics.kube-state-metrics | clusterMetrics.kube-state-metrics | |
| kube-state-metrics deployment | kube-state-metrics | clusterMetrics.kube-state-metrics | The decision to deploy is controlled by clusterMetrics.kube-state-metrics.deploy. |
| Node Exporter metrics | metrics.node-exporter | clusterMetrics.node-exporter | |
| Node Exporter deployment | prometheus-node-exporter | clusterMetrics.node-exporter | The decision to deploy is controlled by clusterMetrics.node-exporter.deploy. |
| Windows Exporter metrics | metrics.windows-exporter | clusterMetrics.windows-exporter | |
| Windows Exporter deployment | prometheus-windows-exporter | clusterMetrics.windows-exporter | The decision to deploy is controlled by clusterMetrics.windows-exporter.deploy. |
| Energy metrics (Kepler) | metrics.kepler | clusterMetrics.kepler | |
| Kepler deployment | kepler | clusterMetrics.kepler | |
| Cost metrics (OpenCost) | metrics.opencost | clusterMetrics.opencost | |
| OpenCost deployment | opencost | clusterMetrics.opencost | |

If you are using Cluster metrics (metrics.enabled):

  1. Enable clusterMetrics and alloy-metrics in your values file:

    ```yaml
    clusterMetrics:
      enabled: true
    alloy-metrics:
      enabled: true
    ```

  2. Move each of the sections in the previous table to clusterMetrics.

  3. Rename any extraRelabelingRules to extraDiscoveryRules.

  4. Rename any extraMetricRelabelingRules to extraMetricProcessingRules.
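
For example, a v1 kubelet section with a relabeling rule might translate as follows (the rule body is a hypothetical metric drop):

```yaml
# v1.x
metrics:
  kubelet:
    extraMetricRelabelingRules: |-
      rule {
        source_labels = ["__name__"]
        regex         = "kubelet_pod_worker_duration_seconds.*"
        action        = "drop"
      }

# v2.0
clusterMetrics:
  kubelet:
    extraMetricProcessingRules: |-
      rule {
        source_labels = ["__name__"]
        regex         = "kubelet_pod_worker_duration_seconds.*"
        action        = "drop"
      }
```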

Annotation Autodiscovery mapping

Discovery of Pods and Services by annotation has been moved into its own feature called annotationAutodiscovery.

| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Annotation autodiscovery | metrics.autoDiscover | annotationAutodiscovery |

If you are using annotation autodiscovery (metrics.autoDiscover.enabled):

  1. Enable annotationAutodiscovery and alloy-metrics in your values file:

    ```yaml
    annotationAutodiscovery:
      enabled: true
    alloy-metrics:
      enabled: true
    ```

  2. Move the contents of metrics.autoDiscover to annotationAutodiscovery.

  3. Rename any extraRelabelingRules to extraDiscoveryRules.

  4. Rename any extraMetricRelabelingRules to extraMetricProcessingRules.
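
For example, if your v1 values customized the scrape annotation, the contents move over unchanged (the annotation value shown is illustrative):

```yaml
# v1.x
metrics:
  autoDiscover:
    enabled: true
    annotations:
      scrape: "k8s.grafana.com/scrape"

# v2.0
annotationAutodiscovery:
  enabled: true
  annotations:
    scrape: "k8s.grafana.com/scrape"
```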

Application Observability mapping

Application Observability is the new name for the feature that receives data from applications via various receivers (such as OTLP, Zipkin, and so on), previously configured within the metrics, logs, and traces sections. It has been moved into its own feature called applicationObservability.

| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Collector ports | alloy.alloy.extraPorts | alloy-receiver.alloy.extraPorts |
| Receiver definitions | receivers | applicationObservability.receivers |
| Processors | receivers.processors | applicationObservability.processors |
| Metric filters | metrics.receiver.filters | applicationObservability.metrics.filters |
| Metric transforms | metrics.receiver.transforms | applicationObservability.metrics.transforms |
| Log filters | logs.receiver.filters | applicationObservability.logs.filters |
| Log transforms | logs.receiver.transforms | applicationObservability.logs.transforms |
| Trace filters | traces.receiver.filters | applicationObservability.traces.filters |
| Trace transforms | traces.receiver.transforms | applicationObservability.traces.transforms |

If you are using Application Observability (traces.enabled and receivers.*.enabled):

  1. Enable applicationObservability and alloy-receiver in your values file:

    ```yaml
    applicationObservability:
      enabled: true
    alloy-receiver:
      enabled: true
    ```

  2. Move any extra ports opened for applications from alloy.alloy.extraPorts to alloy-receiver.alloy.extraPorts.

  3. Enable the receivers you want to use in applicationObservability.receivers, for example:

    ```yaml
    applicationObservability:
      receivers:
        grpc:
          enabled: true
    ```

  4. Move receiver processors from receivers.processors to applicationObservability.processors.

  5. Move metric filters from metrics.receiver.filters to applicationObservability.metrics.filters.

  6. Move metric transforms from metrics.receiver.transforms to applicationObservability.metrics.transforms.

  7. Move log filters from logs.receiver.filters to applicationObservability.logs.filters.

  8. Move log transforms from logs.receiver.transforms to applicationObservability.logs.transforms.

  9. Move trace filters from traces.receiver.filters to applicationObservability.traces.filters.

  10. Move trace transforms from traces.receiver.transforms to applicationObservability.traces.transforms.
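
Putting steps 1 through 3 together, a v1 values file that opened an OTLP gRPC port might become the following (the port numbers are illustrative):

```yaml
# v1.x
alloy:
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
receivers:
  grpc:
    enabled: true

# v2.0
alloy-receiver:
  enabled: true
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
applicationObservability:
  enabled: true
  receivers:
    grpc:
      enabled: true
```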

Grafana Beyla mapping

Deployment and handling of the zero-code instrumentation feature using Grafana Beyla has been moved into its own feature called autoInstrumentation.

| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Auto-instrumentation metrics | metrics.beyla | autoInstrumentation.beyla |
| Beyla deployment | beyla | autoInstrumentation.beyla |

If you are using Beyla (beyla.enabled):

  1. Enable autoInstrumentation and alloy-metrics in your values file:

    ```yaml
    autoInstrumentation:
      enabled: true
    alloy-metrics:
      enabled: true
    ```

  2. Combine beyla and metrics.beyla and copy the result to autoInstrumentation.beyla.
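
For example (the field under beyla is an illustrative placeholder; carry over whatever you had set in both sections):

```yaml
# v1.x
beyla:
  enabled: true
metrics:
  beyla:
    enabled: true
    scrapeInterval: 60s

# v2.0: beyla and metrics.beyla combined
autoInstrumentation:
  enabled: true
  beyla:
    scrapeInterval: 60s
alloy-metrics:
  enabled: true
```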

Pod logs mapping

Gathering of Pod logs has been moved into its own feature called podLogs.

| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Pod logs | logs.pod_logs | podLogs |

If you are using Pod logs (logs.pod_logs.enabled):

  1. Enable podLogs and alloy-logs in your values file:

    ```yaml
    podLogs:
      enabled: true
    alloy-logs:
      enabled: true
    ```

  2. Move logs.pod_logs to podLogs.

  3. Rename:

    • extraRelabelingRules to extraDiscoveryRules
    • extraStageBlocks to extraLogProcessingStages
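
For example, a v1 Pod logs configuration using both kinds of rules might translate as follows (the rule and stage bodies are hypothetical):

```yaml
# v1.x
logs:
  pod_logs:
    enabled: true
    extraRelabelingRules: |-
      rule {
        source_labels = ["__meta_kubernetes_namespace"]
        regex         = "production"
        action        = "keep"
      }
    extraStageBlocks: |-
      stage.drop {
        expression = "DEBUG"
      }

# v2.0
podLogs:
  enabled: true
  extraDiscoveryRules: |-
    rule {
      source_labels = ["__meta_kubernetes_namespace"]
      regex         = "production"
      action        = "keep"
    }
  extraLogProcessingStages: |-
    stage.drop {
      expression = "DEBUG"
    }
alloy-logs:
  enabled: true
```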

Prometheus Operator objects mapping

Handling for Prometheus Operator objects, such as ServiceMonitors, PodMonitors, and Probes, has been moved to the prometheusOperatorObjects feature. This feature also includes the option to deploy the Prometheus Operator CRDs.

| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| PodMonitor settings | metrics.podMonitors | prometheusOperatorObjects.podMonitors |
| Probe settings | metrics.probes | prometheusOperatorObjects.probes |
| ServiceMonitor settings | metrics.serviceMonitors | prometheusOperatorObjects.serviceMonitors |
| CRDs deployment | prometheus-operator-crds.enabled | crds.deploy |

If you are using Prometheus Operator objects (metrics.podMonitors.enabled, metrics.probes.enabled, metrics.serviceMonitors.enabled, or prometheus-operator-crds.enabled):

  1. Enable prometheusOperatorObjects and alloy-metrics in your values file:

    ```yaml
    prometheusOperatorObjects:
      enabled: true
    alloy-metrics:
      enabled: true
    ```

  2. Move the following:

    • metrics.podMonitors to prometheusOperatorObjects.podMonitors
    • metrics.probes to prometheusOperatorObjects.probes
    • metrics.serviceMonitors to prometheusOperatorObjects.serviceMonitors
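
For example, based on the mapping table above:

```yaml
# v1.x
metrics:
  serviceMonitors:
    enabled: true
prometheus-operator-crds:
  enabled: true

# v2.0
prometheusOperatorObjects:
  enabled: true
  serviceMonitors:
    enabled: true
crds:
  deploy: true
alloy-metrics:
  enabled: true
```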

Integrations mapping

Integrations is a new feature in version 2.0 that allows you to enable and configure additional data sources. This includes the Alloy metrics that were previously part of v1. Some service integrations that previously needed to be defined in the extraConfig and logs.extraConfig sections can now be enabled in the integrations feature.

Replace your extraConfig with the new integrations feature if you are using either of these settings:

  • The metrics.alloy setting for getting Alloy metrics
  • The extraConfig sections used to gather data from any of the new built-in integrations

The following are built-in integrations:

| Built-in integration | v1.x setting | v2.0 setting |
|---|---|---|
| Alloy | metrics.alloy | integrations.alloy |
| cert-manager | extraConfig | integrations.cert-manager |
| etcd | extraConfig | integrations.etcd |
| MySQL | extraConfig and logs.extraConfig | integrations.mysql |

If you are using the Alloy integration (metrics.alloy.enabled), or extraConfig for cert-manager, etcd, or MySQL:

  1. Create instances of the integration that you want, and enable alloy-metrics in your values file:

    ```yaml
    integrations:
      alloy:
        instances:
          - name: 'alloy'
    alloy-metrics:
      enabled: true
    ```

  2. Move metrics.alloy to integrations.alloy.instances[].
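
Similarly, a cert-manager scrape configuration previously carried in extraConfig can be replaced with an instance of the built-in integration. The following sketch assumes the same instances pattern as the Alloy example above; refer to the integration's documentation for its full set of options.

```yaml
integrations:
  cert-manager:
    instances:
      - name: cert-manager
alloy-metrics:
  enabled: true
```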

For service integrations that are not available in the built-in integrations feature, you can continue to use them in the extraConfig sections. Refer to the Extra Configs section for guidance.

Extra configs mapping

The variables for adding arbitrary configuration to the Alloy instances have been moved inside the respective Alloy instance. If you are using extraConfig to add configuration for scraping metrics from an integration built into the integrations feature (such as cert-manager, etcd, or MySQL), move that configuration to the new integrations feature.

For other uses of extraConfig, refer to the following table:

| extraConfig | v1.x setting | v2.0 setting |
|---|---|---|
| Alloy for Metrics | extraConfig | alloy-metrics.extraConfig |
| Alloy for Apps | extraConfig | alloy-receiver.extraConfig |
| Alloy for Events | logs.cluster_events.extraConfig | alloy-singleton.extraConfig |
| Alloy for Logs | logs.extraConfig | alloy-logs.extraConfig |
| Alloy for Profiles | profiles.extraConfig | alloy-profiles.extraConfig |

  1. Move the following:
    • extraConfig related to metrics to alloy-metrics.extraConfig
    • extraConfig related to application receivers to alloy-receiver.extraConfig
    • logs.cluster_events.extraConfig to alloy-singleton.extraConfig
    • logs.extraConfig to alloy-logs.extraConfig
    • profiles.extraConfig to alloy-profiles.extraConfig
  2. Rename destinations for telemetry data to the appropriate destination component. Refer to Destination names.

Destination names

The <destination_name> in the component reference is the name of the destination, set to lowercase and with any non-alphanumeric characters replaced with an underscore. For example, if your destination is named Grafana Cloud Metrics, then the destination name would be grafana_cloud_metrics.

| Data type | v1.x setting | v2.0 setting |
|---|---|---|
| Metrics | prometheus.relabel.metrics_service.receiver | prometheus.remote_write.<destination_name>.receiver |
| Logs | loki.process.logs_service.receiver | loki.write.<destination_name>.receiver |
| Traces | otelcol.exporter.otlp.traces_service.input | otelcol.exporter.otlp.<destination_name>.input |
| Profiles | pyroscope.write.profiles_service.receiver | pyroscope.write.<destination_name>.receiver |
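
For example, a metrics pipeline carried over in extraConfig would be updated like this, assuming a destination named Grafana Cloud Metrics (the prometheus.scrape component is a hypothetical example):

```yaml
# v1.x
extraConfig: |-
  prometheus.scrape "my_app" {
    targets    = [{"__address__" = "my-app.default.svc:8080"}]
    forward_to = [prometheus.relabel.metrics_service.receiver]
  }

# v2.0
alloy-metrics:
  extraConfig: |-
    prometheus.scrape "my_app" {
      targets    = [{"__address__" = "my-app.default.svc:8080"}]
      forward_to = [prometheus.remote_write.grafana_cloud_metrics.receiver]
    }
```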

Dropped features

The following features have been removed from the 2.0 release:

  • Pre-install hooks: The pre-install and pre-upgrade hooks that performed config validation have been removed. The Alloy Pods now validate the configuration at runtime and log any issues, so the extra validation Pods are no longer needed. This greatly decreases startup time.
  • helm test functionality: The helm test functionality, which ran a config analysis and attempted to query the databases for expected metrics and logs, has been removed. It was either not fully developed or not useful in production environments; the query testing was mainly for CI/CD testing during development. It has been replaced by more effective and comprehensive methods.