Migrate to another Helm chart version
Use the following information to migrate from one Helm chart version to another.
Migrate from version 2.0 to 3.0
The 3.0 release of the Kubernetes Monitoring Helm chart no longer uses the Alloy Helm chart as a subchart dependency. Instead, the chart uses the new Alloy Operator to deploy Alloy instances. This allows for more flexible and powerful Alloy deployments, and lets your chosen features configure those Alloy instances appropriately.
Migrate from version 1.x to 2.0
The version 2.0 release of the Kubernetes Monitoring Helm chart includes major changes from the 1.x version. The chart has been reorganized around features rather than data types (such as metrics, logs, and so on). In version 1, many features were enabled by default, such as Cluster metrics, Pod logs, and Cluster events.
In version 2, all features are turned off by default. This means your values file better reflects your desired feature set.
To migrate to version 2, use the following sections to map destinations, collectors, Cluster events, Cluster metrics, annotation autodiscovery, Application Observability, Beyla, Pod logs, Prometheus Operator, and integrations. A migration tool is available at https://grafana.github.io/k8s-monitoring-helm-migrator/.
Destinations mapping
The definition of where data is delivered has changed from `externalServices`, an object of four types, to `destinations`, an array of any number of types. Previously, the `externalServices` object had four types of destinations:

- `prometheus`: Where all metrics are delivered. This could refer to a true Prometheus server or an OTLP destination that handles metrics.
- `loki`: Where all logs are delivered. This could refer to a true Loki server or an OTLP destination that handles logs.
- `tempo`: Where all traces are delivered. This could refer to a true Tempo server or an OTLP destination that handles traces.
- `pyroscope`: Where all profiles are delivered.
In version 1, the service essentially referred to the destination for the data type. In version 2, the destination refers to the protocol used to deliver the data type. Refer to Destinations for more information.
The following table shows an example of how to map from v1 `externalServices` to v2 `destinations`:

| Service | v1.x setting | v2.0 setting |
|---|---|---|
| Prometheus | `externalServices.prometheus` | `destinations: [{type: "prometheus"}]` |
| Prometheus (OTLP) | `externalServices.prometheus` | `destinations: [{type: "otlp", metrics: {enabled: true}}]` |
| Loki | `externalServices.loki` | `destinations: [{type: "loki"}]` |
| Loki (OTLP) | `externalServices.loki` | `destinations: [{type: "otlp", logs: {enabled: true}}]` |
| Tempo | `externalServices.tempo` | `destinations: [{type: "otlp"}]` |
| Pyroscope | `externalServices.pyroscope` | `destinations: [{type: "pyroscope"}]` |
Complete the following to map destinations:

- Create a destination for each external service you are using.
- Provide a `name` and a `type` for the destination.
- Provide the URL for the destination.

  > **Note:** This is a full data writing/pushing URL, not only the hostname.

- Map the other settings from the original service to the new destination, as shown in the following table and the example after it:
| Original service | New destination |
|---|---|
| `authMode` | `auth.type` |
| Auth definitions (such as `basicAuth`) | `auth` |
| `externalLabels` | `extraLabels` |
| `writeRelabelRules` | `metricProcessingRules` |
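For example, here is a minimal sketch of mapping a v1 Prometheus service to a v2 `prometheus` destination. The destination name, URL, and credentials are placeholders; carry over only the settings you actually used in v1:

```yaml
# v1.x (before): hypothetical values
externalServices:
  prometheus:
    host: https://prometheus.example.com
    basicAuth:
      username: "12345"
      password: "example-password"

# v2.0 (after): the same service expressed as a destination
destinations:
  - name: metricsService
    type: prometheus
    # Full data writing/pushing URL, not only the hostname
    url: https://prometheus.example.com/api/prom/push
    auth:
      type: basicAuth
      username: "12345"
      password: "example-password"
```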
Collector mapping
Alloy collectors (or instances) have been split further to allow for more flexibility in configuration and more predictable resource requirements. Each feature lets you choose which collector it uses, but the defaults have been chosen carefully. You should only need to change these if you have specific requirements.
| Responsibility | v1.x Alloy | v2.0 Alloy | Notes |
|---|---|---|---|
| Metrics | `alloy` | `alloy-metrics` | |
| Logs | `alloy-logs` | `alloy-logs` | |
| Cluster events | `alloy-events` | `alloy-singleton` | Also applies to anything that must be deployed only to a single instance. |
| Application receivers | `alloy` | `alloy-receiver` | |
| Profiles | `alloy-profiles` | `alloy-profiles` | |
Complete the following to map collectors:

- Rename `alloy` to `alloy-metrics`.
- Rename `alloy-events` to `alloy-singleton`.
- Move any open receiver ports to the `alloy-receiver` instance, as shown in the example below.
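For example, if your v1 values opened an extra port on the `alloy` instance for receiving application data, a sketch of the v2 equivalent might look like this; the port definition is a placeholder:

```yaml
# v1.x (before)
alloy:
  alloy:
    extraPorts:
      - name: otlp-grpc       # placeholder receiver port
        port: 4317
        targetPort: 4317
        protocol: TCP
alloy-events:
  enabled: true

# v2.0 (after): alloy becomes alloy-metrics, alloy-events becomes alloy-singleton,
# and receiver ports move to alloy-receiver
alloy-metrics:
  enabled: true
alloy-singleton:
  enabled: true
alloy-receiver:
  enabled: true
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
```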
Cluster events mapping
Gathering of Cluster events has been moved into its own feature called `clusterEvents`.
| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Cluster events | `logs.cluster_events` | `clusterEvents` |
If you are using Cluster events (`logs.cluster_events.enabled`):

- Enable `clusterEvents` and `alloy-singleton` in your values file:

  ```yaml
  clusterEvents:
    enabled: true
  alloy-singleton:
    enabled: true
  ```

- Move `logs.cluster_events` to `clusterEvents`.
- Rename `extraStageBlocks` to `extraProcessingStages`.
Cluster metrics mapping
Cluster metrics refers to any metric data source that scrapes metrics about the cluster itself. This includes the following data sources:
- Cluster metrics (Kubelet, API Server, and so on)
- Node metrics (Node Exporter and Windows Exporter)
- kube-state-metrics
- Energy metrics via Kepler
- Cost metrics via OpenCost
These have all been combined into a single feature called `clusterMetrics`.
| Feature | v1.x setting | v2.0 setting | Notes |
|---|---|---|---|
| Kubelet metrics | `metrics.kubelet` | `clusterMetrics.kubelet` | |
| cAdvisor metrics | `metrics.cadvisor` | `clusterMetrics.cadvisor` | |
| kube-state-metrics metrics | `metrics.kube-state-metrics` | `clusterMetrics.kube-state-metrics` | |
| kube-state-metrics deployment | `kube-state-metrics` | `clusterMetrics.kube-state-metrics` | The decision to deploy is controlled by `clusterMetrics.kube-state-metrics.deploy`. |
| Node Exporter metrics | `metrics.node-exporter` | `clusterMetrics.node-exporter` | |
| Node Exporter deployment | `prometheus-node-exporter` | `clusterMetrics.node-exporter` | The decision to deploy is controlled by `clusterMetrics.node-exporter.deploy`. |
| Windows Exporter metrics | `metrics.windows-exporter` | `clusterMetrics.windows-exporter` | |
| Windows Exporter deployment | `prometheus-windows-exporter` | `clusterMetrics.windows-exporter` | The decision to deploy is controlled by `clusterMetrics.windows-exporter.deploy`. |
| Energy metrics (Kepler) | `metrics.kepler` | `clusterMetrics.kepler` | |
| Kepler deployment | `kepler` | `clusterMetrics.kepler` | |
| Cost metrics (OpenCost) | `metrics.opencost` | `clusterMetrics.opencost` | |
| OpenCost deployment | `opencost` | `clusterMetrics.opencost` | |
If you are using Cluster metrics (`metrics.enabled`):

- Enable `clusterMetrics` and `alloy-metrics` in your values file:

  ```yaml
  clusterMetrics:
    enabled: true
  alloy-metrics:
    enabled: true
  ```

- Move each of the sections in the previous table to `clusterMetrics`.
- Rename any `extraRelabelingRules` to `extraDiscoveryRules`.
- Rename any `extraMetricRelabelingRules` to `extraMetricProcessingRules`.
Annotation Autodiscovery mapping
Discovery of Pods and Services by annotation has been moved into its own feature called `annotationAutodiscovery`.
| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Annotation autodiscovery | `metrics.autoDiscover` | `annotationAutodiscovery` |
If you are using annotation autodiscovery (`metrics.autoDiscover.enabled`):

- Enable `annotationAutodiscovery` and `alloy-metrics` in your values file:

  ```yaml
  annotationAutodiscovery:
    enabled: true
  alloy-metrics:
    enabled: true
  ```

- Move the contents of `metrics.autoDiscover` to `annotationAutodiscovery`.
- Rename any `extraRelabelingRules` to `extraDiscoveryRules`.
- Rename any `extraMetricRelabelingRules` to `extraMetricProcessingRules`.
Application Observability mapping
Application Observability is the new name for the feature that receives data through various receivers (such as OTLP and Zipkin), which was previously configured within the metrics, logs, and traces sections. It has been moved into its own feature.
| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Collector ports | `alloy.alloy.extraPorts` | `alloy-receiver.alloy.extraPorts` |
| Receiver definitions | `receivers` | `applicationObservability.receivers` |
| Processors | `receivers.processors` | `applicationObservability.processors` |
| Metric filters | `metrics.receiver.filters` | `applicationObservability.metrics.filters` |
| Metric transforms | `metrics.receiver.transforms` | `applicationObservability.metrics.transforms` |
| Log filters | `logs.receiver.filters` | `applicationObservability.logs.filters` |
| Log transforms | `logs.receiver.transforms` | `applicationObservability.logs.transforms` |
| Trace filters | `traces.receiver.filters` | `applicationObservability.traces.filters` |
| Trace transforms | `traces.receiver.transforms` | `applicationObservability.traces.transforms` |
If you are using Application Observability (`traces.enabled` and `receivers.*.enabled`):

- Enable `applicationObservability` and `alloy-receiver` in your values file:

  ```yaml
  applicationObservability:
    enabled: true
  alloy-receiver:
    enabled: true
  ```

- Move any extra ports opened for applications from `alloy.alloy.extraPorts` to `alloy-receiver.alloy.extraPorts`.
- Enable the receivers you want to use in `applicationObservability.receivers`, for example:

  ```yaml
  applicationObservability:
    receivers:
      grpc:
        enabled: true
  ```

- Move receiver processors from `receivers.processors` to `applicationObservability.processors`.
- Move metric filters from `metrics.receiver.filters` to `applicationObservability.metrics.filters`.
- Move metric transforms from `metrics.receiver.transforms` to `applicationObservability.metrics.transforms`.
- Move log filters from `logs.receiver.filters` to `applicationObservability.logs.filters`.
- Move log transforms from `logs.receiver.transforms` to `applicationObservability.logs.transforms`.
- Move trace filters from `traces.receiver.filters` to `applicationObservability.traces.filters`.
- Move trace transforms from `traces.receiver.transforms` to `applicationObservability.traces.transforms`.
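A sketch assuming the OTLP gRPC receiver and a span filter were configured in v1. The filter expression and port definition are placeholders, and the exact filter structure is an assumption; match it to your existing values:

```yaml
# v1.x (before)
alloy:
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
receivers:
  grpc:
    enabled: true
traces:
  enabled: true
  receiver:
    filters:
      span:
        - 'attributes["http.route"] == "/healthz"'   # placeholder filter expression

# v2.0 (after)
applicationObservability:
  enabled: true
  receivers:
    grpc:
      enabled: true
  traces:
    filters:
      span:
        - 'attributes["http.route"] == "/healthz"'
alloy-receiver:
  enabled: true
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
```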
Grafana Beyla mapping
Deployment and handling of the zero-code instrumentation feature using Grafana Beyla has been moved into its own feature called `autoInstrumentation`.
| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Auto-instrumentation metrics | `metrics.beyla` | `autoInstrumentation.beyla` |
| Beyla deployment | `beyla` | `autoInstrumentation.beyla` |
If you are using Beyla (`beyla.enabled`):

- Enable `autoInstrumentation` and `alloy-metrics` in your values file:

  ```yaml
  autoInstrumentation:
    enabled: true
  alloy-metrics:
    enabled: true
  ```

- Combine `beyla` and `metrics.beyla` and copy the result to `autoInstrumentation.beyla`.
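A minimal sketch of the combination; the ellipsis comments stand in for whatever settings you had under the v1 `beyla` and `metrics.beyla` blocks:

```yaml
# v1.x (before)
beyla:
  enabled: true
  # ...Beyla deployment settings...
metrics:
  beyla:
    enabled: true
    # ...Beyla metrics collection settings...

# v2.0 (after)
autoInstrumentation:
  enabled: true
  beyla:
    # ...combined settings from the v1 beyla and metrics.beyla blocks...
alloy-metrics:
  enabled: true
```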
Pod logs mapping
Gathering of Pod logs has been moved into its own feature called `podLogs`.
| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| Pod logs | `logs.pod_logs` | `podLogs` |
If you are using Pod logs (`logs.pod_logs.enabled`):

- Enable `podLogs` and `alloy-logs` in your values file:

  ```yaml
  podLogs:
    enabled: true
  alloy-logs:
    enabled: true
  ```

- Move `logs.pod_logs` to `podLogs`.
- Rename `extraRelabelingRules` to `extraDiscoveryRules`.
- Rename `extraStageBlocks` to `extraLogProcessingStages`.
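A sketch of the move, assuming a custom discovery rule and processing stage in v1; both are placeholders:

```yaml
# v1.x (before)
logs:
  pod_logs:
    enabled: true
    # The rule and stage below are placeholder examples
    extraRelabelingRules: |
      rule {
        source_labels = ["__meta_kubernetes_namespace"]
        regex         = "kube-system"
        action        = "drop"
      }
    extraStageBlocks: |
      stage.drop {
        expression = "debug"
      }

# v2.0 (after)
podLogs:
  enabled: true
  extraDiscoveryRules: |
    rule {
      source_labels = ["__meta_kubernetes_namespace"]
      regex         = "kube-system"
      action        = "drop"
    }
  extraLogProcessingStages: |
    stage.drop {
      expression = "debug"
    }
alloy-logs:
  enabled: true
```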
Prometheus Operator objects mapping
Handling for Prometheus Operator objects, such as `ServiceMonitors`, `PodMonitors`, and `Probes`, has been moved to the `prometheusOperatorObjects` feature. This feature also includes the option to deploy the Prometheus Operator CRDs.
| Feature | v1.x setting | v2.0 setting |
|---|---|---|
| PodMonitor settings | `metrics.podMonitors` | `prometheusOperatorObjects.podMonitors` |
| Probe settings | `metrics.probes` | `prometheusOperatorObjects.probes` |
| ServiceMonitor settings | `metrics.serviceMonitors` | `prometheusOperatorObjects.serviceMonitors` |
| CRDs deployment | `prometheus-operator-crds.enabled` | `crds.deploy` |
If you are using Prometheus Operator objects (`metrics.podMonitors.enabled`, `metrics.probes.enabled`, `metrics.serviceMonitors.enabled`, or `prometheus-operator-crds.enabled`):

- Enable `prometheusOperatorObjects` and `alloy-metrics` in your values file:

  ```yaml
  prometheusOperatorObjects:
    enabled: true
  alloy-metrics:
    enabled: true
  ```

- Move the following:
  - `metrics.podMonitors` to `prometheusOperatorObjects.podMonitors`
  - `metrics.probes` to `prometheusOperatorObjects.probes`
  - `metrics.serviceMonitors` to `prometheusOperatorObjects.serviceMonitors`
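A minimal sketch, assuming all three object types and the CRDs were enabled in v1. The per-object `enabled` flags under `prometheusOperatorObjects` are an assumption; check the feature documentation for the exact schema:

```yaml
# v1.x (before)
metrics:
  podMonitors:
    enabled: true
  probes:
    enabled: true
  serviceMonitors:
    enabled: true
prometheus-operator-crds:
  enabled: true

# v2.0 (after)
prometheusOperatorObjects:
  enabled: true
  podMonitors:
    enabled: true     # assumed per-object flag
  probes:
    enabled: true     # assumed per-object flag
  serviceMonitors:
    enabled: true     # assumed per-object flag
crds:
  deploy: true
alloy-metrics:
  enabled: true
```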
Integrations mapping
Integrations is a new feature in version 2.0 that allows you to enable and configure additional data sources. This includes the Alloy metrics that were previously part of v1. Some service integrations that previously needed to be defined in the `extraConfig` and `logs.extraConfig` sections can now be used through the integrations feature.

Replace your `extraConfig` with the new `integrations` feature if you are using either of these settings:
- The `metrics.alloy` setting for getting Alloy metrics
- The `extraConfig` used to add configuration that gets data from any of the new built-in integrations
The following are built-in integrations:
| Built-in integration | v1.x setting | v2.0 setting |
|---|---|---|
| Alloy | `metrics.alloy` | `integrations.alloy` |
| cert-manager | `extraConfig` | `integrations.cert-manager` |
| etcd | `extraConfig` | `integrations.etcd` |
| MySQL | `extraConfig` and `logs.extraConfig` | `integrations.mysql` |
If you are using the Alloy integration (`metrics.alloy.enabled`), or if you are using `extraConfig` for cert-manager, etcd, or MySQL:

- Create instances of the integration that you want, and enable `alloy-metrics` in your values file:

  ```yaml
  integrations:
    alloy:
      instances:
        - name: 'alloy'
  alloy-metrics:
    enabled: true
  ```

- Move `metrics.alloy` to `integrations.alloy.instances[]`.
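For example, here is a sketch of replacing the v1 Alloy metrics setting with an integration instance; any tuning you had under `metrics.alloy` moves into the instance, represented here by an ellipsis comment:

```yaml
# v1.x (before)
metrics:
  alloy:
    enabled: true
    # ...Alloy metrics tuning settings...

# v2.0 (after)
integrations:
  alloy:
    instances:
      - name: alloy
        # ...settings moved from the v1 metrics.alloy block...
alloy-metrics:
  enabled: true
```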
For service integrations that are not available as built-in integrations, you can continue to use the `extraConfig` sections. Refer to the Extra configs mapping section for guidance.
Extra configs mapping
The variables for adding arbitrary configuration to the Alloy instances have been moved inside the respective Alloy instance. If you are using `extraConfig` to add configuration for scraping metrics from an integration that is built into the integrations feature (such as cert-manager, etcd, or MySQL), you can move that configuration to the new `integrations` feature. For other uses of `extraConfig`, refer to the following table:
| extraConfig | v1.x setting | v2.0 setting |
|---|---|---|
| Alloy for Metrics | `extraConfig` | `alloy-metrics.extraConfig` |
| Alloy for Apps | `extraConfig` | `alloy-receiver.extraConfig` |
| Alloy for Events | `logs.cluster_events.extraConfig` | `alloy-singleton.extraConfig` |
| Alloy for Logs | `logs.extraConfig` | `alloy-logs.extraConfig` |
| Alloy for Profiles | `profiles.extraConfig` | `alloy-profiles.extraConfig` |
- Move the following:
  - `extraConfig` related to metrics to `alloy-metrics.extraConfig`
  - `extraConfig` related to application receivers to `alloy-receiver.extraConfig`
  - `logs.cluster_events.extraConfig` to `alloy-singleton.extraConfig`
  - `logs.extraConfig` to `alloy-logs.extraConfig`
  - `profiles.extraConfig` to `alloy-profiles.extraConfig`
- Rename destinations for telemetry data to the appropriate destination component. Refer to Destination names.
Destination names
The `<destination_name>` in the component reference is the name of the destination, set to lowercase and with any non-alphanumeric characters replaced with an underscore. For example, if your destination is named `Grafana Cloud Metrics`, then the destination name would be `grafana_cloud_metrics`.
| Data type | v1.x setting | v2.0 setting |
|---|---|---|
| Metrics | `prometheus.relabel.metrics_service.receiver` | `prometheus.remote_write.<destination_name>.receiver` |
| Logs | `loki.process.logs_service.receiver` | `loki.write.<destination_name>.receiver` |
| Traces | `otelcol.exporter.otlp.traces_service.input` | `otelcol.exporter.otlp.<destination_name>.input` |
| Profiles | `pyroscope.write.profiles_service.receiver` | `pyroscope.write.<destination_name>.receiver` |
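For example, here is a sketch of moving a custom metrics scrape from the v1 top-level `extraConfig` to `alloy-metrics.extraConfig`, assuming a destination named `Grafana Cloud Metrics`. The scrape component and its target are placeholders:

```yaml
# v1.x (before): custom scrape forwarding to the metrics service
extraConfig: |
  prometheus.scrape "my_app" {
    targets    = [{ "__address__" = "my-app.default.svc:8080" }]
    forward_to = [prometheus.relabel.metrics_service.receiver]
  }

# v2.0 (after): the same scrape, moved into the alloy-metrics instance and
# forwarding to the destination named "Grafana Cloud Metrics"
alloy-metrics:
  extraConfig: |
    prometheus.scrape "my_app" {
      targets    = [{ "__address__" = "my-app.default.svc:8080" }]
      forward_to = [prometheus.remote_write.grafana_cloud_metrics.receiver]
    }
```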
Dropped features
The following features have been removed from the 2.0 release:
- Pre-install hooks: The pre-install and pre-upgrade hooks that performed config validation have been removed. The Alloy Pods now validate the configuration at runtime and log any issues, so the extra hook Pods are no longer needed. This greatly decreases startup time.
- `helm test` functionality: The `helm test` functionality that ran a config analysis and attempted to query the databases for expected metrics and logs has been removed. This functionality was either not fully developed or not useful in production environments. The query testing was mainly for CI/CD testing in development, and it has been replaced by more effective and comprehensive methods.