---
title: "Migrate from Grafana Agent Operator to Grafana Agent Flow | Grafana Agent documentation"
description: "Migrate from Grafana Agent Operator to Grafana Agent Flow"
---

# Migrate from Grafana Agent Operator to Grafana Agent Flow

With the release of Grafana Agent Flow, Grafana Agent Operator is no longer the recommended way to deploy Grafana Agent in Kubernetes. Some of the Operator functionality has moved into Grafana Agent Flow itself, and the Helm Chart has replaced the remaining functionality.

- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, and `PodLogs`) are all supported natively by Grafana Agent Flow. You are no longer required to use the Operator to consume those CRDs for dynamic monitoring in your cluster.
- The parts of the Operator that deploy Grafana Agent itself (`GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs) are deprecated. Operator users should use the Grafana Agent Helm chart to deploy Grafana Agent directly to their clusters.

This guide provides some steps to get started with Grafana Agent Flow for users coming from Grafana Agent Operator.

## Deploy Grafana Agent Flow with Helm

1. Create a `values.yaml` file, which contains options for deploying Grafana Agent. You can start with the [default values](https://github.com/grafana/agent/blob/main/operations/helm/charts/grafana-agent/values.yaml) and customize as you see fit, or start with the following snippet, which is a good starting point for replicating what the Operator deploys.
   
   
   ```yaml
   agent:
     mode: 'flow'
     configMap:
       create: true
     clustering:
       enabled: true
   controller:
     type: 'statefulset'
     replicas: 2
   crds:
     create: false
   ```
   
   This configuration deploys Grafana Agent Flow as a `StatefulSet` using the built-in [clustering](/docs/agent/v0.43/flow/concepts/clustering/) functionality to allow distributing scrapes across all Grafana Agent Pods.
   
   This is one of many possible deployment modes. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics. See the Grafana Agent Flow [deployment guide](/docs/agent/v0.43/flow/get-started/deploy-agent/) for more details about different topologies.
2. Create a Grafana Agent configuration file, `agent.river`.
   
   In a later section, you add to this configuration as you convert your `MetricsInstances`. You can add any other configuration this file needs. A minimal starting point is shown after this list.
3. Install the Grafana Helm repository:
   
   
   ```none
   helm repo add grafana https://grafana.github.io/helm-charts
   helm repo update
   ```
4. Create a Helm release. You can name the release anything you like. The following command installs a release called `grafana-agent-metrics` in the `monitoring` namespace.
   
   
   ```shell
   helm upgrade grafana-agent-metrics grafana/grafana-agent -i -n monitoring -f values.yaml --set-file agent.configMap.content=agent.river
   ```
   
   This command uses the `--set-file` flag to pass the configuration file as a Helm value so that you can continue to edit it as a regular River file.
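
The `agent.river` file from step 2 can start as a minimal placeholder and grow as you add components in the next section. For example, a starting sketch that only configures the agent's own log output (the `logging` block controls Grafana Agent Flow's internal logging, not log collection):

```alloy
// Minimal starting configuration. Collection components are added in the next section.
logging {
  level  = "info"
  format = "logfmt"
}
```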

## Convert `MetricsInstances` to Grafana Agent Flow components

A `MetricsInstance` resource primarily defines:

- The remote endpoints Grafana Agent Flow should send metrics to.
- The `PodMonitor`, `ServiceMonitor`, and `Probe` resources this Grafana Agent should discover.

In Grafana Agent Flow, the `prometheus.remote_write` component covers the first, and the `prometheus.operator.podmonitors`, `prometheus.operator.servicemonitors`, and `prometheus.operator.probes` components cover the second.

The following River sample is equivalent to the `MetricsInstance` from the [operator guide](/docs/agent/v0.43/operator/deploy-agent-operator-resources/#deploy-a-metricsinstance-resource).


```alloy

// read the credentials secret for remote_write authorization
remote.kubernetes.secret "credentials" {
  namespace = "monitoring"
  name = "primary-credentials-metrics"
}

prometheus.remote_write "primary" {
    endpoint {
        url = "https://<PROMETHEUS_URL>/api/v1/push"
        basic_auth {
            username = nonsensitive(remote.kubernetes.secret.credentials.data["username"])
            password = remote.kubernetes.secret.credentials.data["password"]
        }
    }
}

prometheus.operator.podmonitors "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all podmonitors in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}

prometheus.operator.servicemonitors "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all servicemonitors in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}
```

Replace the following:

- *`<PROMETHEUS_URL>`* : The endpoint you want to send metrics to.

This configuration discovers all `PodMonitor` and `ServiceMonitor` resources in your cluster that match the label selector `instance=primary`. It then scrapes metrics from their targets and forwards them to your remote write endpoint.
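
If your `MetricsInstance` also selects `Probe` resources, you can add a `prometheus.operator.probes` component that follows the same pattern, for example:

```alloy
prometheus.operator.probes "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all probes in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}
```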

You may need to customize this configuration further if you use additional features in your `MetricsInstance` resources. Refer to the documentation for the relevant components for additional information:

- [remote.kubernetes.secret](/docs/agent/v0.43/flow/reference/components/remote.kubernetes.secret/)
- [prometheus.remote\_write](/docs/agent/v0.43/flow/reference/components/prometheus.remote_write/)
- [prometheus.operator.podmonitors](/docs/agent/v0.43/flow/reference/components/prometheus.operator.podmonitors/)
- [prometheus.operator.servicemonitors](/docs/agent/v0.43/flow/reference/components/prometheus.operator.servicemonitors/)
- [prometheus.operator.probes](/docs/agent/v0.43/flow/reference/components/prometheus.operator.probes/)
- [prometheus.scrape](/docs/agent/v0.43/flow/reference/components/prometheus.scrape/)

## Collecting logs

Our current recommendation is to create an additional DaemonSet deployment of Grafana Agent to collect logs.

> We have components that can scrape pod logs directly from the Kubernetes API without needing a DaemonSet deployment. These are still considered experimental, but if you would like to try them, see the documentation for [loki.source.kubernetes](/docs/agent/v0.43/flow/reference/components/loki.source.kubernetes/) and [loki.source.podlogs](/docs/agent/v0.43/flow/reference/components/loki.source.podlogs/).
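
If you want to try that approach, a minimal sketch of `loki.source.podlogs` could look like the following. It assumes a `loki.write` component labeled `loki`, like the one defined later in this guide:

```alloy
// Experimental: tail Pod logs through the Kubernetes API, driven by PodLogs resources.
loki.source.podlogs "default" {
  forward_to = [loki.write.loki.receiver]
}
```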

The following values are close to what the Operator currently deploys for logs. Save them as `values-logs.yaml`:


```yaml
agent:
  mode: 'flow'
  configMap:
    create: true
  clustering:
    enabled: false
  controller:
    type: 'daemonset'
  mounts:
    # -- Mount /var/log from the host into the container for log collection.
    varlog: true
```

The following command installs a release named `grafana-agent-logs` in the `monitoring` namespace:


```none
helm upgrade grafana-agent-logs grafana/grafana-agent -i -n monitoring -f values-logs.yaml --set-file agent.configMap.content=agent-logs.river
```

Save the following configuration as `agent-logs.river`. It collects logs from every Pod running on each node:


```alloy
// read the credentials secret for remote_write authorization
remote.kubernetes.secret "credentials" {
  namespace = "monitoring"
  name      = "primary-credentials-logs"
}

discovery.kubernetes "pods" {
  role = "pod"
  // limit to pods on this node to reduce the amount you need to filter
  selectors {
    role  = "pod"
    field = "spec.nodeName=" + env("HOSTNAME")
  }
}

discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pods.targets
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    target_label  = "namespace"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    target_label  = "pod"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_container_name"]
    target_label  = "container"
  }
  rule {
    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_name"]
    separator     = "/"
    target_label  = "job"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
    separator     = "/"
    action        = "replace"
    replacement   = "/var/log/pods/*$1/*.log"
    target_label  = "__path__"
  }
  rule {
    action = "replace"
    source_labels = ["__meta_kubernetes_pod_container_id"]
    regex = "^(\\w+):\\/\\/.+$"
    replacement = "$1"
    target_label = "tmp_container_runtime"
  }
}

local.file_match "pod_logs" {
  path_targets = discovery.relabel.pod_logs.output
}

loki.source.file "pod_logs" {
  targets    = local.file_match.pod_logs.targets
  forward_to = [loki.process.pod_logs.receiver]
}

// basic processing to parse the container format. You can add additional processing stages
// to match your application logs.
loki.process "pod_logs" {
  stage.match {
    selector = "{tmp_container_runtime=\"containerd\"}"
    // the cri processing stage extracts the following k/v pairs: log, stream, time, flags
    stage.cri {}
    // Set the extracted flags and stream values as labels
    stage.labels {
      values = {
        flags   = "",
        stream  = "",
      }
    }
  }

  // if the label tmp_container_runtime from above is docker parse using docker
  stage.match {
    selector = "{tmp_container_runtime=\"docker\"}"
    // the docker processing stage extracts the following k/v pairs: log, stream, time
    stage.docker {}

    // Set the extracted stream value as a label
    stage.labels {
      values = {
        stream  = "",
      }
    }
  }

  // drop the temporary container runtime label as it is no longer needed
  stage.label_drop {
    values = ["tmp_container_runtime"]
  }

  forward_to = [loki.write.loki.receiver]
}

loki.write "loki" {
  endpoint {
    url = "https://<LOKI_URL>/loki/api/v1/push"
    basic_auth {
      username = nonsensitive(remote.kubernetes.secret.credentials.data["username"])
      password = remote.kubernetes.secret.credentials.data["password"]
    }
  }
}
```

Replace the following:

- *`<LOKI_URL>`* : The endpoint of your Loki instance.

The logging subsystem is very powerful and has many options for processing logs. For further details, see the [component documentation](/docs/agent/v0.43/flow/reference/components/).
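
For example, you could extend the `loki.process` block above with a `stage.static_labels` stage that attaches a constant label to every log entry. The following is a sketch; the label name and value are only illustrative:

```alloy
// Add inside the loki.process "pod_logs" block above.
// Attaches a constant cluster label to all log entries.
stage.static_labels {
  values = {
    cluster = "my-cluster",
  }
}
```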

## Integrations

The `Integration` CRD isn't supported with Grafana Agent Flow. However, all static mode integrations have an equivalent component in the [`prometheus.exporter`](/docs/agent/v0.43/flow/reference/components/) namespace. The [component reference documentation](/docs/agent/v0.43/flow/reference/components/) can help you convert those integrations to their Grafana Agent Flow equivalents.
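
For example, the static mode `node_exporter` integration maps to the `prometheus.exporter.unix` component. The following sketch wires it to the `prometheus.remote_write` component defined earlier in this guide; the component labels are illustrative:

```alloy
// Expose host metrics and scrape them with the same agent.
prometheus.exporter.unix "node" { }

prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.node.targets
  forward_to = [prometheus.remote_write.primary.receiver]
}
```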
