---
title: "Migrate from Grafana Agent Operator to Grafana Alloy | Grafana Alloy documentation"
description: "Migrate from Grafana Agent Operator to Grafana Alloy"
---

# Migrate from Grafana Agent Operator to Grafana Alloy

You can migrate from Grafana Agent Operator to Alloy.

- The Monitor types (`PodMonitor`, `ServiceMonitor`, `Probe`, `ScrapeConfig`, and `PodLogs`) are all supported natively by Alloy.
- The parts of Grafana Agent Operator that deploy Grafana Agent itself, the `GrafanaAgent`, `MetricsInstance`, and `LogsInstance` CRDs, are deprecated.

## Deploy Alloy with Helm

1. Create a `values.yaml` file, which contains options for deploying Alloy. You can start with the [default values](https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/values.yaml) and customize as you see fit, or start with the following snippet, which is a good baseline for replicating what Grafana Agent Operator deploys.
   
   
   ```yaml
   alloy:
     configMap:
       create: true
     clustering:
       enabled: true
   controller:
     type: 'statefulset'
     replicas: 2
   crds:
     create: false
   ```
   
   This configuration deploys Alloy as a `StatefulSet` using the built-in [clustering](../../../get-started/clustering/) functionality to allow distributing scrapes across all Alloy pods.
   
   This is one of many possible deployment modes. For example, you may want to use a `DaemonSet` to collect host-level logs or metrics. See the Alloy [deployment guide](../../../set-up/deploy/) for more details about different topologies.
2. Create an Alloy configuration file, `config.alloy`.
   
   In the next step, you add to this configuration as you convert your `MetricsInstance` resources. You can add any other configuration to this file as needed.
3. Install the Grafana Helm repository:
   
   
   ```shell
   helm repo add grafana https://grafana.github.io/helm-charts
   helm repo update
   ```
4. Create a Helm release. You can name the release anything you like. The following command installs a release called `alloy-metrics` in the `monitoring` namespace.
   
   
   ```shell
   helm upgrade alloy-metrics grafana/alloy -i -n monitoring -f values.yaml --set-file alloy.configMap.content=config.alloy
   ```
   
   This command uses the `--set-file` flag to pass the configuration file as a Helm value so that you can continue to edit it as a regular Alloy configuration file.

## Convert `MetricsInstance` to Alloy components

A `MetricsInstance` resource primarily defines:

- The remote endpoints Grafana Agent should send metrics to.
- The `PodMonitor`, `ServiceMonitor`, `ScrapeConfig`, and `Probe` resources that Alloy should discover.

You can reproduce these functions in Alloy with the `prometheus.remote_write`, `prometheus.operator.podmonitors`, `prometheus.operator.servicemonitors`, `prometheus.operator.scrapeconfigs`, and `prometheus.operator.probes` components respectively.

The following Alloy syntax sample is equivalent to the `MetricsInstance` from the [operator guide](/docs/agent/latest/operator/deploy-agent-operator-resources/#deploy-a-metricsinstance-resource).


```alloy
// read the credentials secret for remote_write authorization
remote.kubernetes.secret "credentials" {
  namespace = "monitoring"
  name = "primary-credentials-metrics"
}

prometheus.remote_write "primary" {
    endpoint {
        url = "https://<PROMETHEUS_URL>/api/v1/push"
        basic_auth {
            username = convert.nonsensitive(remote.kubernetes.secret.credentials.data["username"])
            password = remote.kubernetes.secret.credentials.data["password"]
        }
    }
}

prometheus.operator.podmonitors "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all podmonitors in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}

prometheus.operator.servicemonitors "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all servicemonitors in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}
```

Replace the following:

- *`<PROMETHEUS_URL>`*: The endpoint where you want to send metrics.

This configuration discovers all `PodMonitor`, `ServiceMonitor`, `ScrapeConfig`, and `Probe` resources in your cluster that match the label selector `instance=primary`. It then scrapes metrics from the targets and forwards them to your remote write endpoint.

You may need to customize this configuration further if you use additional features in your `MetricsInstance` resources. Refer to the documentation for the relevant components for additional information:

- [`remote.kubernetes.secret`](../../../reference/components/remote/remote.kubernetes.secret/)
- [`prometheus.remote_write`](../../../reference/components/prometheus/prometheus.remote_write/)
- [`prometheus.operator.podmonitors`](../../../reference/components/prometheus/prometheus.operator.podmonitors/)
- [`prometheus.operator.servicemonitors`](../../../reference/components/prometheus/prometheus.operator.servicemonitors/)
- [`prometheus.operator.scrapeconfigs`](../../../reference/components/prometheus/prometheus.operator.scrapeconfigs/)
- [`prometheus.operator.probes`](../../../reference/components/prometheus/prometheus.operator.probes/)
- [`prometheus.scrape`](../../../reference/components/prometheus/prometheus.scrape/)
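
The example configuration only discovers `PodMonitor` and `ServiceMonitor` resources. If your `MetricsInstance` also selects `Probe` or `ScrapeConfig` resources, you can add the matching components. The following sketch assumes the `prometheus.remote_write` component labeled `primary` and the `instance=primary` label selector used earlier:

```alloy
prometheus.operator.probes "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all probes in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}

prometheus.operator.scrapeconfigs "primary" {
    forward_to = [prometheus.remote_write.primary.receiver]
    // leave out selector to find all scrapeconfigs in the entire cluster
    selector {
        match_labels = {instance = "primary"}
    }
}
```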

## Collect logs

The current recommendation is to create an additional DaemonSet deployment of Alloy to scrape logs.

> Alloy has components that can scrape Pod logs directly from the Kubernetes API without needing a DaemonSet deployment. These are still considered experimental, but if you would like to try them, see the documentation for [`loki.source.kubernetes`](../../../reference/components/loki/loki.source.kubernetes/) and [`loki.source.podlogs`](../../../reference/components/loki/loki.source.podlogs/).
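
For example, a minimal sketch that tails Pod logs through the Kubernetes API could look like the following. It assumes a `loki.write` component labeled `loki`, such as the one shown later in this section:

```alloy
// Experimental: read Pod logs directly from the Kubernetes API,
// selecting Pods through PodLogs resources. No DaemonSet is required.
loki.source.podlogs "default" {
  forward_to = [loki.write.loki.receiver]
}
```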

These values are close to what Grafana Agent Operator deploys for logs:


```yaml
alloy:
  configMap:
    create: true
  clustering:
    enabled: false
  controller:
    type: 'daemonset'
  mounts:
    # -- Mount /var/log from the host into the container for log collection.
    varlog: true
```

This command installs a release named `alloy-logs` in the `monitoring` namespace:


```shell
helm upgrade alloy-logs grafana/alloy -i -n monitoring -f values-logs.yaml --set-file alloy.configMap.content=config-logs.alloy
```

This simple configuration collects logs from every Pod on each node:


```alloy
// read the credentials secret for remote_write authorization
remote.kubernetes.secret "credentials" {
  namespace = "monitoring"
  name      = "primary-credentials-logs"
}

discovery.kubernetes "pods" {
  role = "pod"
  // limit to pods on this node to reduce the amount you need to filter
  selectors {
    role  = "pod"
    field = "spec.nodeName=" + sys.env("HOSTNAME")
  }
}

discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pods.targets
  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    target_label  = "namespace"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    target_label  = "pod"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_container_name"]
    target_label  = "container"
  }
  rule {
    source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_name"]
    separator     = "/"
    target_label  = "job"
  }
  rule {
    source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
    separator     = "/"
    action        = "replace"
    replacement   = "/var/log/pods/*$1/*.log"
    target_label  = "__path__"
  }
  rule {
    action = "replace"
    source_labels = ["__meta_kubernetes_pod_container_id"]
    regex = "^(\\w+):\\/\\/.+$"
    replacement = "$1"
    target_label = "tmp_container_runtime"
  }
}

local.file_match "pod_logs" {
  path_targets = discovery.relabel.pod_logs.output
}

loki.source.file "pod_logs" {
  targets    = local.file_match.pod_logs.targets
  forward_to = [loki.process.pod_logs.receiver]
}

// basic processing to parse the container format. You can add additional processing stages
// to match your application logs.
loki.process "pod_logs" {
  stage.match {
    selector = "{tmp_container_runtime=\"containerd\"}"
    // the cri processing stage extracts the following k/v pairs: log, stream, time, flags
    stage.cri {}
    // Set the extract flags and stream values as labels
    stage.labels {
      values = {
        flags   = "",
        stream  = "",
      }
    }
  }

  // if the label tmp_container_runtime from above is docker parse using docker
  stage.match {
    selector = "{tmp_container_runtime=\"docker\"}"
    // the docker processing stage extracts the following k/v pairs: log, stream, time
    stage.docker {}

    // Set the extract stream value as a label
    stage.labels {
      values = {
        stream  = "",
      }
    }
  }

  // drop the temporary container runtime label as it is no longer needed
  stage.label_drop {
    values = ["tmp_container_runtime"]
  }

  forward_to = [loki.write.loki.receiver]
}

loki.write "loki" {
  endpoint {
    url = "https://<LOKI_URL>/loki/api/v1/push"
    basic_auth {
      username = convert.nonsensitive(remote.kubernetes.secret.credentials.data["username"])
      password = remote.kubernetes.secret.credentials.data["password"]
    }
  }
}
```

Replace the following:

- *`<LOKI_URL>`*: The endpoint of your Loki instance.

The logging subsystem is very powerful and has many options for processing logs. For further details, see the [component documentation](../../../reference/components/).
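
For example, a hypothetical additional processing component that parses JSON-formatted application logs and promotes a `level` field to a label could look like this. The `loki.write.loki` reference assumes the component defined in the configuration above:

```alloy
loki.process "json_logs" {
  // extract the "level" field from JSON log lines
  stage.json {
    expressions = {level = "level"}
  }

  // set the extracted level value as a label
  stage.labels {
    values = {level = ""}
  }

  forward_to = [loki.write.loki.receiver]
}
```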

## Integrations

The `Integration` CRD isn’t supported with Alloy. However, all Grafana Agent Static mode integrations have an equivalent component in the [`prometheus.exporter`](../../../reference/components/) namespace. The [reference documentation](../../../reference/components/) should help convert those integrations to their Alloy equivalent.
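
For example, the Static mode `node_exporter` integration roughly corresponds to the `prometheus.exporter.unix` component. The following sketch of the conversion assumes a `prometheus.remote_write` component labeled `primary`, like the one earlier in this guide:

```alloy
// expose host-level metrics, equivalent to the node_exporter integration
prometheus.exporter.unix "host" { }

// scrape the exporter and forward the metrics to remote_write
prometheus.scrape "host" {
  targets    = prometheus.exporter.unix.host.targets
  forward_to = [prometheus.remote_write.primary.receiver]
}
```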
