mimir.alerts.kubernetes
EXPERIMENTAL: This is an experimental component. Experimental components are subject to frequent breaking changes, and may be removed with no equivalent replacement. To enable and use an experimental component, you must set the stability.level flag to experimental.
mimir.alerts.kubernetes discovers AlertmanagerConfig Kubernetes resources and loads them into a Mimir instance.
- You can specify multiple mimir.alerts.kubernetes components by giving them different labels.
- You can use Kubernetes label selectors to limit the Namespace and AlertmanagerConfig resources considered during reconciliation.
- Compatible with the Alertmanager APIs of Grafana Mimir, Grafana Cloud, and Grafana Enterprise Metrics.
- Compatible with the AlertmanagerConfig CRD from the prometheus-operator.
- This component accesses the Kubernetes REST API from within a Pod.
Note
This component requires Role-based access control (RBAC) to be set up in Kubernetes so that Alloy can access the Kubernetes REST API.
mimir.alerts.kubernetes doesn’t support clustered mode. For more information, refer to ../../../../get-started/clustering/.
Usage
mimir.alerts.kubernetes "<LABEL>" {
address = "<MIMIR_URL>"
global_config = "..."
}

Arguments
You can use the following arguments with mimir.alerts.kubernetes:
At most one of the following can be provided:

- authorization block
- basic_auth block
- bearer_token_file argument
- bearer_token argument
- oauth2 block
no_proxy can contain IPs, CIDR notations, and domain names. IPs and domain names can contain port numbers.
proxy_url must be configured if no_proxy is configured.
proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or the lowercase versions thereof).
Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY.
proxy_url and no_proxy must not be configured if proxy_from_environment is configured.
proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
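Putting those rules together, a component that sends Alertmanager API requests through an explicit proxy while bypassing it for in-cluster traffic could look like the following sketch. The proxy URL and exclusion list are placeholder values:

```alloy
mimir.alerts.kubernetes "proxied" {
  address       = "<MIMIR_URL>"
  global_config = "..."

  // Hypothetical proxy endpoint; requests whose destination matches no_proxy skip it.
  proxy_url = "http://proxy.internal:3128"
  no_proxy  = "10.0.0.0/8,.svc.cluster.local"
}
```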
Blocks
The following blocks are supported inside the definition of
mimir.alerts.kubernetes:
The > symbol indicates deeper levels of nesting.
For example, oauth2 > tls_config refers to a tls_config block defined inside an oauth2 block.
authorization
credential and credentials_file are mutually exclusive, and only one can be provided inside an authorization block.
Warning
Using credentials_file causes the file to be read on every outgoing request. Use the local.file component with the credentials attribute instead to avoid unnecessary reads.
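For example, a sketch of the recommended pattern, reading a bearer token through local.file instead of credentials_file. The token path is a placeholder:

```alloy
local.file "token" {
  filename  = "/var/run/secrets/mimir/token" // hypothetical path
  is_secret = true
}

mimir.alerts.kubernetes "default" {
  address       = "<MIMIR_URL>"
  global_config = "..."

  authorization {
    type        = "Bearer"
    credentials = local.file.token.content
  }
}
```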
basic_auth
password and password_file are mutually exclusive, and only one can be provided inside a basic_auth block.
Warning
Using password_file causes the file to be read on every outgoing request. Use the local.file component with the password attribute instead to avoid unnecessary reads.
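A sketch of the recommended pattern, with the username and password path as placeholder values:

```alloy
local.file "password" {
  filename  = "/var/run/secrets/mimir/password" // hypothetical path
  is_secret = true
}

mimir.alerts.kubernetes "default" {
  address       = "<MIMIR_URL>"
  global_config = "..."

  basic_auth {
    username = "mimir-user" // hypothetical username
    password = local.file.password.content
  }
}
```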
alertmanagerconfig_selector and alertmanagerconfig_namespace_selector
The alertmanagerconfig_selector and alertmanagerconfig_namespace_selector blocks describe a Kubernetes label selector for AlertmanagerConfig CRDs or namespace discovery.
The following arguments are supported:
When the match_labels argument is empty, all resources are matched.
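For example, the following sketch restricts reconciliation to AlertmanagerConfig resources carrying both labels. The label names and values are placeholders:

```alloy
alertmanagerconfig_selector {
  match_labels = {
    team        = "ops",
    environment = "prod",
  }
}
```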
match_expression
The match_expression block describes a Kubernetes label match expression for AlertmanagerConfig CRDs or namespace discovery.
The following arguments are supported:
The operator argument should be one of the following strings:
- "In"
- "NotIn"
- "Exists"
- "DoesNotExist"
The values argument must not be provided when operator is set to "Exists" or "DoesNotExist".
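For example, this sketch selects resources whose environment label is prod or staging and which don't have a legacy label. The label names and values are placeholders:

```alloy
alertmanagerconfig_selector {
  match_expression {
    key      = "environment"
    operator = "In"
    values   = ["prod", "staging"]
  }

  match_expression {
    key      = "legacy"
    operator = "DoesNotExist"
    // values must be omitted for "Exists" and "DoesNotExist".
  }
}
```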
oauth2
client_secret and client_secret_file are mutually exclusive, and only one can be provided inside an oauth2 block.
Warning
Using client_secret_file causes the file to be read on every outgoing request. Use the local.file component with the client_secret attribute instead to avoid unnecessary reads.
The oauth2 block may also contain a separate tls_config sub-block.
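A sketch of an oauth2 block with a nested tls_config. The client ID, token URL, scope, and CA path are placeholder values:

```alloy
oauth2 {
  client_id     = "alloy"                                // hypothetical client ID
  client_secret = "<CLIENT_SECRET>"
  token_url     = "https://auth.example.com/oauth2/token"
  scopes        = ["alertmanager:write"]

  tls_config {
    ca_file = "/etc/ssl/certs/ca.pem"
  }
}
```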
no_proxy can contain IPs, CIDR notations, and domain names. IPs and domain names can contain port numbers.
proxy_url must be configured if no_proxy is configured.
proxy_from_environment uses the environment variables HTTP_PROXY, HTTPS_PROXY, and NO_PROXY (or the lowercase versions thereof).
Requests use the proxy from the environment variable matching their scheme, unless excluded by NO_PROXY.
proxy_url and no_proxy must not be configured if proxy_from_environment is configured.
proxy_connect_header should only be configured if proxy_url or proxy_from_environment are configured.
tls_config
The following pairs of arguments are mutually exclusive and can’t both be set simultaneously:
- ca_pem and ca_file
- cert_pem and cert_file
- key_pem and key_file
When configuring client authentication, both the client certificate (using cert_pem or cert_file) and the client key (using key_pem or key_file) must be provided.
When min_version isn’t provided, the minimum acceptable TLS version is inherited from Go’s default minimum version, TLS 1.2.
If min_version is provided, it must be set to one of the following strings:
- "TLS10" (TLS 1.0)
- "TLS11" (TLS 1.1)
- "TLS12" (TLS 1.2)
- "TLS13" (TLS 1.3)
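For example, a tls_config sketch with mutual TLS and an explicit minimum version. The file paths and server name are placeholders:

```alloy
tls_config {
  ca_file     = "/etc/ssl/certs/mimir-ca.pem" // hypothetical paths
  cert_file   = "/etc/ssl/certs/client.pem"
  key_file    = "/etc/ssl/private/client.key"
  server_name = "mimir.example.com"
  min_version = "TLS12"
}
```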
Exported fields
mimir.alerts.kubernetes doesn’t export any fields.
Component health
mimir.alerts.kubernetes is reported as unhealthy if it's given an invalid configuration or if an error occurs during reconciliation.
Debug information
mimir.alerts.kubernetes doesn’t expose debug information.
Debug metrics
Example
This example creates a mimir.alerts.kubernetes component that only loads Namespace and AlertmanagerConfig resources if they have an alloy label set to yes.
remote.kubernetes.configmap "default" {
namespace = "default"
name = "alertmgr-global"
}
mimir.alerts.kubernetes "default" {
address = "http://mimir-nginx.mimir-test.svc:80"
global_config = remote.kubernetes.configmap.default.data["glbl"]
template_files = {
`default_template` =
`{{ define "__alertmanager" }}AlertManager{{ end }}
{{ define "__alertmanagerURL" }}{{ .ExternalURL }}/#/alerts?receiver={{ .Receiver | urlquery }}{{ end }}`,
}
alertmanagerconfig_selector {
match_labels = {
alloy = "yes",
}
}
alertmanagerconfig_namespace_selector {
match_labels = {
alloy = "yes",
}
}
}

The following example is an RBAC configuration for Kubernetes. It authorizes Alloy to query the Kubernetes REST API:
apiVersion: v1
kind: ServiceAccount
metadata:
name: alloy
namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: alloy
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch"]
- apiGroups: ["monitoring.coreos.com"]
resources: ["alertmanagerconfigs"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: alloy
subjects:
- kind: ServiceAccount
name: alloy
namespace: default
roleRef:
kind: ClusterRole
name: alloy
apiGroup: rbac.authorization.k8s.io

The following is an example of a complete Kubernetes configuration:
apiVersion: v1
kind: Namespace
metadata:
name: testing
labels:
alloy: "yes"
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: grafana-alloy
namespace: testing
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: grafana-alloy
rules:
- apiGroups: [""]
resources: ["namespaces", "configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: ["monitoring.coreos.com"]
resources: ["alertmanagerconfigs"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: grafana-alloy
subjects:
- kind: ServiceAccount
name: grafana-alloy
namespace: testing
roleRef:
kind: ClusterRole
name: grafana-alloy
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: grafana-alloy
spec:
type: NodePort
selector:
app: grafana-alloy
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: testing
name: grafana-alloy
spec:
replicas: 1
selector:
matchLabels:
app: grafana-alloy
template:
metadata:
labels:
app: grafana-alloy
spec:
serviceAccount: grafana-alloy
containers:
- name: alloy
image: grafana/alloy:latest
imagePullPolicy: Never
args:
- run
- /etc/config/config.alloy
- --stability.level=experimental
ports:
- containerPort: 8080
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: alloy-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: alloy-config
namespace: testing
data:
config.alloy: |
remote.kubernetes.configmap "default" {
namespace = "testing"
name = "alertmgr-global"
}
mimir.alerts.kubernetes "default" {
address = "http://mimir-nginx.mimir-test.svc:80"
global_config = remote.kubernetes.configmap.default.data["glbl"]
template_files = {
`default_template` =
`{{ define "__alertmanager" }}AlertManager{{ end }}
{{ define "__alertmanagerURL" }}{{ .ExternalURL }}/#/alerts?receiver={{ .Receiver | urlquery }}{{ end }}`,
}
alertmanagerconfig_namespace_selector {
match_labels = {
alloy = "yes",
}
}
alertmanagerconfig_selector {
match_labels = {
alloy = "yes",
}
}
}
---
apiVersion: v1
kind: ConfigMap
metadata:
name: alertmgr-global
namespace: testing
data:
glbl: |
global:
resolve_timeout: 5m
http_config:
follow_redirects: true
enable_http2: true
smtp_hello: localhost
smtp_require_tls: true
route:
receiver: "null"
receivers:
- name: "null"
- name: "alloy-namespace/global-config/myreceiver"
templates:
- 'default_template'
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmgr-config1
namespace: testing
labels:
alloy: "yes"
spec:
route:
receiver: "null"
routes:
- receiver: myamc
continue: true
receivers:
- name: "null"
- name: myamc
webhookConfigs:
- url: http://test.url
httpConfig:
followRedirects: true
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
name: alertmgr-config2
namespace: testing
labels:
alloy: "yes"
spec:
route:
receiver: "null"
routes:
- receiver: 'database-pager'
groupWait: 10s
matchers:
- name: service
value: webapp
receivers:
- name: "null"
- name: "database-pager"

The Kubernetes configuration above creates the Alertmanager configuration below and sends it to Mimir:
template_files:
default_template: |-
{{ define "__alertmanager" }}AlertManager{{ end }}
{{ define "__alertmanagerURL" }}{{ .ExternalURL }}/#/alerts?receiver={{ .Receiver | urlquery }}{{ end }}
alertmanager_config: |
global:
resolve_timeout: 5m
http_config:
follow_redirects: true
enable_http2: true
smtp_hello: localhost
smtp_require_tls: true
route:
receiver: "null"
continue: false
routes:
- receiver: testing/alertmgr-config1/null
matchers:
- namespace="testing"
continue: true
routes:
- receiver: testing/alertmgr-config1/myamc
continue: true
- receiver: testing/alertmgr-config2/null
matchers:
- namespace="testing"
continue: true
routes:
- receiver: testing/alertmgr-config2/database-pager
matchers:
- service="webapp"
continue: false
group_wait: 10s
receivers:
- name: "null"
- name: alloy-namespace/global-config/myreceiver
- name: testing/alertmgr-config1/null
- name: testing/alertmgr-config1/myamc
webhook_configs:
- send_resolved: false
http_config:
follow_redirects: true
enable_http2: true
url: <secret>
url_file: ""
max_alerts: 0
timeout: 0s
- name: testing/alertmgr-config2/null
- name: testing/alertmgr-config2/database-pager
templates:
- default_template


