Get started with Grafana Mimir using the Helm chart
The Grafana Mimir Helm chart allows you to configure, install, and upgrade Grafana Mimir within a Kubernetes cluster.
Before you begin
The instructions that follow apply to any flavor of Kubernetes. If you don’t have experience with Kubernetes, you can install a lightweight flavor of Kubernetes such as kind.
Experience with the following is recommended, but not essential:
- General knowledge about using a Kubernetes cluster.
- Understanding of what the `kubectl` command does.
- Understanding of what the `helm` command does.
Caution: This procedure is primarily aimed at local or development setups. To set up in a production environment, see Run Grafana Mimir in production using the Helm chart.
Hardware requirements
- A single Kubernetes node with a minimum of 4 cores and 16GiB RAM
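To confirm that a node meets these requirements, you can inspect its capacity with `kubectl`; this is an optional check, and the exact output depends on your Kubernetes distribution:

```bash
# Show each node's name together with its CPU and memory capacity.
kubectl describe nodes | grep -E "^Name:|cpu:|memory:"
```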
Software requirements
- Kubernetes 1.20 or higher.
- The `kubectl` command for your version of Kubernetes. To get both the Kubernetes and `kubectl` versions, run `kubectl version`. The command prints the server and client versions, where the server is the Kubernetes cluster itself and the client is `kubectl`.
- The `helm` command, version 3.8 or higher. To get the Helm version, run `helm version`.
Verify that you have the following; a sketch of example verification commands follows this list:

- Access to the Kubernetes cluster. For example, run the command `kubectl get ns`, which lists all namespaces.
- Persistent storage enabled in the Kubernetes cluster, with a default storage class set up. You can change the default StorageClass.
  Note: If you are using kind or you are unsure, assume it is enabled and continue.
- A working DNS service in the Kubernetes cluster.
  Note: If you are using kind or you are unsure, assume it works and continue.
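The following is a minimal sketch of how you might run these checks from a shell. The `busybox` image and the `kubernetes.default` service name used in the DNS test are illustrative assumptions, not part of the official procedure:

```bash
# Confirm access to the cluster by listing namespaces.
kubectl get ns

# Confirm that a StorageClass is marked "(default)".
kubectl get storageclass

# Confirm that in-cluster DNS resolves the built-in kubernetes.default service.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default
```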
Security setup
If you are using kind or a similar local Kubernetes setup and haven’t set security policies, you can safely skip this section.
This installation will not succeed if you have enabled the PodSecurityPolicy admission controller or if you are enforcing the Restricted policy with the Pod Security admission controller. The reason is that the installation includes a deployment of MinIO. The minio/minio chart is not compatible with running under a Restricted policy or under the PodSecurityPolicy that the mimir-distributed chart provides.
If you are using the PodSecurityPolicy admission controller, then it is not possible to deploy the mimir-distributed chart with MinIO.
Refer to Run Grafana Mimir in production using the Helm chart for instructions on setting up external object storage and disabling the built-in MinIO deployment by setting `minio.enabled: false` in the Helm values file.
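For example, a minimal Helm values snippet that disables the bundled MinIO looks like the following; you would additionally need to configure external object storage, which is not shown here:

```yaml
# Disable the MinIO deployment that the mimir-distributed chart installs by default.
minio:
  enabled: false
```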
If you are using the Pod Security admission controller, then MinIO and the mimir-distributed chart can successfully deploy under the baseline pod security level.
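For example, assuming the `mimir-test` namespace that the next section creates, one way to enforce the baseline level on that namespace is to label it. This is a sketch using the standard Kubernetes Pod Security admission labels, not something the chart does for you:

```bash
# Enforce the "baseline" Pod Security Standard on the namespace used for the installation.
kubectl label namespace mimir-test pod-security.kubernetes.io/enforce=baseline
```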
Install the Helm chart in a custom namespace
Using a custom namespace solves problems later on because you do not have to overwrite the default namespace.
Create a unique Kubernetes namespace, for example `mimir-test`:

kubectl create namespace mimir-test
For more details, see the Kubernetes documentation about Creating a new namespace.
Set up a Helm repository using the following commands:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
Note: The Helm chart at https://grafana.github.io/helm-charts is a publication of the source code at grafana/mimir.
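To confirm that the chart is now available from the repository, you can optionally search for it:

```bash
# Confirm that the mimir-distributed chart is available in the added repository.
helm search repo mimir-distributed
```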
Install Grafana Mimir using the Helm chart:
helm -n mimir-test install mimir grafana/mimir-distributed
Note: The output of the command contains the write and read URLs necessary for the following steps.
Check the statuses of the Mimir pods:
kubectl -n mimir-test get pods
The results look similar to this:
```
NAME                                        READY   STATUS      RESTARTS   AGE
mimir-minio-7bd89b757d-q5hp6                1/1     Running     0          2m44s
mimir-rollout-operator-76c67c7d56-v6xtl     1/1     Running     0          2m44s
mimir-nginx-858455979c-hjvhx                1/1     Running     0          2m44s
mimir-make-minio-buckets-svgvd              0/1     Completed   1          2m44s
mimir-ruler-64b9d59b94-tvj7z                1/1     Running     0          2m44s
mimir-query-frontend-c444b56f9-jrmwl        1/1     Running     0          2m44s
mimir-overrides-exporter-86c4d54645-zktkm   1/1     Running     0          2m44s
mimir-querier-5d9c55d6d9-l6fdc              1/1     Running     0          2m44s
mimir-distributor-7796db494f-rsvdx          1/1     Running     0          2m44s
mimir-query-scheduler-d5dccfff7-5c5rw       1/1     Running     0          2m44s
mimir-querier-5d9c55d6d9-xghl6              1/1     Running     0          2m44s
mimir-query-scheduler-d5dccfff7-vz4vf       1/1     Running     0          2m44s
mimir-alertmanager-0                        1/1     Running     0          2m44s
mimir-store-gateway-zone-b-0                1/1     Running     0          2m44s
mimir-store-gateway-zone-c-0                1/1     Running     0          2m43s
mimir-ingester-zone-b-0                     1/1     Running     0          2m43s
mimir-compactor-0                           1/1     Running     0          2m44s
mimir-ingester-zone-a-0                     1/1     Running     0          2m43s
mimir-store-gateway-zone-a-0                1/1     Running     0          2m44s
mimir-ingester-zone-c-0                     1/1     Running     0          2m44s
```
Wait until all of the pods have a status of `Running` or `Completed`, which might take a few minutes.
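The output of the `helm install` command above included the write and read URLs for Grafana Mimir. If you need to display those installation notes again later, you can print them with `helm get notes`:

```bash
# Re-print the release notes for the "mimir" release, including the write and read URLs.
helm -n mimir-test get notes mimir
```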
Generate some metrics for testing
The Grafana Mimir Helm chart can collect metrics or logs, or both, about Grafana Mimir itself. This is called metamonitoring. In the example that follows, metamonitoring scrapes metrics about Grafana Mimir itself, and then writes those metrics to the same Grafana Mimir instance.
Create a YAML file called `custom.yaml` for your Helm values.

To enable metamonitoring in Grafana Mimir, add the following YAML snippet to your Grafana Mimir `custom.yaml` file:

```yaml
metaMonitoring:
  serviceMonitor:
    enabled: true
  grafanaAgent:
    enabled: true
    installOperator: true
    metrics:
      additionalRemoteWriteConfigs:
        - url: "http://mimir-nginx.mimir-test.svc:80/api/v1/push"
```
Note: In a production environment the `url` above would point to an external system, independent of your Grafana Mimir instance, such as an instance of Grafana Cloud Metrics.

Upgrade Grafana Mimir by using the `helm` command:

helm -n mimir-test upgrade mimir grafana/mimir-distributed -f custom.yaml
Start Grafana in Kubernetes and query metrics
Install Grafana in the same Kubernetes cluster.
For details, see Deploy Grafana on Kubernetes.
If you haven’t done it as part of the previous step, port-forward Grafana to `localhost` by using the `kubectl` command:

kubectl port-forward service/grafana 3000:3000
In a browser, go to the Grafana server at http://localhost:3000.
Sign in using the default username `admin` and password `admin`.

On the left-hand side, go to Configuration > Data sources.
Configure a new Prometheus data source to query the local Grafana Mimir server, by using the following settings:
| Field | Value |
| ----- | ----- |
| Name  | Mimir |
| URL   | http://mimir-nginx.mimir-test.svc:80/prometheus |

To add a data source, see Add a data source.
Verify success:
You should be able to query metrics in Grafana Explore, as well as create dashboard panels by using your newly configured Mimir data source.
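For example, assuming the metamonitoring configuration from the previous section is in place, a query such as `cortex_build_info` in Explore should return a series for each Grafana Mimir component; the exact metrics available depend on what is being scraped.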
Advanced setup with external access
In this procedure you’ll set up external access for Grafana Mimir to allow writing and querying metrics from outside the Kubernetes cluster. You’ll set up an ingress that enables you to access the Kubernetes cluster externally.
Before you begin
Verify that an ingress controller is set up in the Kubernetes cluster, for example ingress-nginx.
Set up ingress
Configure an ingress:
Add the following to your `custom.yaml` Helm values file:

```yaml
nginx:
  ingress:
    enabled: true
    ingressClassName: nginx
    hosts:
      - host: <ingress-host>
        paths:
          - path: /
            pathType: Prefix
    tls: # empty, disabled.
```
Replace `<ingress-host>` with a suitable hostname that DNS can resolve to the external IP address of the Kubernetes cluster. For more information, see Ingress.

Note: On Linux systems, if it is not possible for you to set up local DNS resolution, you can use the `--add-host=<ingress-host>:<kubernetes-cluster-external-address>` command-line flag to define the `<ingress-host>` local address for the `docker` commands in the examples that follow.

Note: To see all of the configurable parameters for a Helm chart installation, use `helm show values grafana/mimir-distributed`.

Upgrade Grafana Mimir by using the `helm` command:

helm -n mimir-test upgrade mimir grafana/mimir-distributed -f custom.yaml
The output of the command should contain the URL to use for querying Grafana Mimir from the outside, for example:
From outside the cluster via ingress: http://myhost.mynetwork/prometheus
Configure Prometheus to write to Grafana Mimir
You can either configure Prometheus to write to Grafana Mimir or configure Grafana Agent to write to Mimir. Although you can configure both, you do not need to.
Make a choice based on whether or not you already have a Prometheus server set up:
For an existing Prometheus server:
Add the following YAML snippet to your Prometheus configuration file:
```yaml
remote_write:
  - url: http://<ingress-host>/api/v1/push
```
In this case, your Prometheus server writes metrics to Grafana Mimir, based on what is defined in the existing `scrape_configs` configuration.

Restart the Prometheus server.
For a Prometheus server that does not exist yet:
Write the following configuration to a `prometheus.yml` file:

```yaml
remote_write:
  - url: http://<ingress-host>/api/v1/push

scrape_configs:
  - job_name: prometheus
    honor_labels: true
    static_configs:
      - targets: ["localhost:9090"]
```
In this case, your Prometheus server writes metrics to Grafana Mimir that it scrapes from itself.
Start a Prometheus server by using Docker:
docker run -p 9090:9090 -v <absolute-path-to>/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
Note: On Linux systems, if `<ingress-host>` cannot be resolved by the Prometheus server, use the additional command-line flag `--add-host=<ingress-host>:<kubernetes-cluster-external-address>` to set it up.
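For example, the complete command with the flag added might look like the following sketch; both placeholders still need to be replaced with your own values:

```bash
# Run Prometheus with an extra hosts entry so that <ingress-host> resolves inside the container.
docker run \
  --add-host=<ingress-host>:<kubernetes-cluster-external-address> \
  -p 9090:9090 \
  -v <absolute-path-to>/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus
```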
Configure Grafana Agent to write to Grafana Mimir
You can either configure Grafana Agent to write to Grafana Mimir or configure Prometheus to write to Mimir. Although you can configure both, you do not need to.
Make a choice based on whether or not you already have a Grafana Agent set up:
For an existing Grafana Agent:
Add the following YAML snippet to your Grafana Agent metrics configurations (`metrics.configs`):

```yaml
remote_write:
  - url: http://<ingress-host>/api/v1/push
```
In this case, your Grafana Agent writes metrics to Grafana Mimir, based on what is defined in the existing `metrics.configs.scrape_configs` configuration.

Restart the Grafana Agent.
For a Grafana Agent that does not exist yet:
Write the following configuration to an `agent.yaml` file:

```yaml
metrics:
  wal_directory: /tmp/grafana-agent/wal
  configs:
    - name: agent
      scrape_configs:
        - job_name: agent
          static_configs:
            - targets: ["127.0.0.1:12345"]
      remote_write:
        - url: http://<ingress-host>/api/v1/push
```
In this case, your Grafana Agent writes metrics to Grafana Mimir that it scrapes from itself.
Create an empty directory for the write-ahead log (WAL) of the Grafana Agent.
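For example, the following creates a directory named `agent-wal` under the current working directory; the name and location are arbitrary, as long as you pass the same absolute path to the `docker` command in the next step:

```bash
# Create a local directory to hold the Grafana Agent write-ahead log.
mkdir -p "$(pwd)/agent-wal"
```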
Start a Grafana Agent by using Docker:
docker run -v <absolute-path-to-wal-directory>:/etc/agent/data -v <absolute-path-to>/agent.yaml:/etc/agent/agent.yaml -p 12345:12345 grafana/agent
Note: On Linux systems, if `<ingress-host>` cannot be resolved by the Grafana Agent, use the additional command-line flag `--add-host=<ingress-host>:<kubernetes-cluster-external-address>` to set it up.
Query metrics in Grafana
You can use the Grafana instance installed in Kubernetes in the Start Grafana in Kubernetes and query metrics section, or follow the instructions below.
Note: If you have the port-forward running for Grafana from an earlier step, stop it.
First install Grafana, and then add Mimir as a Prometheus data source.
Start Grafana by using Docker:
docker run --rm --name=grafana -p 3000:3000 grafana/grafana
Note: On Linux systems, if `<ingress-host>` cannot be resolved by Grafana, use the additional command-line flag `--add-host=<ingress-host>:<kubernetes-cluster-external-address>` to set it up.

In a browser, go to the Grafana server at http://localhost:3000.
Sign in using the default username `admin` and password `admin`.

On the left-hand side, go to Configuration > Data sources.
Configure a new Prometheus data source to query the local Grafana Mimir cluster, by using the following settings:
| Field | Value |
| ----- | ----- |
| Name  | Mimir |
| URL   | http://<ingress-host>/prometheus |

To add a data source, see Add a data source.
Verify success:
You should be able to query metrics in Grafana Explore, as well as create dashboard panels by using your newly configured Mimir data source. For more information, see Monitor Grafana Mimir.