Deploy on Kubernetes
You can deploy a Grafana Enterprise Logs (GEL) cluster in an existing Kubernetes namespace using Minio as a storage backend.
To start, make sure that you have a working Kubernetes cluster and the ability to deploy to that cluster using the kubectl
tool. If you do not currently have access to a Kubernetes cluster, refer to Deploy on Linux.
You will deploy three copies of GEL’s single binary version, rather than deploying each microservice separately.
Deploy Minio
The examples that follow use Minio as the object storage backend. Minio is an open source, S3-compatible object storage service that is freely available and easy to run on Kubernetes. If you want to use a different object storage backend, refer to the Loki storage_config documentation.
- Create a file titled minio.yaml, and copy the following configuration code into it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. It is used in the Deployment below.
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app: minio
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        # Label is used as a selector in the Service.
        app: minio
    spec:
      # Refer to the PVC created earlier
      volumes:
        - name: storage
          persistentVolumeClaim:
            # Name of the PVC created earlier
            claimName: minio-pv-claim
      initContainers:
        - name: create-buckets
          image: busybox:1.28
          command:
            [
              "sh",
              "-c",
              "mkdir -p /storage/grafana-logs-data && mkdir -p /storage/grafana-logs-admin",
            ]
          volumeMounts:
            - name: storage # must match the volume name, above
              mountPath: "/storage"
      containers:
        - name: minio
          # Pulls the default Minio image from Docker Hub
          image: minio/minio:latest
          args:
            - server
            - /storage
          env:
            # Minio access key and secret key
            - name: MINIO_ACCESS_KEY
              value: "minio"
            - name: MINIO_SECRET_KEY
              value: "minio123"
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: storage # must match the volume name, above
              mountPath: "/storage"
---
apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  selector:
    app: minio
- Run the following command:
kubectl create -f minio.yaml
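Before continuing, you can confirm that the Minio Pod has started by waiting for the Deployment created in the previous step to finish rolling out:
kubectl rollout status deployment/minio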
- Verify that you have set up Minio correctly by port-forwarding it and navigating to it in your browser:
kubectl port-forward service/minio 9000:9000
- Navigate to the Minio admin console at http://localhost:9000 in your browser. The sign-in credentials are username minio and password minio123.
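If you prefer to verify Minio from the command line instead of the browser, you can query its health endpoint while the port-forward is running. This sketch assumes Minio's standard /minio/health/live liveness endpoint, which returns HTTP 200 when the server is up:
curl -i http://localhost:9000/minio/health/live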
Create a license secret
- Run the following command to load your GEL license file (license.jwt) as a Kubernetes Secret:
kubectl create secret generic ge-logs-license --from-file license.jwt
- Verify you have successfully created the secret by running the following command:
kubectl get secret ge-logs-license -oyaml
The preceding command prints a Kubernetes Secret object with a license.jwt field that contains a long base64-encoded string.
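To confirm that the stored value matches your license file, you can also decode the base64 string and compare it with the original, for example:
kubectl get secret ge-logs-license -o jsonpath='{.data.license\.jwt}' | base64 -d > /tmp/license-check.jwt
diff /tmp/license-check.jwt license.jwt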
Create a GEL configuration map
To create a configuration file for your cluster and deploy it as a Kubernetes ConfigMap, copy the configuration that follows and save it to a config.yaml file. Edit the YAML file so that the cluster_name field is the name of the cluster associated with your license:
auth:
  type: enterprise
target: all
cluster_name: <insert_your_cluster_name>
license:
  path: /etc/ge-logs/license/license.jwt
ingester:
  lifecycler:
    num_tokens: 512
    ring:
      kvstore:
        store: memberlist
      replication_factor: 3
admin_client:
  storage:
    type: s3
    s3:
      endpoint: minio:9000
      bucket_name: grafana-logs-admin
      access_key_id: minio
      secret_access_key: minio123
      insecure: true
chunk_store_config:
  max_look_back_period: 0s
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
memberlist:
  abort_if_cluster_join_fails: false
  bind_port: 7946
  join_members:
    - ge-logs-discovery
storage_config:
  aws:
    s3: http://minio:minio123@minio.gem.svc.cluster.local:9000
    bucketnames: grafana-logs-data
    s3forcepathstyle: true
  boltdb_shipper:
    active_index_directory: /data/boltdb-shipper-active
    cache_location: /data/boltdb-shipper-cache
    cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: s3
schema_config:
  configs:
    - from: 2021-01-01
      store: boltdb-shipper
      object_store: aws
      schema: v11
      index:
        prefix: index_
        period: 24h
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
compactor:
  working_directory: /data/boltdb-shipper-compactor
  shared_store: s3
To create the ConfigMap, run the following command:
kubectl create configmap ge-logs-config --from-file=config.yaml
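You can verify that the ConfigMap contains your edited configuration, including your cluster_name value, by printing it back out:
kubectl describe configmap ge-logs-config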
Create the services for GEL
Two Kubernetes services are required to run GEL as a StatefulSet. The first is a headless discovery service that exposes the gRPC and gossip ports the replicas use to find each other and form a hash ring to coordinate work. The second exposes the GEL HTTP API through a load balancer.
- Create a services.yaml file and copy the following content into it:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ge-logs-discovery
  name: ge-logs-discovery
spec:
  clusterIP: None
  ports:
    - name: ge-logs-grpc
      port: 9095
      targetPort: grpc
    - name: ge-logs-gossip
      port: 7946
      targetPort: gossip
  publishNotReadyAddresses: true
  selector:
    name: ge-logs
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: ge-logs
  name: ge-logs
spec:
  ports:
    - name: ge-logs-http
      port: 8100
      targetPort: http
  selector:
    name: ge-logs
  sessionAffinity: None
  type: LoadBalancer
- Create the services by running the following command:
kubectl apply -f services.yaml
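To check that both services exist, and that ge-logs-discovery is headless (its CLUSTER-IP is shown as None), list them by name:
kubectl get services ge-logs ge-logs-discovery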
Deploy the GEL StatefulSet
The procedure that follows deploys three copies of the GEL binary as a StatefulSet.
- Copy the following content into a statefulset.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    name: ge-logs
  name: ge-logs
spec:
  replicas: 3
  selector:
    matchLabels:
      name: ge-logs
  serviceName: ge-logs
  template:
    metadata:
      labels:
        name: ge-logs
    spec:
      containers:
        - args:
            - -config.file=/etc/ge-logs/config.yaml
          image: grafana/enterprise-logs:v1.6.3
          imagePullPolicy: IfNotPresent
          name: enterprise-logs
          ports:
            - containerPort: 80
              name: http
            - containerPort: 9095
              name: grpc
            - containerPort: 7946
              name: gossip
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 15
            timeoutSeconds: 1
          volumeMounts:
            - mountPath: /data
              name: data
            - mountPath: /etc/ge-logs
              name: ge-logs-config
            - mountPath: /etc/ge-logs/license
              name: ge-logs-license
      imagePullSecrets:
        - name: gcr
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 300
      volumes:
        - name: ge-logs-config
          configMap:
            name: ge-logs-config
        - name: ge-logs-license
          secret:
            secretName: ge-logs-license
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
- Create the StatefulSet by running the following command:
kubectl apply -f statefulset.yaml
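The three replicas can take a short while to become ready because each must pass the /ready probe defined in the manifest. One way to verify the rollout from the command line, assuming no LoadBalancer address has been assigned yet, is to wait for the StatefulSet, port-forward the ge-logs service, and then query the readiness endpoint from a second terminal:
kubectl rollout status statefulset/ge-logs
kubectl port-forward service/ge-logs 8100:8100
curl http://localhost:8100/ready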
Start the GEL compactor as a Kubernetes deployment
The single binary of GEL does not enable the compactor component by default. Therefore, you need to run the compactor separately as a Kubernetes Deployment.
- Copy the following content into a compactor.yaml file:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: compactor-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: compactor
  name: compactor
spec:
  selector:
    matchLabels:
      app: compactor
  template:
    metadata:
      labels:
        app: compactor
    spec:
      containers:
        - args:
            - -config.file=/etc/ge-logs/config.yaml
            - -target=compactor
          image: grafana/enterprise-logs:v1.6.3
          imagePullPolicy: IfNotPresent
          name: compactor
          ports:
            - containerPort: 80
              name: http
            - containerPort: 9095
              name: grpc
            - containerPort: 7946
              name: gossip
          readinessProbe:
            httpGet:
              path: /ready
              port: 80
            initialDelaySeconds: 15
            timeoutSeconds: 1
          volumeMounts:
            - mountPath: /data
              name: data
            - mountPath: /etc/ge-logs
              name: ge-logs-config
            - mountPath: /etc/ge-logs/license
              name: ge-logs-license
      imagePullSecrets:
        - name: gcr
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 300
      volumes:
        - name: ge-logs-config
          configMap:
            name: ge-logs-config
        - name: ge-logs-license
          secret:
            secretName: ge-logs-license
        - name: data
          persistentVolumeClaim:
            claimName: compactor-data
- Create the compactor deployment:
kubectl apply -f compactor.yaml
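As with the StatefulSet, you can confirm that the compactor is running by waiting for the Deployment to roll out:
kubectl rollout status deployment/compactor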
Generate an admin token
To communicate with GEL’s admin API, you need a token to manage tenants and access policies. You can use a Kubernetes Job to generate that token.
- Copy the following content into a tokengen-job.yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: ge-logs-tokengen
spec:
  template:
    spec:
      containers:
        - name: ge-logs-tokengen
          image: grafana/enterprise-logs:v1.6.3
          imagePullPolicy: IfNotPresent
          args:
            - --config.file=/etc/ge-logs/config.yaml
            - --target=tokengen
          volumeMounts:
            - mountPath: /etc/ge-logs
              name: ge-logs-config
            - mountPath: /etc/ge-logs/license
              name: ge-logs-license
      volumes:
        - name: ge-logs-config
          configMap:
            name: ge-logs-config
        - name: ge-logs-license
          secret:
            secretName: ge-logs-license
      restartPolicy: Never
      imagePullSecrets:
        - name: gcr
- Create the tokengen Kubernetes Job by running the following command:
kubectl apply -f tokengen-job.yaml
- Check the status of the Kubernetes Pod that runs the tokengen job. Once it has completed, check the job’s logs for the new admin token:
kubectl logs job.batch/ge-logs-tokengen
The output of the preceding command contains the token as a string in the logs. The log line you are looking for is similar to the following, although your token string will be different:
Token created: Ym9vdHN0cmFwLXRva2VuOmA3PzkxOF0zfVx0MTlzMVteTTcjczNAPQ==
- Note down this token because it is required later when you set up your cluster.
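If you would rather keep the token in the cluster than in a note, one option is to store it in a Kubernetes Secret. The name gel-admin-token below is only an example and is not used by later steps; replace <token> with the value from the logs:
kubectl create secret generic gel-admin-token --from-literal=token=<token>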
Next steps
To integrate your logs cluster with Grafana and a UI to interact with the Admin API, refer to Set up the GEL plugin for Grafana.