
Important: This documentation is for an older version. It applies only to the release noted; many of the features and functions have been updated or replaced in the current version.


Gateway

The Grafana Enterprise Metrics gateway is a service target that proxies requests to the other Grafana Enterprise Metrics microservices. You can also use it for client-side load balancing of requests proxied to the distributors.

Configuration

The gateway has its own configuration block in the Grafana Enterprise Metrics configuration files.

```yaml
gateway:
  proxy:
    default: <backend_proxy_config>
    [ admin_api: <backend_proxy_config> ]
    [ alertmanager: <backend_proxy_config> ]
    [ compactor: <backend_proxy_config> ]
    [ distributor: <backend_proxy_config> ]
    [ graphite: <backend_proxy_config> ]
    [ ingester: <backend_proxy_config> ]
    [ query_frontend: <backend_proxy_config> ]
    [ ruler: <backend_proxy_config> ]
    [ store_gateway: <backend_proxy_config> ]
```

You can also use flags to configure the gateway. Each flag name is the path to the equivalent configuration field, with the path components joined by the period (.) character and underscores (_) replaced with dashes (-). For example, use the flag --gateway.proxy.store-gateway.url=<store-gateway url> to configure the store-gateway backend proxy URL.
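For example, the following snippet and the flag shown above set the same value; the URL is a placeholder, not a default:

```yaml
# Equivalent to: --gateway.proxy.store-gateway.url=http://store-gateway:8080
# The hostname and port are placeholders for your own store-gateway address.
gateway:
  proxy:
    store_gateway:
      url: http://store-gateway:8080
```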

<backend_proxy_config>

A backend_proxy_config block specifies the URL of the backend to be proxied.

```yaml
url: <url> | default = <gateway.proxy.default.url>
```
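As a concrete sketch, the following configuration sets a default backend and overrides only the distributor; the hostnames and ports are assumptions for illustration:

```yaml
# A hypothetical gateway proxy configuration. All hostnames and ports are
# placeholders; substitute the addresses of your own services.
gateway:
  proxy:
    default:
      url: http://query-frontend:8080  # used by any backend that does not set its own url
    distributor:
      url: http://distributor:8080     # overrides the default for distributor requests
```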

Client-side load balancing

If a backend proxy URL begins with dns:///, the gateway creates a gRPC proxy with client-side round-robin load balancing instead of the default HTTP reverse proxy. To configure client-side load balancing for requests to the distributors, set gateway.proxy.distributor.url to dns:///<distributor service>.

Note: There are three / characters in the preceding DNS URL, which means that the default DNS authority is used. For details about DNS URLs, refer to RFC 4501.
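For example, a minimal sketch that enables client-side load balancing toward the distributors; the Kubernetes-style service name, namespace, and port are assumptions:

```yaml
# dns:/// makes the gateway use a gRPC proxy with client-side round-robin
# load balancing instead of the default HTTP reverse proxy.
# "distributor.gem.svc.cluster.local:9095" is a placeholder service address.
gateway:
  proxy:
    distributor:
      url: dns:///distributor.gem.svc.cluster.local:9095
```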

Client-side load balancing helps ensure that requests are spread evenly across the distributors. Prometheus remote-write clients use HTTP persistent connections, also known as HTTP keep-alive, to reuse a single TCP connection for multiple requests and responses, which reduces the latency of subsequent requests.

Kubernetes Services are not load balancers: the initial TCP connection is made to a random endpoint, but once the connection is established, the same remote-write client talks to the same distributor for the lifetime of that connection. This can lead to uneven load across your distributors and worse overall cluster performance.

The Grafana Enterprise Metrics gateway solves this problem by exposing an HTTP server that receives the client requests while using gRPC to talk to the distributors. The gRPC proxy maintains the list of endpoints returned by the DNS lookup and keeps a persistent connection to each one. The proxy also performs per-request, client-side load balancing across those endpoints, so you keep the benefits of persistent connections without the uneven load described in the preceding paragraph.
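For the DNS lookup to return one address per distributor rather than a single virtual IP, the DNS name typically needs to resolve to the individual pod IPs. On Kubernetes, a headless Service provides this; the following manifest is a sketch under assumed names, labels, and ports:

```yaml
# A hypothetical headless Service (clusterIP: None) so that a DNS lookup of
# distributor.gem.svc.cluster.local returns every distributor pod IP,
# which the gateway's gRPC proxy can then load balance across.
apiVersion: v1
kind: Service
metadata:
  name: distributor
  namespace: gem
spec:
  clusterIP: None        # headless: DNS returns pod IPs instead of a virtual IP
  selector:
    name: distributor    # assumed pod label
  ports:
    - name: grpc
      port: 9095         # assumed gRPC listen port of the distributors
```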