
NGINX Integration for Grafana Cloud

NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.

Use the walkthrough in Grafana Cloud to install the NGINX Integration.

Pre-installation configuration for the NGINX Integration

This integration uses a custom JSON access log to generate metrics about total traffic, error rates, unique visitors, and visitor demographics.

On most modern Linux-based systems, you can create a new file at /etc/nginx/conf.d/grafana-cloud-nginx-integration.conf with the following contents:

log_format json_analytics escape=json '{'
'"msec": "$msec", ' # request unixtime in seconds with milliseconds resolution
'"connection": "$connection", ' # connection serial number
'"connection_requests": "$connection_requests", ' # number of requests made in connection
'"pid": "$pid", ' # process pid
'"request_id": "$request_id", ' # the unique request id
'"request_length": "$request_length", ' # request length (including headers and body)
'"remote_addr": "$remote_addr", ' # client IP
'"remote_user": "$remote_user", ' # client HTTP username
'"remote_port": "$remote_port", ' # client port
'"time_local": "$time_local", ' # local time in the Common Log Format
'"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
'"request": "$request", ' # full original request line
'"request_uri": "$request_uri", ' # full request URI, with arguments
'"args": "$args", ' # request arguments
'"status": "$status", ' # response status code
'"body_bytes_sent": "$body_bytes_sent", ' # number of body bytes sent to the client, excluding headers
'"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
'"http_referer": "$http_referer", ' # HTTP referer
'"http_user_agent": "$http_user_agent", ' # user agent
'"http_x_forwarded_for": "$http_x_forwarded_for", ' # the X-Forwarded-For header
'"http_host": "$http_host", ' # the request Host: header
'"server_name": "$server_name", ' # the name of the vhost serving the request
'"request_time": "$request_time", ' # request processing time in seconds with msec resolution
'"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
'"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
'"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
'"upstream_response_time": "$upstream_response_time", ' # time spent receiving upstream body
'"upstream_response_length": "$upstream_response_length", ' # upstream response length
'"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
'"ssl_protocol": "$ssl_protocol", ' # TLS protocol
'"ssl_cipher": "$ssl_cipher", ' # TLS cipher
'"scheme": "$scheme", ' # http or https
'"request_method": "$request_method", ' # request method
'"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
'"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
'"gzip_ratio": "$gzip_ratio"'
'}';

access_log /path/to/logfile.log json_analytics;

Note that the access log in this example is written to a specific file, which is then read by the Grafana Agent. If NGINX is running in a Docker container or a Kubernetes cluster, write to /dev/stdout instead.

Then reload or restart the NGINX server so the new configuration takes effect.
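For example, on a systemd-based host you might validate the configuration first and then reload the service; the commands and service name below are typical defaults and may differ on your distribution:

```shell
# Check the configuration, including the new log_format, for syntax errors
sudo nginx -t

# Reload worker processes without dropping active connections
sudo systemctl reload nginx
```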

GeoIP2 Configuration

If you enable GeoIP2 support in your NGINX configuration, the country of origin of your visitors will be added to a Worldmap panel on the main dashboard.

You can download the necessary GeoIP2 database files from MaxMind.
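One common way to fetch the databases and keep them current is MaxMind's geoipupdate tool. A minimal sketch of its configuration file follows; the account ID and license key are placeholders you must replace with your own (a free MaxMind account is required for the GeoLite2 editions):

```shell
# /etc/GeoIP.conf -- configuration for geoipupdate
AccountID YOUR_ACCOUNT_ID     # placeholder: your MaxMind account ID
LicenseKey YOUR_LICENSE_KEY   # placeholder: your MaxMind license key
EditionIDs GeoLite2-Country
```

Running geoipupdate (for example from a cron job) then downloads the .mmdb file; point the geoip2 directive in your NGINX configuration at wherever your distribution's geoipupdate places it.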

To enable the module, add the following directive at the root of your /etc/nginx/nginx.conf file:

load_module modules/ngx_http_geoip2_module.so;

You will also have to configure the GeoIP2 module to populate a variable which will be used in the JSON access log.

geoip2 /etc/nginx/geoip/GeoLite2-Country.mmdb {
  $geoip_country_code default=US source=$remote_addr country iso_code;
}

Finally, append the country code to the fields of the JSON access log format:

log_format json_analytics escape=json '{'
<snip>
'"gzip_ratio": "$gzip_ratio", '
'"geoip_country_code": "$geoip_country_code"'
'}';

A complete working sample /etc/nginx/nginx.conf would then look like this:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

load_module modules/ngx_http_geoip2_module.so;

events {
  worker_connections  1024;
}


http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  sendfile        on;

  keepalive_timeout  65;

  log_format json_analytics escape=json '{'
  '"msec": "$msec", ' # request unixtime in seconds with milliseconds resolution
  '"connection": "$connection", ' # connection serial number
  '"connection_requests": "$connection_requests", ' # number of requests made in connection
  '"pid": "$pid", ' # process pid
  '"request_id": "$request_id", ' # the unique request id
  '"request_length": "$request_length", ' # request length (including headers and body)
  '"remote_addr": "$remote_addr", ' # client IP
  '"remote_user": "$remote_user", ' # client HTTP username
  '"remote_port": "$remote_port", ' # client port
  '"time_local": "$time_local", ' # local time in the Common Log Format
  '"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
  '"request": "$request", ' # full original request line
  '"request_uri": "$request_uri", ' # full request URI, with arguments
  '"args": "$args", ' # request arguments
  '"status": "$status", ' # response status code
  '"body_bytes_sent": "$body_bytes_sent", ' # number of body bytes sent to the client, excluding headers
  '"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
  '"http_referer": "$http_referer", ' # HTTP referer
  '"http_user_agent": "$http_user_agent", ' # user agent
  '"http_x_forwarded_for": "$http_x_forwarded_for", ' # the X-Forwarded-For header
  '"http_host": "$http_host", ' # the request Host: header
  '"server_name": "$server_name", ' # the name of the vhost serving the request
  '"request_time": "$request_time", ' # request processing time in seconds with msec resolution
  '"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
  '"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
  '"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
  '"upstream_response_time": "$upstream_response_time", ' # time spent receiving upstream body
  '"upstream_response_length": "$upstream_response_length", ' # upstream response length
  '"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
  '"ssl_protocol": "$ssl_protocol", ' # TLS protocol
  '"ssl_cipher": "$ssl_cipher", ' # TLS cipher
  '"scheme": "$scheme", ' # http or https
  '"request_method": "$request_method", ' # request method
  '"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
  '"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
  '"gzip_ratio": "$gzip_ratio", '
  '"http_cf_ray": "$http_cf_ray",'
  '"geoip_country_code": "$geoip_country_code"'
  '}';

  access_log /dev/stdout json_analytics;

  geoip2 /etc/nginx/geoip/GeoLite2-Country.mmdb {
    $geoip_country_code default=US source=$remote_addr country iso_code;
  }

  server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
  }
}

Post-install configuration for the NGINX Integration

This integration depends on a clearly defined label to find the NGINX access logs in Loki.

Labeling for static scrape

The following Agent configuration defines a scrape job that fetches the logs and assigns the label nginx_host=foo.

Configure your scrape job to include a label that uniquely identifies each of your NGINX instances.

loki:
  configs:
  - name: agent
    clients:
    - basic_auth:
        password: <Your metrics writer API key>
        username: <Your hosted logs tenant ID>
      url: http://logs-prod-us-central1.grafana.net/api/prom/push
    positions:
      filename: /tmp/positions.yaml
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: nginx-integration
      static_configs:
      - targets:
        - localhost
        labels:
          nginx_host: foo
          __path__: /path/to/logfile.log
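Once the Agent is shipping logs with this label, you can confirm they are arriving by running a LogQL query in Grafana's Explore view; the nginx_host value here is the example label from the snippet above:

```logql
{nginx_host="foo"} | json
```

The json stage parses the fields of the access log format defined earlier, so you can filter further, for example with an additional label filter such as | status >= 500.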

Labeling for Kubernetes

Alternatively, the Agent may be configured to fetch logs from an NGINX container running in Kubernetes using a kubernetes_sd_config, as below.

loki:
  configs:
  - name: agent
    clients:
    - basic_auth:
        password: <Your metrics writer API key>
        username: <Your hosted logs tenant ID>
      url: http://logs-prod-us-central1.grafana.net/api/prom/push
    positions:
      filename: /tmp/positions.yaml
    target_config:
      sync_period: 10s
    scrape_configs:
    - job_name: kubernetes-pods-name
      kubernetes_sd_configs:
        - role: pod
      pipeline_stages:
        - cri: {}
      relabel_configs:
        - source_labels:
            - __meta_kubernetes_pod_label_name
          target_label: __service__
        - source_labels:
            - __meta_kubernetes_pod_node_name
          target_label: __host__
        - action: drop
          regex: ""
          source_labels:
            - __service__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - action: replace
          replacement: $1
          separator: /
          source_labels:
            - __meta_kubernetes_namespace
            - __service__
          target_label: job
        - action: replace
          source_labels:
            - __meta_kubernetes_namespace
          target_label: namespace
        - action: replace
          source_labels:
            - __meta_kubernetes_pod_name
          target_label: pod
        - action: replace
          source_labels:
            - __meta_kubernetes_pod_container_name
          target_label: container
        - replacement: /var/log/pods/*$1/*.log
          separator: /
          source_labels:
            - __meta_kubernetes_pod_uid
            - __meta_kubernetes_pod_container_name
          target_label: __path__

Dashboard filtering by label

When you first load the dashboard, you will likely see no data, and an error may even be displayed.

You must configure the dashboard to properly query the logs collected by the agent. This is accomplished by using the label_name and label_value dropdowns at the top of the dashboard.

In the static scrape example, the label_name is “nginx_host”. The label_value can then select a single NGINX instance, or any of several instances that share the same label name but carry different values.

In the Kubernetes example, you might set the label_name to “container”, or duplicate the job or instance filters.

Once you have settled on a specific label to identify your NGINX instances, and chosen it from the dropdowns, you may wish to persist your choices. You can accomplish this as follows:

  1. Click the gear icon near the top right to open “Dashboard Settings”.
  2. Click the “Make editable” button.
  3. Click “Save dashboard”.
  4. Select the “Save current variable values as dashboard default” checkbox and save the dashboard.

Your label choices will now persist each time you return to the dashboard.