Important: This documentation is about an older version. It's relevant only to the release noted; many of the features and functions have been updated or replaced. Please view the current version.
Log queries
All LogQL queries contain a log stream selector.
Optionally, the log stream selector can be followed by a log pipeline. A log pipeline is a set of stage expressions that are chained together and applied to the selected log streams. Each expression can filter out, parse, or mutate log lines and their respective labels.
The following example shows a full log query in action:
```logql
{container="query-frontend",namespace="loki-dev"} |= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500
```
The query is composed of:
- a log stream selector `{container="query-frontend",namespace="loki-dev"}`, which targets the `query-frontend` container in the `loki-dev` namespace.
- a log pipeline `|= "metrics.go" | logfmt | duration > 10s and throughput_mb < 500`, which keeps log lines that contain the word `metrics.go`, then parses each log line to extract more labels and filters on them.
To avoid escaping special characters you can use the `` ` `` (backtick) instead of `"` when quoting strings. For example, `` `\w+` `` is the same as `"\\w+"`. This is especially useful when writing a regular expression that contains multiple backslashes requiring escaping.
Log stream selector
The stream selector determines which log streams to include in a query’s results. A log stream is a unique source of log content, such as a file. A more granular log stream selector then reduces the number of searched streams to a manageable volume. This means that the labels passed to the log stream selector will affect the relative performance of the query’s execution.
The log stream selector is specified by one or more comma-separated key-value pairs. Each key is a log label and each value is that label’s value.
Curly braces (`{` and `}`) delimit the stream selector.
Consider this stream selector:
```logql
{app="mysql",name="mysql-backup"}
```

All log streams that have both a label of `app` whose value is `mysql` and a label of `name` whose value is `mysql-backup` will be included in the query results. A stream may contain other pairs of labels and values, but only the specified pairs within the stream selector are used to determine which streams will be included within the query results.
The same rules that apply for Prometheus Label Selectors apply for Grafana Loki log stream selectors.
The `=` operator after the label name is a label matching operator.
The following label matching operators are supported:

- `=`: exactly equal
- `!=`: not equal
- `=~`: regex matches
- `!~`: regex does not match
Regex log stream examples:

```logql
{name =~ "mysql.+"}
{name !~ "mysql.+"}
{name !~ `mysql-\d+`}
```
Note: The `=~` regex operator is fully anchored, meaning the regex must match against the entire string, including newlines. The regex `.` character does not match newlines by default. If you want the regex dot character to match newlines you can use the single-line flag, like so: `(?s)search_term.+` matches `search_term\n`.
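The anchoring and newline behavior can be illustrated outside of Loki. A rough Python analogy (Python's `re.fullmatch` approximates RE2's full anchoring here; this is an illustration, not Loki code):

```python
import re

# Fully anchored match: the pattern must cover the entire string.
assert re.fullmatch(r"mysql.+", "mysql-backup") is not None
assert re.fullmatch(r"mysql", "mysql-backup") is None  # a partial match is not enough

# By default '.' does not match newlines...
assert re.fullmatch(r"search_term.+", "search_term\nmore") is None

# ...but the single-line flag (?s) makes '.' match newlines too.
assert re.fullmatch(r"(?s)search_term.+", "search_term\nmore") is not None
```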
Log pipeline
A log pipeline can be appended to a log stream selector to further process and filter log streams. It is composed of a set of expressions. Each expression is executed in left to right sequence for each log line. If an expression filters out a log line, the pipeline will stop processing the current log line and start processing the next log line.
Some expressions can mutate the log content and respective labels, which will then be available for further filtering and processing in subsequent expressions. An example that mutates is the expression

```logql
| line_format "{{.status_code}}"
```
Log pipeline expressions fall into one of three categories:
- Filtering expressions: line filter expressions and label filter expressions
- Parsing expressions
- Formatting expressions: line format expressions and label format expressions
Line filter expression
The line filter expression does a distributed `grep` over the aggregated logs from the matching log streams. It searches the contents of the log line, discarding those lines that do not match the case-sensitive expression.
Each line filter expression has a filter operator followed by text or a regular expression. These filter operators are supported:
- `|=`: Log line contains string
- `!=`: Log line does not contain string
- `|~`: Log line contains a match to the regular expression
- `!~`: Log line does not contain a match to the regular expression
Line filter expression examples:
Keep log lines that have the substring "error":

```logql
|= "error"
```

A complete query using this example:

```logql
{job="mysql"} |= "error"
```
Discard log lines that have the substring "kafka.server:type=ReplicaManager":

```logql
!= "kafka.server:type=ReplicaManager"
```

A complete query using this example:

```logql
{instance=~"kafka-[23]",name="kafka"} != "kafka.server:type=ReplicaManager"
```
Keep log lines that contain a substring that starts with `tsdb-ops` and ends with `io:2003`. A complete query with a regular expression:

```logql
{name="kafka"} |~ "tsdb-ops.*io:2003"
```

Keep log lines that contain a substring that starts with `error=`, and is followed by 1 or more word characters. A complete query with a regular expression:

```logql
{name="cassandra"} |~ `error=\w+`
```
Filter operators can be chained.
Filters are applied sequentially.
Query results will have satisfied every filter.
This complete query example will give results that include the string `error`, and do not include the string `timeout`.

```logql
{job="mysql"} |= "error" != "timeout"
```
When using `|~` and `!~`, Go (as in Golang) RE2 syntax regex may be used. The matching is case-sensitive by default. Switch to case-insensitive matching by prefixing the regular expression with `(?i)`.
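As a quick illustration of the `(?i)` prefix (in Python, whose `re` module honors the same inline flag; shown as an analogy, not Loki code):

```python
import re

line = "level=ERROR msg=request failed"

# Case-sensitive by default: "error" does not match "ERROR".
assert re.search(r"error", line) is None

# Prefixing with (?i) switches to case-insensitive matching.
assert re.search(r"(?i)error", line) is not None
```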
While line filter expressions could be placed anywhere within a log pipeline, it is almost always better to have them at the beginning. Placing them at the beginning improves the performance of the query, as it only does further processing when a line matches. For example, while the results will be the same, the query specified with
```logql
{job="mysql"} |= "error" | json | line_format "{{.err}}"
```
will always run faster than
```logql
{job="mysql"} | json | line_format "{{.err}}" |= "error"
```
Line filter expressions are the fastest way to filter logs once the log stream selectors have been applied.
Line filter expressions support matching IP addresses. See Matching IP addresses for details.
Label filter expression
Label filter expression allows filtering log lines using their original and extracted labels. It can contain multiple predicates.

A predicate contains a label identifier, an operation and a value to compare the label with.

For example with `cluster="namespace"` the cluster is the label identifier, the operation is `=` and the value is "namespace". The label identifier is always on the left side of the operation.
We support multiple value types which are automatically inferred from the query input.

- String is double quoted or backticked, such as `"200"` or `` `us-central1` ``.
- Duration is a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
- Number is a 64-bit floating-point number, such as `250` or `89.923`.
- Bytes is a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "42MB", "1.5Kib" or "20b". Valid bytes units are "b", "kib", "kb", "mib", "mb", "gib", "gb", "tib", "tb", "pib", "pb", "eib", "eb".
The string type works exactly like Prometheus label matchers used in the log stream selector. This means you can use the same operations (`=`, `!=`, `=~`, `!~`).

The string type is the only one that can filter out a log line with a label `__error__`.
Using Duration, Number and Bytes will convert the label value prior to comparison and support the following comparators:

- `==` or `=` for equality
- `!=` for inequality
- `>` and `>=` for greater than and greater than or equal
- `<` and `<=` for lesser than and lesser than or equal

For instance:

```logql
logfmt | duration > 1m and bytes_consumed > 20MB
```
If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors see the pipeline errors section.
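The conversion-failure behavior can be sketched as follows. This is a hypothetical Python model of the pipeline, not Loki's implementation; the function name, the toy duration parser, and the `LabelFilterErr` value are illustrative assumptions:

```python
def apply_duration_filter(labels, key, threshold_seconds):
    """Keep the line unless the label parses as a duration below the threshold.

    On a failed conversion the line is kept and an __error__ label is added,
    mirroring the documented behavior for Duration/Number/Bytes comparisons.
    """
    value = labels.get(key, "")
    try:
        seconds = float(value.rstrip("s"))  # toy parser: handles "90s"-style values only
    except ValueError:
        labels["__error__"] = "LabelFilterErr"  # assumed error value, for illustration
        return True  # the line is NOT filtered out
    return seconds > threshold_seconds

labels = {"duration": "not-a-duration"}
assert apply_duration_filter(labels, "duration", 60) is True
assert "__error__" in labels
```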
You can chain multiple predicates using `and` and `or` which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be placed anywhere in a log pipeline.

This means that all the following expressions are equivalent:

```logql
| duration >= 20ms or size == 20kb and method!~"2.."
| duration >= 20ms or size == 20kb | method!~"2.."
| duration >= 20ms or size == 20kb , method!~"2.."
| duration >= 20ms or size == 20kb method!~"2.."
```
By default the precedence of multiple predicates is right to left. You can wrap predicates with parentheses to force a different precedence left to right.

For example, the following are equivalent:

```logql
| duration >= 20ms or method="GET" and size <= 20KB
| ((duration >= 20ms or method="GET") and size <= 20KB)
```

It will evaluate `duration >= 20ms or method="GET"` first. To evaluate `method="GET" and size <= 20KB` first, make sure to use proper parentheses as shown below.

```logql
| duration >= 20ms or (method="GET" and size <= 20KB)
```
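The difference in grouping is easy to check with plain boolean logic. A Python sketch with made-up sample values (a line with duration 25ms, method GET, size 30KB), shown as an analogy rather than Loki code:

```python
# Sample label values for one hypothetical log line.
duration_ms, method, size_kb = 25, "GET", 30

a = duration_ms >= 20   # duration >= 20ms  -> True
b = method == "GET"     # method="GET"      -> True
c = size_kb <= 20       # size <= 20KB      -> False

# Default grouping: (a or b) and c -> False, so the line is dropped.
assert ((a or b) and c) is False

# Explicit parentheses: a or (b and c) -> True, so the line is kept.
assert (a or (b and c)) is True
```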
Label filter expressions are the only expression allowed after the unwrap expression. This is mainly to allow filtering errors from the metric extraction.

Label filter expressions support matching IP addresses. See Matching IP addresses for details.
Parser expression
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering using label filter expressions or for metric aggregations.

Extracted label keys are automatically sanitized by all parsers to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)
For instance, the pipeline `| json` will produce the following mapping:

```
{ "a.b": {c: "d"}, e: "f" }
->
{a_b_c="d", e="f"}
```
In case of errors, for instance if the line is not in the expected format, the log line won't be filtered but instead will get a new `__error__` label added.

If an extracted label key name already exists in the original log stream, the extracted label key will be suffixed with the `_extracted` keyword to make the distinction between the two labels. You can forcefully override the original label using a label formatter expression. However, if an extracted key appears twice, only the latest label value will be kept.
Loki supports JSON, logfmt, pattern, regexp and unpack parsers.
It's easier to use the predefined parsers `json` and `logfmt` when you can. If you can't, the `pattern` and `regexp` parsers can be used for log lines with an unusual structure. The `pattern` parser is easier and faster to write; it also outperforms the `regexp` parser.
Multiple parsers can be used by a single log pipeline. This is useful for parsing complex logs. There are examples in Multiple parsers.
JSON
The json parser operates in two modes:

without parameters:

Adding `| json` to your pipeline will extract all json properties as labels if the log line is a valid json document. Nested properties are flattened into label keys using the `_` separator.

Note: Arrays are skipped.
For example, the json parser will extract from the following document:

```json
{
  "protocol": "HTTP/2.0",
  "servers": ["129.0.1.1", "10.2.1.3"],
  "request": {
    "time": "6.032",
    "method": "GET",
    "host": "foo.grafana.net",
    "size": "55",
    "headers": {
      "Accept": "*/*",
      "User-Agent": "curl/7.68.0"
    }
  },
  "response": {
    "status": 401,
    "size": "228",
    "latency_seconds": "6.031"
  }
}
```

the following list of labels:

```
"protocol" => "HTTP/2.0"
"request_time" => "6.032"
"request_method" => "GET"
"request_host" => "foo.grafana.net"
"request_size" => "55"
"response_status" => "401"
"response_size" => "228"
"response_latency_seconds" => "6.031"
```
with parameters:

Using `| json label="expression", another="expression"` in your pipeline will extract only the specified json fields to labels. You can specify one or more expressions in this way, the same as `label_format`; all expressions must be quoted.

Currently, we only support field access (`my.field`, `my["field"]`) and array access (`list[0]`), and any combination of these in any level of nesting (`my.list[0]["field"]`).

For example, `| json first_server="servers[0]", ua="request.headers[\"User-Agent\"]"` will extract from the following document:

```json
{
  "protocol": "HTTP/2.0",
  "servers": ["129.0.1.1", "10.2.1.3"],
  "request": {
    "time": "6.032",
    "method": "GET",
    "host": "foo.grafana.net",
    "size": "55",
    "headers": {
      "Accept": "*/*",
      "User-Agent": "curl/7.68.0"
    }
  },
  "response": {
    "status": 401,
    "size": "228",
    "latency_seconds": "6.031"
  }
}
```

the following list of labels:

```
"first_server" => "129.0.1.1"
"ua" => "curl/7.68.0"
```
If an expression returns an array or an object, it will be assigned to the label in json format.

For example, `| json server_list="servers", headers="request.headers"` will extract:

```
"server_list" => `["129.0.1.1","10.2.1.3"]`
"headers" => `{"Accept": "*/*", "User-Agent": "curl/7.68.0"}`
```
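The flattening and sanitization rules described above can be sketched in a few lines of Python. This is an illustration of the documented behavior (nested keys joined with `_`, arrays skipped, keys sanitized to the Prometheus convention), not Loki's actual code:

```python
import json
import re

def sanitize(key):
    # Prometheus-style label names: ASCII letters, digits, underscores, colons;
    # they must not start with a digit.
    key = re.sub(r"[^a-zA-Z0-9_:]", "_", key)
    return "_" + key if key[:1].isdigit() else key

def flatten(obj, prefix=""):
    labels = {}
    for k, v in obj.items():
        name = sanitize(f"{prefix}_{k}" if prefix else k)
        if isinstance(v, dict):
            labels.update(flatten(v, name))  # nested properties use the '_' separator
        elif isinstance(v, list):
            continue                         # arrays are skipped
        else:
            labels[name] = str(v)
    return labels

doc = json.loads('{"a.b": {"c": "d"}, "e": "f"}')
assert flatten(doc) == {"a_b_c": "d", "e": "f"}
```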
logfmt
The logfmt parser can be added using `| logfmt` and will extract all keys and values from the logfmt formatted log line.

For example, the following log line:

```
at=info method=GET path=/ host=grafana.net fwd="124.133.124.161" service=8ms status=200
```

will get those labels extracted:

```
"at" => "info"
"method" => "GET"
"path" => "/"
"host" => "grafana.net"
"fwd" => "124.133.124.161"
"service" => "8ms"
"status" => "200"
```
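A minimal logfmt-extraction sketch in Python, handling only the simple `key=value` and `key="quoted value"` cases from the example above (real logfmt has more edge cases, such as escaped quotes and empty quoted values):

```python
import re

# key=value pairs, where the value is either quoted or a bare token.
LOGFMT_PAIR = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

def parse_logfmt(line):
    labels = {}
    for key, quoted, bare in LOGFMT_PAIR.findall(line):
        labels[key] = quoted if quoted else bare
    return labels

line = 'at=info method=GET path=/ host=grafana.net fwd="124.133.124.161" service=8ms status=200'
labels = parse_logfmt(line)
assert labels["fwd"] == "124.133.124.161"   # quotes are stripped
assert labels["service"] == "8ms"
assert labels["status"] == "200"
```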
Pattern
The pattern parser allows the explicit extraction of fields from log lines by defining a pattern expression (`| pattern "<pattern-expression>"`). The expression matches the structure of a log line.

Consider this NGINX log line.

```
0.191.12.2 - - [10/Jun/2021:09:14:29 +0000] "GET /api/plugins/versioncheck HTTP/1.1" 200 2 "-" "Go-http-client/2.0" "13.76.247.102, 34.120.177.193" "TLSv1.2" "US" ""
```

This log line can be parsed with the expression

```
<ip> - - <_> "<method> <uri> <_>" <status> <size> <_> "<agent>" <_>
```

to extract these fields:

```
"ip" => "0.191.12.2"
"method" => "GET"
"uri" => "/api/plugins/versioncheck"
"status" => "200"
"size" => "2"
"agent" => "Go-http-client/2.0"
```
A pattern expression is composed of captures and literals.
A capture is a field name delimited by the `<` and `>` characters. `<example>` defines the field name `example`. An unnamed capture appears as `<_>`. The unnamed capture skips matched content.
Captures are matched from the line beginning or the previous set of literals, to the line end or the next set of literals. If a capture is not matched, the pattern parser will stop.
Literals can be any sequence of UTF-8 characters, including whitespace characters.
By default, a pattern expression is anchored at the start of the log line. If the expression starts with literals, then the log line must also start with the same set of literals. Use `<_>` at the beginning of the expression if you don't want to anchor the expression at the start.
Consider the log line

```
level=debug ts=2021-06-10T09:24:13.472094048Z caller=logging.go:66 traceID=0568b66ad2d9294c msg="POST /loki/api/v1/push (204) 16.652862ms"
```

To match `msg="`, use the expression:

```
<_> msg="<method> <path> (<status>) <latency>"
```
A pattern expression is invalid if
- It does not contain any named capture.
- It contains two consecutive captures not separated by whitespace characters.
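One way to picture how the pattern parser works is to translate a pattern expression into an equivalent regular expression: captures match non-greedily between literals. A hedged Python sketch of those semantics, not Loki's implementation (shown here on a shortened version of the NGINX example):

```python
import re

def pattern_to_regex(pattern):
    """Translate <name> captures to named regex groups and <_> to unnamed skips.

    Captures match non-greedily, i.e. from the previous set of literals up to
    the next set of literals; literals themselves are matched verbatim.
    """
    out, pos = "^", 0
    for m in re.finditer(r"<(\w+)>", pattern):
        out += re.escape(pattern[pos:m.start()])
        name = m.group(1)
        out += "(?s:.*?)" if name == "_" else f"(?P<{name}>.*?)"
        pos = m.end()
    return out + re.escape(pattern[pos:]) + "$"

line = '0.191.12.2 - - [10/Jun/2021:09:14:29 +0000] "GET /api/plugins/versioncheck HTTP/1.1" 200 2'
rx = pattern_to_regex('<ip> - - <_> "<method> <uri> <_>" <status> <size>')
m = re.match(rx, line)
assert m.group("ip") == "0.191.12.2"
assert m.group("method") == "GET"
assert m.group("uri") == "/api/plugins/versioncheck"
assert m.group("status") == "200"
```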
Regular expression
Unlike the logfmt and json parsers, which implicitly extract all values and take no parameters, the regexp parser takes a single parameter `| regexp "<re>"` which is the regular expression using the Golang RE2 syntax.

The regular expression must contain at least one named sub-match (e.g. `(?P<name>re)`); each sub-match will extract a different label.

For example, the parser `| regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"` will extract from the following line:

```
POST /api/prom/api/v1/query_range (200) 1.5s
```

those labels:

```
"method" => "POST"
"path" => "/api/prom/api/v1/query_range"
"status" => "200"
"duration" => "1.5s"
```
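The same extraction can be reproduced in most regex engines that accept named groups. For instance, in Python (as an illustration; Python's `re` happens to accept this particular RE2 pattern unchanged, with the double backslashes of the quoted LogQL string written as a raw string):

```python
import re

parser = re.compile(r"(?P<method>\w+) (?P<path>[\w|/]+) \((?P<status>\d+?)\) (?P<duration>.*)")
m = parser.search("POST /api/prom/api/v1/query_range (200) 1.5s")
assert m.groupdict() == {
    "method": "POST",
    "path": "/api/prom/api/v1/query_range",
    "status": "200",
    "duration": "1.5s",
}
```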
unpack
The `unpack` parser parses a JSON log line, unpacking all embedded labels in the `pack` stage. A special property `_entry` will also be used to replace the original log line.
For example, using `| unpack` with the log line:

```json
{
  "container": "myapp",
  "pod": "pod-3223f",
  "_entry": "original log message"
}
```

extracts the `container` and `pod` labels; it sets `original log message` as the new log line.
You can combine the `unpack` and `json` parsers (or any other parsers) if the original embedded log line is of a specific format.
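The unpack behavior above can be modeled in a few lines of Python (a sketch of the documented semantics, not Loki's code):

```python
import json

def unpack(line):
    packed = json.loads(line)
    entry = packed.pop("_entry")   # the special property replaces the log line
    return entry, packed           # the remaining keys become labels

line = '{"container": "myapp", "pod": "pod-3223f", "_entry": "original log message"}'
new_line, labels = unpack(line)
assert new_line == "original log message"
assert labels == {"container": "myapp", "pod": "pod-3223f"}
```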
Line format expression
The line format expression can rewrite the log line content by using the text/template format. It takes a single string parameter `| line_format "{{.label_name}}"`, which is the template format. All labels are injected variables into the template and are available to use with the `{{.label_name}}` notation.
For example, the following expression:

```logql
{container="frontend"} | logfmt | line_format "{{.query}} {{.duration}}"
```

will extract and rewrite the log line to contain only the query and the duration of a request.

You can use a double quoted string for the template or backticks `` `{{.label_name}}` `` to avoid the need to escape special characters.
`line_format` also supports `math` functions. Example:

If we have the following labels `ip=1.1.1.1`, `status=200` and `duration=3000` (ms), we can divide the duration by `1000` to get the value in seconds.

```logql
{container="frontend"} | logfmt | line_format "{{.ip}} {{.status}} {{div .duration 1000}}"
```

The above query will give us the line as `1.1.1.1 200 3`.
See template functions to learn about available functions in the template format.
Labels format expression
The `| label_format` expression can rename, modify or add labels. It takes as parameter a comma separated list of equality operations, enabling multiple operations at once.

When both sides are label identifiers, for example `dst=src`, the operation will rename the `src` label to `dst`.
The left side can alternatively be a template string (double quoted or backtick), for example `dst="{{.status}} {{.query}}"`, in which case the `dst` label value is replaced by the result of the text/template evaluation. This is the same template engine as the `| line_format` expression, which means labels are available as variables and you can use the same list of functions.
In both cases, if the destination label doesn’t exist, then a new one is created.
The renaming form `dst=src` will drop the `src` label after remapping it to the `dst` label. However, the template form will preserve the referenced labels, such that `dst="{{.src}}"` results in both `dst` and `src` having the same value.
A single label name can only appear once per expression. This means

```logql
| label_format foo=bar,foo="new"
```

is not allowed but you can use two expressions for the desired effect:

```logql
| label_format foo=bar | label_format foo="new"
```
Log queries examples
Multiple filtering
Filtering should be done first using label matchers, then line filters (when possible) and finally using label filters. The following query demonstrates this.

```logql
{cluster="ops-tools1", namespace="loki-dev", job="loki-dev/query-frontend"} |= "metrics.go" !="out of order" | logfmt | duration > 30s or status_code!="200"
```
Multiple parsers
To extract the method and the path of the following logfmt log line:

```
level=debug ts=2020-10-02T10:10:42.092268913Z caller=logging.go:66 traceID=a9d4d8a928d8db1 msg="POST /api/prom/api/v1/query_range (200) 1.5s"
```

You can use multiple parsers (logfmt and regexp) like this.

```logql
{job="cortex-ops/query-frontend"} | logfmt | line_format "{{.msg}}" | regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"
```

This is possible because the `| line_format` reformats the log line to become `POST /api/prom/api/v1/query_range (200) 1.5s` which can then be parsed with the `| regexp ...` parser.
Formatting
The following query shows how you can reformat a log line to make it easier to read on screen.
```logql
{cluster="ops-tools1", name="querier", namespace="loki-dev"}
  |= "metrics.go" != "loki-canary"
  | logfmt
  | query != ""
  | label_format query="{{ Replace .query \"\\n\" \"\" -1 }}"
  | line_format "{{ .ts}}\t{{.duration}}\ttraceID = {{.traceID}}\t{{ printf \"%-100.100s\" .query }}"
```
Label formatting is used to sanitize the query while the line format reduces the amount of information and creates a tabular output.
For these given log lines:

```
level=info ts=2020-10-23T20:32:18.094668233Z caller=metrics.go:81 org_id=29 traceID=1980d41501b57b68 latency=fast query="{cluster=\"ops-tools1\", job=\"cortex-ops/query-frontend\"} |= \"query_range\"" query_type=filter range_type=range length=15m0s step=7s duration=650.22401ms status=200 throughput_mb=1.529717 total_bytes_mb=0.994659
level=info ts=2020-10-23T20:32:18.068866235Z caller=metrics.go:81 org_id=29 traceID=1980d41501b57b68 latency=fast query="{cluster=\"ops-tools1\", job=\"cortex-ops/query-frontend\"} |= \"query_range\"" query_type=filter range_type=range length=15m0s step=7s duration=624.008132ms status=200 throughput_mb=0.693449 total_bytes_mb=0.432718
```

The result would be:

```
2020-10-23T20:32:18.094668233Z 650.22401ms traceID = 1980d41501b57b68 {cluster="ops-tools1", job="cortex-ops/query-frontend"} |= "query_range"
2020-10-23T20:32:18.068866235Z 624.008132ms traceID = 1980d41501b57b68 {cluster="ops-tools1", job="cortex-ops/query-frontend"} |= "query_range"
```