Altinity plugin for ClickHouse
Altinity Grafana datasource plugin for ClickHouse® (Grafana 4.6+ supported)
The Altinity ClickHouse datasource plugin provides support for ClickHouse as a backend database.
The plugin was initially developed by Vertamedia and has been maintained by Altinity since 2020.
Quick start
Grafana 10+ setup notes for plugin version before 3.0.0
Older versions of the Altinity ClickHouse datasource plugin for Grafana are written in AngularJS, so you may see a warning like:
Angular plugin
This data source plugin uses a deprecated, legacy platform based on AngularJS and will stop working in future releases of Grafana.
Don't worry about this warning: the plugin will keep working until Grafana 11 is released, after which an upgrade to version 3.x of the Altinity ClickHouse datasource plugin is required.
Grafana 7+ setup notes for plugin version before 2.2.0
When the 2.0.x and 2.1.x versions of the vertamedia-clickhouse-grafana plugin were released, the Grafana team didn't provide a working signing method for community plugins. The current signing process is described on grafana.com.
So, to properly set up the 2.0.x and 2.1.x plugins, you need to change the configuration option
[plugins]
allow_loading_unsigned_plugins=vertamedia-clickhouse-datasource
or set the environment variable
GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource
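For example, with Docker Compose the variable can be set on the Grafana container (a sketch; the image tag and port mapping are illustrative):

```yaml
services:
  grafana:
    image: grafana/grafana:10.4.2
    ports:
      - "3000:3000"
    environment:
      - GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource
```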
You can install the plugin from grafana.com
OR
copy the files to your Grafana plugin directory. Restart Grafana, then check the data sources list at Configuration -> Datasources -> New and choose the ClickHouse option.
Features
- Access to CH via HTTP / HTTPS
- Query setup
- Raw SQL editor
- Query formatting
- Macros support
- Additional functions
- Templates
- Table view
- SingleStat view
- Ad-hoc filters
- Annotations
- Alerts support
- Histogram support
- Logs support
- Flamegraph support
- Traces support
Access to ClickHouse via HTTP / HTTPS
Page configuration is standard
One caveat: ClickHouse treats HTTP Basic Authentication credentials as a database user and will try to run queries under that name.
CHProxy (optional)
Using CHProxy brings additional features:
- Easily set up HTTPS access to ClickHouse as shown here to provide secure access.
- Limit concurrency and execution time for requests from Grafana as shown here to prevent ClickHouse from being overloaded by Grafana.
- Protection against request bursts for dashboards with numerous graphs. CHProxy allows queueing requests and executing them sequentially. To learn more, read about the max_queue_size and max_queue_time params on the CHProxy page.
- Response caching for the most frequent queries as shown here. Caching will protect ClickHouse from excessive refreshes and is an optimal option for popular dashboards. Hint: if you need to cache requests like "last 24h" where the timestamp changes constantly, try using the Round option in the Raw Editor.
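A minimal CHProxy configuration covering the concurrency, queueing, and caching features above might look like this (a hedged sketch based on the CHProxy documentation; names, addresses, and limits are illustrative):

```yaml
server:
  http:
    listen_addr: ":9090"

clusters:
  - name: "default"
    nodes: ["clickhouse:8123"]

users:
  - name: "grafana"
    to_cluster: "default"
    to_user: "default"
    max_concurrent_queries: 4   # limit concurrency from Grafana
    max_execution_time: 1m      # cap query execution time
    max_queue_size: 40          # queue request bursts instead of rejecting them
    max_queue_time: 25s
    cache: "shortterm"          # cache frequent dashboard queries

caches:
  - name: "shortterm"
    mode: "file_system"
    file_system:
      dir: "/data/cache"
      max_size: 150Mb
    expire: 130s
```

Point the Grafana datasource URL at CHProxy (port 9090 here) instead of ClickHouse directly.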
Query setup
Query setup interface:
The first row, FROM, contains two options: database and table. Table values depend on the selected database.
The next rows contain selectors for time filtering.
Timestamp column type:
- DateTime (DateTime)
- DateTime64 (DateTime64)
- TimeStamp (UInt32)
The Timestamp column and Date column are required for time-based macros and functions, because all analytics are based on these values. The plugin will try to detect the Date or Date32 column automatically.
The Go to Query button is just a toggle to the Raw SQL Editor.
Raw SQL Editor
Raw Editor allows custom SQL queries to be written:
The Raw Editor allows typing queries, getting info about functions and macros, and formatting queries the way ClickHouse does. To execute a query on the server, press "Run Query" or just move focus away from the SQL editor textarea.
Under the editor you can find options that allow setting up rounding and the time column step, and adding metadata to the SQL query, which lets you know which dashboard and user produce workload on your ClickHouse server.
Press Show Generated SQL to see the raw query (with all macros and functions already replaced) that will be sent directly to ClickHouse.
Macros support
The plugin supports the following macros:
- $table - replaced with the selected table name from Query Builder
- $dateCol - replaced with the Column:Date value from Query Builder
- $dateTimeCol - replaced with the Column:DateTime or Column:TimeStamp value from Query Builder
- $from - replaced with the (timestamp with ms)/1000 value of the UI-selected "Time Range: From"
- $to - replaced with the (timestamp with ms)/1000 value of the UI-selected "Time Range: To"
- $interval - replaced with the selected "Group by a time interval" value (as a number of seconds)
- $timeFilter - replaced with the currently selected "Time Range". Requires Column:Date and Column:DateTime or Column:TimeStamp to be selected.
- $timeFilterByColumn($column) - replaced with the currently selected "Time Range" for the column passed as the $column argument. Use it in queries or query variables as ...WHERE $timeFilterByColumn($column)... or ...WHERE $timeFilterByColumn(created_at)....
- $timeSeries - replaced with a special ClickHouse construction to convert results to time-series data. Use it as "SELECT $timeSeries...".
- $naturalTimeSeries - replaced with a special ClickHouse construction to convert results to time-series data with a logical/natural breakdown. Use it as "SELECT $naturalTimeSeries...".
- $unescape - unescapes the variable value by removing single quotes. Used for multiple-value string variables: "SELECT $unescape($column) FROM requests WHERE $unescape($column) = 5"
- $adhoc - replaced with a rendered ad-hoc filter expression, or "1" if no ad-hoc filters exist. Since ad-hoc filters apply automatically only to outer queries, the macro can be used for filtering in inner queries.
A description of macros is available by typing their names in Raw Editor
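As a quick illustration, a typical time-series query built from these macros looks like this ($table, $timeSeries, and $timeFilter are resolved from the Query Builder selections):

```sql
SELECT
    $timeSeries AS t,
    count() AS requests
FROM $table
WHERE $timeFilter
GROUP BY t
ORDER BY t
```

With a one-minute interval, $timeSeries expands to an expression like (intDiv(toUInt32(EventTime), 60) * 60) * 1000, and $timeFilter to the EventDate/EventTime range conditions visible in the generated queries in the Functions section.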
Functions
Functions are just templates of SQL queries, and you can check the final query in Raw SQL Editor mode. If you need additional complexity, just copy the raw SQL into the Raw Editor and make the needed changes. Remember that macros are still available to use.
There are some limits on function use because of poor query analysis:
- Column:Date and Column:DateTime or Column:TimeStamp must be set in Query Builder
- The query must begin with the function name
- Only one function can be used per query
Plugin supports the following functions:
$rate(cols...) - converts query results to "change rate per interval"
Example usage:
$rate(countIf(Type = 200) AS good, countIf(Type != 200) AS bad) FROM requests
Query will be transformed into:
SELECT
t,
good / runningDifference(t / 1000) AS goodRate,
bad / runningDifference(t / 1000) AS badRate
FROM
(
SELECT
        (intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
countIf(Type = 200) AS good,
countIf(Type != 200) AS bad
FROM requests
WHERE ((EventDate >= toDate(1482796747)) AND (EventDate <= toDate(1482853383))) AND ((EventTime >= toDateTime(1482796747)) AND (EventTime <= toDateTime(1482853383)))
GROUP BY t
ORDER BY t
)
$columns(key, value) - queries values as an array of [key, value], where key will be used as the label
Example usage:
$columns(OSName, count(*) c)
FROM requests
INNER JOIN oses USING (OS)
Query will be transformed into:
SELECT
t,
groupArray((OSName, c)) AS groupArr
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
OSName,
count(*) AS c
FROM requests
INNER JOIN oses USING (OS)
WHERE ((EventDate >= toDate(1482796627)) AND (EventDate <= toDate(1482853383))) AND ((EventTime >= toDateTime(1482796627)) AND (EventTime <= toDateTime(1482853383)))
GROUP BY
t,
OSName
ORDER BY
t,
OSName
)
GROUP BY t
ORDER BY t
This will help to build the following graph:
$rateColumns(key, value) - is a combination of $columns and $rate
Example usage:
$rateColumns(OS, count(*) c) FROM requests
Query will be transformed into:
SELECT
t,
arrayMap(lambda(tuple(a), (a.1, a.2 / runningDifference(t / 1000))), groupArr)
FROM
(
SELECT
t,
groupArray((OS, c)) AS groupArr
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
OS,
count(*) AS c
FROM requests
WHERE ((EventDate >= toDate(1482796867)) AND (EventDate <= toDate(1482853383))) AND ((EventTime >= toDateTime(1482796867)) AND (EventTime <= toDateTime(1482853383)))
GROUP BY
t,
OS
ORDER BY
t,
OS
)
GROUP BY t
ORDER BY t
)
$rateColumnsAggregated(key, subkey, aggFunction1, value1, ... aggFunctionN, valueN) - use when you need to calculate a rate
for a higher cardinality dimension and then aggregate by a lower cardinality dimension
Example usage:
$rateColumnsAggregated(datacenter, concat(datacenter,interface) AS dc_interface, sum, tx_bytes * 1024 AS tx_kbytes, sum, max(rx_bytes) AS rx_bytes) FROM traffic
Query will be transformed into:
SELECT
t,
datacenter,
sum(tx_kbytesRate) AS tx_bytesRateAgg,
sum(rx_bytesRate) AS rx_bytesRateAgg
FROM
(
SELECT
t,
datacenter,
dc_interface,
tx_kbytes / runningDifference(t / 1000) AS tx_kbytesRate,
rx_bytes / runningDifference(t / 1000) AS rx_bytesRate
FROM
(
SELECT
(intDiv(toUInt32(event_time), 60) * 60) * 1000 AS t,
datacenter,
concat(datacenter,interface) AS dc_interface,
max(tx_bytes * 1024) AS tx_kbytes,
max(rx_bytes) AS rx_bytes
FROM traffic
WHERE ((event_date >= toDate(1482796867)) AND (event_date <= toDate(1482853383)))
AND ((event_time >= toDateTime(1482796867)) AND (event_time <= toDateTime(1482853383)))
GROUP BY
t,
datacenter,
dc_interface
ORDER BY
t,
datacenter,
dc_interface
)
)
GROUP BY
t,
datacenter
ORDER BY
datacenter,
t
See issue 386 for the implementation rationale.
$perSecond(cols...) - converts query results to "change rate per interval" for Counter-like (growing only) metrics
Example usage:
$perSecond(Requests) FROM requests
Query will be transformed into:
SELECT
t,
if(runningDifference(max_0) < 0, nan, runningDifference(max_0) / runningDifference(t / 1000)) AS max_0_PerSecond
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
max(Requests) AS max_0
FROM requests
WHERE ((EventDate >= toDate(1535711819)) AND (EventDate <= toDate(1535714715)))
AND ((EventTime >= toDateTime(1535711819)) AND (EventTime <= toDateTime(1535714715)))
GROUP BY t
ORDER BY t
)
// see issue 78 for the background
$perSecondColumns(key, value) - is a combination of $columns and $perSecond for Counter-like metrics
Example usage:
$perSecondColumns(Protocol, Requests) FROM requests WHERE Protocol in ('udp','tcp')
Query will be transformed into:
SELECT
t,
groupArray((perSecondColumns, max_0_PerSecond)) AS groupArr
FROM
(
SELECT
t,
Protocol,
if(runningDifference(max_0) < 0 OR neighbor(perSecondColumns,-1,perSecondColumns) != perSecondColumns, nan, runningDifference(max_0) / runningDifference(t / 1000)) AS max_0_PerSecond
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
Protocol AS perSecondColumns,
max(Requests) AS max_0
FROM requests
WHERE ((EventDate >= toDate(1535711819)) AND (EventDate <= toDate(1535714715)))
AND ((EventTime >= toDateTime(1535711819)) AND (EventTime <= toDateTime(1535714715)))
AND (Protocol IN ('udp', 'tcp'))
GROUP BY
t,
Protocol
ORDER BY
t,
Protocol
)
)
GROUP BY t
ORDER BY t
// see issue 80 for the background
$perSecondColumnsAggregated(key, subkey, aggFunction1, value1, ... aggFunctionN, valueN) - use when you need to calculate a per-second rate
for a higher cardinality dimension and then aggregate by a lower cardinality dimension
Example usage:
$perSecondColumnsAggregated(datacenter, concat(datacenter,interface) AS dc_interface, sum, tx_bytes * 1024 AS tx_kbytes, sum, max(rx_bytes) AS rx_bytes) FROM traffic
Query will be transformed into:
SELECT
t,
datacenter,
sum(tx_kbytesPerSecond) AS tx_bytesPerSecondAgg,
sum(rx_bytesPerSecond) AS rx_bytesPerSecondAgg
FROM
(
SELECT
t,
datacenter,
dc_interface,
if(runningDifference(tx_kbytes) < 0 OR neighbor(tx_kbytes,-1,tx_kbytes) != tx_kbytes, nan, runningDifference(tx_kbytes) / runningDifference(t / 1000)) AS tx_kbytesPerSecond,
if(runningDifference(rx_bytes) < 0 OR neighbor(rx_bytes,-1,rx_bytes) != rx_bytes, nan, runningDifference(rx_bytes) / runningDifference(t / 1000)) AS rx_bytesPerSecond
FROM
(
SELECT
(intDiv(toUInt32(event_time), 60) * 60) * 1000 AS t,
datacenter,
concat(datacenter,interface) AS dc_interface,
max(tx_bytes * 1024) AS tx_kbytes,
max(rx_bytes) AS rx_bytes
FROM traffic
WHERE ((event_date >= toDate(1482796867)) AND (event_date <= toDate(1482853383)))
AND ((event_time >= toDateTime(1482796867)) AND (event_time <= toDateTime(1482853383)))
GROUP BY
t,
datacenter,
dc_interface
ORDER BY
t,
datacenter,
dc_interface
)
)
GROUP BY
t,
datacenter
ORDER BY
datacenter,
t
See issue 386 for the implementation rationale.
$delta(cols...) - converts query results to "delta value inside interval" for Counter-like (growing only) metrics; the value will be negative if the counter resets
Example usage:
$delta(Requests) FROM requests
Query will be transformed into:
SELECT
t,
runningDifference(max_0) AS max_0_Delta
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
max(Requests) AS max_0
FROM requests
WHERE ((EventDate >= toDate(1535711819)) AND (EventDate <= toDate(1535714715)))
AND ((EventTime >= toDateTime(1535711819)) AND (EventTime <= toDateTime(1535714715)))
GROUP BY t
ORDER BY t
)
// see issue 455 for the background
$deltaColumns(key, value) - is a combination of $columns and $delta for Counter-like metrics
Example usage:
$deltaColumns(Protocol, Requests) FROM requests WHERE Protocol in ('udp','tcp')
Query will be transformed into:
SELECT
t,
groupArray((deltaColumns, max_0_Delta)) AS groupArr
FROM
(
SELECT
t,
deltaColumns,
if (neighbor(deltaColumns,-1,deltaColumns) != deltaColumns, 0, runningDifference(max_0)) AS max_0_Delta
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
Protocol AS deltaColumns,
max(Requests) AS max_0
FROM requests
WHERE ((EventDate >= toDate(1535711819)) AND (EventDate <= toDate(1535714715)))
AND ((EventTime >= toDateTime(1535711819)) AND (EventTime <= toDateTime(1535714715)))
AND (Protocol IN ('udp', 'tcp'))
GROUP BY
t,
Protocol
ORDER BY
t,
Protocol
)
)
GROUP BY t
ORDER BY t
// see issue 455 for the background
$deltaColumnsAggregated(key, subkey, aggFunction1, value1, ... aggFunctionN, valueN) - use when you need to calculate a delta
for a higher cardinality dimension and then aggregate by a lower cardinality dimension
Example usage:
$deltaColumnsAggregated(datacenter, concat(datacenter,interface) AS dc_interface, sum, tx_bytes * 1024 AS tx_kbytes, sum, max(rx_bytes) AS rx_bytes) FROM traffic
Query will be transformed into:
SELECT
t,
datacenter,
sum(tx_kbytesDelta) AS tx_bytesDeltaAgg,
sum(rx_bytesDelta) AS rx_bytesDeltaAgg
FROM
(
SELECT
t,
datacenter,
dc_interface,
if(neighbor(tx_kbytes,-1,tx_kbytes) != tx_kbytes, 0, runningDifference(tx_kbytes) / 1) AS tx_kbytesDelta,
if(neighbor(rx_bytes,-1,rx_bytes) != rx_bytes, 0, runningDifference(rx_bytes) / 1) AS rx_bytesDelta
FROM
(
SELECT
(intDiv(toUInt32(event_time), 60) * 60) * 1000 AS t,
datacenter,
concat(datacenter,interface) AS dc_interface,
max(tx_bytes * 1024) AS tx_kbytes,
max(rx_bytes) AS rx_bytes
FROM traffic
WHERE ((event_date >= toDate(1482796867)) AND (event_date <= toDate(1482853383)))
AND ((event_time >= toDateTime(1482796867)) AND (event_time <= toDateTime(1482853383)))
GROUP BY
t,
datacenter,
dc_interface
ORDER BY
t,
datacenter,
dc_interface
)
)
GROUP BY
t,
datacenter
ORDER BY
datacenter,
t
See issue 386 for the implementation rationale.
$increase(cols...) - converts query results to "non-negative delta value inside interval" for Counter-like (growing only) metrics; the value will be zero if the counter resets and the delta is less than zero
Example usage:
$increase(Requests) FROM requests
Query will be transformed into:
SELECT
t,
if(runningDifference(max_0) < 0, 0, runningDifference(max_0) ) AS max_0_Increase
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
max(Requests) AS max_0
FROM requests
WHERE ((EventDate >= toDate(1535711819)) AND (EventDate <= toDate(1535714715)))
AND ((EventTime >= toDateTime(1535711819)) AND (EventTime <= toDateTime(1535714715)))
GROUP BY t
ORDER BY t
)
// see issue 455 for the background
$increaseColumns(key, value) - is a combination of $columns and $increase for Counter-like metrics
Example usage:
$increaseColumns(Protocol, Requests) FROM requests WHERE Protocol in ('udp','tcp')
Query will be transformed into:
SELECT
t,
groupArray((increaseColumns, max_0_Increase)) AS groupArr
FROM
(
SELECT
t,
Protocol,
if (runningDifference(max_0) < 0 OR neighbor(increaseColumns,-1,increaseColumns) != increaseColumns, 0, runningDifference(max_0)) AS max_0_Increase
FROM
(
SELECT
(intDiv(toUInt32(EventTime), 60) * 60) * 1000 AS t,
Protocol AS increaseColumns,
max(Requests) AS max_0
FROM requests
WHERE ((EventDate >= toDate(1535711819)) AND (EventDate <= toDate(1535714715)))
AND ((EventTime >= toDateTime(1535711819)) AND (EventTime <= toDateTime(1535714715)))
AND (Protocol IN ('udp', 'tcp'))
GROUP BY
t,
Protocol
ORDER BY
t,
Protocol
)
)
GROUP BY t
ORDER BY t
// see issue 455 for the background
$increaseColumnsAggregated(key, subkey, aggFunction1, value1, ... aggFunctionN, valueN) - use when you need to calculate an increase
for a higher cardinality dimension and then aggregate by a lower cardinality dimension
Example usage:
$increaseColumnsAggregated(datacenter, concat(datacenter,interface) AS dc_interface, sum, tx_bytes * 1024 AS tx_kbytes, sum, max(rx_bytes) AS rx_bytes) FROM traffic
Query will be transformed into:
SELECT
t,
datacenter,
sum(tx_kbytesIncrease) AS tx_bytesIncreaseAgg,
sum(rx_bytesIncrease) AS rx_bytesIncreaseAgg
FROM
(
SELECT
t,
datacenter,
dc_interface,
if(runningDifference(tx_kbytes) < 0 OR neighbor(tx_kbytes,-1,tx_kbytes) != tx_kbytes, nan, runningDifference(tx_kbytes) / 1) AS tx_kbytesIncrease,
if(runningDifference(rx_bytes) < 0 OR neighbor(rx_bytes,-1,rx_bytes) != rx_bytes, nan, runningDifference(rx_bytes) / 1) AS rx_bytesIncrease
FROM
(
SELECT
(intDiv(toUInt32(event_time), 60) * 60) * 1000 AS t,
datacenter,
concat(datacenter,interface) AS dc_interface,
max(tx_bytes * 1024) AS tx_kbytes,
max(rx_bytes) AS rx_bytes
FROM traffic
WHERE ((event_date >= toDate(1482796867)) AND (event_date <= toDate(1482853383)))
AND ((event_time >= toDateTime(1482796867)) AND (event_time <= toDateTime(1482853383)))
GROUP BY
t,
datacenter,
dc_interface
ORDER BY
t,
datacenter,
dc_interface
)
)
GROUP BY
t,
datacenter
ORDER BY
datacenter,
t
See issue 386 for the implementation rationale.
Templating
Query Variable
If you add a template variable of the type Query, you can write a ClickHouse query that returns things like measurement names, key names, or key values, which are shown in a dropdown select box.
For example, you can have a variable that contains all values for the hostname column in a table if you specify a query like this in the templating variable's Query setting.
SELECT hostname FROM host
To use time range dependent macros like $timeFilterByColumn($column) in your query, the refresh mode of the template variable needs to be set to On Time Range Change.
SELECT event_name FROM event_log WHERE $timeFilterByColumn(time_column)
Another option is a query that can create a key/value variable. The query should return two columns named __text and __value. The __text column value should be unique (if it is not unique, the first value will be used). The options in the dropdown will have a text and a value, which allows you to have a friendly name as the text and an id as the value. An example query with hostname as the text and id as the value:
SELECT hostname AS __text, id AS __value FROM host
You can also create nested variables. For example, if you had another variable named region, you could have the hosts variable only show hosts from the currently selected region with a query like this (if region is a multi-value variable then use the IN comparison operator rather than = to match against multiple values):
SELECT hostname FROM host WHERE region IN ($region)
Conditional Predicate
If you are using templating to feed your predicate, you will face performance degradation when everything is selected, since the predicate becomes unnecessary. The same is true for a textbox when nothing is entered; you would have to write specific SQL code to handle that.
To resolve this issue, the macro $conditionalTest(SQL Predicate, $variable) can be used to remove part of the query. If the variable is of type query with "All" selected, or if the variable is a textbox with nothing entered, then the SQL predicate is not included in the generated query.
To give an example, take three variables:
- $var: a query variable with the "include All" option
- $text: a textbox
- $text_with_single_quote: a textbox containing a single quote
The following query
SELECT
$timeSeries as t,
count()
FROM $table
WHERE $timeFilter
$conditionalTest(AND toLowerCase(column) in ($var),$var)
$conditionalTest(AND toLowerCase(column2) like '%$text%',$text)
$conditionalTest(AND toLowerCase(column3) ilike ${text_with_single_quote:sqlstring},$text_with_single_quote)
GROUP BY t
ORDER BY t
If $var is selected as the "All" value and the $text variable is empty, the query will be converted into:
SELECT
$timeSeries as t,
count()
FROM $table
WHERE $timeFilter
GROUP BY t
ORDER BY t
If the $var template variable has some elements selected and the $text template variable has at least one character, the query will be converted into:
SELECT
$timeSeries as t,
count()
FROM $table
WHERE $timeFilter
AND toLowerCase(column) in ($var)
AND toLowerCase(column2) like '%$text%'
GROUP BY t
ORDER BY t
Working with panels
Pie Chart (https://grafana.com/plugins/grafana-piechart-panel)
Remember that the pie chart plugin is discouraged in Grafana - see the Grafana Blog: Friends don't let friends abuse pie charts
To create "Top 5" diagram we will need two queries: one for 'Top 5' rows and one for 'Other' row.
Top5:
SELECT
1 AS t, /* fake timestamp value */
UserName,
sum(Requests) AS Reqs
FROM requests
GROUP BY t, UserName
ORDER BY Reqs DESC
LIMIT 5
Other:
SELECT
1 AS t, /* fake timestamp value */
UserName,
sum(Requests) AS Reqs
FROM requests
GROUP BY t, UserName
ORDER BY Reqs DESC
LIMIT 5,10000000000000 /* select some ridiculous number after first 5 */
Table view (https://grafana.com/plugins/table)
There are no tricks in displaying time-series data. To print summary data, omit the time column, format the result as "Table", and press "Run query".
SELECT
UserName,
sum(Requests) as Reqs
FROM requests
GROUP BY
UserName
ORDER BY
Reqs
Vertical histogram (https://grafana.com/plugins/graph)
To make a vertical histogram from the graph panel we need to edit some settings:
- Display -> Draw Modes -> Bars
- Axes -> X-Axis -> Mode -> Series
You can use the following query:
$columns(
Size,
sum(Items) Items)
FROM some_table
// It is also possible to write this query without macros
Worldmap panel (https://github.com/grafana/worldmap-panel)
If you have a table with country/city codes:
SELECT
1,
Country AS c,
sum(Requests) AS Reqs
FROM requests
GLOBAL ANY INNER JOIN
(
SELECT Country, CountryCode
FROM countries
) USING (CountryCode)
WHERE $timeFilter
GROUP BY
c
ORDER BY Reqs DESC
If you are using geohash, set the following options:
You can make the following query with Table formatting:
Ad-hoc filters
If there is an Ad-hoc variable, the plugin will fetch all columns of all tables of all databases (except the system database) as tags.
So the dropdown menu will have options like database.table.column. If you specify a default database, the plugin will only fetch tables and columns from that database, and the dropdown menu will have options like table.column.
If there are ENUM columns, the plugin will fetch their options and use them as tag values.
Also, the plugin will fetch 300 unique values for fields with other types.
The plugin will apply Ad-hoc filters to all queries on the dashboard if their $database and $table settings are the same as the database.table specified in the Ad-hoc control. If the ad-hoc filter doesn't specify a table, it will apply to all queries regardless of the table.
This is useful if the dashboard contains queries against multiple different tables.
There is no option to apply the OR operator for multiple Ad-hoc filters - see grafana/grafana#10918. There is no option to use the IN operator for Ad-hoc filters due to Grafana limitations.
There may be cases when ClickHouse contains so many tables and columns that fetching them takes a noticeable amount of time. So, if you need multiple dashboards with different databases, using a default database won't help. The best way to solve this would be a parametrized ad-hoc variable in the dashboard settings. Currently, that is not supported by the Grafana interface (see issue).
As a temporary workaround, the plugin will look for a variable named adhoc_query_filter and, if it exists, use its value as the query to fetch columns.
For this purpose we recommend creating a constant variable named adhoc_query_filter and setting its value to something similar to the following:
SELECT database, table, name, type FROM system.columns WHERE table='myTable' ORDER BY database, table
That should help to control data fetching by ad-hoc queries.
Template variable values via Query
To use time range dependent macros like $from and $to in your query, the refresh mode of the template variable needs to be set to On Time Range Change.
SELECT ClientID FROM events WHERE EventTime > toDateTime($from) AND EventTime < toDateTime($to)
Annotations
The plugin supports annotations with regions. To enable this feature, open Dashboard settings and add a new annotation query with the clickhouse datasource and the proper field names.
Alerts support
Grafana provides two kinds of alerts: unified alerts and graph-panel-related alerts (legacy).
Both kinds of alerts are supported by our plugin, but they can't be used together.
Use the GF_UNIFIED_ALERTING_ENABLED=1 (preferable) or GF_ALERTING_ENABLED=1 environment variable to switch between them.
Panel related alerts (legacy)
To enable alerts, open the "Alerts" tab in the panel and define the alert expression as described on grafana.com
Be careful with template variable values: currently Grafana doesn't support template variables in alert queries itself, and the Grafana UI doesn't pass template variable values to the backend after you change them in the frontend UI.
The clickhouse grafana plugin can still use template variable values, because the "Generated SQL" is passed to the backend "as is". To ensure template variable values are properly passed to the backend part of the plugin, choose the required template variable values for your alerts in the UI dropdown, make sure the values are properly rendered in "Generated SQL" (you may need to change the SQL queries in the query editor), and save the whole dashboard to the Grafana server.
WARNING: the Test alert button doesn't save the current state of alert rules to the backend part of the plugin.
If the "Generated SQL" is properly passed into the backend part of the plugin, you will see something like this:
Unified Alerts support
Unified alerts can be provisioned with a YAML file; see https://github.com/Altinity/clickhouse-grafana/tree/master/docker/grafana/provisioning/alerting/
To export existing unified alerts to YAML, use Export alerts.
Alerts troubleshooting
To troubleshoot alerts in the clickhouse grafana plugin, enable level=debug in the log section of grafana.ini or via the GF_LOG_LEVEL=debug environment variable.
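In grafana.ini the setting looks like this (a minimal fragment):

```ini
[log]
level = debug
```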
Histogram support
To show a histogram, you need a query formatted as "Time Series".
According to https://grafana.com/docs/grafana/latest/panels-visualizations/visualizations/histogram, histograms support time series and any table results with one or more numerical fields.
Logs support
To render your ClickHouse data as logs, use the special format called "Logs" in the "Format as" dropdown in the Query Editor. This option helps Grafana recognize the data as logs and show the logs visualization automatically in the Explore UI. On dashboards you can use the Logs panel as well.
To return data suitable for logs, the query should return at least one time field (assumed to be the first field) and one text field from ClickHouse.
The plugin also transforms all text fields except the log line into labels, using the following rules:
- The log line will be taken either from a dedicated content field or from the first text field in the result
- All other text fields will be treated as labels
There are a few dedicated fields that are recognized by Grafana:
- level (string) - sets the level for each log line
- id (string) - by default, Grafana offers basic support for deduplicating log lines; that can be improved by adding this field to explicitly assign identifiers to each log line
All other fields returned from the data source will be recognized by Grafana as detected fields.
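For example, a query suitable for the Logs format might look like this (the table and column names are illustrative; content and level are the dedicated fields described above):

```sql
SELECT
    event_time,          -- first field: the timestamp
    message AS content,  -- dedicated field: the log line
    level,               -- dedicated field: severity
    hostname             -- extra text field, becomes a label
FROM logs.application_log
WHERE $timeFilter
ORDER BY event_time
```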
Flamegraph support
To show a flame graph, you need a query with the format set to "Flame Graph". According to https://grafana.com/docs/grafana/latest/panels-visualizations/visualizations/flame-graph/#data-api, you need a recordset with 4 fields:
- level - Numeric - the level of the stack frame. The root frame is level 0.
- label - String - the function name or other symbol which identifies the frame
- value - Numeric - the number of samples or bytes that were recorded in this stack trace
- self - Numeric - the number of samples or bytes that were recorded in only this stack frame, excluding children (for ClickHouse this is usually zero because it can't be calculated)
Moreover, rows must be ordered by stack trace and level.
If you set up query_profiler_real_time_period_ns in profile or query level settings, you can try to visualize the data as a flame graph with the following query.
Look at the system.trace_log table description for how to get data for a flame graph.
Look at the flamegraph dashboard example for an example of a dashboard with a flame graph.
Flamegraph query example:
SELECT length(trace) - level_num AS level, label, count() AS value, 0 self
FROM system.trace_log
ARRAY JOIN arrayEnumerate(trace) AS level_num,
arrayMap(x -> if(addressToSymbol(x) != '', demangle(addressToSymbol(x)), 'unknown') , trace) AS label
WHERE trace_type='Real' AND $timeFilter
GROUP BY level, label, trace
ORDER BY trace, level
Traces support
To show traces, you need a query with the format set to "Traces" and the fields listed below.
For example, if <opentelemetry_start_trace_probability>1</opentelemetry_start_trace_probability> is set in the user profile and system.opentelemetry_span_log is not empty, then you can show traces about ClickHouse query execution.
Look at the system.opentelemetry_span_log table description for how to get data for traces.
Look at the tracing dashboard example for an example of a dashboard with traces.
Tracing visualization requires the following field names (case sensitive):
- traceID - String
- spanID - String
- operationName - String
- parentSpanID - String
- serviceName - String
- duration - UInt64 - duration in milliseconds
- startTime - UInt64 - start time in milliseconds
- tags - Map(String, String) - tags for the span
- serviceTags - Map(String, String) - tags for the service (for example 'hostName')
Traces query example for system.opentelemetry_span_log
SELECT
trace_id AS traceID,
span_id AS spanID,
operation_name AS operationName,
parent_span_id AS parentSpanID,
'clickhouse' AS serviceName,
intDiv(finish_time_us - start_time_us, 1000) AS duration,
intDiv(start_time_us,1000) AS startTime,
attribute AS tags,
map('hostName',hostname) AS serviceTags
FROM
system.opentelemetry_span_log
WHERE $timeFilter
ORDER BY traceID, startTime
Configure the Datasource with Provisioning
It’s now possible to configure datasources using config files with Grafana’s provisioning system. You can read more about how it works and all the settings you can set for datasources on the provisioning docs page.
Here is a provisioning example:
apiVersion: 1
datasources:
  - name: Clickhouse
    type: vertamedia-clickhouse-datasource
    access: proxy
    url: http://localhost:8123
    # <bool> enable/disable basic auth
    basicAuth: false
    # <string> basic auth username
    basicAuthUser: "default"
    # <bool> enable/disable with credentials headers
    withCredentials: false
    # <bool> mark as default datasource. Max one per org
    isDefault: false
    # <map> fields that will be converted to json and stored in json_data
    jsonData:
      # <bool> enable/disable sending 'add_http_cors_header=1' parameter
      addCorsHeader: false
      # <bool> enable/disable using POST method for sending queries
      usePOST: false
      # <bool> enable/disable using Accept-Encoding header in each request
      useCompression: false
      # <string> compression type, allowed values: gzip, zstd, br, deflate
      compressionType: ""
      # <string> default database name
      defaultDatabase: ""
      # <bool> enable/disable tls authorization
      tlsAuth: false
      # <bool> enable/disable tls authorization with custom ca
      tlsAuthWithCACert: false
      # <bool> enable/disable authorization with X-ClickHouse-* headers
      useYandexCloudAuthorization: false
      # <string> X-ClickHouse-User header value for authorization
      xHeaderUser: ""
      # <string> the same value as url when useYandexCloudAuthorization: true
      # @todo remove this workaround when https://github.com/grafana/grafana/pull/80858 is merged
      dataSourceUrl: "http://localhost:8123"
    secureJsonData:
      # <string> X-ClickHouse-Key header value for authorization
      xHeaderKey: ""
      # <string> basic auth password
      basicAuthPassword: ""
      # <string> custom certificate authority for TLS https connection, base64 encoded
      tlsCACert: ""
      # <string> custom client certificate for TLS https connection, base64 encoded
      tlsClientCert: ""
      # <string> custom client secret key for TLS https connection, base64 encoded
      tlsClientKey: ""
Some settings and security params are the same for all datasources. You can find them here.
FAQ
Why is the last point of a time series not the real last point?
The plugin extrapolates the last datapoint when the time range is "last N", to avoid displaying constantly decreasing graphs when the timestamp in a table is rounded to a minute or more. In that case, in 99% of cases the last datapoint would be much lower than the previous one, because the last minute is not finished yet. That's why the plugin checks the previous datapoints and tries to predict the last datapoint's value as if it had already been written to the database. This behavior can be turned off via the "Extrapolation" checkbox in the query editor.
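To illustrate the rounding effect described above (a sketch using the example requests schema from this README; this aggregation is illustrative and not part of the plugin), a per-minute aggregation always yields an incomplete last bucket until the current minute ends:

```sql
-- Counts per minute: the bucket for the minute still in progress
-- only contains the events written so far, so the raw last point
-- appears lower than its eventual final value.
SELECT
    toStartOfMinute(EventTime) AS t,
    sum(Requests) AS total_requests
FROM requests
GROUP BY t
ORDER BY t
```

With extrapolation enabled, the plugin adjusts that final bucket based on the preceding points instead of plotting the partial value.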
Which table schema is used in the SQL query examples?
All examples in this plugin use the following table schema:
CREATE TABLE IF NOT EXISTS countries(
Country LowCardinality(String),
CountryCode LowCardinality(String)
) ENGINE MergeTree()
ORDER BY (CountryCode, Country);
CREATE TABLE IF NOT EXISTS oses (
OSName LowCardinality(String),
OS LowCardinality(String)
) ENGINE MergeTree()
ORDER BY (OS);
CREATE TABLE IF NOT EXISTS requests(
EventTime DateTime,
EventDate Date,
Protocol LowCardinality(String),
UserName LowCardinality(String),
OS LowCardinality(String),
CountryCode LowCardinality(String),
Type UInt8,
Requests UInt32
) ENGINE=MergeTree()
ORDER BY (EventDate, EventTime, Type, OS, Protocol, UserName)
PARTITION BY toYYYYMM(EventDate);
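As a hedged illustration of how this schema is typically queried from the plugin ($timeSeries, $timeFilter, and $table are the plugin's macros; the exact panel configuration, such as the chosen time column, is assumed), a time-series query might look like:

```sql
-- Per-interval request totals grouped by protocol;
-- the macros expand according to the panel's time column settings.
SELECT
    $timeSeries AS t,
    Protocol,
    sum(Requests) AS total_requests
FROM $table
WHERE $timeFilter
GROUP BY t, Protocol
ORDER BY t
```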
What about alerts support?
The alerts feature requires changes in Grafana's backend that are only available in Grafana 6.5+. Grafana's maintainers are working on this feature. Current alerts support in the clickhouse-grafana datasource plugin is in beta.
clickhouse-grafana plugin 2.2.3+ supports the amd64 architecture on Linux, macOS, and Windows, and arm64 on Linux and macOS (M1). Versions prior to 2.2.3 support amd64 only.
Contributing
If you have an idea for an improvement or have found a bug, do not hesitate to open an issue or submit a pull request. We will appreciate any help from the community that makes working with such amazing products as ClickHouse and Grafana more convenient.
Development
See CONTRIBUTING.md for development and pull request contribution instructions.
License
MIT License, please see LICENSE for details.
Grafana Cloud Free
- Free tier: Limited to 3 users
- Paid plans: $55 / user / month above included usage
- Access to all Enterprise Plugins
- Fully managed service (not available to self-manage)
Self-hosted Grafana Enterprise
- Access to all Enterprise plugins
- All Grafana Enterprise features
- Self-manage on your own infrastructure
Installing Altinity plugin for ClickHouse on Grafana Cloud:
Installing plugins on a Grafana Cloud instance is a one-click install; same with updates. Cool, right?
Note that it could take up to 1 minute to see the plugin show up in your Grafana.
For more information, visit the docs on plugin installation.
Installing on a local Grafana:
For local instances, plugins are installed and updated via a simple CLI command. Plugins are not updated automatically; however, you will be notified when updates are available right within your Grafana.
1. Install the Data Source
Use the grafana-cli tool to install Altinity plugin for ClickHouse from the commandline:
grafana-cli plugins install vertamedia-clickhouse-datasource
The plugin will be installed into your grafana plugins directory; the default is /var/lib/grafana/plugins. More information on the cli tool.
Alternatively, you can manually download the .zip file for your architecture below and unpack it into your grafana plugins directory.
2. Configure the Data Source
Accessed from the Grafana main menu, newly installed data sources can be added immediately within the Data Sources section.
Next, click the Add data source button in the upper right. The data source will be available for selection in the Type select box.
To see a list of installed data sources, click the Plugins item in the main menu. Both core data sources and installed data sources will appear.
3.2.3 (2024-09-03)
Fixes:
- fix default values in connection editor behavior when changing fields, also change URL, fix https://github.com/Altinity/clickhouse-grafana/issues/615
3.2.2 (2024-08-11)
Enhancements:
- improve e2e tests, more scenarios covered
- if a query is filled, switch to the Query Editor instead of Query Settings, fix https://github.com/Altinity/clickhouse-grafana/issues/605
Fixes:
- remove information_schema from adhoc system.columns drop-down query
- fix issues with "use default values" option in connection settings dialog, fix https://github.com/Altinity/clickhouse-grafana/issues/600
- fix query reset error when editing a multiple-queries panel, fix https://github.com/Altinity/clickhouse-grafana/issues/604
- properly handle empty response with 502 status code, fix https://github.com/Altinity/clickhouse-grafana/issues/595
3.2.1 (2024-06-25)
Fixes:
- fix wrong time series conversion when custom GROUP BY clause, fix https://github.com/Altinity/clickhouse-grafana/issues/583
3.2.0 (2024-06-24)
Enhancements:
- add ability to setup default values for query builder in connection properties to avoid unnecessary duplicated UI actions, fix https://github.com/Altinity/clickhouse-grafana/issues/495
- add example dashboard for Histogram support, fix https://github.com/Altinity/clickhouse-grafana/issues/497
- add proper visualization for Map and JSON ClickHouse types, fix https://github.com/Altinity/clickhouse-grafana/issues/486 and https://github.com/Altinity/clickhouse-grafana/issues/189
- add e2e test which cover basic happy path scenarios
Fixes:
- fixed wrong time range reset after query editing in explore mode, fix https://github.com/Altinity/clickhouse-grafana/issues/566
- fixed wrong behavior for $columns + subqueries, fix https://github.com/Altinity/clickhouse-grafana/issues/565
3.1.1 (2024-05-20)
Fixes:
- fixed wrong encoding messages in golang part of plugin
- fixed wrong screenshots reference in plugin.json
- fixed wrong alerts query editor behavior, fix https://github.com/Altinity/clickhouse-grafana/issues/560 (affected 3.1.0)
- fixed syntax highlight when auto-complete items sql query failed, fix https://github.com/Altinity/clickhouse-grafana/issues/559
- fixed drop-down fields values clean when datasource changed https://github.com/Altinity/clickhouse-grafana/issues/561
3.1.0 (2024-05-10)
Enhancements:
- add additional auto-complete functions, the same as clickhouse-client, fix https://github.com/Altinity/clickhouse-grafana/issues/509
- add support for Flamegraph and Tracing, fix https://github.com/Altinity/clickhouse-grafana/issues/508
- enhanced support for Annotations Query builder.
- added support for HTTP Compression in Connection Settings, fix https://github.com/Altinity/clickhouse-grafana/issues/494
- switched to go 1.22
- updated README with grafana 10 screenshots
- added $rateColumnsAggregated, $perSecondsColumnsAggregated, $increaseColumnsAggregated, $deltaColumnsAggregated macros for aggregating per-second rates - fix https://github.com/Altinity/clickhouse-grafana/issues/386
- added "Add metadata" option, which adds an SQL comment that allows detecting which dashboard and user are the source of a query on the ClickHouse server side, fix https://github.com/Altinity/clickhouse-grafana/issues/435
Fixes:
- fixed Grafana Cloud support, fix https://github.com/Altinity/clickhouse-grafana/issues/517, fix https://github.com/Altinity/clickhouse-grafana/issues/516
- multiple UI fixes for QueryEditor component, fix https://github.com/Altinity/clickhouse-grafana/issues/551, https://github.com/Altinity/clickhouse-grafana/issues/546, https://github.com/Altinity/clickhouse-grafana/issues/555, https://github.com/Altinity/clickhouse-grafana/issues/547, https://github.com/Altinity/clickhouse-grafana/issues/540, https://github.com/Altinity/clickhouse-grafana/issues/542,
- fixed tlsSkipVerify was ignored when empty tlsCARoot or tlsClientCert and tlsClientKey, fix https://github.com/Altinity/clickhouse-grafana/issues/532
- fixed multiple issues for the "As table" format, fix https://github.com/Altinity/clickhouse-grafana/issues/515, https://github.com/Altinity/clickhouse-grafana/issues/529
- fixed Annotations setup page not containing the Query text field, fix https://github.com/Altinity/clickhouse-grafana/issues/518
- refactored processing macros and whole query on client side, fix https://github.com/Altinity/clickhouse-grafana/issues/524
- fixed legacy alerting when use $from and $to macros in query, fix https://github.com/Altinity/clickhouse-grafana/issues/458
- fixed corner case for WHERE field IN ('value1','value2') vs WHERE field IN ['value1','value2'], fix https://github.com/Altinity/clickhouse-grafana/issues/506
- fixed corner case for $conditionalTest macro, fix https://github.com/Altinity/clickhouse-grafana/issues/524
3.0.0 (2024-01-19)
Enhancements:
- rewrite plugin from scratch for Grafana 10+ compatibility using react instead of angular
- if you provision the datasource from YAML, xHeaderKey has now moved from jsonData to secureJsonData, and you need to add dataSourceUrl to jsonData; see https://github.com/Altinity/clickhouse-grafana/issues/348 and docker/grafana/provisioning/datasources/clickhouse-x-auth.yaml for details
2.5.4 (2023-09-13)
Enhancements:
- switch to go 1.21
Fixes:
- fix $conditionalTest macro behavior when a drop-down template variable doesn't have "All value", has "Multi value", and zero values are selected, fix https://github.com/Altinity/clickhouse-grafana/issues/485
- fix some function descriptions in the ACE query editor
2.5.3 (2022-11-22)
Enhancements:
- add secureJsonData description for datasource in README, fix https://github.com/Altinity/clickhouse-grafana/issues/452
- add $delta, $deltaColumns, $increase, $increaseColumns, fix https://github.com/Altinity/clickhouse-grafana/issues/455
Fixes:
- add CGO_ENABLED=0 when build plugin, fix https://github.com/Altinity/clickhouse-grafana/issues/447
2.5.2 (2022-09-05)
Enhancements:
- add Node graph example, fix https://github.com/Altinity/clickhouse-grafana/issues/352
Fixes:
- properly escaping database and table identifiers on client-side, fix https://github.com/Altinity/clickhouse-grafana/issues/440, add more tests
2.5.1 (2022-08-24)
Enhancements:
- Switch to go1.19, update go package dependencies
Fixes:
- properly escaping database and table identifiers, fix https://github.com/Altinity/clickhouse-grafana/issues/440
2.5.0 (2022-05-31)
Enhancements:
- Add support for Logs visualization, fix https://github.com/Altinity/clickhouse-grafana/issues/331, thanks @Fiery-Fenix and @pixelsquared
- Add $conditionalTest to editor auto-complete
- Add support $__searchFilter to template variable queries, fix https://github.com/Altinity/clickhouse-grafana/issues/354
- Add allow sub-seconds time resolution with $timeSeriesMs and $timeFilterMs support, fix https://github.com/Altinity/clickhouse-grafana/issues/344, fix https://github.com/Altinity/clickhouse-grafana/issues/398
- Expand template variable values when opening the "Explore" context menu, fix https://github.com/Altinity/clickhouse-grafana/issues/346
Fixes:
- remove legacy binaries in dist folder, fix https://github.com/Altinity/clickhouse-grafana/issues/419
- allow Nullable types in alert label name in backend part, fix https://github.com/Altinity/clickhouse-grafana/issues/405
- remove INFORMATION_SCHEMA from adhoc control, fix https://github.com/Altinity/clickhouse-grafana/issues/426
- legacy binaries in dist folder after 2.4.4 release plugin name changed, fix https://github.com/Altinity/clickhouse-grafana/issues/419
- resolve corner case for SELECT x IN (SELECT ...), fix https://github.com/Altinity/clickhouse-grafana/issues/421
- tested textbox variables with ${variable:sqlstring} format, fix https://github.com/Altinity/clickhouse-grafana/issues/125
2.4.4 (2022-04-01)
Fixes:
- replace Vertamedia Altinity logo
2.4.3 (2022-03-02)
Fixes:
- change ClickHouse logo to Altinity logo
2.4.2 (2021-12-29)
Fixes:
- fix unified alerts interval https://github.com/Altinity/clickhouse-grafana/issues/400
2.4.1 (2021-12-20)
Enhancements:
- update dependencies, try to fix critical nodejs dependencies issues
Fixes:
- fix unnecessary warning Logging message on backend part
- fix https://github.com/Altinity/clickhouse-grafana/issues/366
- fix https://github.com/Altinity/clickhouse-grafana/issues/357
- fix https://github.com/Altinity/clickhouse-grafana/issues/345
- fix https://github.com/Altinity/clickhouse-grafana/issues/342
- fix https://github.com/Altinity/clickhouse-grafana/issues/385
- fix https://github.com/Altinity/clickhouse-grafana/issues/317
- fix https://github.com/Altinity/clickhouse-grafana/issues/336
- fix https://github.com/Altinity/clickhouse-grafana/issues/320
- fix https://github.com/Altinity/clickhouse-grafana/issues/326
2.4.0 (2021-11-29)
Enhancement:
- Add support for Grafana 8.x unified alerts, fix https://github.com/Altinity/clickhouse-grafana/issues/380
- Add TLS support for backend alerts part of plugin https://github.com/Altinity/clickhouse-grafana/issues/356#issuecomment-906732530
- Add $naturalTimeSeries macro, look details in https://github.com/Altinity/clickhouse-grafana/pull/89/files#diff-cd9133eda7b58ef9c9264190db4534a1be53216edbda9ac57256fbd800368c03R383-R412
- Update golang-plugin-sdk-go to latest version
- Properly format Value in Table format, look details https://github.com/Altinity/clickhouse-grafana/pull/379
- Remove toDateTime64 casting for column when time column is already DateTime64 to improve performance. Change test to ensure the casting is removed from the query, fix https://github.com/Altinity/clickhouse-grafana/issues/360
- implement $timeFilter64ByColumn(column_name) macro, fix https://github.com/Altinity/clickhouse-grafana/issues/343
Fixes:
- properly implement GET and POST support for alert queries, fix https://github.com/Altinity/clickhouse-grafana/issues/353
- SQL syntax highlight now works always, fix https://github.com/Altinity/clickhouse-grafana/issues/174, fix https://github.com/Altinity/clickhouse-grafana/issues/381
- fix https://github.com/Altinity/clickhouse-grafana/issues/376,
- fix negative behavior for $perSecondColumns https://github.com/Altinity/clickhouse-grafana/issues/337
- fix https://github.com/Altinity/clickhouse-grafana/issues/374, ignore -- inside quotes as a comment
2.3.1 (2021-04-23)
Breaking changes
- On latest Grafana 7.x releases, template variables SQL queries shall return only scalar types of values, see https://github.com/Altinity/clickhouse-grafana/issues/328
Enhancement:
- add support for Apple M1 ;)
- switch to new grafana plugin Golang SDK, thanks to @bmanth60 and @valeriakononenko for help
- add BasicAuth support for alerts, see https://github.com/Altinity/clickhouse-grafana/issues/267
Fixes:
- fix github actions backend build
- fix UNION ALL parsing, see https://github.com/Altinity/clickhouse-grafana/issues/319
- fix many issues with alerting
- https://github.com/Altinity/clickhouse-grafana/issues/305
- https://github.com/Altinity/clickhouse-grafana/issues/327
- https://github.com/Altinity/clickhouse-grafana/issues/334
- https://github.com/Altinity/clickhouse-grafana/issues/335
2.2.3 (2021-02-17)
Enhancement:
- automate plugin sign process via github actions, fix wrong executable file permissions
2.2.0 (2020-11-30)
Enhancement:
- add region support to annotation query, try to fix wrong column orders for table format, fix https://github.com/Altinity/clickhouse-grafana/issues/303
- add plugin sign process, fix https://github.com/Altinity/clickhouse-grafana/issues/212
- add DateTime64 support, fix https://github.com/Altinity/clickhouse-grafana/issues/292
- add linux/arm64 backend plugin build
- improve ARRAY JOIN parsing, fix https://github.com/Altinity/clickhouse-grafana/issues/284
- improve docker-compose.yaml: add ability to redefine GRAFANA_VERSION and CLICKHOUSE_VERSION via environment variables, latest by default
Fixes:
- add *.js.map and *.js from src and spec folders to .gitignore
- don't apply adhoc filters twice when $adhoc macros are used, fix https://github.com/Altinity/clickhouse-grafana/issues/282
- fix corner case for table format with wrong columns order between meta and data response section, fix https://github.com/Altinity/clickhouse-grafana/issues/281
- add trickster to docker-compose environment
- actualize links in README.md
2.1.0 (2020-08-13)
Enhancement:
- add "Skip comments" checkbox to query editor to pass SQL comments to server, fix https://github.com/Altinity/clickhouse-grafana/issues/265
- add setup notes for Grafana 7.x to README
- add SQL preprocessing logic on browser side with <% js code subset %>, https://github.com/Altinity/clickhouse-grafana/pull/186, thanks @fgbogdan
- improve alerts query processing for the use case when the query(query_name, from, to) time range is less than the visible dashboard time range, see https://github.com/Altinity/clickhouse-grafana/issues/237
- improve alerts JSON parsing in the golang part for the case when string fields in the response are interpreted as series names, see https://github.com/Altinity/clickhouse-grafana/issues/230
- properly parsing POST queries in golang part of plugin, https://github.com/Altinity/clickhouse-grafana/pull/228, thanks @it1804
Fixes:
- fix corner cases for $macro + subquery, see https://github.com/Altinity/clickhouse-grafana/issues/276 and https://github.com/Altinity/clickhouse-grafana/issues/277
- fix parallel query execution, see https://github.com/Altinity/clickhouse-grafana/pull/273
- fix identifiers quotes, see https://github.com/Altinity/clickhouse-grafana/issues/276, https://github.com/Altinity/clickhouse-grafana/issues/277
- fix plugin.json to pass the grafana-plugin-repository plugin validator
- fix multi-value variables behavior - https://github.com/Altinity/clickhouse-grafana/issues/252
- add Vagrantfile for a stateful environment and allow upgrade scenarios like grafana 7.1.0 + grafana-cli upgrade-all
- fix https://github.com/Altinity/clickhouse-grafana/issues/244
- fix https://github.com/Altinity/clickhouse-grafana/issues/243
- add multiple dashboard examples for github issues:
- fix https://github.com/Altinity/clickhouse-grafana/issues/240
- fix https://github.com/Altinity/clickhouse-grafana/issues/135
- fix https://github.com/Altinity/clickhouse-grafana/issues/245
- fix https://github.com/Altinity/clickhouse-grafana/issues/238
- fix https://github.com/Altinity/clickhouse-grafana/issues/232
- fix https://github.com/Altinity/clickhouse-grafana/issues/127
- fix https://github.com/Altinity/clickhouse-grafana/issues/141
2.0.2 (2020-07-06)
Enhancements:
- add alerts support for Windows and MacOSX
- improve ad-hoc filters to query field values as SELECT DISTINCT field AS value FROM db.table LIMIT 300, https://github.com/Altinity/clickhouse-grafana/pull/222
- add ability to parse multiple JOINs https://github.com/Altinity/clickhouse-grafana/pull/206
- multiple improvements for docker-compose environments, add automatic dashboards and datasource provisions which help to reproduce most of the corner cases which happens in Grafana + ClickHouse
Fixes:
- apply a workaround for UTC timezone for Date and DateTime columns in grafana dashboards https://github.com/Altinity/clickhouse-grafana/issues/117
- clear documentation about timestamp term for $from and $to https://github.com/Altinity/clickhouse-grafana/issues/115
- fix AST parsing corner case in WHERE [test, 'test'] where "," was skipped, fix ad-hoc AST FROM recursive parsing https://github.com/Altinity/clickhouse-grafana/issues/99
- fix corner cases for table function parsing when an adhoc filter is applied https://github.com/Altinity/clickhouse-grafana/issues/130
- fix multiple grammar issues in README.md
- fix convert rules for Float, Decimal columns from Clickhouse to Grafana Table plugin https://github.com/Altinity/clickhouse-grafana/issues/199
- fix corner cases when Grafana Template variable value represented as array of strings https://github.com/Altinity/clickhouse-grafana/issues/169
- fix AST parsing corner cases for $macroFunctions correct position for FROM statement https://github.com/Altinity/clickhouse-grafana/issues/187
2.0.1 (2020-06-19)
Fixes:
- fix golang alerts for $columns, $perSecond, $perSecondColumns macros https://github.com/Altinity/clickhouse-grafana/pull/200
2.0.0 (2020-06-17)
Enhancements:
- compatibility with grafana 7.x, please use the environment variable GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=vertamedia-clickhouse-datasource or allow_loading_unsigned_plugins=vertamedia-clickhouse-datasource in the plugins section of grafana.ini https://github.com/Altinity/clickhouse-grafana/pull/192
- add grafana 7.x alerting support, thanks to Brian Thai https://github.com/bmanth60
- add alias support to $perSecondColumns macro https://github.com/Altinity/clickhouse-grafana/pull/193
- Support "custom" variable type and empty values for the $conditionalTest macro https://github.com/Altinity/clickhouse-grafana/pull/178
- add docker-compose.yaml to improve local development
Fixes:
- fix AST for corner case when quotes escaped inside quotes https://github.com/Altinity/clickhouse-grafana/pull/123, https://github.com/Altinity/clickhouse-grafana/pull/195
- fix https://github.com/Altinity/clickhouse-grafana/issues/179, add "Extrapolation" checkbox to Query Editor
1.9.5 (2020-01-15)
Fixes:
- Comments not supported by sql language parser #95
1.9.4 (2019-11-27)
Fixes:
- Ad Hoc Filters small adjustments for numeric values
- UI optimizations within Metric builder
1.9.3 (2019-10-18)
Fixes:
- Ad Hoc Filters improvements for complex usage
1.9.2 (2019-10-10)
Fixes:
- Compatibility fix to support grafana 6.4.x
- Ad Hoc Filters fix
- $conditionalTest ALL value option fix
1.9.0 (2019-08-12)
New features:
- Add conditionalTest macro (thx to @TH-HA) #122
- Add support for connecting to Yandex.Cloud ClickHouse (thx to @negasus) #106
Fixes:
- Fix identifier back quoting when there is a function call
- Fix AST parser errors for quotes (thx to @Fiery-Fenix) #128
- Added default database to all requests from datasource options (thx to @Fiery-Fenix) #126
- Drop lodash fcn composition (thx to @simPod) #110
- Cleanup build (thx to @simPod) #112
1.8.1 (2019-02-01)
New features:
- Add timeFilterByColumn macro (thx to @simPod) #68
Fixes:
- add requestId to queries so that abandoned ones are cancelled (thx to @nvartolomei)
- bug with parentheses in $unescape macros #90
- bug with treating strings as numbers in table view #97
1.8.0 (2018-11-07)
New features
- new $perSecond and $perSecondColumns macros (thx to @simPod) #78 #80
- Date column is now optional #48
Fixes:
- extend queried timerange for queries with round option to provide a graph without gaps in the rightmost and leftmost points #84
- adhocs: check whether it is possible to apply filters by comparing with the parsed query or query builder settings #86
1.7.0 (2018-09-05)
New Features
- provide $adhoc macros for using ad-hoc filters in inner queries (thx to @vavrusa)
- allow setting a custom query for ad-hoc filters via the adhoc_query_filter variable
- provide new Round value $step for auto-rounding according to graph resolution changes
1.6.0 (2018-08-07)
New Features
- annotations support (thx to @atsirin)
- allow using $from and $to macros in variable queries
- provisioning config example in README
1.5.1 (2018-06-05)
Fixes
- optimize memory use for range time series (thx to @vavrusa)
- apply ad-hoc filters on inner subqueries (thx to @vavrusa)
1.5.0 (2018-05-31)
New Features
- new datasource setting - default database. If set, it will be prefilled in the query builder and used to make ad-hoc filters more convenient (thx to @vavrusa)
- support wildcard ad-hoc filters for dashboards using multiple tables (thx to @vavrusa)
- parse dimensions from GROUP BY to simplify querying (see piechart and worldmap examples) (thx to @vavrusa)
- $timeCol renamed to $dateCol to be clearer about column types (thx to @simPod)
1.4.3 (2018-04-09)
Fixes
- fix broken AST when using nested SELECT without FROM statement (#45)
- strict statement matching (#44)
- rebuild queries from AST only if adhoc filters were applied
1.4.2 (2018-03-18)
Fixes
- support UNION ALL statements
- proper format for LIMIT N,M construction (thx to @shankerwangmiao)
- update Show Help section with $unescape description
1.4.1 (2018-03-12)
New Features
- $unescape - unescapes variable value by removing single quotes. Used for multiple-value string variables: "SELECT $unescape($column) FROM requests WHERE $unescape($column) = 5"
Fixes
- lambda operator -> no longer breaks while reformatting a query
1.4.0 (2018-03-08)
New Features
Ad-hoc filters support:
- If there is an Ad-hoc variable, the plugin will fetch all columns of all tables of all databases (except the system database) as tags, so the dropdown menu will contain options like database.table.column
- If there are ENUM columns, the plugin will fetch their options and use them as tag values
- The plugin will apply Ad-hoc filters to all queries on the dashboard if their $database and $table settings match the Ad-hoc filter's database.table
- There is no option to apply the OR operator to multiple Ad-hoc filters - see grafana/grafana#10918
- There is no option to use the IN operator for Ad-hoc filters due to Grafana limitations
1.3.1 (2018-02-12)
Fixes
- support array indexing in AST
1.3.0 (2018-02-07)
New Features
- columns autocompletion in ace-editor
1.2.7 (2018-01-05)
Fixes
- properly format query with reserved names
- fix #31
1.2.6 (2017-12-13)
Fixes
- allow rounding with the round option for both time filters: $from and $to
1.2.5 (2017-12-05)
Fixes
1.2.4 (2017-11-22)
Fixes
- apply proper value formatting for table format
1.2.3 (2017-11-20)
Fixes
- commit generated files
1.2.2 (2017-11-20)
Fixes
- fix error with absent getCollapsedText #24
- suppress errors while parsing AST #24
- show generated SQL in textarea #24
- do not round timestamp after converting #25
- increase max-height of query editor
1.2.1 (2017-11-17)
Fixes
1.2.0 (2017-11-15)
New Features
- Ace editor
- ClickHouse function completion (thx to https://github.com/smi2/tabix.ui)