Database observability: How OpenTelemetry semantic conventions improve consistency across signals
Databases are a crucial part of modern systems, which means database observability is incredibly important, too. However, the telemetry they produce can be complex and variable, and it’s tricky to instrument them in a consistent way.
OpenTelemetry is helping to change that, and one of the most important aspects in making it work is a set of shared rules called semantic conventions. These conventions might sound abstract, but they’re actually the backbone of clarity and consistency across observability signals. They tell us how to name things, what kind of data to expect, and how to make sure that the signals make sense together—no matter what language or tool you’re using.
In this post, which is based on my recent KubeCon talk, I’ll dive into the role of semantic conventions in improving database observability and highlight the recent work we’ve done to make the OpenTelemetry database semantic conventions stable, which included my own work on creating prototypes and helping with the definitions. This work may not always get the spotlight, but it’s a big deal for anyone building or monitoring systems at scale.
What are OpenTelemetry semantic conventions and why do they matter?
In OpenTelemetry, semantic conventions define how we name spans, metrics, and attributes. Think of them as a shared language. Without it, two teams might measure the same thing—like database query duration—but call it completely different names (e.g., statement.duration vs. query.time). That’s confusing for users, hard to work with, and even harder to visualize or aggregate across systems.
When we agree on semantic conventions, we eliminate that confusion. Everyone is on the same page, regardless of language, library, or database. And that’s exactly why we’ve been working so hard to get database semantic conventions to a stable state in OpenTelemetry.
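To make that shared language a bit more concrete, here is a minimal sketch (my own illustration, not code from any specific instrumentation library) of a manually created database client span whose attributes follow the stable conventions, using names such as db.system.name, db.operation.name, and db.collection.name. The instrumentation libraries used later in this post set these attributes for you; the point is simply that the names are the same everywhere.
const { trace, SpanKind } = require('@opentelemetry/api');

// Minimal sketch: wrap a database call in a CLIENT span whose attributes
// use the stable database semantic convention names. Values are illustrative.
const tracer = trace.getTracer('db-example');

function queryWithSpan(runQuery) {
  // Span name: a low-cardinality summary, e.g. "{operation} {target}"
  return tracer.startActiveSpan('SELECT users', { kind: SpanKind.CLIENT }, async (span) => {
    span.setAttributes({
      'db.system.name': 'postgresql',
      'db.operation.name': 'SELECT',
      'db.collection.name': 'users',
    });
    try {
      return await runQuery(); // the actual database call goes here
    } finally {
      span.end();
    }
  });
}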
The challenge of standardizing database observability
Creating these conventions isn’t a quick task. We’re trying to define things in a way that works across all databases—from Postgres and MySQL to NoSQL—and in all languages. For example, should we call it table or collection? Should an attribute exist if not all databases support it?
Even something as simple as rows_returned isn’t so straightforward. Are we returning the number of rows in the buffer, or the total that could be fetched? What if the query is lazy-loaded or paginated? Do we even have events for that?
It’s a delicate balance between being specific and being flexible enough to support every backend out there. And yes, sometimes it takes months to iron out all the edge cases.
But we’ve made huge progress.
Making OpenTelemetry database conventions stable
We recently marked database semantic conventions as stable.
This is huge. It means that the span names, metrics, and attributes you use for database observability are no longer moving targets. It gives developers, vendors, and users confidence to invest in building tooling and instrumentation around them.
What’s included?
- Client spans
- Metrics
- Vendor-specific attributes
These conventions even include guidance on things like sanitization and query summarization, because privacy matters. By default, no sensitive data is ever sent. You’d have to explicitly opt in if you want to include raw query data—and that feature is still being finalized.
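As a rough illustration (the exact placeholder style and summary format depend on the instrumentation library), a query that originally contained a literal value might be reported with sanitized attributes along these lines:
// Illustrative only: roughly how a query such as
//   DELETE FROM users WHERE user_id = 'abc-123'
// might be reported once sanitized. Formats vary by instrumentation library.
const sanitizedAttributes = {
  'db.query.text': 'DELETE FROM users WHERE user_id = ?', // literal value replaced by a placeholder
  'db.query.summary': 'DELETE users',                     // low-cardinality summary, safe for grouping
  'db.operation.name': 'DELETE',
  'db.collection.name': 'users',
};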
How to instrument a simple app with OpenTelemetry and visualize it in Grafana
To show what it looks like to use OpenTelemetry’s semantic conventions in practice, let’s take a look at an example with React (frontend), Node.js (backend, which is the part we will instrument) and a Postgres database. The goal is to automatically generate rich, meaningful telemetry data, without requiring manual instrumentation of every query.

Note: This post focuses on the backend, but the frontend, along with everything else, can be found here.
Here is how to set up the backend:
1. Set up database calls
Create a file called database.js, which will contain the function to connect to the database and all the other functions used by the frontend (get, add, and remove user):
const { Pool: PGPool } = require('pg');
const { v4: uuidv4 } = require('uuid');

// Create and return a connection pool for the Postgres database
function startPsql() {
  const pool = new PGPool({
    user: 'username',
    password: 'admin',
    host: 'localhost',
    port: 5432, // default Postgres port
    database: 'db_name',
  });
  return pool;
}

// Fetch all users
const getUsers = async (pgPool) => {
  const queryText = 'SELECT user_id, first_name, last_name, email FROM users';
  return pgPool.query(queryText);
};

// Insert a new user with a generated UUID
const addUser = (pgPool, firstName, lastName, email) => {
  const userID = uuidv4();
  const queryText = 'INSERT INTO users (user_id, first_name, last_name, email) VALUES ($1, $2, $3, $4)';
  return pgPool.query(queryText, [userID, firstName, lastName, email]);
};

// Delete a user by ID (parameterized to avoid SQL injection)
const removeUser = (pgPool, userID) => {
  const queryText = 'DELETE FROM users WHERE user_id = $1';
  return pgPool.query(queryText, [userID]);
};

exports.startPsql = startPsql;
exports.getUsers = getUsers;
exports.addUser = addUser;
exports.removeUser = removeUser;
2. Create routes for the database calls
In the index.js file, add the routes to the functions from the database file:
const express = require('express');
const db = require('./database');

const app = express();
const PORT = 3030;
const pgPool = db.startPsql();

// List all users
app.get('/users', async (req, res) => {
  res.header("Access-Control-Allow-Origin", "*");
  try {
    const dbRes = await db.getUsers(pgPool);
    res.send(dbRes.rows);
  } catch (e) {
    console.error(e);
    res.status(500);
    res.send({ error: e });
  }
});

// Add a user (awaiting the query so we send the result, not a pending promise)
app.post('/add', async (req, res) => {
  res.header("Access-Control-Allow-Origin", "*");
  const result = await db.addUser(pgPool, req.query['first_name'], req.query['last_name'], req.query['email']);
  res.send(result);
});

// Remove a user by ID
app.post('/remove', async (req, res) => {
  res.header("Access-Control-Allow-Origin", "*");
  const result = await db.removeUser(pgPool, req.query['userID']);
  res.send(result);
});

app.listen(PORT, () => console.log(`Server running on port: http://localhost:${PORT}`));
3. Install OpenTelemetry dependencies
Install the dependencies from the SDK, and also the instrumentation-pg one, since we’re using Postgres:
npm install @opentelemetry/api \
@opentelemetry/auto-instrumentations-node \
@opentelemetry/sdk-node \
@opentelemetry/sdk-metrics \
@opentelemetry/exporter-metrics-otlp-http \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/instrumentation-pg
4. Configure OpenTelemetry
In a new file (e.g., instrumentation.js), configure the OpenTelemetry SDK with the OTLP exporter and automatic instrumentation:
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { PgInstrumentation } = require('@opentelemetry/instrumentation-pg');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new NodeSDK({
  // Export traces and metrics over OTLP/HTTP (endpoint and headers come from the environment variables in step 5)
  traceExporter: new OTLPTraceExporter(),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter(),
  }),
  instrumentations: [
    new PgInstrumentation(),
    getNodeAutoInstrumentations(),
  ],
});

sdk.start();
Note: In this example, we’re using auto-instrumentation, which already includes PgInstrumentation, so it’s not necessary to initialize it separately. But I added it to make it clear where the initialization happens, and also as an example of what you should do when not using auto-instrumentation.
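For reference, a manual setup without the auto-instrumentation bundle might register only the instrumentations the app actually needs. This sketch assumes you also install @opentelemetry/instrumentation-http and @opentelemetry/instrumentation-express, which are not part of the dependencies listed in step 3:
// Sketch: registering individual instrumentations instead of getNodeAutoInstrumentations().
// Assumes: npm install @opentelemetry/instrumentation-http @opentelemetry/instrumentation-express
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
const { PgInstrumentation } = require('@opentelemetry/instrumentation-pg');

const instrumentations = [
  new HttpInstrumentation(),    // incoming and outgoing HTTP spans
  new ExpressInstrumentation(), // Express middleware and route spans
  new PgInstrumentation(),      // Postgres client spans
];
// Pass this array to the NodeSDK's `instrumentations` option, just like in instrumentation.js above.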
5. Add environment variables
Add the required environment variables to be able to connect to your Grafana instance:
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp-gateway-prod-us-east-0.grafana.net/otlp"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic O...="
export OTEL_SERVICE_NAME="user-crud-backend"
6. Restart your application
Before starting the app, make sure to load the instrumentation by requiring the instrumentation.js file, like so:
node --require ./instrumentation.js index.js
7. Generate data
Once the app is running, go back to the UI and refresh a few times, add new users, or remove them to generate telemetry data.
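If you’d rather generate data from the command line, you can also call the backend routes from step 2 directly (the host, port, and query parameters below match index.js; the user ID placeholder is whatever ID you want to remove):
# Generate telemetry by calling the backend routes directly
curl "http://localhost:3030/users"
curl -X POST "http://localhost:3030/add?first_name=Ada&last_name=Lovelace&email=ada@example.com"
curl -X POST "http://localhost:3030/remove?userID=<some-user-id>"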
8. Visualize the data
Go to your Grafana instance and see the data.
Trace and span
To see the data, log in to Grafana Cloud (if you don’t already have an account, you can sign up for a forever-free account today) and select Application Observability, then select your application using the service name you chose in step 5; this opens the Overview page for your service. From there, you can switch to the Traces tab, where the list of all traces is displayed.

Selecting any of them, you can see the spans that are part of each one.

Pay special attention to the last two spans from the image above. Those are the spans from the interaction with the database: the connection and the SELECT query.

You can also click on a span to open its details, including its attributes, as shown in the following image.

Metrics
Now let’s change to the “Explore Metrics” page and select the metric db_client_operation_duration_seconds, which will display a chart for it. Because the metric has attributes, we can also do a breakdown on them, for example, creating one chart for each value of the operation type attribute.

Using OpenTelemetry and the Postgres instrumentation, we added tracing and metrics with minimal setup. We could then see, directly in Grafana:
- How long each query took
- Which operations (select, insert, delete) were slowest
- The span and metric attributes for each database action
This kind of visibility is invaluable for performance tuning, cost analysis, and even evaluating different databases, ORMs, or query changes. Just by comparing durations before and after a schema change, you can validate whether your index actually made a difference.
And the best part? Thanks to semantic conventions, all of this is consistent. So if I switch from Postgres to MySQL, or move from Node to Java, the core structure of my data stays the same.
What’s next for OpenTelemetry and database instrumentation?
Stabilization is done for spans, the duration metric, and their respective attributes for the following databases: MariaDB, Microsoft SQL Server, MySQL, and PostgreSQL. That means other databases and metrics are still marked as in development.
To mark the remaining items as stable, we need help from the community and database vendors so that proper prototypes can be built for them, following the existing semantic conventions, to make sure everything works for those databases or to identify any adjustments that need to be made.
During the stabilization phase, several prototypes were made for different SDKs; now we need to complete the implementation for all remaining SDK languages.
If you’re using OpenTelemetry—awesome! Try the database instrumentation and let us know what works and what doesn’t. Edge cases, weird DB quirks, surprising behavior—we want to hear it all. You can reach out in the OpenTelemetry Slack channels or create issues in the OpenTelemetry repository whose components you used.
And if you’re interested in contributing, even better! As a maintainer for contributor experience, I’m always happy to help newcomers get started.
We’re building something really powerful, but we can’t do it alone.
How Grafana Labs is contributing to OpenTelemetry and database observability
At Grafana Labs, we’re deeply invested in making observability more accessible, consistent, and actionable—and OpenTelemetry plays a critical role in that mission. Our engineers are actively involved in the OpenTelemetry community, contributing to the specification and implementation of semantic conventions. This includes helping to define and refine stable, vendor-neutral standards that make it easier for users to instrument, among other things, their database workloads once and visualize the data anywhere.
We’re also applying these standards in our own products. For example, Grafana Tempo, our distributed tracing backend, is fully compatible with OpenTelemetry and supports database-related attributes out of the box. The same goes for Grafana Mimir, which supports metrics.
By aligning our observability stack with OpenTelemetry’s semantic conventions—and helping shape those conventions—we’re aiming to reduce the complexity of instrumentation and improve the end-to-end experience for developers, SREs, and platform teams alike.
Grafana Cloud is the easiest way to get started with metrics, logs, traces, dashboards, and more. We have a generous forever-free tier and plans for every use case. Sign up for free now!