GrafanaCON 2026 announcements: A guide to all the latest news from Grafana Labs

2026-04-21 · 11 min read

GrafanaCON 2026 kicked off in Barcelona, a fitting city in which to reveal the latest updates in Grafana 13.

In 2013, Grafana Labs Co-founder Torkel Ödegaard made the first commit for what would become Grafana while he was on vacation in the Catalan city. "I was traveling here for the Christmas holiday and I got a cold and spent most of the day in bed coding and working on Grafana," said Torkel during the opening keynote of GrafanaCON, our biggest community event of the year. "Even after I had gotten better, I stayed locked in my room working on Grafana … I was already obsessed with what I was building." 

Turns out, Grafana was infectious. The open source project now boasts more than 35 million users, including 100 Grafana champions, worldwide. 

"I want to reinforce that this journey has been our shared journey," said Grafana Labs Co-founder and CEO Raj Dutt while addressing the GrafanaCON audience. "We've gotten so much great feedback, input, pull requests, all of the above from so many people in the community, including many people in this room."

Today at GrafanaCON 2026, hundreds of those open source community members gathered to hear Grafana Labs' engineering and leadership teams deliver an action-packed opening keynote with announcements about how we're making it easier to get started with observability, easier to operate Grafana at scale, and easier to access and analyze your observability data from anywhere.

Here is a quick recap of all the major news coming out of GrafanaCON. Check out the full keynote to learn about the latest updates in open source observability and new developments in AI.

Grafana 13 release: From zero to insights in minutes 

The latest major release of Grafana is here to help you get value from your data faster than ever—whether you’re building dashboards, operating at scale, or evolving the platform to meet changing needs.

To get you from blank canvas to meaningful insights in just a few clicks, there are now suggested dashboards, which automatically surface pre-built dashboards tailored to your connected data sources with curated, ready-to-use visualization options. Dynamic dashboards, a more intuitive, responsive, and scalable dashboarding experience built to support growing teams, are also now generally available. Naturally, there are also new ways to visualize your data. (Did someone say Graphviz panel?)

Another way we're making it easier to spin up insights in Grafana? We announced that Grafana OSS and Grafana Enterprise users can now access Grafana Assistant, our AI-powered agent in Grafana Cloud, to further customize dashboards and templates, streamline SQL expressions, and more. (More on this below!) 

Grafana 13 is also focused on making it easier to maintain and operate Grafana at scale. Git Sync, now generally available, enables teams to manage observability as code by bringing native GitOps workflows into their Grafana instance. You can also now run regular health checks on your Grafana server and get actionable insights and recommendations for maintaining optimal system performance with the new Grafana Advisor tool. 

To learn more, check out our Grafana 13 blog post.

Grafana Assistant everywhere

Where and how you run Grafana shouldn’t determine whether you can benefit from Grafana Assistant. That's why we're expanding the ways you can use and access our purpose-built LLM, including making Assistant available to our self-managed Grafana users.

In a workflow diagram, the Assistant plugin sends LLM requests from your environment to Grafana Cloud and Grafana Assistant. The data is then sent to an LLM provider before being passed back to Assistant, after which LLM responses are sent to self-hosted environments.

If you're a Grafana OSS or Grafana Enterprise user, you can create a Grafana Cloud account and connect it to your Grafana installation via a one-click setup, giving you access to Assistant to help analyze telemetry data and code in real time, build dashboards, ask questions, and more.

Customize Grafana Assistant to your needs

Use Assistant skills, now generally available, to create documents that give Assistant agents instructions, context, and specialized knowledge about your team's workflows. You can also use skills to write runbooks, connect to third-party tools, auto-approve tool calls, and set up auto-remediation pipelines. We've also added a new Assistant automations feature to generate regular summaries of activities so you can stay on top of everything in your instance. 

And to help reduce context switching and get closer to where you're already working, we've expanded the ways you can access Assistant, whether that's through Slack, Microsoft Teams, or via an API or CLI. And we've added a hosted remote MCP server and the new gcx CLI tool so your agents can talk to Assistant, Grafana Cloud, or both.

To learn more, check out our Grafana Assistant blog post.

AI Observability to monitor agentic workloads

AI Observability in Grafana Cloud (in public preview) is a complete solution for teams running agents in production. 

Agents make decisions, call tools, generate content, and interact with users, services, and applications in ways that traditional observability isn't designed to handle. Until now, most organizations have often been left reading raw conversations, guessing at quality, and reacting too late. But with AI Observability in Grafana Cloud, you can:

  • Observe AI agent behavior in real time, including inputs, outputs, and execution flows
  • Continuously evaluate outputs, with alerts for issues such as low-quality responses, policy violations, or anomalous behavior
  • Surface risk earlier, including potential data exposure or misuse (for example, leaked credentials or abnormal usage patterns)
  • Elevate agent sessions and conversations to first-class telemetry signals and correlate them in the same environment where applications are observed
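The continuous-evaluation idea in the bullets above is easy to illustrate in code. The sketch below is purely hypothetical — every function name, pattern, and threshold is invented for this example and is not the AI Observability API — but it shows the shape of the workflow: score each agent response with cheap checks, then flag anything that should alert.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch of continuous output evaluation. All names,
# patterns, and thresholds here are invented for illustration; they
# are not part of the AI Observability product.

# Crude check for leaked credentials (a "surface risk earlier" example).
CREDENTIAL_PATTERN = re.compile(
    r"(?:api[_-]?key|secret|password)\s*[:=]\s*\S+", re.IGNORECASE
)

@dataclass
class Evaluation:
    quality_ok: bool         # response long enough to plausibly be useful
    policy_violation: bool   # e.g., leaked credentials in the output
    anomalous_length: bool   # abnormal usage pattern

    @property
    def should_alert(self) -> bool:
        return (not self.quality_ok) or self.policy_violation or self.anomalous_length

def evaluate_output(response: str, min_chars: int = 20, max_chars: int = 4000) -> Evaluation:
    """Score a single agent response with cheap heuristic checks."""
    return Evaluation(
        quality_ok=len(response.strip()) >= min_chars,
        policy_violation=bool(CREDENTIAL_PATTERN.search(response)),
        anomalous_length=len(response) > max_chars,
    )

# A response that leaks a credential should trigger an alert;
# a normal diagnostic answer should not.
leaky = evaluate_output("Sure! Use api_key = sk-123456 to authenticate with the service.")
clean = evaluate_output("The p99 latency rose because the cache hit rate dropped after the deploy.")
```

In a real deployment these evaluations would run against live conversation telemetry and feed alerting, rather than hard-coded strings.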

AI Observability began as an internal hackathon project to improve the ways we monitor Grafana Assistant. We're excited to share what we've learned so others can benefit from the same capabilities that have served us so well.

O11y benchmark for AI agents

To help the community navigate this new world of AI-assisted observability, we’re open sourcing grafana/o11y-bench, a benchmark for evaluating AI agents on observability workflows. It runs agents against a real Grafana stack with access to the Grafana MCP server and grades them on a set of observability tasks within that environment.

o11y-bench is built on Harbor, an open source framework released by the creators of Terminal Bench that standardizes environments for benchmarking agents against sets of focused tasks. The benchmark we developed focuses on the workflows that actually matter in practice: querying metrics, logs, and traces; investigating incidents; and making targeted dashboard changes.

Our goal with o11y-bench is to engage the community and see what's possible. We have kicked off the leaderboard with a set of base frontier models, and we welcome contributions of new agent harnesses, model configurations, and experiments that push agent capabilities in observability forward.

To learn more, check out our grafana/o11y-bench blog post.

OSS project updates: Loki, OpenTelemetry, and more 

The evolution of Loki 

The rise of structured logs and OpenTelemetry has fundamentally changed how teams use logs, shifting from simple search toward more analytical, high-cardinality queries. To accommodate this shift, we’re introducing a major evolution of Loki, our open source log aggregation system, that’s designed for faster logging at scale.

Updates include:

  • Kafka-backed ingestion for more efficient, durable pipelines at the ingestion layer.
  • A redesigned query engine and scheduler to better handle large-scale analytical workloads. 

Together, these changes deliver up to 20x less data scanned and 10x faster performance on aggregated queries, making it faster to answer complex questions across massive datasets.

To further accelerate Loki’s evolution, Grafana Labs also announced the acquisition of Logline, an early-stage company founded by tenured engineering leader and entrepreneur Jason Nochlin. Logline specializes in “needle-in-the-haystack” log queries and full text search, and with this acquisition we are introducing a new indexing approach that makes it much faster to find specific, highly unique values in large datasets.

Pyroscope 2.0: rearchitecting the continuous profiling database

A ground-up rearchitecture of our open source continuous profiling database, Pyroscope 2.0 is designed to make continuous profiling more efficient and cost-effective at scale.

By eliminating write-path replication, improving data storage efficiency, and introducing stateless querying that scales with demand, the latest major release of Pyroscope lowers both the cost and complexity of continuous profiling. These architectural changes also enable a range of new features like generating metrics from profiles, inspecting individual profiles, heatmap queries, and more.

To learn more, read our Pyroscope 2.0 release blog.

k6 2.0: AI-driven performance testing

With the upcoming release of k6 2.0, you can catch performance issues earlier and with less effort. This major release of the open source performance testing tool delivers AI-assisted authoring, rich assertions, and broad protocol support. Highlights include:

  • AI-driven testing: New AI-focused subcommands—agent, mcp, docs, and explore—make it easier to generate, adapt, and run performance tests. These tools enable deeper integration with AI workflows and help teams use k6 programmatically.
  • Assertions API: Inspired by Playwright, the Assertions API offers a familiar and expressive way to validate application behavior in k6. 
  • Extension ecosystem: k6 2.0 introduces a formalized extension ecosystem and catalog, combining official Grafana Labs extensions with a growing set of community contributions. 
  • Distributed testing with k6 Operator 1.0: k6 Operator for Kubernetes has reached version 1.0, bringing stable custom resource definitions (CRDs), semantic versioning, and a more predictable release cycle. 

Grafana Marketplace 

As your observability practice evolves, you often depend on a growing set of plugins to reach your target data in various systems and services. These plugins can become mission-critical, making ongoing support and compatibility paramount.

To support the continued, sustainable growth of these plugins, we're introducing the Grafana Marketplace—a new platform that allows independent software vendors, systems integrators, and developers in the Grafana community to sell and distribute plugins developed for Grafana. Our founding marketplace partners include Crest Data, Phenisys, and KensoBI. 

The Grafana Marketplace is currently in its pilot phase, and we invite you to help us shape it. To learn more, read our Grafana Marketplace blog post.

Introducing the 2026 Golden Grot Awards winners

For our fourth annual Golden Grot Awards, we are not only honoring creative and impactful Grafana dashboards that shoot for the stars (or the moon). We are also recognizing innovators and leaders in the AI observability space. And the Grot goes to… 

Aurora borealis tracker by Mohamed Adem

Living in Canada, Mohamed Adem was tired of missing aurora borealis displays. Instead of juggling a dozen websites, he built a dashboard that tracks the entire chain from solar flare to visible sky conditions. It monitors NOAA space weather data, IMF Bz magnetic field shifts, Kp index geomagnetic activity, cloud cover forecasts, and moon phase—combining them into a composite Go/No-Go score.

The system runs entirely on public APIs using Telegraf and InfluxDB Cloud, proving powerful insight doesn’t require paid data feeds. By visualizing correlations between solar wind, geomagnetic disturbance, and visibility conditions, Mohamed gained both deeper understanding and better timing. It’s science, astronomy, and observability converging to answer the simple question: Should I go outside tonight?
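The composite Go/No-Go idea is easy to sketch in code. The weights, thresholds, and function names below are invented for illustration — they are not Mohamed's actual formula — but they show how several independent signals (geomagnetic activity, IMF Bz, cloud cover, moon phase) can collapse into a single score.

```python
# Hypothetical composite "Go/No-Go" score combining the signals the
# dashboard tracks. All weights and thresholds are invented for this
# sketch; they are not the dashboard's actual formula.

def aurora_score(kp_index: float, bz_nt: float, cloud_cover_pct: float,
                 moon_illumination_pct: float) -> float:
    """Return a 0-1 score; higher means better viewing odds."""
    kp = min(kp_index / 9.0, 1.0)                      # Kp runs 0-9; higher = more activity
    bz = min(max(-bz_nt, 0.0) / 20.0, 1.0)             # southward (negative) Bz favors aurora
    sky = 1.0 - cloud_cover_pct / 100.0                # clear sky helps
    moon = 1.0 - 0.5 * moon_illumination_pct / 100.0   # a bright moon only partly hurts
    return 0.4 * kp + 0.3 * bz + 0.2 * sky + 0.1 * moon

def go_no_go(score: float, threshold: float = 0.6) -> str:
    return "GO" if score >= threshold else "NO-GO"

# Strong storm, clear sky, near-new moon:
good = aurora_score(kp_index=7, bz_nt=-15, cloud_cover_pct=10, moon_illumination_pct=5)
# Quiet conditions, overcast, full moon:
bad = aurora_score(kp_index=1, bz_nt=2, cloud_cover_pct=90, moon_illumination_pct=100)
```

In the real system each input would come from a live feed (NOAA space weather data, a cloud-cover forecast) via Telegraf into InfluxDB, with Grafana visualizing both the raw signals and the composite.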

Blue Ghost Mission 1 lunar landing by Jackson Sweeney, Firefly Aerospace

Jackson Sweeney’s dashboard monitored over 500 telemetry points on Firefly Aerospace’s Blue Ghost Mission 1 lunar lander. It tracked temperature sensors, heater states, power usage, battery storage, and vehicle orientation throughout the 60-day journey from Earth to the Moon.

Accessible to the entire mission team, the Grafana dashboard provided critical visibility into thermal health and risk conditions during operations, serving as a primary decision-support tool during lunar landing procedures. This wasn’t just observability. It was mission-critical infrastructure helping land a spacecraft on the Moon.

Pioneering AI in Observability by Oren Lion, TeleTracking

TeleTracking's mission is to optimize the flow of patients through healthcare systems by managing logistics such as admissions, transfers, bedside care coordination, and bed cleaning. Their platform acts like air traffic control for hospitals, so its availability and performance are critical: they directly impact patient care.

To address their need for faster root cause analysis, especially when handling critical issues, Oren Lion, TeleTracking's Director of Logistics Engineering, and his team integrated Grafana Assistant into their operational workflows, decreasing their incident response from 3 days to 1 minute.

Excellence in AI Observability by Dhananjay Yadav, NeoSapien

NeoSapien is building an AI-native wearable platform that transforms real-time conversations into structured, actionable insights using speech-to-text and LLM pipelines. To operate these complex, multi-stage workflows in production, their team has embraced AI observability as a core discipline, using Grafana Cloud to monitor everything from transcription latency to LLM performance, output quality, and cost across every interaction.

With Grafana Assistant Investigations, NeoSapien can quickly detect, diagnose, and resolve issues across their AI systems, whether it’s latency spikes, degraded model outputs, or unexpected cost anomalies. This level of visibility allows them to continuously optimize performance and reliability as their product scales.

Congratulations to all our winners!

The opening keynote is just one of more than 30 sessions at GrafanaCON this week offering tips, tricks, and updates around open source technologies. Check out the full agenda, and for more about GrafanaCON, read our press releases about the event.
