
Enable LLM features

Before you begin

To activate Grafana LLM features, you need the Editor or Admin basic role in a Grafana Cloud account. Any tier qualifies, including the free tier.

Enterprise and OSS versions of Grafana can also use the LLM app, but they don't have access to the LLM provider built into Grafana Cloud. Install the Grafana LLM plugin using the standard plugin installation method for your Grafana version, and bring your own API credentials for an OpenAI-compatible service. You can obtain an OpenAI API key from OpenAI's platform, or use Azure OpenAI authentication from the Azure OpenAI Service.

For more information, refer to the Plugin management documentation.
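
Before you paste credentials into the plugin, it can help to confirm they work against your provider. The following is a minimal sketch using the official openai Python client; the key value is a placeholder, and the model name is only an example of a chat model your account might have access to:

```python
from openai import OpenAI

# Placeholder key; substitute the API key obtained from OpenAI's platform.
client = OpenAI(api_key="sk-...")

# A simple round trip confirms the key is valid before configuring the plugin.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model available to your account
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```

If this call succeeds, the same key should work when entered in the plugin's configuration.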

Note

Grafana’s LLM features depend on Grafana Live being enabled (this is the default). If you attempt to enable the features while Grafana Live is disabled, you’ll see a warning in the plugin.

See the max_connections setting for details on how to ensure Grafana Live is enabled.
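
For self-managed Grafana, Grafana Live is controlled by the max_connections setting in the [live] section of grafana.ini; setting it to 0 disables Live entirely. A minimal sketch of the default configuration:

```ini
; grafana.ini — Grafana Live is enabled by default.
; Setting max_connections to 0 disables Live, and with it the LLM streaming features.
[live]
max_connections = 100
```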

Enable LLM features

  1. In Grafana Cloud, click Administration > Plugins and data > Plugins in the side navigation menu.
  2. Browse or search for the LLM plugin and click to open it.
  3. On the Configuration tab, select “Enable OpenAI access via Grafana”.
  4. Select the acknowledgment permitting Grafana Labs to share limited data with OpenAI’s API (used only to provide these features, never for training).
  5. Click Save settings.

Using a custom LLM provider

Support for custom LLM providers was added in 0.10.0.

You can use open-weight models (such as Meta’s Llama 3) and non-OpenAI providers (such as Anthropic). To use a custom LLM provider, follow these steps:

  1. Install the Plugin: Download the Grafana LLM plugin.
  2. Set Up Your API: Make sure you have an OpenAI-compatible API server, such as vLLM, Ollama, LM Studio, or LiteLLM (see the verification sketch after these steps).
  3. Configure the Plugin:
    • In the plugin settings, select “Use OpenAI-compatible API”
    • Set the API URL to your self-hosted API (e.g., http://vllm.internal.url:8000/).
    • In Model Settings > Model mappings, configure the Base and Large models with your preferred ones (e.g., meta-llama/Meta-Llama-3-8B-Instruct).
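
To verify that your self-hosted endpoint and model mapping behave as expected, you can exercise the same OpenAI-compatible API the plugin will call. The following is a minimal sketch with the openai Python client, assuming the example vLLM URL and Llama 3 model from the steps above; most OpenAI-compatible servers expose the API under a /v1 path, so adjust if yours differs:

```python
from openai import OpenAI

# Point the client at the self-hosted, OpenAI-compatible server.
client = OpenAI(
    base_url="http://vllm.internal.url:8000/v1",
    api_key="unused",  # many self-hosted servers accept any non-empty key
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # the model mapped as Base or Large
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.choices[0].message.content)
```

If this request returns a completion, the plugin should be able to reach the same endpoint with the same model names.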