
Grafana LLM app (experimental)

This Grafana application plugin centralizes access to LLMs across Grafana.

It is responsible for:

  • storing API keys for LLM providers
  • proxying requests to LLMs with auth, so that other Grafana components need not store API keys (see the sketch after this list)
  • providing Grafana Live streams of streaming responses from LLM providers (namely OpenAI)
  • providing LLM based extensions to Grafana's extension points (e.g. 'explain this panel')
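
As a concrete illustration of the proxying, the sketch below sends a chat completion request through the plugin's backend rather than directly to the provider, so no API key ever appears in the frontend. The resource path and helper name are assumptions for illustration and may differ between plugin versions; the @grafana/experimental helpers described below wrap this for you.

import { getBackendSrv } from '@grafana/runtime';

// Hypothetical direct call to the plugin's resource proxy. The plugin
// attaches the stored provider API key server-side; the exact resource
// path used here is an assumption and may change between versions.
async function chatViaPlugin(message: string): Promise<unknown> {
  return getBackendSrv().post(
    '/api/plugins/grafana-llm-app/resources/openai/v1/chat/completions',
    {
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: message }],
    }
  );
}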

Future functionality will include:

  • support for multiple LLM providers, including the ability to choose your own at runtime
  • rate limiting of requests to LLMs, for cost control
  • token and cost estimation
  • RBAC to only allow certain users to use LLM functionality

Note: This plugin is experimental and may change significantly between versions, or be deprecated entirely in favor of a different approach based on user feedback.

For users

Install and configure this plugin to enable various LLM-related functionality across Grafana. This includes new functionality inside Grafana itself, such as explaining panels, as well as in plugins, such as natural language query editors.

All LLM requests are routed through this plugin, which ensures that the correct API key is used and that requests are rate limited appropriately.

For plugin developers

This plugin is not designed to be interacted with directly; instead, use the convenience functions in the @grafana/experimental package, which will communicate with this plugin if it is installed.

First, add the correct version of @grafana/experimental to your dependencies in package.json:

{
  "dependencies": {
    "@grafana/experimental": "1.7.0"
  }
}

Then, in your components, you can use the llms object from @grafana/experimental like so:

import React, { useState } from 'react';
import { useAsync } from 'react-use';
import { scan } from 'rxjs/operators';

import { llms } from '@grafana/experimental';
import { PluginPage } from '@grafana/runtime';

import { Button, Input, Spinner } from '@grafana/ui';

const MyComponent = (): JSX.Element | null => {
  const [input, setInput] = useState('');
  const [message, setMessage] = useState('');
  const [reply, setReply] = useState('');

  const { loading, error } = useAsync(async () => {
    // Check that the LLM plugin is enabled and configured.
    const enabled = await llms.openai.enabled();
    if (!enabled) {
      return false;
    }
    if (message === '') {
      return;
    }
    // Stream the completions. Each element is the next stream chunk.
    const stream = llms.openai
      .streamChatCompletions({
        model: 'gpt-3.5-turbo',
        messages: [
          { role: 'system', content: 'You are a cynical assistant.' },
          { role: 'user', content: message },
        ],
      })
      .pipe(
        // Accumulate the stream chunks into a single string.
        scan((acc, delta) => acc + delta, '')
      );
    // Subscribe to the stream and update the state for each returned value.
    return stream.subscribe(setReply);
  }, [message]);

  if (error) {
    // TODO: handle errors.
    return null;
  }

  return (
    <div>
      <Input
        value={input}
        onChange={(e) => setInput(e.currentTarget.value)}
        placeholder="Enter a message"
      />
      <br />
      <Button type="submit" onClick={() => setMessage(input)}>
        Submit
      </Button>
      <br />
      <div>{loading ? <Spinner /> : reply}</div>
    </div>
  );
};
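
If you do not need streaming, a one-shot request/response call can be simpler. The following is a minimal sketch assuming a non-streaming llms.openai.chatCompletions helper that returns an OpenAI-style response; both the helper and the response shape are assumptions based on the streaming API above, so check the @grafana/experimental exports for your version.

import { llms } from '@grafana/experimental';

// A minimal sketch of a one-shot (non-streaming) completion.
// `llms.openai.chatCompletions` and the response shape used here are
// assumptions; verify against your version of @grafana/experimental.
async function explain(text: string): Promise<string> {
  const enabled = await llms.openai.enabled();
  if (!enabled) {
    throw new Error('LLM plugin is not enabled');
  }
  const response = await llms.openai.chatCompletions({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: text },
    ],
  });
  // OpenAI-style responses carry the reply in choices[0].message.content.
  return response.choices[0].message.content;
}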

Installing LLM on Grafana Cloud

For more information, visit the docs on plugin installation.

Changelog

0.2.1

  • Change path handling for chat completions streams so that separate requests go into separate streams. Requests can now pass a UUID as the path suffix; this remains backwards compatible with older versions of the frontend code.

0.2.0

  • Expose a vector search API to perform semantic search against a vector database using a configurable embeddings source

0.1.0

  • Support proxying LLM requests from Grafana to OpenAI