LLM Providers
LLM Providers let you configure the Prompt Playground to work with any LLM provider.
Provider library
We currently offer implementations for OpenAI, AzureOpenAI, and Anthropic. This lets you get started quickly without having to re-implement the connection to these LLM providers.
The HuggingFace provider example is available to help you customize it to the HuggingFace-hosted model of your choice.
OpenAI
Required environment variables:
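The exact variable list lives in the Chainlit docs; as a minimal sketch, assuming the standard OpenAI SDK name `OPENAI_API_KEY` is what the provider checks:

```python
import os

# Chainlit only lists providers whose credentials are configured.
# OPENAI_API_KEY is the standard OpenAI SDK variable name (assumed here to be
# the one the provider checks).
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

openai_configured = "OPENAI_API_KEY" in os.environ
print(openai_configured)  # True
```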
AzureOpenAI
Required environment variables:
Optional environment variables:
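A sketch of checking required versus optional variables; the names below follow common Azure OpenAI conventions and are assumptions, not a confirmed Chainlit list:

```python
import os

# Illustrative only: these variable names follow common Azure OpenAI
# conventions and are assumptions, not a confirmed Chainlit list.
required = ["AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"]
optional = ["AZURE_OPENAI_API_VERSION"]

for name in required:
    os.environ.setdefault(name, "placeholder")

missing = [name for name in required if name not in os.environ]
print(missing)  # []
```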
Anthropic
Required environment variables:
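For Anthropic, a minimal check, assuming the standard Anthropic SDK variable name `ANTHROPIC_API_KEY`:

```python
import os

# ANTHROPIC_API_KEY is the standard Anthropic SDK variable name (assumed here).
os.environ.setdefault("ANTHROPIC_API_KEY", "placeholder")

anthropic_configured = bool(os.environ.get("ANTHROPIC_API_KEY"))
print(anthropic_configured)  # True
```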
Custom provider
You can define your own provider to connect to the LLM provider of your choice. Below is an example of what that looks like.
Step 1: Define the LLM Provider
The create_completion method is required; it is where you should place the logic that communicates with your LLM provider.
The format_message and message_to_string functions are encouraged. Defining them allows the provider to work with prompts defined either as a single string or as a list of messages.
The environment variables are required to auto-detect which provider is configured. Chainlit only shows configured LLM providers in the prompt playground.
The inputs field is a list of controls offered by this LLM provider, which Chainlit displays in the side panel of the Prompt Playground.
The is_chat property is a toggle that defines whether the LLM provider is fed a list of messages (like OpenAI’s gpt-4 model) or a single text string (like Anthropic’s claude-2 model).
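Putting Step 1 together, here is a self-contained sketch of the provider shape described above. The real base class, import path, and signatures live in Chainlit; every name below (class, fields, methods) is illustrative, not Chainlit’s actual API:

```python
import os
from dataclasses import dataclass, field
from typing import Any, Dict, List, Union


@dataclass
class ExampleProvider:
    """Illustrative provider; names mirror the concepts above but are not
    Chainlit's actual API."""

    id: str = "example"
    name: str = "Example"
    # Environment variables used to auto-detect that the provider is configured.
    env_vars: Dict[str, str] = field(
        default_factory=lambda: {"api_key": "EXAMPLE_API_KEY"}
    )
    # Controls rendered in the playground's side panel.
    inputs: List[Dict[str, Any]] = field(
        default_factory=lambda: [
            {"id": "temperature", "type": "slider", "min": 0.0, "max": 1.0}
        ]
    )
    # True: the model consumes a list of messages; False: a single string.
    is_chat: bool = True

    def is_configured(self) -> bool:
        return all(var in os.environ for var in self.env_vars.values())

    # Encouraged: adapt a generic message to this provider's message shape.
    def format_message(self, message: Dict[str, str]) -> Dict[str, str]:
        return {"role": message["role"], "content": message["content"]}

    # Encouraged: flatten one message when the model expects a plain string.
    def message_to_string(self, message: Dict[str, str]) -> str:
        return f"{message['role']}: {message['content']}"

    # Required: this is where the call to the actual LLM API would go.
    def create_completion(
        self, messages: List[Dict[str, str]]
    ) -> Union[List[Dict[str, str]], str]:
        if self.is_chat:
            payload = [self.format_message(m) for m in messages]
        else:
            payload = "\n".join(self.message_to_string(m) for m in messages)
        # A real provider would send `payload` to its API; we just return it.
        return payload


provider = ExampleProvider(is_chat=False)
print(provider.create_completion([{"role": "user", "content": "Hi"}]))
# user: Hi
```

Note how `is_chat` picks between the two prompt representations: the same provider can serve both message-list and single-string models by routing through `format_message` or `message_to_string`.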
Step 2: Register the LLM Provider
Once you have defined the provider, you need to tell Chainlit that it exists.
Langchain Provider
Adding an LLM Provider from a Langchain LLM class is straightforward.
Langchain Providers won’t have editable settings in the Prompt Playground. If you need the settings, you should extend the BaseProvider class and create your own provider.
Here is an example with HuggingFaceHub (works the same for other LLMs).
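The original example wraps a Langchain `HuggingFaceHub` LLM in Chainlit’s generic Langchain provider; the exact class names and import paths depend on your Chainlit and Langchain versions, so this self-contained sketch shows only the wrapping pattern, with a stub callable standing in for the Langchain LLM:

```python
from typing import Callable, List


class LangchainLikeProvider:
    """Wraps any prompt -> text callable as a provider (illustrative; not
    Chainlit's actual Langchain provider class)."""

    def __init__(
        self,
        id: str,
        name: str,
        llm: Callable[[str], str],
        is_chat: bool = False,
    ) -> None:
        self.id = id
        self.name = name
        self.llm = llm
        self.is_chat = is_chat  # Settings are not editable, as noted above.

    def create_completion(self, messages: List[dict]) -> str:
        prompt = "\n".join(m["content"] for m in messages)
        return self.llm(prompt)


# Stub standing in for a Langchain LLM such as HuggingFaceHub.
def hub_llm(prompt: str) -> str:
    return f"echo: {prompt}"


provider = LangchainLikeProvider(
    id="huggingface-hub", name="HuggingFaceHub", llm=hub_llm
)
print(provider.create_completion([{"role": "user", "content": "Hello"}]))
# echo: Hello
```

Swapping `hub_llm` for a real Langchain LLM instance is the only change needed, which is why the same pattern works for other Langchain LLMs.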