CompletionGeneration
A CompletionGeneration contains all of the data sent to a completion-based LLM (like Llama2), as well as the response from the LLM.
It is designed to be passed to a Step to enable the Prompt Playground.
Attributes
template (str): The template of the prompt.
formatted (str): The formatted version of the template, with the inputs substituted in.
completion (str): The completion returned by the LLM.
template_format (str): The format of the template. Defaults to "f-string".
inputs (Dict[str, str]): The inputs (variables) to the prompt, represented as a dictionary.
provider (str): The provider of the LLM, such as "openai".
settings (Dict[str, Any]): The settings the LLM provider was called with.
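For instance, a fully populated CompletionGeneration could be constructed as follows (a minimal sketch; all values are illustrative, and template_format is passed explicitly even though "f-string" is the default):

import chainlit as cl

generation = cl.CompletionGeneration(
    provider="openai",
    template="Translate to French: {text}",
    formatted="Translate to French: Hello",
    inputs={"text": "Hello"},
    completion="Bonjour",
    template_format="f-string",  # the default, shown here for completeness
    settings={"model": "gpt-3.5-turbo", "temperature": 0},
)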
Example
import os
from chainlit.playground.providers import OpenAI
import chainlit as cl
# If no OPENAI_API_KEY is available, the OpenAI provider won't be available in the prompt playground
os.environ["OPENAI_API_KEY"] = "sk-..."
template = """Hello, this is a template.
This is a variable1 {variable1}
And this is variable2 {variable2}
"""
inputs = {
    "variable1": "variable1 value",
    "variable2": "variable2 value",
}

settings = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    # ... more settings
}
@cl.step(type="llm")
async def fake_llm():
    # Pretend we're calling the LLM and that this is its response
    await cl.sleep(2)
    fake_completion = "The openai completion"

    formatted_template = template.format(**inputs)

    # Add the generation to the current step
    cl.context.current_step.generation = cl.CompletionGeneration(
        provider=OpenAI.id,
        completion=fake_completion,
        inputs=inputs,
        settings=settings,
        template=template,
        formatted=formatted_template,
    )

    return fake_completion

@cl.on_chat_start
async def start():
    await fake_llm()
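To swap the fake call for a real one, a sketch along these lines should work. It assumes the openai>=1.0 Python client and a completion-style model such as gpt-3.5-turbo-instruct, neither of which is mandated by Chainlit; adapt the model and settings to your provider:

from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

@cl.step(type="llm")
async def real_llm():
    formatted_template = template.format(**inputs)
    # CompletionGeneration models a completion-style (non-chat) call,
    # so a completions endpoint and model are used here
    response = await client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=formatted_template,
        temperature=0,
    )
    completion = response.choices[0].text
    cl.context.current_step.generation = cl.CompletionGeneration(
        provider=OpenAI.id,
        completion=completion,
        inputs=inputs,
        settings={"model": "gpt-3.5-turbo-instruct", "temperature": 0},
        template=template,
        formatted=formatted_template,
    )
    return completion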
[Screenshot: a completion generation displayed in the Prompt Playground]