Langchain Callback Handler
The following code example demonstrates how to pass a callback handler:
```python
import chainlit as cl
from langchain.llms import OpenAI
from langchain.chains import LLMMathChain

llm = OpenAI(temperature=0)
llm_math = LLMMathChain.from_llm(llm=llm)


@cl.on_message
async def main(message: cl.Message):
    # Pass the Chainlit handler so the chain's intermediate steps are shown in the UI
    res = await llm_math.acall(message.content, callbacks=[cl.LangchainCallbackHandler()])
    await cl.Message(content=res["answer"]).send()
```
Final Answer streaming
If streaming is enabled at the LLM level, LangChain will only stream the intermediate steps. You can enable final answer streaming by passing `stream_final_answer=True` to the callback handler.
```python
# Optionally, you can also pass the prefix tokens that will be used to identify the final answer
answer_prefix_tokens = ["FINAL", "ANSWER"]

cl.LangchainCallbackHandler(
    stream_final_answer=True,
    answer_prefix_tokens=answer_prefix_tokens,
)
```
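As a rough sketch (reusing the OpenAI/LLMMathChain setup from the first example, which you would adapt to your own chain), the configured handler can then be passed to a chain whose LLM has streaming enabled:

```python
import chainlit as cl
from langchain.llms import OpenAI
from langchain.chains import LLMMathChain


@cl.on_message
async def main(message: cl.Message):
    # streaming=True enables token streaming at the LLM level
    llm = OpenAI(temperature=0, streaming=True)
    llm_math = LLMMathChain.from_llm(llm=llm)

    # Stream the final answer to the UI once the prefix tokens are detected
    handler = cl.LangchainCallbackHandler(
        stream_final_answer=True,
        answer_prefix_tokens=["FINAL", "ANSWER"],
    )

    res = await llm_math.acall(message.content, callbacks=[handler])
    await cl.Message(content=res["answer"]).send()
```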
Final answer streaming will only work with prompts that have a consistent final answer pattern. It will also not work with `AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION`.