How to use prompt management with litellm's integration with langfuse? #3780
-
For prompt management with OpenAI, Langfuse has this tutorial: https://langfuse.com/docs/prompts/get-started. In it, they use something called [...]. I also tried the LiteLLM proxy, but the proxy approach doesn't work either.
Replies: 9 comments 9 replies
-
To integrate Langfuse's prompt management with LiteLLM, you can use the LiteLLM Python SDK and send logs to Langfuse by setting the appropriate environment variables and callbacks. Here's an example of how you can configure this in your Python code:

import os
import litellm
from litellm import completion

# Set environment variables
os.environ["LANGFUSE_PUBLIC_KEY"] = "<your_public_key>"
os.environ["LANGFUSE_SECRET_KEY"] = "<your_secret_key>"
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # or "https://us.cloud.langfuse.com" for the US region
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"
os.environ["COHERE_API_KEY"] = "<your_cohere_api_key>"

# Set callbacks so successful and failed calls are logged to Langfuse
litellm.success_callback = ["langfuse"]
litellm.failure_callback = ["langfuse"]

# Example usage
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - I'm openai"}
    ]
)
print(response)

Additionally, you can use the LiteLLM Proxy with the Langfuse OpenAI SDK wrapper for a more comprehensive integration. This setup automatically captures token counts, latencies, streaming response times, API errors, and more. You can also configure the LiteLLM Proxy UI to send logs directly to Langfuse by setting the callback to Langfuse in the UI [1][2][3]. For more advanced usage, such as setting custom trace IDs, trace user IDs, and tags, you can use the metadata parameter of the completion call, as sketched below.
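For that advanced usage, here is a minimal sketch of passing trace-level attributes through metadata (key names follow the LiteLLM Langfuse docs; the values are placeholders):

# Assumes the environment variables and Langfuse callbacks from the snippet above are already set
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
    metadata={
        "trace_user_id": "user-123",      # associate the trace with a user
        "session_id": "session-abc",      # group related traces into a session
        "tags": ["prompt-management"],    # searchable tags in Langfuse
    },
)
print(response)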
-
To clarify, you want to link traces produced by the LiteLLM Python SDK to prompts within Langfuse prompt management by passing the langfuse_prompt attribute?
-
Yes. I want to track prompts and traces for LiteLLM.
-
The LiteLLM docs do suggest there is a prompt metadata parameter that can be passed in, but it doesn't seem to do anything to link the generation to the prompt: https://litellm.vercel.app/docs/observability/langfuse_integration (see "Generation Specific Parameters").
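For reference, the generation-specific parameters from that docs page are also passed through the metadata dict; a minimal sketch (key names as documented there, values are placeholders):

# Assumes the Langfuse callbacks and API keys are configured as in the earlier snippet
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi"}],
    metadata={
        "generation_name": "summarize-article",  # name shown on the generation in Langfuse
        "generation_id": "gen-123",              # custom generation id
    },
)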
-
@GildeshAbhay @analyst3dev Could you please try the following with the latest versions of the LiteLLM and Langfuse Python SDKs?

from langfuse import Langfuse
from litellm import completion

langfuse_prompt = Langfuse().get_prompt('my_prompt')

response = completion(
    model="gpt-3.5-turbo",
    messages=langfuse_prompt.prompt,
    metadata={
        "prompt": langfuse_prompt
    },
)
print(response)

It looks like LiteLLM is looking in the metadata for the Langfuse prompt: https://github.com/BerriAI/litellm/blob/134bd2cebb6b62c2dd5f314bdd6e8270b78ca332/litellm/integrations/langfuse/langfuse.py#L693
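If the prompt contains variables, it presumably needs to be compiled before being passed as messages; a minimal sketch, assuming a chat prompt named 'my_prompt' with a {{topic}} variable (both names are hypothetical):

from langfuse import Langfuse
from litellm import completion

langfuse_prompt = Langfuse().get_prompt("my_prompt")

# compile() fills in prompt variables; for a chat prompt it returns a list of
# {"role": ..., "content": ...} messages
compiled_messages = langfuse_prompt.compile(topic="prompt management")

response = completion(
    model="gpt-3.5-turbo",
    messages=compiled_messages,
    metadata={"prompt": langfuse_prompt},  # lets LiteLLM link the generation to the prompt
)
print(response)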
-
I'm having the exact same problem.
-
I don't understand having to pass the prompt as a message and the prompt name as metadata... what is the point of the prompt management then?
-
As it seems this is a LiteLLM error, I've made a bug ticket there, which should solve this problem. They do a version check on Langfuse > 2.73, but it seems they confused the version of the main repo (already at 2.86) with the Python SDK repo (at 2.53.3).
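To see which version is actually being compared locally, a quick check against the installed packages (standard library only):

from importlib.metadata import version

# The relevant number is the installed Langfuse *Python SDK* version,
# not the version of the main Langfuse repository
print(version("langfuse"))  # e.g. "2.53.3"
print(version("litellm"))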
-
I am using the LiteLLM Proxy. I've tried langfuse_prompt, prompt, and langfusePrompt, passing ChatPromptClient.dict in, but it seems it is still not adding the trace to the prompt.
I was actually wrong, I misread the versions. The LiteLLM implementation expects a dict, not a ChatPromptClient. If you pass ChatPromptClient.__dict__ instead, the prompt tracking works: BerriAI/litellm#6545 (comment)
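Putting that together, a minimal sketch of the workaround via the LiteLLM Python SDK (the prompt name is a placeholder; when going through the LiteLLM Proxy, the same metadata dict would be sent in the request body instead):

from langfuse import Langfuse
from litellm import completion

langfuse_prompt = Langfuse().get_prompt("my_prompt")  # returns a ChatPromptClient for chat prompts

response = completion(
    model="gpt-3.5-turbo",
    messages=langfuse_prompt.prompt,
    metadata={
        # Workaround from the linked issue comment: pass the prompt client's
        # attribute dict rather than the ChatPromptClient object itself
        "prompt": langfuse_prompt.__dict__,
    },
)
print(response)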