add reasoning param for openai responses LLM #4548
Conversation
📝 Walkthrough
This change adds support for OpenAI's reasoning parameter to the responses LLM class. It introduces a new `reasoning` option on `_LLMOptions`, exposes it as a constructor parameter, and forwards it into the request kwargs sent to the Responses API.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py`, around lines 77-81: the current branch in the reasoning default logic overrides OpenAI's defaults. When `reasoning` is not provided, do not force `"minimal"` for pre-5.1 models. Instead, set `Reasoning(effort="none")` only for models that must default to none (e.g. `"gpt-5.1"`), and otherwise leave `reasoning` as `None`/omitted so the API uses its documented default (or explicitly set `"medium"` only if you intentionally want to override). Update the conditional around `model` and `_supports_reasoning_effort` to reflect this and remove the forced `"minimal"` assignment.
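A minimal sketch of the branch the reviewer is asking for, written as a standalone helper for illustration. The helper name `_resolve_default_reasoning` is hypothetical, the `Reasoning` import path is assumed, and plain `None` stands in for the plugin's `is_given`/NOT_GIVEN sentinel pattern; the merged code may differ:

```python
from typing import Optional

from openai.types.shared_params import Reasoning  # import path assumed


def _resolve_default_reasoning(
    model: str,
    reasoning: Optional[Reasoning],
    supports_reasoning_effort: bool,
) -> Optional[Reasoning]:
    """Hypothetical helper illustrating the suggested default logic."""
    if reasoning is not None:
        # Caller supplied an explicit reasoning config: pass it through.
        return reasoning
    if supports_reasoning_effort and model == "gpt-5.1":
        # gpt-5.1 must default to effort="none".
        return Reasoning(effort="none")
    # Otherwise leave the field unset so the API applies its documented
    # default (set Reasoning(effort="medium") here only as a deliberate override).
    return None
```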
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
**/*.py:
- Format code with ruff
- Run ruff linter and auto-fix issues
- Run mypy type checker in strict mode
- Maintain line length of 100 characters maximum
- Ensure Python 3.9+ compatibility
- Use Google-style docstrings
Files:
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py
🧬 Code graph analysis (1)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py (2)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/models.py (1)
_supports_reasoning_effort (294-301)
livekit-agents/livekit/agents/utils/misc.py (1)
is_given (25-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: livekit-plugins-inworld
- GitHub Check: livekit-plugins-cartesia
- GitHub Check: unit-tests
- GitHub Check: livekit-plugins-groq
- GitHub Check: livekit-plugins-deepgram
- GitHub Check: livekit-plugins-elevenlabs
- GitHub Check: livekit-plugins-openai
- GitHub Check: type-check (3.13)
- GitHub Check: type-check (3.9)
🔇 Additional comments (5)
livekit-plugins/livekit-plugins-openai/livekit/plugins/openai/responses/llm.py (5)
23-23: Imports for Reasoning and model capability helper look appropriate. Also applies to: 37-37.
48-48: _LLMOptions now cleanly models the new reasoning option.
63-63: Constructor surface updated consistently.
91-91: Reasoning option is propagated into _LLMOptions as expected.
135-136: Reasoning is forwarded into request kwargs cleanly.
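A hedged sketch of the flow these comments describe, with names taken from the review (`_LLMOptions`, `reasoning`) and the rest assumed; the real code uses the plugin's `is_given`/NOT_GIVEN sentinel pattern referenced in the code graph rather than plain `None`:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Optional

from openai.types.shared_params import Reasoning  # import path assumed


@dataclass
class _LLMOptions:
    model: str
    reasoning: Optional[Reasoning] = None  # new option (line 48)


def _build_request_kwargs(opts: _LLMOptions) -> dict[str, Any]:
    # Forward the reasoning option only when it was set (lines 135-136),
    # so the request payload omits the field otherwise.
    kwargs: dict[str, Any] = {"model": opts.model}
    if opts.reasoning is not None:
        kwargs["reasoning"] = opts.reasoning
    return kwargs
```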
davidzhao
left a comment
lgtm
to use:
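A minimal usage sketch, assuming the class is exposed as `livekit.plugins.openai.responses.LLM` (inferred from the file path) and that the new `reasoning` parameter accepts the OpenAI SDK's `Reasoning` type; the merged signature may differ:

```python
from livekit.plugins import openai
from openai.types.shared_params import Reasoning  # import path assumed

# Pass a reasoning config through to the Responses API.
llm = openai.responses.LLM(
    model="gpt-5.1",
    reasoning=Reasoning(effort="low"),
)
```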
Summary by CodeRabbit