feat: current usage #821
base: main
Changes from all commits
```diff
@@ -26,6 +26,11 @@
 from ..config.config import Config, ConfigLifecycleHandler
 from .base import Agent
 
+# Current process shared
+TOTAL_PROMPT_TOKENS = 0
+TOTAL_COMPLETION_TOKENS = 0
+TOKEN_LOCK = asyncio.Lock()
+
 
 class LLMAgent(Agent):
     """
```
```diff
@@ -467,9 +472,26 @@ async def step(
         messages = await self.parallel_tool_call(messages)
 
         await self.after_tool_call(messages)
 
+        # usage
+        prompt_tokens = _response_message.prompt_tokens
+        completion_tokens = _response_message.completion_tokens
+
+        global TOTAL_PROMPT_TOKENS, TOTAL_COMPLETION_TOKENS, TOKEN_LOCK
+        async with TOKEN_LOCK:
+            TOTAL_PROMPT_TOKENS += prompt_tokens
+            TOTAL_COMPLETION_TOKENS += completion_tokens
+
+        # tokens in the current step
+        self.log_output(
+            f'[usage] prompt_tokens: {prompt_tokens}, completion_tokens: {completion_tokens}'
+        )
+        # total tokens for the process so far
         self.log_output(
-            f'[usage] prompt_tokens: {_response_message.prompt_tokens}, '
-            f'completion_tokens: {_response_message.completion_tokens}')
+            f'[usage_total] total_prompt_tokens: {TOTAL_PROMPT_TOKENS}, '
+            f'total_completion_tokens: {TOTAL_COMPLETION_TOKENS}')
```
Comment on lines +478 to +494

Following my suggestion to move the tracking variables to be class attributes of `LLMAgent`:

```python
prompt_tokens = _response_message.prompt_tokens
completion_tokens = _response_message.completion_tokens

# use global accumulation
async with LLMAgent.TOKEN_LOCK:
    LLMAgent.TOTAL_PROMPT_TOKENS += prompt_tokens
    LLMAgent.TOTAL_COMPLETION_TOKENS += completion_tokens

# tokens in the current step
self.log_output(
    f'[usage] prompt_tokens: {prompt_tokens}, completion_tokens: {completion_tokens}'
)
# total tokens for the process so far
self.log_output(
    f'[usage_total] total_prompt_tokens: {LLMAgent.TOTAL_PROMPT_TOKENS}, '
    f'total_completion_tokens: {LLMAgent.TOTAL_COMPLETION_TOKENS}')
```
```diff
 
         yield messages
 
     def prepare_llm(self):
```
```diff
@@ -132,6 +132,10 @@ def _call_llm(self,
         """
         messages = self._format_input_message(messages)
 
+        if kwargs.get('stream', False) and self.args.get(
+                'stream_options', {}).get('include_usage', True):
+            kwargs.setdefault('stream_options', {})['include_usage'] = True
+
```
Comment on lines +135 to +137

This conditional logic is a bit dense and could be hard to parse. For better readability, you could break it down into a few lines with intermediate variables and a comment explaining the intent.
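The reviewer's suggested-change block is not preserved in this capture. A rough sketch of the kind of breakdown described, equivalent to the one-liner in the diff and using illustrative variable names only, might look like:

```python
# Sketch only: same logic as the condition in the diff, unpacked for readability.
is_streaming = kwargs.get('stream', False)
# `include_usage` defaults to True unless the caller explicitly disabled it.
usage_requested = self.args.get('stream_options', {}).get('include_usage', True)

if is_streaming and usage_requested:
    # Ask the API to attach token usage to the streamed response.
    kwargs.setdefault('stream_options', {})['include_usage'] = True
```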
```diff
 
         return self.client.chat.completions.create(
             model=self.model, messages=messages, tools=tools, **kwargs)
```
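For context on why the `include_usage` flag matters here: with an OpenAI-compatible streaming API, `stream_options={'include_usage': True}` makes the stream emit a final chunk whose `usage` field carries the token counts. A minimal sketch of reading it, assuming a plain OpenAI client rather than this repository's wrapper (the model name and client setup below are placeholders):

```python
# Sketch only: assumes an OpenAI-compatible client; not this repo's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
stream = client.chat.completions.create(
    model='gpt-4o-mini',  # illustrative model name
    messages=[{'role': 'user', 'content': 'hello'}],
    stream=True,
    stream_options={'include_usage': True},
)

prompt_tokens = completion_tokens = 0
for chunk in stream:
    # Content chunks carry usage=None; the final chunk holds the totals.
    if chunk.usage is not None:
        prompt_tokens = chunk.usage.prompt_tokens
        completion_tokens = chunk.usage.completion_tokens
```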
Using global variables for process-wide state can lead to code that is hard to test and maintain. It's better to encapsulate this state within the `LLMAgent` class itself as class attributes. This clearly associates the state with the agent and avoids polluting the global namespace. For example, you could define them inside `LLMAgent` like this:
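The code block that followed this comment is not preserved in this capture. A minimal sketch of the class-attribute layout the reviewer describes, reusing the names from the diff above, could be:

```python
# Sketch only: illustrates the reviewer's class-attribute suggestion.
import asyncio


class LLMAgent(Agent):  # Agent base class as imported in the existing module
    # Process-wide usage counters, shared by every LLMAgent instance
    # in the current process.
    TOTAL_PROMPT_TOKENS = 0
    TOTAL_COMPLETION_TOKENS = 0
    TOKEN_LOCK = asyncio.Lock()
```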