feat: Adds a --json flag to the prompt command. #1325
+93 −10
Pull Request: Add `--json` flag to `llm prompt`

Summary

Adds a `--json` flag to the `llm prompt` command. This outputs the response, metadata, and `conversation_id` as a JSON array (consistent with `llm logs --json`), enabling programmatic use and filtering with tools like `jq`.

Changes
- Modified `llm/cli.py` to support the `--json` flag.
- Disallowed combining `--json` with `--no-log`, as logging is required for JSON output construction.
- Updated `docs/usage.md`.
- Updated `docs/changelog.md` for version 0.28.
- Added `tests/test_prompt_json.py` with targeted test cases.

Manual Testing Results
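The PR's actual implementation is not shown on this page; a minimal sketch of the conflict check and output construction it describes might look like this (`output_json` and its signature are hypothetical, not the real code in `llm/cli.py`):

```python
import json


def output_json(row, no_log=False):
    # Hypothetical helper: the --json path reads back the logged response
    # row, so it cannot be combined with --no-log.
    if no_log:
        raise SystemExit(
            "Error: Cannot use --json with --no-log as the response "
            "must be logged to output JSON"
        )
    # Emit a one-element JSON array, matching the shape of `llm logs --json`.
    return json.dumps([row], indent=2)


print(output_json({"response": "Hello!", "conversation_id": "abc123"}))
```

Returning a one-element array rather than a bare object keeps the output pipeline-compatible with `llm logs --json`, so the same `jq` filters work on both.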
1. Basic JSON Output
uv run llm prompt "say hello" --json
[
{
"id": "01kcw9rcbknhkmv95dsbq8dnw8",
"model": "kimi-k2-instruct-0905-fp8",
"resolved_model": null,
"prompt": "say hello",
"system": null,
"prompt_json": {
"messages": [
{
"role": "user",
"content": "say hello"
}
]
},
"options_json": {},
"response": "Hello!",
"response_json": {
"content": {
"$": "r:01kcw9rcbknhkmv95dsbq8dnw8"
},
"finish_reason": "stop",
"usage": {
"completion_tokens": 3,
"prompt_tokens": 28,
"total_tokens": 31,
"reasoning_tokens": 0
},
"id": "a67d97db3aff4b0f8ad9cecadf782399",
"object": "chat.completion.chunk",
"model": "moonshotai/Kimi-K2-Instruct-0905",
"created": 1766181320
},
"conversation_id": "01kcw9re59ejhjpg7w32881zy2",
"duration_ms": 1845,
"datetime_utc": "2025-12-19T21:55:19.027141+00:00",
"input_tokens": 28,
"output_tokens": 3,
"token_details": null,
"conversation_name": "say hello",
"conversation_model": "kimi-k2-instruct-0905-fp8",
"schema_json": null,
"prompt_fragments": [],
"system_fragments": [],
"tools": [],
"tool_calls": [],
"tool_results": [],
"attachments": []
}
]
2. Extracting Conversation ID (Filtering with jq)
Result:
01kcw9re59ejhjpg7w32881zy2

3. Error Case: Conflict with --no-log
uv run llm prompt "say hello" --json --no-log
Error: Cannot use --json with --no-log as the response must be logged to output JSON
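The exact `jq` invocation for step 2 is not shown above; one plausible filter is `.[0].conversation_id`. The same extraction in Python, for environments without `jq` (a sketch, using sample data shaped like the output in step 1):

```python
import json

# Stand-in for captured `llm prompt "..." --json` output (shape from step 1).
raw = '[{"response": "Hello!", "conversation_id": "01kcw9re59ejhjpg7w32881zy2"}]'

rows = json.loads(raw)
print(rows[0]["conversation_id"])  # -> 01kcw9re59ejhjpg7w32881zy2
```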
Automated Tests
Targeted tests ensure the `--json` flag works across success and error conditions.

Test Descriptions:
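The contents of `tests/test_prompt_json.py` are collapsed in this view; one targeted test for the error case might be sketched like this (the `check_flags` helper is hypothetical, standing in for whatever validation the PR adds to `llm/cli.py`):

```python
def check_flags(json_output, no_log):
    # Hypothetical stand-in for the PR's flag-validation logic.
    if json_output and no_log:
        raise SystemExit(
            "Error: Cannot use --json with --no-log as the response "
            "must be logged to output JSON"
        )


def test_json_conflicts_with_no_log():
    try:
        check_flags(json_output=True, no_log=True)
    except SystemExit as ex:
        assert "--no-log" in str(ex)
    else:
        raise AssertionError("expected SystemExit")


test_json_conflicts_with_no_log()
```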
Verbose Test Output:
Files Modified
PR Plan Completion Check