
Conversation

ShellLM commented on Dec 19, 2025

Pull Request: Add --json flag to llm prompt

Summary

Adds a --json flag to the llm prompt command. This outputs the response, metadata, and conversation_id as a JSON array (consistent with llm logs --json), enabling programmatic use and filtering with tools like jq.

Changes

  • CLI Implementation: Updated llm/cli.py to support the --json flag.
  • Validation: Added logic to prevent combining --json with --no-log, since the JSON output is built from the logged response (see the sketch after this list).
  • Documentation:
    • Added "Output as JSON" section to docs/usage.md.
    • Added changelog entry in docs/changelog.md for version 0.28.
  • Tests: Created tests/test_prompt_json.py with targeted test cases.
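
To make the validation concrete, here is a minimal, standalone Click sketch of the check. The option wiring, parameter names, and stub body are illustrative assumptions rather than the actual diff to llm/cli.py; only the error text matches what the command reports.

# Standalone sketch of the --json / --no-log check (assumed Click wiring;
# names and the stub body are illustrative, not the real llm/cli.py diff).
import click

@click.command()
@click.argument("prompt_text")
@click.option("--json", "json_output", is_flag=True,
              help="Output the logged response as a JSON array")
@click.option("--no-log", "no_log", is_flag=True,
              help="Don't log the prompt and response")
def prompt(prompt_text, json_output, no_log):
    if json_output and no_log:
        raise click.ClickException(
            "Cannot use --json with --no-log as the response "
            "must be logged to output JSON"
        )
    # ... run the prompt, log it, and (with --json) emit the logged row
    # as a JSON array, matching the llm logs --json format

if __name__ == "__main__":
    prompt()

Raising the usage error before any model call keeps the conflict cheap to detect and preserves the invariant that every --json response has a logged row to serialize.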

Manual Testing Results

1. Basic JSON Output

uv run llm prompt "say hello" --json

[
  {
    "id": "01kcw9rcbknhkmv95dsbq8dnw8",
    "model": "kimi-k2-instruct-0905-fp8",
    "resolved_model": null,
    "prompt": "say hello",
    "system": null,
    "prompt_json": {
      "messages": [
        {
          "role": "user",
          "content": "say hello"
        }
      ]
    },
    "options_json": {},
    "response": "Hello!",
    "response_json": {
      "content": {
        "$": "r:01kcw9rcbknhkmv95dsbq8dnw8"
      },
      "finish_reason": "stop",
      "usage": {
        "completion_tokens": 3,
        "prompt_tokens": 28,
        "total_tokens": 31,
        "reasoning_tokens": 0
      },
      "id": "a67d97db3aff4b0f8ad9cecadf782399",
      "object": "chat.completion.chunk",
      "model": "moonshotai/Kimi-K2-Instruct-0905",
      "created": 1766181320
    },
    "conversation_id": "01kcw9re59ejhjpg7w32881zy2",
    "duration_ms": 1845,
    "datetime_utc": "2025-12-19T21:55:19.027141+00:00",
    "input_tokens": 28,
    "output_tokens": 3,
    "token_details": null,
    "conversation_name": "say hello",
    "conversation_model": "kimi-k2-instruct-0905-fp8",
    "schema_json": null,
    "prompt_fragments": [],
    "system_fragments": [],
    "tools": [],
    "tool_calls": [],
    "tool_results": [],
    "attachments": []
  }
]

2. Extracting Conversation ID (Filtering with jq)

uv run llm prompt "say hello" -s "be brief" --json | jq -r '.[].conversation_id'

Result: 01kcw9re59ejhjpg7w32881zy2
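
The same pattern extracts any other field of the logged row, for example jq -r '.[].response' to print just the model's reply.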

3. Error Case: Conflict with --no-log

uv run llm prompt "say hello" --json --no-log

Error: Cannot use --json with --no-log as the response must be logged to output JSON

Automated Tests

Targeted tests ensure the --json flag works across success and error conditions.

Test Descriptions:

  • test_prompt_json: Verifies that the command returns a valid JSON array containing the response and conversation_id.
  • test_prompt_json_no_log_error: Verifies that combining --json and --no-log produces a clear error message (a self-contained sketch of this test appears after this list).
  • test_prompt_json_no_stream: Verifies that JSON output works correctly even when streaming is explicitly disabled.

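To show the shape of these tests, below is a self-contained sketch of the --no-log conflict case built on Click's CliRunner. It exercises an inline stand-in command rather than the real llm prompt entry point and its fixtures (which stub out the model and log database), so the wiring here is an assumption for illustration only.

# Self-contained sketch of the --json / --no-log conflict test using
# click.testing.CliRunner; the inline "prompt" command is a stand-in for
# llm prompt, not the real implementation.
import click
from click.testing import CliRunner

@click.command()
@click.argument("prompt_text")
@click.option("--json", "json_output", is_flag=True)
@click.option("--no-log", "no_log", is_flag=True)
def prompt(prompt_text, json_output, no_log):
    if json_output and no_log:
        raise click.ClickException(
            "Cannot use --json with --no-log as the response "
            "must be logged to output JSON"
        )

def test_prompt_json_no_log_error():
    runner = CliRunner()
    result = runner.invoke(prompt, ["say hello", "--json", "--no-log"])
    assert result.exit_code != 0
    assert "Cannot use --json with --no-log" in result.output
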
Verbose Test Output:

============================= test session starts ==============================
platform linux -- Python 3.13.7, pytest-9.0.2, pluggy-1.6.0 -- /home/thomas/my-llm/llm/.venv/bin/python
cachedir: .pytest_cache
rootdir: /home/thomas/my-llm/llm
configfile: pytest.ini
plugins: anyio-4.12.0, syrupy-5.0.0, asyncio-1.3.0, recording-0.13.4, httpx-0.36.0
asyncio: mode=Mode.STRICT, debug=False, asyncio_default_fixture_loop_scope=function, asyncio_default_test_loop_scope=function
collecting ... collected 3 items

tests/test_prompt_json.py::test_prompt_json PASSED                       [ 33%]
tests/test_prompt_json.py::test_prompt_json_no_log_error PASSED          [ 66%]
tests/test_prompt_json.py::test_prompt_json_no_stream PASSED             [100%]

============================== 3 passed in 3.91s ===============================

Files Modified

  • llm/cli.py
  • docs/usage.md
  • docs/changelog.md
  • tests/test_prompt_json.py (New)

PR Plan Completion Check

  • Manual Testing (Basic, JQ filter, Error case)
  • Automated Tests (Targeted suite with verbose verification)
  • Documentation Updates (Usage and Changelog)
  • Cleanup (Build artifacts removed)

ShellLM force-pushed the pr/prompt-json branch 5 times, most recently from 62a8971 to 98e6567 on December 19, 2025 at 22:16
Adds a --json flag to the prompt command that outputs the response as JSON
including the conversation_id. This makes it easier to use llm in scripts
and pipelines where structured output is needed.

- Suppresses text output when --json is used
- Forces logging (required to generate JSON output)
- Reuses logs_list command to output the JSON
- Works with both sync and async responses
ShellLM marked this pull request as ready for review on December 19, 2025 at 22:21