feat(ollama): add ollama types and meta #117
base: main
Conversation
View your CI Pipeline Execution ↗ for commit 344d236
☁️ Nx Cloud last updated this comment
f9abcfe to 4b5ebe5 Compare
ed875f4 to d0d4a56 Compare
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-gemini
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
8f1fea3 to 2e6f6a3 Compare
fd750de to c30a4f1 Compare
📝 Walkthrough
Centralizes Ollama model metadata into a new model-meta registry and ~70 per-family meta modules; adapters remove their local model lists and now import model names and per-model provider-option types from the registry; typings are updated so provider options resolve by model name.
Changes
Sequence Diagram(s): (omitted)
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Possibly related PRs
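A minimal sketch of the registry pattern the walkthrough describes; the names and values below are illustrative only (the real OllamaModelMeta in this PR is generic over the per-model request type, simplified here):

interface OllamaModelMeta {
  name: string
  supports: {
    input: ReadonlyArray<'text' | 'image'>
    output: ReadonlyArray<'text'>
    capabilities: ReadonlyArray<'tools' | 'vision' | 'thinking'>
  }
  size: string
  context: number
}

interface OllamaChatRequest {
  stream?: boolean
  format?: string | object
}

// `as const satisfies` keeps the literal model name while checking the shape.
const EXAMPLE_LATEST = {
  name: 'example:latest',
  supports: { input: ['text'], output: ['text'], capabilities: [] },
  size: '4.7gb',
  context: 128_000,
} as const satisfies OllamaModelMeta

// Name-keyed map: provider options resolve from the model name string.
export type ExampleChatModelProviderOptionsByName = {
  [EXAMPLE_LATEST.name]: OllamaChatRequest
}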
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
c30a4f1 to 89c26fe Compare
export interface OllamaChatRequest {
  model: string
  // messages?: Message[]
  stream?: boolean
  format?: string | object
  keep_alive?: string | number
  // tools?: Tool[]
  // think?: boolean | 'high' | 'medium' | 'low'
  logprobs?: boolean
  top_logprobs?: number
  options?: Partial<Options>
}
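For quick reference, a minimal request built against this interface; the values are illustrative, and the options field is omitted because Options is defined elsewhere in the Ollama types:

const request: OllamaChatRequest = {
  model: 'llama3.2:latest', // illustrative model tag
  stream: true,
  format: 'json',
  keep_alive: '5m',
}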
Comments are kept for reference; they will be removed before merge.
role: string
content: string
role and content are the only config applied to all chat messages
8e40285 to d1e32a2 Compare
1052658 to 833b639 Compare
Actionable comments posted: 11
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
♻️ Duplicate comments (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (1)
15-26: Track removal of commented reference code. The file contains extensive commented blocks that appear to serve as reference documentation. Based on the past review comment, these are planned for removal. Ensure these comments are cleaned up before merging to maintain code clarity.
Also applies to: 28-39, 53-60, 62-75
🟠 Major comments (12)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts-7-18 (1)
7-18: Fix size format inconsistencies and verify accuracy. The size values across the three model definitions are inconsistent in format and potentially incorrect:
- Line 14: '4.7b' (billion parameters)
- Line 27: '4.7gb' (gigabytes) - concerning, because the model is named 8b (implying 8 billion parameters)
- Line 40: '40gb' (gigabytes)
The mixed use of 'b' vs 'gb' units creates ambiguity. Additionally, the 8b model having a '4.7gb' size seems incorrect if 8b refers to 8 billion parameters.
🔎 Suggested approach to verify and standardize
Please verify the correct size values from the official Ollama model documentation and standardize the unit format across all three models. If the size represents parameters, use 'b' consistently (e.g., '4.7b', '8b', '70b'). If it represents disk space, use 'gb' consistently.
#!/bin/bash
# Verify size format used in other model-meta files for consistency
rg -n "size:" packages/typescript/ai-ollama/src/meta/model-meta-*.ts -A 0
Also applies to: 20-31, 33-42
packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts-42-42 (1)
42-42: Replace any with a specific type for type safety. The LLAMA3_CHATQA_70b model uses OllamaModelMeta<any> instead of the specific type OllamaModelMeta<OllamaChatRequest & OllamaChatRequestMessages> used by the other two models. This reduces type safety and is inconsistent with the type mapping at line 65, which correctly types this model.
Based on learnings, per-model type safety should be maintained.
🔎 Proposed fix
-} as const satisfies OllamaModelMeta<any>
+} as const satisfies OllamaModelMeta<
+  OllamaChatRequest & OllamaChatRequestMessages
+>
packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts-7-36 (1)
7-36: Typo in model name: "tinnyllama" should be "tinyllama". The model name has a typo with a double 'n' (tinnyllama) instead of a single 'n' (tinyllama). This will cause model lookup failures when users try to use these models with Ollama, as the official model name is tinyllama.
🔎 Proposed fix
-const TINNYLLAMA_LATEST = {
-  name: 'tinnyllama:latest',
+const TINYLLAMA_LATEST = {
+  name: 'tinyllama:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '638mb',
   context: 2_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
-const TINNYLLAMA_1_1b = {
-  name: 'tinnyllama:1.1b',
+const TINYLLAMA_1_1b = {
+  name: 'tinyllama:1.1b',
This fix should be applied throughout the file to all constant names, exported types, and the MODELS array.
Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts-24-37 (1)
24-37: Variable/model name mismatch: LLAMA3_3_70b defines model 'llama3.3:8b'. The constant is named LLAMA3_3_70b but the model name is 'llama3.3:8b'. This is inconsistent and likely incorrect: either the variable should be LLAMA3_3_8b or the model name should be 'llama3.3:70b'.
🔎 Proposed fix (assuming 70b is correct)
 const LLAMA3_3_70b = {
-  name: 'llama3.3:8b',
+  name: 'llama3.3:70b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
   size: '43gb',
   context: 128_000,
 } as const satisfies OllamaModelMeta<
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts-21-32 (1)
21-32: Type constraint mismatch with model capabilities. GEMMA3_270m declares input: ['text'] (text-only) but uses OllamaChatRequestMessages<OllamaMessageImages> in its type constraint, which includes image message support. This creates a type inconsistency where the type system allows image messages for a text-only model.
Use OllamaChatRequestMessages (without the OllamaMessageImages parameter) for text-only models to ensure type safety aligns with model capabilities.
🔎 Proposed fix
 const GEMMA3_270m = {
   name: 'gemma3:270m',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '298mb',
   context: 32_000,
 } as const satisfies OllamaModelMeta<
-  OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
+  OllamaChatRequest & OllamaChatRequestMessages
 >
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts-34-45 (1)
34-45: Type constraint mismatch with model capabilities. Same issue as GEMMA3_270m: GEMMA3_1b declares text-only input but uses OllamaChatRequestMessages<OllamaMessageImages> in its type constraint.
🔎 Proposed fix
 const GEMMA3_1b = {
   name: 'gemma3:1b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '815mb',
   context: 32_000,
 } as const satisfies OllamaModelMeta<
-  OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
+  OllamaChatRequest & OllamaChatRequestMessages
 >
packages/typescript/ai-ollama/src/meta/model-meta-gemma3.ts-106-120 (1)
106-120: Provider options type map includes incorrect types for text-only models. The provider options map assigns OllamaMessageImages to text-only models (270m, 1b), creating the same type safety issue at the API level.
🔎 Proposed fix
 export type Gemma3ChatModelProviderOptionsByName = {
   // Models with thinking and structured output support
   [GEMMA3_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
-  [GEMMA3_270m.name]: OllamaChatRequest &
-    OllamaChatRequestMessages<OllamaMessageImages>
+  [GEMMA3_270m.name]: OllamaChatRequest & OllamaChatRequestMessages
-  [GEMMA3_1b.name]: OllamaChatRequest &
-    OllamaChatRequestMessages<OllamaMessageImages>
+  [GEMMA3_1b.name]: OllamaChatRequest & OllamaChatRequestMessages
   [GEMMA3_4b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [GEMMA3_12b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [GEMMA3_27b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
 }
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts-20-31 (1)
20-31: Model name inconsistency: constant vs actual name. The constant is named LLAMA3_8b but the model name is 'llama3:7b'. This mismatch between the constant name and the actual model name creates confusion. Verify whether the constant should be renamed to LLAMA3_7b or the model name should be corrected to 'llama3:8b'. Note: similar inconsistencies exist in other model meta files (e.g., LLAMA3_70b with a correct name, but LLAMA_GUARD3_1b with a 'llama3:7b' name).
packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts-9-22 (1)
9-22: Update context window to 128,000 tokens for both Mistral Nemo models. Mistral Nemo supports a 128K context window, not 1,000. Update context: 1_000 to context: 128_000 for both MISTRAL_NEMO_LATEST and MISTRAL_NEMO_12b (lines 17 and 32).
packages/typescript/ai-ollama/src/meta/model-meta-mistral-nemo.ts-24-37 (1)
24-37: Set context to 128,000 tokens for Mistral Nemo 12b. The context window is set to 1,000 tokens but should be 128,000. This applies to both MISTRAL_NEMO_LATEST and MISTRAL_NEMO_12b.
packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts-52-60 (1)
52-60: Remove tool support from the provider options type: llava-llama3 does not support tool calling. The provider options type (lines 54-59) includes OllamaMessageTools and OllamaChatRequestTools, but llava-llama3 is a vision-only model without tool calling support. This allows callers to pass tools where they will fail at runtime.
Remove OllamaMessageTools & and & OllamaChatRequestTools to match the model's actual capabilities and the satisfies clause (which only requires OllamaChatRequestMessages<OllamaMessageImages>).
packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts-58-62 (1)
58-62: Missing SAILOR2_1b in the exported models array. The SAILOR2_1b model is defined (lines 20-31) and included in both Sailor2ChatModelProviderOptionsByName (line 78) and Sailor2ModelInputModalitiesByName (line 86), but it's not included in the SAILOR2_MODELS array. This inconsistency could cause issues where the type system allows 'sailor2:1b' as a valid model name, but runtime iteration over SAILOR2_MODELS would miss it.
🔎 Proposed fix
 export const SAILOR2_MODELS = [
   SAILOR2_LATEST.name,
+  SAILOR2_1b.name,
   SAILOR2_8b.name,
   SAILOR2_20b.name,
 ] as const
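To illustrate why the array matters at runtime, a sketch assuming SAILOR2_MODELS is in scope (the helper name is hypothetical):

// Without SAILOR2_1b in the array, this check would reject 'sailor2:1b'
// even though the type system accepts it.
const isKnownSailor2Model = (name: string): boolean =>
  (SAILOR2_MODELS as ReadonlyArray<string>).includes(name)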
🟡 Minor comments (26)
packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts-45-50 (1)
45-50: Update comment to match actual capabilities. The comment on line 47 states "Models with thinking and structured output support," but the capabilities arrays for both models are empty (lines 12 and 25), indicating no support for thinking, tools, vision, or embedding. This inconsistency could mislead developers.
Please either:
- Update the comment to accurately reflect the models' capabilities, or
- Add the appropriate capabilities to the model metadata if they do support thinking and structured output
🔎 Proposed fix to update the comment
-  // Models with thinking and structured output support
+  // Models with text input/output support
   [DOLPHIN3_LATEST.name]: OllamaChatRequest & OllamaChatRequestMessages
   [DOLPHIN3_8b.name]: OllamaChatRequest & OllamaChatRequestMessages
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts-15-15 (1)
15-15: Inconsistent size format units. The size field uses inconsistent units across models:
- LLAVA_LATEST: '4.7b' (appears to be parameter count)
- Other models: '4.7gb', '8gb', '20gb' (file size)
This mixing of parameter count ('b' for billions) and file size ('gb' for gigabytes) creates ambiguity. Standardize to either parameter count or file size across all models.
Also applies to: 28-28, 41-41, 54-54
packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts-77-91 (1)
77-91: Correct misleading comments in type map definitions. The comments above the type maps are inconsistent with the actual model definitions:
- Line 78 claims "Models with thinking and structured output support", but all models have empty capabilities: [] arrays.
- Line 86 claims "Models with text, image, audio, video (no document)", but all models only support input: ['text'].
Update the comments to accurately reflect the models' actual capabilities and supported modalities.
🔎 Proposed fix
 // Manual type map for per-model provider options
 export type ShieldgemmaChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // All Shieldgemma models with standard chat support
   [SHIELDGEMMA_LATEST.name]: OllamaChatRequest & OllamaChatRequestMessages
   [SHIELDGEMMA_2b.name]: OllamaChatRequest & OllamaChatRequestMessages
   [SHIELDGEMMA_9b.name]: OllamaChatRequest & OllamaChatRequestMessages
   [SHIELDGEMMA_27b.name]: OllamaChatRequest & OllamaChatRequestMessages
 }
 export type ShieldgemmaModelInputModalitiesByName = {
-  // Models with text, image, audio, video (no document)
+  // All models support text input only
   [SHIELDGEMMA_LATEST.name]: typeof SHIELDGEMMA_LATEST.supports.input
   [SHIELDGEMMA_2b.name]: typeof SHIELDGEMMA_2b.supports.input
   [SHIELDGEMMA_9b.name]: typeof SHIELDGEMMA_9b.supports.input
   [SHIELDGEMMA_27b.name]: typeof SHIELDGEMMA_27b.supports.input
 }
Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts-50-56 (1)
50-56: Fix the misleading comment. The comment states "Models with thinking and structured output support" but the Moondream models only declare the 'vision' capability (lines 13, 26), not thinking or structured output. This comment appears to be copied from another model file and should be updated to accurately describe Moondream's capabilities.
🔎 Suggested fix
 // Manual type map for per-model provider options
 export type MoondreamChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with vision support
   [MOONDREAM_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [MOONDREAM_1_8b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
 }
packages/typescript/ai-ollama/src/meta/model-meta-moondream.ts-8-32 (1)
8-32: Verify metadata accuracy for the moondream:1.8b variant. The constants MOONDREAM_LATEST and MOONDREAM_1_8b have identical metadata (size: '1.7gb', context: 2_000), which appears inconsistent with the model naming. The moondream:1.8b variant name typically refers to a model with ~1.8 billion parameters, yet both variants declare the same 1.7gb size. Cross-check the actual specifications from the Moondream project to ensure accuracy, as other vision model variants in the codebase (e.g., llava) have differentiated metadata across versions.
packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts-74-74 (1)
74-74: Copy-paste error: comment references the wrong model family. The commented type alias says AyaChatModels but this file is for Exaone3.5 models.
🔎 Proposed fix
-// export type AyaChatModels = (typeof EXAONE3_5MODELS)[number]
+// export type Exaone3_5ChatModels = (typeof EXAONE3_5MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts-33-44 (1)
33-44: Constant name doesn't match model name. The constant EXAONE3_5_7_1b suggests a 7.1b model, but the actual model name is 'exaone3.5:7.8b'. Consider renaming to EXAONE3_5_7_8b for consistency.
🔎 Proposed fix
-const EXAONE3_5_7_1b = {
+const EXAONE3_5_7_8b = {
   name: 'exaone3.5:7.8b',
EXAONE3_5MODELS,Exaone3_5ChatModelProviderOptionsByName, andExaone3_5ModelInputModalitiesByNameaccordingly.packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts-26-56 (1)
26-56: Inconsistency between capabilities and type constraints.
OPT_OSS_20b and OPT_OSS_120b are constrained with OllamaChatRequestThinking_OpenAI in their type assertions (lines 38-39, 54-55), but their capabilities arrays only include ['tools'] without 'thinking' (lines 31, 47).
If these models support thinking, add 'thinking' to their capabilities. If they don't, consider removing OllamaChatRequestThinking_OpenAI from the type constraint to maintain consistency.
🔎 Option A: Add thinking capability
 const OPT_OSS_20b = {
   name: 'gpt-oss:20b',
   supports: {
     input: ['text'],
     output: ['text'],
-    capabilities: ['tools'],
+    capabilities: ['tools', 'thinking'],
   },
Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-ocr.ts-48-48 (1)
48-48: Fix typo in commented type alias. The comment contains a typo with double underscores in the model constant name.
🔎 Proposed fix
-// export type DeepseekOcrChatModels = (typeof DEEPSEEK_OCR__MODELS)[number]
+// export type DeepseekOcrChatModels = (typeof DEEPSEEK_OCR_MODELS)[number]
61-61: Fix duplicate "export" keyword in comment.The comment contains a typo with duplicate "export" keywords.
🔎 Proposed fix
-// export export type Llama3_2VisionChatModels = (typeof LLAMA3_2Vision_MODELS)[number]
+// export type Llama3_2VisionChatModels = (typeof LLAMA3_2_VISION_MODELS)[number]
LLAMA3_2_VISION_MODELS(wasLLAMA3_2Vision_MODELS).packages/typescript/ai-ollama/src/meta/model-meta-llama3.2-vision.ts-21-32 (1)
21-32: Correct model size for the 11b variant. The LLAMA3_2_VISION_11b model's size should be '7.8gb', not '1gb'. The Ollama model registry lists the download size for llama3.2:11b-vision as approximately 7.8 GB.
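A sketch of the intended one-line change in the 11b definition:

-  size: '1gb',
+  size: '7.8gb',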
9-22: Inconsistent size format - this should be '2gb'.Line 16 shows the size as
'2b'which is inconsistent with the'Xgb'format used throughout this file (line 31:'1.3gb', line 46:'2gb'). The actual llama3.2 model is approximately 2GB, and'2b'is not a valid unit notation. This should be'2gb'to match the established pattern and actual model specifications.name: 'llama3.2:latest', supports: { input: ['text'], output: ['text'], capabilities: ['tools'], }, - size: '2b', + size: '2gb', context: 128_000,packages/typescript/ai-ollama/src/model-meta.ts-362-364 (1)
362-364: Duplicate entry: LLAMA3_2_MODELS is spread twice. LLAMA3_2_MODELS appears on both line 362 and line 364, which will result in duplicate model names in the final array. Remove one of the duplicates.
🔎 Proposed fix
   ...LLAMA3_1_MODELS,
   ...LLAMA3_2_MODELS,
   ...LLAMA3_2_VISION_MODELS,
-  ...LLAMA3_2_MODELS,
   ...LLAMA3_3_MODELS,
packages/typescript/ai-ollama/src/meta/model-meta-llama3.3.ts-16-16 (1)
16-16: Typo: size unit should be '43gb', not '43b'. For consistency with the LLAMA3_3_70b model (which uses '43gb'), this should include the 'g'.
🔎 Proposed fix
 const LLAMA3_3_LATEST = {
   name: 'llama3.3:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '43b',
+  size: '43gb',
   context: 128_000,
 } as const satisfies OllamaModelMeta<
packages/typescript/ai-ollama/src/meta/model-meta-llava-phi3.ts-15-15 (1)
15-15: Typo: size unit should be '2.9gb', not '2.9b'. The size for LLAVA_PHI3_LATEST is '2.9b' while LLAVA_PHI3_8b uses '2.9gb'. This appears to be a typo; the 'g' is missing.
🔎 Proposed fix
 const LLAVA_PHI3_LATEST = {
   name: 'llava-phi3:latest',
   supports: {
     input: ['text', 'image'],
     output: ['text'],
     capabilities: ['vision'],
   },
-  size: '2.9b',
+  size: '2.9gb',
   context: 4_000,
 } as const satisfies OllamaModelMeta<
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts-7-18 (1)
7-18: Verify size unit: 'b' vs 'gb'. Line 14 specifies the size as '4.7b', which appears to be a typo. Based on the pattern used for the same size model at line 27 ('4.7gb'), this should likely be '4.7gb'.
🔎 Proposed fix
 const LLAMA3_GRADIENT_LATEST = {
   name: 'llama3-gradient:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
-  size: '4.7b',
+  size: '4.7gb',
   context: 1_000_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
packages/typescript/ai-ollama/src/meta/model-meta-llava-llama3.ts-10-21 (1)
10-21: Verify size unit: 'b' vs 'gb'. Line 17 specifies the size as '5.5b', which appears to be a typo. This should likely be '5.5gb' based on the pattern used in other model metadata files.
🔎 Proposed fix
 const LLAVA_LLAMA3_LATEST = {
   name: 'llava-llama3:latest',
   supports: {
     input: ['text', 'image'],
     output: ['text'],
     capabilities: ['vision'],
   },
-  size: '5.5b',
+  size: '5.5gb',
   context: 8_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>
 >
packages/typescript/ai-ollama/src/meta/model-meta-granite3-dense.ts-60-60 (1)
60-60: Fix typo in commented type name. The comment has a duplicate "3": Granite3Dense3ChatModels should be Granite3DenseChatModels.
🔎 Proposed fix
-// export type Granite3Dense3ChatModels = (typeof GRANITE3_DENSE_MODELS)[number]
+// export type Granite3DenseChatModels = (typeof GRANITE3_DENSE_MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-llama3.ts-7-18 (1)
7-18: Fix size unit inconsistency: change '4.7b' to '4.7gb'. Line 14 specifies size: '4.7b', which is inconsistent with the naming pattern used throughout the file. Both LLAMA3_8b (line 27) and LLAMA3_70b (line 40) use the 'gb' suffix for their sizes ('4.7gb' and '40gb' respectively). The LLAMA3_LATEST model should also use '4.7gb' for consistency.
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts-39-52 (1)
39-52: Variable name doesn't match model identifier.
GRANITE3_MOE_3b refers to 'granite3-moe:8b'. Consider renaming to GRANITE3_MOE_8b for consistency.
🔎 Proposed fix
-const GRANITE3_MOE_3b = {
-  name: 'granite3-moe:8b',
+const GRANITE3_MOE_8b = {
+  name: 'granite3-moe:8b',
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts-9-22 (1)
9-22: Typo in size field.
LLAMA3_1_LATEST has size: '4.9b' but should likely be '4.9gb' to match the convention used in other model-meta files and the LLAMA3_1_8b definition below.
🔎 Proposed fix
 const LLAMA3_1_LATEST = {
   name: 'llama3.1:latest',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: ['tools'],
   },
-  size: '4.9b',
+  size: '4.9gb',
   context: 128_000,
packages/typescript/ai-ollama/src/meta/model-meta-granite3-moe.ts-24-37 (1)
24-37: Variable name doesn't match model identifier.
GRANITE3_MOE_1b refers to 'granite3-moe:2b' (line 25), and GRANITE3_MOE_3b refers to 'granite3-moe:8b' (line 40). This naming mismatch is confusing and may lead to incorrect usage.
🔎 Proposed fix
-const GRANITE3_MOE_1b = {
-  name: 'granite3-moe:2b',
+const GRANITE3_MOE_2b = {
+  name: 'granite3-moe:2b',
GRANITE3_MOE_MODELS, type maps).Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts-17-17 (1)
17-17: Inconsistent size unit format.
LLAMA4_LATEST uses '67b' while LLAMA4_16X17b uses '67gb'. The '67b' appears to be missing the "gb" unit suffix for consistency.
🔎 Suggested fix
-  size: '67b',
+  size: '67gb',
packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts-39-52 (1)
39-52: Variable name doesn't match model name. Similar issue: GRANITE3_1_MOE_3b has name: 'granite3.1-moe:8b'. Consider renaming to GRANITE3_1_MOE_8b.
🔎 Suggested fix
-const GRANITE3_1_MOE_3b = {
+const GRANITE3_1_MOE_8b = {
   name: 'granite3.1-moe:8b',
Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/meta/model-meta-granite3.1-moe.ts-24-37 (1)
24-37: Variable name doesn't match model name. The variable GRANITE3_1_MOE_1b has name: 'granite3.1-moe:2b'. The suffix _1b is misleading as it suggests a 1B parameter model, but the actual model is 2B. Consider renaming to GRANITE3_1_MOE_2b for clarity.
🔎 Suggested fix
-const GRANITE3_1_MOE_1b = {
+const GRANITE3_1_MOE_2b = {
   name: 'granite3.1-moe:2b',
Committable suggestion skipped: line range outside the PR's diff.
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts-133-157 (1)
133-157: Misleading comment about model capabilities. The comment on line 135 states "Models with thinking and structured output support", but the actual model metadata only declares ['tools'] in the capabilities array. Either the comment should be updated to match reality, or the capabilities should be enhanced if these models truly support thinking/structured output.
🔎 Fix: Update comment to match actual capabilities
 // Manual type map for per-model provider options
 export type Qwen2_5CoderChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with tool support
   [QWEN2_5_CODER_LATEST.name]: OllamaChatRequest &
🧹 Nitpick comments (25)
packages/typescript/ai-ollama/src/meta/model-meta-llama3-chatqa.ts (3)
50-58: Consider removing unused commented code. The commented-out sections for image, embedding, audio, and video models appear to be placeholders. If these model variants don't exist for llama3-chatqa, consider removing the comments to reduce clutter. If they're planned for future implementation, a TODO comment would make the intent clearer.
61-66: Update or remove misleading comment. The comment on line 62 mentions "Models with thinking and structured output support", but all three models have empty capabilities: [] arrays in their definitions. Either update the comment to accurately reflect the models' capabilities or remove it.
68-73: Correct the misleading comment. The comment on line 69 states "Models with text, image, audio, video (no document)" but all three models only support text input (input: ['text']). Update the comment to accurately reflect that these models only support text modalities.
packages/typescript/ai-ollama/src/meta/model-meta-dolphin3.ts (1)
35-43: Consider removing commented-out placeholders. The commented-out constants for image, embedding, audio, and video models appear to be unused placeholders. Since Dolphin3 models only support text input/output (based on the metadata), consider:
- Removing these placeholders if they're not planned for future use, or
- Adding a TODO comment explaining why they're kept for future expansion
This helps maintain code clarity and reduces confusion.
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts (3)
78-78: Consider PascalCase for type names. The exported types use camelCase (llavaChatModelProviderOptionsByName, llavaModelInputModalitiesByName), but TypeScript convention typically uses PascalCase for type names (e.g., LlavaChatModelProviderOptionsByName, LlavaModelInputModalitiesByName).
🔎 Proposed refactor
-export type llavaChatModelProviderOptionsByName = {
+export type LlavaChatModelProviderOptionsByName = {
   // Models with thinking and structured output support
   [LLAVA_LATEST.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [LLAVA_7b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [LLAVA_13b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
   [LLAVA_34b.name]: OllamaChatRequest &
     OllamaChatRequestMessages<OllamaMessageImages>
 }
-export type llavaModelInputModalitiesByName = {
+export type LlavaModelInputModalitiesByName = {
   // Models with text, image, audio, video (no document)
   [LLAVA_LATEST.name]: typeof LLAVA_LATEST.supports.input
   [LLAVA_7b.name]: typeof LLAVA_7b.supports.input
   [LLAVA_13b.name]: typeof LLAVA_13b.supports.input
   [LLAVA_34b.name]: typeof LLAVA_34b.supports.input
 }
Also applies to: 93-93
67-75: Incomplete model type exports. The commented-out exports suggest this is a work in progress. If these model categories (image, embedding, audio, video) are planned for LLaVA models, they should either be implemented or removed to keep the codebase clean.
Would you like me to help complete these model type definitions, or should these commented lines be removed if they're not applicable to LLaVA models?
78-91: Consider simplifying duplicate type definitions. All four LLaVA model entries have the same type signature: OllamaChatRequest & OllamaChatRequestMessages<OllamaMessageImages>. While this explicit mapping supports per-model type safety, you could reduce duplication with a type helper if all models share the same options.
💡 Example simplification
type LlavaCommonOptions = OllamaChatRequest &
  OllamaChatRequestMessages<OllamaMessageImages>
export type llavaChatModelProviderOptionsByName = {
  [K in (typeof LLAVA_MODELS)[number]]: LlavaCommonOptions
}
Note: Only apply this if all LLaVA models will continue to share identical options. The current explicit approach is more maintainable if models diverge in the future.
packages/typescript/ai-ollama/src/meta/model-meta-shieldgemma.ts (1)
74-74: Consider uncommenting and exporting the ShieldgemmaChatModels type alias. The type alias provides a convenient union type for all Shieldgemma chat models, improving developer experience and consistency with other model-meta modules. Unless there's a specific reason to keep it commented, it should be exported.
🔎 Proposed change
-// export type ShieldgemmaChatModels = (typeof SHIELDGEMMA_MODELS)[number]
+export type ShieldgemmaChatModels = (typeof SHIELDGEMMA_MODELS)[number]
packages/typescript/ai-ollama/src/meta/model-meta-command-r.ts (1)
1-69: Well-structured model metadata following established patterns.The file correctly implements the per-model metadata pattern with type-safe provider options. One minor issue: the comment on line 56 states "Models with thinking and structured output support" but these models have
['tools']capability, not'thinking'. Consider updating the comment to accurately reflect the supported capabilities.🔎 Suggested comment fix
// Manual type map for per-model provider options export type CommandRChatModelProviderOptionsByName = { - // Models with thinking and structured output support + // Models with tools support [COMMAND_R_LATEST.name]: OllamaChatRequest &packages/typescript/ai-ollama/src/meta/model-meta-tinyllama.ts (1)
48-53: Inaccurate comment: TinyLlama has no special capabilities.The comment states "Models with thinking and structured output support" but
capabilitiesis an empty array for these models. Update the comment to reflect the actual capabilities.🔎 Suggested fix
// Manual type map for per-model provider options -export type TinnyllamaChatModelProviderOptionsByName = { - // Models with thinking and structured output support +export type TinyllamaChatModelProviderOptionsByName = { + // Basic text modelspackages/typescript/ai-ollama/src/meta/model-meta-olmo2.ts (1)
1-75: LGTM with minor comment improvement. The model metadata structure is correct and follows the established pattern. The comment on line 64 states "Models with thinking and structured output support" but these models have empty capabilities arrays. Consider updating to "Basic text models" or similar.
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)
10-11: Naming inconsistency: constant uses "OPT" but model name uses "gpt". The constant OPT_OSS_LATEST uses the "OPT" prefix but the model name is 'gpt-oss:latest'. Consider using GPT_OSS_LATEST for consistency with the model name and the exported GPT_OSS_MODELS array name.
packages/typescript/ai-ollama/src/meta/model-meta-exaone3.5.ts (1)
59-64: Inconsistent naming pattern for MODELS constant.
EXAONE3_5MODELS is missing an underscore before MODELS compared to other files (e.g., GEMMA2_MODELS, OLMO2_MODELS). Consider using EXAONE3_5_MODELS for consistency.
packages/typescript/ai-ollama/src/meta/model-meta-opencoder.ts (1)
1-75: LGTM with minor comment improvement. The model metadata structure is correct and follows the established pattern. The comment on line 64 stating "Models with thinking and structured output support" is inaccurate since these models have empty capabilities arrays. Consider updating to "Basic text models".
packages/typescript/ai-ollama/src/meta/model-meta-gemma2.ts (1)
1-91: LGTM with minor comment improvement. The model metadata structure is correct and follows the established pattern for per-model type safety. As with other files, the comment on line 78 stating "Models with thinking and structured output support" is inaccurate for models with empty capabilities arrays.
packages/typescript/ai-ollama/src/meta/model-meta-codegemma.ts (1)
20-44: Variable names don't match the model names they define.
- CODEGEMMA_8b defines model name 'codegemma:2b'
- CODEGEMMA_35b defines model name 'codegemma:7b'
This is confusing and could lead to maintenance issues. Consider renaming the constants to match the actual model sizes.
🔎 Proposed fix
-const CODEGEMMA_8b = {
+const CODEGEMMA_2b = {
   name: 'codegemma:2b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '1.65gb',
   context: 8_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
-const CODEGEMMA_35b = {
+const CODEGEMMA_7b = {
   name: 'codegemma:7b',
   supports: {
     input: ['text'],
     output: ['text'],
     capabilities: [],
   },
   size: '5gb',
   context: 8_000,
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
CODEGEMMA_MODELSand the type maps accordingly.packages/typescript/ai-ollama/src/meta/model-meta-phi3.ts (1)
20-31: Consider renaming constant for clarity. The constant name PHI3_3_8b (with "3_8") is slightly confusing. Since the model name is 'phi3:8b', consider renaming to PHI3_8b for consistency with naming patterns in other model metadata files (e.g., LLAMA3_8b in model-meta-llama3.ts).
packages/typescript/ai-ollama/src/meta/model-meta-sailor2.ts (1)
44-45: Minor formatting: missing blank line between model definitions. For consistency with other model definitions in this file, add a blank line between SAILOR2_8b and SAILOR2_20b.
🔎 Proposed fix
 } as const satisfies OllamaModelMeta<
   OllamaChatRequest & OllamaChatRequestMessages
 >
+
 const SAILOR2_20b = {
packages/typescript/ai-ollama/src/meta/model-meta-phi4.ts (2)
45-50: Misleading comment about model capabilities. The comment states "Models with thinking and structured output support" but the model definitions on lines 12 and 25 show capabilities: [] (empty). Consider updating the comment to reflect actual capabilities, or remove it.
🔎 Suggested fix
 // Manual type map for per-model provider options
 export type Phi4ChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Basic chat models (text only)
   [PHI4_LATEST.name]: OllamaChatRequest & OllamaChatRequestMessages
   [PHI4_14b.name]: OllamaChatRequest & OllamaChatRequestMessages
 }
52-56: Misleading comment about input modalities. The comment mentions "text, image, audio, video" but these models only support ['text'] input per lines 10 and 23.
🔎 Suggested fix
 export type Phi4ModelInputModalitiesByName = {
-  // Models with text, image, audio, video (no document)
+  // Text-only input models
   [PHI4_LATEST.name]: typeof PHI4_LATEST.supports.input
   [PHI4_14b.name]: typeof PHI4_14b.supports.input
 }
packages/typescript/ai-ollama/src/meta/model-meta-llama4.ts (2)
72-83: Type name prefix inconsistency with model constants. The type is named Llama3_4ChatModelProviderOptionsByName but the model constants use the LLAMA4_* prefix. This appears to be a copy-paste artifact. Consider renaming for consistency.
🔎 Suggested fix
-export type Llama3_4ChatModelProviderOptionsByName = {
+export type Llama4ChatModelProviderOptionsByName = {
   // Models with thinking and structured output support
85-90: Type name prefix inconsistency. Same issue as above: Llama3_4ModelInputModalitiesByName should be Llama4ModelInputModalitiesByName to match the model family.
🔎 Suggested fix
-export type Llama3_4ModelInputModalitiesByName = {
+export type Llama4ModelInputModalitiesByName = {
packages/typescript/ai-ollama/src/meta/model-meta-marco-o1.ts (1)
45-50: Misleading comment about model capabilities. The comment states "Models with thinking and structured output support" but both models have capabilities: []. Marco-O1 is a reasoning model, so if it does support thinking, the capabilities array should reflect that.
Please verify Marco-O1's actual capabilities and either update the comment or add 'thinking' to the capabilities array if applicable.
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-v3.1.ts (1)
76-90: Consider a consistent type naming convention. The type name Deepseekv3_1ChatModelProviderOptionsByName uses a lowercase 'v', which differs from the constant naming pattern DEEPSEEK_V3_1_*. For consistency with other model families, consider DeepseekV3_1ChatModelProviderOptionsByName.
123-131: Consider removing or uncommenting the type alias.The commented-out placeholder arrays (lines 123-130) appear to be intentional scaffolding for future modalities. However, the
Qwen2_5CoderChatModelstype alias (line 131) could be useful now as it provides a union type of all model names.💡 Option: Enable the model names type alias
-// export type Qwen2_5CoderChatModels = (typeof QWEN2_5_CODER_MODELS)[number]
+export type Qwen2_5CoderChatModels = (typeof QWEN2_5_CODER_MODELS)[number]
This would provide a convenient type for any code that needs to reference valid Qwen2.5 Coder model names.
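A sketch of how the enabled alias could be consumed, assuming both the alias and QWEN2_5_CODER_MODELS are exported (the helper below is hypothetical):

// Narrow an arbitrary string to a known qwen2.5-coder model name.
function assertQwenCoderModel(
  name: string,
): asserts name is Qwen2_5CoderChatModels {
  if (!(QWEN2_5_CODER_MODELS as ReadonlyArray<string>).includes(name)) {
    throw new Error(`Unknown qwen2.5-coder model: ${name}`)
  }
}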
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts (outdated, resolved)
511477c to 0daf4ab Compare
Actionable comments posted: 5
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
packages/typescript/ai-ollama/src/adapters/summarize.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/typescript/ai-ollama/src/adapters/summarize.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
packages/typescript/ai-ollama/src/model-meta.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
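To illustrate the Zod guideline above with plain Zod only (the toolDefinition() wrapper from TanStack AI is not shown here, and its exact signature is not assumed):

import { z } from 'zod'

// Runtime schema for a hypothetical tool input.
const weatherInput = z.object({
  city: z.string(),
  unit: z.enum(['celsius', 'fahrenheit']).default('celsius'),
})

// Static type inferred from the runtime schema.
type WeatherInput = z.infer<typeof weatherInput>

// Validates untrusted input at runtime; throws on mismatch.
const parsed: WeatherInput = weatherInput.parse({ city: 'Oslo' })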
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
packages/typescript/ai-ollama/src/model-meta.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/*/src/model-meta.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Files:
packages/typescript/ai-ollama/src/model-meta.ts
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Applied to files:
packages/typescript/ai-ollama/src/model-meta.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Applied to files:
packages/typescript/ai-ollama/src/model-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
packages/typescript/ai-ollama/src/model-meta.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
🧬 Code graph analysis (1)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
OllamaModelMeta (3-13), OllamaChatRequest (28-39), OllamaChatRequestMessages (62-75), OllamaMessageTools (85-88), OllamaMessageThinking (77-79), OllamaChatRequestTools (49-51), OllamaChatRequestThinking_OpenAI (45-47)
🔇 Additional comments (5)
packages/typescript/ai-ollama/src/model-meta.ts (3)
1-67: LGTM! Constant imports are well-organized. The constant imports follow a consistent naming convention (SCREAMING_SNAKE_CASE) and are logically grouped.
399-464: Type intersection structure is correct. The intersection type approach correctly aggregates provider options from all model families, enabling per-model type safety as intended by the codebase architecture.
466-531: Type intersection structure is correct. The intersection type approach correctly aggregates input modalities from all model families, providing comprehensive type coverage across the entire model set.
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (2)
1-9: LGTM! The imports are well-structured using type-only imports for optimal tree-shaking, and all imported types are appropriately used throughout the file.
11-25: LGTM! OPT_OSS_LATEST correctly aligns its capabilities array (including 'thinking') with its type constraint (including thinking-related types). This maintains proper type safety.
4a688da to a1151ce Compare
Actionable comments posted: 8
🧹 Nitpick comments (4)
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (1)
41-49: Consider removing unused commented placeholder sections. These commented sections for IMAGE, EMBEDDING, AUDIO, VIDEO models and the ChatModels type appear to be placeholders. If these model types are not applicable to Mistral, consider removing these comments to reduce clutter. If they're intended for future use, consider adding a TODO comment explaining the intent.
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (1)
124-132: Consider removing unused commented template code. The commented placeholder code for image, embedding, audio, and video models doesn't apply to the Qwen 2.5 Coder family. If these placeholders aren't serving a documentation or future-expansion purpose, removing them would reduce noise.
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts (2)
87-101: Comment doesn't match model capabilities. Line 88 states "Models with thinking and structured output support" but the model definitions (lines 14, 29, 44, 59) only declare the ['tools'] capability; none include 'thinking'.
🔎 Suggested comment correction
 // Manual type map for per-model provider options
 export type Llama3_1ChatModelProviderOptionsByName = {
-  // Models with thinking and structured output support
+  // Models with tools support
   [LLAMA3_1_LATEST.name]: OllamaChatRequest &
Based on learnings, capability comments should be uniform and accurate.
103-109: Comment incorrectly describes input modalities. Line 104 claims "Models with text, image, audio, video (no document)" but all model definitions only specify input: ['text'] in their supports objects. The comment should accurately reflect that these models only support text input.
🔎 Suggested comment correction
 export type Llama3_1ModelInputModalitiesByName = {
-  // Models with text, image, audio, video (no document)
+  // Models with text input only
   [LLAMA3_1_LATEST.name]: typeof LLAMA3_1_LATEST.supports.input
Based on learnings, maintain uniform templates but ensure comments accurately describe the actual capabilities.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
packages/typescript/ai-ollama/src/adapters/summarize.ts
packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
packages/typescript/ai-ollama/src/model-meta.ts
🚧 Files skipped from review as they are similar to previous changes (6)
- packages/typescript/ai-ollama/src/adapters/summarize.ts
- packages/typescript/ai-ollama/src/model-meta.ts
- packages/typescript/ai-ollama/src/meta/model-meta-llama3-gradient.ts
- packages/typescript/ai-ollama/src/meta/model-meta-mistral-small.ts
- packages/typescript/ai-ollama/src/meta/model-meta-llava.ts
- packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: Enforce that all model-meta files under packages/typescript/ai-ollama/src/meta/model-meta-*.ts use a standard, uniform template for capability-related comments, even if the text doesn’t perfectly align with each specific model capability. This ensures consistency across the codebase. Include a brief note in the template when a capability is not applicable, and avoid deviating from the established comment structure (e.g., header, template fields, and formatting) to maintain readability and tooling familiarity.
Applied to files:
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Applied to files:
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.
Applied to files:
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts
🧬 Code graph analysis (3)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (7)
OllamaModelMeta (3-13), OllamaChatRequest (28-39), OllamaChatRequestMessages (62-75), OllamaMessageTools (85-88), OllamaMessageThinking (77-79), OllamaChatRequestTools (49-51), OllamaChatRequestThinking_OpenAI (45-47)
packages/typescript/ai-ollama/src/meta/model-meta-llama3.1.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
OllamaModelMeta (3-13), OllamaChatRequest (28-39), OllamaChatRequestMessages (62-75), OllamaMessageTools (85-88), OllamaChatRequestTools (49-51)
packages/typescript/ai-ollama/src/meta/model-meta-qwen2.5-coder.ts (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (5)
OllamaModelMeta (3-13), OllamaChatRequest (28-39), OllamaChatRequestMessages (62-75), OllamaMessageTools (85-88), OllamaChatRequestTools (49-51)
🔇 Additional comments (10)
packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts (5)
1-9: LGTM: Imports are clean and complete. All necessary types are imported and used appropriately throughout the file.
27-41: LGTM: Type safety is consistent. The model metadata structure is correct, and the capabilities array now properly includes 'thinking' to match the type constraints. This resolves the previously flagged critical issue.
43-57: LGTM: Type safety is consistent. The model metadata is correctly structured with matching capabilities and type constraints. The larger size (65gb vs 14gb) is appropriate for a 120b parameter model.
59-97: LGTM: Type maps provide proper per-model type safety. The exports and type maps follow the established pattern from other model-meta files. The type maps correctly provide per-model provider options and input modality types, enabling compile-time type safety for model-specific configurations.
Note: The comment on line 93 follows the standard template format used across all model-meta files for consistency, as per coding guidelines.
11-25: Model specifications are accurate; code is ready. The gpt-oss:latest specifications in the code match the official Ollama library: size is 14GB and context window is 128K tokens. Type constraints correctly reflect the declared capabilities for tools and thinking support.
packages/typescript/ai-ollama/src/meta/model-meta-mistral.ts (2)
1-7: LGTM! The imports are correct and all imported types are properly utilized in the model definitions and type maps.
9-22: Specifications verified as accurate. The mistral:latest specifications in the code are correct: size (4.4gb) and context window (32,000) match the official Ollama library.
160-169: Correct inaccurate modality comment. The comment "Models with text, image, audio, video (no document)" doesn't match the actual input modalities. All Qwen 2.5 Coder models only support ['text'] input.
🔎 Suggested fix
-  // Models with text, image, audio, video (no document)
+  // Models with text input only
   [QWEN2_5_CODER_LATEST.name]: typeof QWEN2_5_CODER_LATEST.supports.input
⛔ Skipped due to learnings
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Maintain type safety through multimodal content support (image, audio, video, document) with model capability awareness
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
9-112: Update context windows for larger model variants to reflect actual Qwen capabilities. Model sizes are accurate, but the context window specification is incomplete. While Ollama displays 32K by default, Qwen documentation confirms that qwen2.5-coder models 7b, 14b, and 32b support up to 128K (131,072) tokens. Update the context property to 128_000 for QWEN2_5_CODER_7b, QWEN2_5_CODER_14b, and QWEN2_5_CODER_32b to reflect their actual capabilities; the smaller variants (0.5b, 1.5b, 3b) remain at 32K.
⛔ Skipped due to learnings
Learnt from: harry-whorlow Repo: TanStack/ai PR: 117 File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97 Timestamp: 2025-12-27T20:22:46.710Z Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.Learnt from: CR Repo: TanStack/ai PR: 0 File: CLAUDE.md:0-0 Timestamp: 2025-12-13T17:09:09.794Z Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safetyLearnt from: harry-whorlow Repo: TanStack/ai PR: 117 File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18 Timestamp: 2025-12-27T19:48:57.631Z Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts (1)
46-60: LGTM! Export structure follows the established pattern. The model names export and placeholder comments for future model categories are consistent with other model-meta files in the codebase.
2e8eb92 to
7cf11b1
Compare
9e7373a to
9a6ff6c
Compare
```ts
// interface ChatRequest {
//   model: string
//   messages?: Message[]
//   stream?: boolean
//   format?: string | object
//   keep_alive?: string | number
//   tools?: Tool[]
//   think?: boolean | 'high' | 'medium' | 'low'
//   logprobs?: boolean
//   top_logprobs?: number
//   options?: Partial<Options>
// }

export interface OllamaChatRequest {
  // model: string
  // messages?: Message[]
  stream?: boolean
  format?: string | object
  keep_alive?: string | number
  // tools?: Tool[]
  // think?: boolean | 'high' | 'medium' | 'low'
  logprobs?: boolean
  top_logprobs?: number
  options?: Partial<Options>
}
```
With the comments: it would be nice to have them for reference during dev work... I'll remove the comments from the OllamaChatRequest type before the PR merges.
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/typescript/ai-ollama/src/meta/models-meta.ts (1)
45-47: Consider using camelCase for the type name. The type name `OllamaChatRequestThinking_OpenAI` uses an underscore, which doesn't follow the camelCase naming convention specified in the coding guidelines.
🔎 Suggested refactor
```diff
-export interface OllamaChatRequestThinking_OpenAI {
+export interface OllamaChatRequestThinkingOpenAI {
   think?: 'low' | 'medium' | 'high'
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
examples/ts-react-chat/src/routes/api.tanchat.ts
packages/typescript/ai-ollama/src/adapters/summarize.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/typescript/ai-ollama/src/adapters/summarize.ts
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the `/adapters` subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference
Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
examples/ts-react-chat/src/routes/api.tanchat.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
examples/ts-react-chat/src/routes/api.tanchat.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
examples/**
📄 CodeRabbit inference engine (CLAUDE.md)
Examples are not built by Nx and should be run independently from their directories with
`pnpm dev` or `pnpm install && pnpm dev`
Files:
examples/ts-react-chat/src/routes/api.tanchat.ts
🧠 Learnings (8)
📓 Common learnings
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: For Ollama model metadata in the ai-ollama package, always consult https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities) rather than other documentation sources.
📚 Learning: 2025-12-27T20:22:46.710Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:46.710Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Applied to files:
examples/ts-react-chat/src/routes/api.tanchat.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Applied to files:
examples/ts-react-chat/src/routes/api.tanchat.ts
packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Applied to files:
examples/ts-react-chat/src/routes/api.tanchat.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
examples/ts-react-chat/src/routes/api.tanchat.ts
📚 Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Applied to files:
examples/ts-react-chat/src/routes/api.tanchat.ts
📚 Learning: 2025-12-27T21:39:23.584Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-llama-guard3.ts:70-75
Timestamp: 2025-12-27T21:39:23.584Z
Learning: The standard template comments in ai-ollama model-meta files (like "Models with text, image, audio, video (no document)") should not be modified to match individual model capabilities, as they are intentionally kept uniform across all model-meta-*.ts files for consistency, regardless of what each specific model actually supports.
Applied to files:
packages/typescript/ai-ollama/src/meta/models-meta.ts
📚 Learning: 2025-12-27T19:48:57.631Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-deepseek-coder-v2.ts:7-18
Timestamp: 2025-12-27T19:48:57.631Z
Learning: When reviewing or updating metadata in the Ollama model metadata module (e.g., files under packages/typescript/ai-ollama/src/meta/), always treat https://ollama.com/library/ as the authoritative source for model specifications (size, context window, capabilities). Validate only against the official docs and avoid citing non-official sources. If discrepancies arise, reference the official docs and ensure metadata aligns with their specifications.
Applied to files:
packages/typescript/ai-ollama/src/meta/models-meta.ts
🔇 Additional comments (1)
examples/ts-react-chat/src/routes/api.tanchat.ts (1)
95-96: No issues found. The `gpt-oss:120b` model is valid in Ollama's library, and the `think: 'low'` option is correctly typed per the `OllamaChatRequestThinking_OpenAI` interface, which defines valid values as `'low' | 'medium' | 'high'`. The model metadata properly includes this capability and the implementation follows type-safe per-model configuration patterns.
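For illustration, a type-level reproduction of why `think: 'low'` type-checks. The interface shape is quoted from models-meta.ts earlier in this review; the rest is a standalone sketch, not the adapter's actual call path:

```ts
// Interface shape as quoted from models-meta.ts in this review.
interface OllamaChatRequestThinking_OpenAI {
  think?: 'low' | 'medium' | 'high'
}

const opts: OllamaChatRequestThinking_OpenAI = { think: 'low' } // OK
// const bad: OllamaChatRequestThinking_OpenAI = { think: 'max' }
// ^ rejected: '"max"' is not assignable to '"low" | "medium" | "high"'
```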
```ts
export interface OllamaChatRequestMessages<
  TMessageExtension extends OllamaMessageExtension = undefined,
> {
  messages?: Array<
    {
      role: string
      content: string
      // thinking?: string
      // images?: Uint8Array[] | string[]
      // tool_calls?: ToolCall[]
      // tool_name?: string
    } & TMessageExtension
  >
}
```
Fix the type intersection that produces a `never` type.
When `TMessageExtension` defaults to `undefined`, the intersection `{ role: string; content: string } & undefined` evaluates to `never`, making the messages array unusable without explicitly providing a type parameter.
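A minimal reproduction of the collapse (per the note above, recent TypeScript versions reduce an object-with-`undefined` intersection to `never`):

```ts
// Standalone sketch — not the package's code.
type Base = { role: string; content: string }
type Collapsed = Base & undefined // resolves to never

// With the default type parameter the array element type is never,
// so no message can ever be assigned:
// const msgs: Array<Collapsed> = [{ role: 'user', content: 'hi' }] // type error
```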
🔎 Proposed fixes
Option 1: Use empty object as default
```diff
 export interface OllamaChatRequestMessages<
-  TMessageExtension extends OllamaMessageExtension = undefined,
+  TMessageExtension = {},
 > {
   messages?: Array<
     {
       role: string
       content: string
-      // thinking?: string
-      // images?: Uint8Array[] | string[]
-      // tool_calls?: ToolCall[]
-      // tool_name?: string
     } & TMessageExtension
   >
 }
```
Option 2: Use conditional type to handle undefined
```diff
 export interface OllamaChatRequestMessages<
   TMessageExtension extends OllamaMessageExtension = undefined,
 > {
   messages?: Array<
     {
       role: string
       content: string
-      // thinking?: string
-      // images?: Uint8Array[] | string[]
-      // tool_calls?: ToolCall[]
-      // tool_name?: string
-    } & TMessageExtension
+    } & (TMessageExtension extends undefined ? {} : TMessageExtension)
   >
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Option 1 (empty object default):
```ts
export interface OllamaChatRequestMessages<
  TMessageExtension = {},
> {
  messages?: Array<
    {
      role: string
      content: string
    } & TMessageExtension
  >
}
```
Option 2 (conditional type):
```ts
export interface OllamaChatRequestMessages<
  TMessageExtension extends OllamaMessageExtension = undefined,
> {
  messages?: Array<
    {
      role: string
      content: string
    } & (TMessageExtension extends undefined ? {} : TMessageExtension)
  >
}
```
🤖 Prompt for AI Agents
In packages/typescript/ai-ollama/src/meta/models-meta.ts around lines 62 to 75,
the intersection with the default type parameter TMessageExtension = undefined
causes `{ role: string; content: string } & undefined` to collapse to never;
change the generic default or make it conditional so the intersection yields a
usable object. Replace the default `undefined` with an empty object type (e.g.,
TMessageExtension extends Record<string, unknown> = {}), or use a conditional
type (e.g., TMessageExtension extends undefined ? {} : TMessageExtension) so
that messages are typed as `{ role: string; content: string } &
(TMessageExtension or {})` rather than never.
This PR adds ollama meta and types to the ollama adapter.
It's not exhaustive; however, it does match the SDK in content and models.
Summary by CodeRabbit
New Features
Refactor
Chores