This repository was archived by the owner on Sep 7, 2025. It is now read-only.

Description
Currently, the model.ts and vector-store.ts files export a singleton instance of the chat model and vector store, respectively. However, the setup always uses OpenAI's models and Qdrant, which can be costly and unnecessary for local development.
This issue proposes making the exported instances dynamic based on the environment:
- In production: Use OpenAI's chat model and Qdrant vector store.
- In development: Use a free local chat model and an in-memory vector store (e.g., MemoryVectorStore). See the Chat Models and Vector Stores options that integrate with LangChain.
Benefits
- Reduces costs during development.
- Speeds up testing by avoiding external API calls.
- Makes it easy to swap models and vector stores in the future, which was the main reason for exporting singleton instances of both model and vector-store in the first place.
Proposed Changes
- Modify model.ts to select different chat models based on process.env.NODE_ENV.
- Modify vector-store.ts to switch between Qdrant (production) and an in-memory vector store (development).
- Use environment variables to configure these settings dynamically.
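As a rough illustration of the NODE_ENV-based selection, the singleton export in model.ts could pick its backend once at module load time. This is only a sketch: `createChatModel`, `makeOpenAIModel`, and the `ChatModel` interface are hypothetical stand-ins for LangChain's actual classes (e.g., ChatOpenAI in production vs. a local model such as one served by Ollama in development).

```typescript
// Minimal stand-in for LangChain's chat model interface (assumption for
// this sketch; the real code would use BaseChatModel implementations).
interface ChatModel {
  name: string;
  invoke(prompt: string): Promise<string>;
}

// Hypothetical factories standing in for ChatOpenAI / a local model.
const makeOpenAIModel = (): ChatModel => ({
  name: "openai",
  invoke: async (p) => `openai:${p}`,
});
const makeLocalModel = (): ChatModel => ({
  name: "local",
  invoke: async (p) => `local:${p}`,
});

// Pure selection function so the branching is testable in isolation.
export function createChatModel(env: string | undefined): ChatModel {
  return env === "production" ? makeOpenAIModel() : makeLocalModel();
}

// The exported singleton keeps the existing import sites unchanged.
export const model: ChatModel = createChatModel(process.env.NODE_ENV);
```

Keeping the branch inside a factory function means the rest of the codebase still imports a single `model` instance, exactly as it does today.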
Suggested Approach
- Update model.ts to check the environment and use a different chat model in development.
- Update vector-store.ts to use an in-memory vector store when not in production.
- Add the necessary environment variables (.env file) to configure the model and vector store options.
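The same pattern applies to vector-store.ts. The sketch below is hypothetical: the `VectorStore` interface and the `QDRANT_URL` variable name are assumptions, and in the real file the production branch would construct LangChain's QdrantVectorStore while the development branch would use MemoryVectorStore.

```typescript
// Minimal stand-in for a vector store (assumption for this sketch).
interface VectorStore {
  backend: string;
  addTexts(texts: string[]): Promise<void>;
}

// Accepting the env as a parameter keeps the selection logic testable.
export function createVectorStore(
  env: Record<string, string | undefined>
): VectorStore {
  if (env.NODE_ENV === "production") {
    // QDRANT_URL is an assumed variable name; adjust to the project's .env.
    const url = env.QDRANT_URL ?? "http://localhost:6333";
    return { backend: `qdrant:${url}`, addTexts: async () => {} };
  }
  // Development: an in-memory store, no external service required.
  return { backend: "memory", addTexts: async () => {} };
}

// Exported singleton, configured once from the process environment.
export const vectorStore: VectorStore = createVectorStore(process.env);
```

Because configuration comes from the environment rather than being hard-coded, swapping Qdrant for another backend later only touches this one factory.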
Relevant Files
lib/model.ts
lib/vector-store.ts