This repository was archived by the owner on Sep 7, 2025. It is now read-only.

Make Chat Model & Vector Store Selection Dynamic Based on Environment #2

@babblebey

Description


Currently, the model.ts and vector-store.ts files export a singleton instance of the chat model and vector store, respectively. However, the setup always uses OpenAI's models and Qdrant, which can be costly and unnecessary for local development.

This issue proposes making the exported instances dynamic based on the environment:

  • In production: Use OpenAI's chat model and Qdrant vector store.
  • In development: Use a free local chat model and an in-memory vector store (e.g., MemoryVectorStore). See the Chat Models and Vector Store options that integrate with LangChain.

Benefits

  • Reduces costs during development.
  • Speeds up testing by avoiding external API calls.
  • Makes it easy to swap models and vector stores in the future, which was the main reason for exporting singleton instances of both the model and the vector store in the first place.

Proposed Changes

  • Modify model.ts to select different chat models based on process.env.NODE_ENV.
  • Modify vector-store.ts to switch between Qdrant (production) and an in-memory vector store (development).
  • Use environment variables to configure these settings dynamically.
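
The switch in model.ts could look something like this. This is a minimal sketch, not the repository's actual code: ChatOllama is one assumed free local option (via `@langchain/ollama`), and the `OPENAI_MODEL` / `OLLAMA_MODEL` variable names are hypothetical.

```typescript
// model.ts (sketch): pick a chat-model provider from NODE_ENV.
export type ChatModelProvider = "openai" | "ollama";

// Pure selection logic, kept separate so it is easy to test.
export function selectChatModelProvider(
  env: string | undefined = process.env.NODE_ENV
): ChatModelProvider {
  return env === "production" ? "openai" : "ollama";
}

// The singleton export would then become something like:
//
// import { ChatOpenAI } from "@langchain/openai";
// import { ChatOllama } from "@langchain/ollama";
//
// export const model =
//   selectChatModelProvider() === "openai"
//     ? new ChatOpenAI({ model: process.env.OPENAI_MODEL })
//     : new ChatOllama({ model: process.env.OLLAMA_MODEL });
```

Keeping the environment check in a small pure function means the selection rule can be unit-tested without touching any API keys or network services.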

Suggested Approach

  1. Update model.ts to check the environment and use a different chat model in development.
  2. Update vector-store.ts to use an in-memory vector store when not in production.
  3. Add necessary environment variables (.env file) to configure model and vector store options.
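
Step 2 could follow the same pattern in vector-store.ts. Again a sketch under assumptions: `QdrantVectorStore` (from `@langchain/qdrant`) and `MemoryVectorStore` (from `langchain/vectorstores/memory`) are the assumed LangChain classes, and `QDRANT_URL` / `QDRANT_COLLECTION` are hypothetical variable names for the step 3 `.env` file.

```typescript
// vector-store.ts (sketch): Qdrant in production, in-memory elsewhere.
export type VectorStoreProvider = "qdrant" | "memory";

export function selectVectorStoreProvider(
  env: string | undefined = process.env.NODE_ENV
): VectorStoreProvider {
  return env === "production" ? "qdrant" : "memory";
}

// The singleton export would then become something like:
//
// import { QdrantVectorStore } from "@langchain/qdrant";
// import { MemoryVectorStore } from "langchain/vectorstores/memory";
//
// export const vectorStore =
//   selectVectorStoreProvider() === "qdrant"
//     ? await QdrantVectorStore.fromExistingCollection(embeddings, {
//         url: process.env.QDRANT_URL,
//         collectionName: process.env.QDRANT_COLLECTION,
//       })
//     : new MemoryVectorStore(embeddings);
```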

Relevant Files

  • lib/model.ts
  • lib/vector-store.ts

Metadata

Labels: enhancement (New feature or request)