Build and deploy AI agent workflows in minutes.
Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.
Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.
Upload documents to a vector store and let agents answer questions grounded in your specific content.
Cloud-hosted: sim.ai
Self-hosted:

```bash
npx simstudio
```

Docker must be installed and running on your machine.
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default `3000`) |
| `--no-pull` | Skip pulling latest Docker images |
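For example, to run on a custom port without pulling newer images:

```bash
npx simstudio --port 3100 --no-pull
```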
```bash
# Clone the repository
git clone https://github.com/simstudioai/sim.git

# Navigate to the project directory
cd sim

# Start Sim
docker compose -f docker-compose.prod.yml up -d
```

Access the application at http://localhost:3000/
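To confirm the containers came up cleanly:

```bash
docker compose -f docker-compose.prod.yml ps
```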
Run Sim with local AI models using Ollama - no external APIs required:
```bash
# Start with GPU support (automatically downloads gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
```

Wait for the model to download, then visit http://localhost:3000. Add more models with:
```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```
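You can list the models currently available to the bundled Ollama service with:

```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama list
```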
If you already have Ollama running on your host machine (outside Docker), you need to configure `OLLAMA_URL` to use `host.docker.internal` instead of `localhost`:

```bash
# Docker Desktop (macOS/Windows)
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

# Linux (add extra_hosts or use host IP)
docker compose -f docker-compose.prod.yml up -d  # Then set OLLAMA_URL to your host's IP
```

Why? When running inside Docker, `localhost` refers to the container itself, not your host machine. `host.docker.internal` is a special DNS name that resolves to the host.
For Linux users, you can either:

- Use your host machine's actual IP address (e.g., `http://192.168.1.100:11434`)
- Add `extra_hosts: ["host.docker.internal:host-gateway"]` to the simstudio service in your compose file (see the sketch below)
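A minimal sketch of the second option, written as a compose override (the service name `simstudio` is assumed here; match it to your compose file):

```bash
# Create an override file that maps host.docker.internal to the host gateway
cat > docker-compose.override.yml <<'EOF'
services:
  simstudio:
    environment:
      - OLLAMA_URL=http://host.docker.internal:11434
    extra_hosts:
      - "host.docker.internal:host-gateway"
EOF

# Compose merges the override with the base file
docker compose -f docker-compose.prod.yml -f docker-compose.override.yml up -d
```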
Sim also supports vLLM for self-hosted models with an OpenAI-compatible API:
```bash
# Set these environment variables
VLLM_BASE_URL=http://your-vllm-server:8000
VLLM_API_KEY=your_optional_api_key  # Only if your vLLM instance requires auth
```

When running with Docker, use `host.docker.internal` if vLLM is on your host machine (same as Ollama above).
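Because vLLM exposes an OpenAI-compatible API, you can sanity-check the server before pointing Sim at it (host and key are the placeholders from above):

```bash
# Lists the models served by the vLLM instance
curl -H "Authorization: Bearer your_optional_api_key" \
  http://your-vllm-server:8000/v1/models
```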
- Open VS Code with the Remote - Containers extension
- Open the project and click "Reopen in Container" when prompted
- Run `bun run dev:full` in the terminal, or use the `sim-start` alias
- This starts both the main application and the realtime socket server
Requirements:
- Bun runtime
- PostgreSQL 12+ with pgvector extension (required for AI embeddings)
Note: Sim uses vector embeddings for AI features like knowledge bases and semantic search, which require the pgvector PostgreSQL extension.
- Clone and install dependencies:
```bash
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
```

- Set up PostgreSQL with pgvector:
You need PostgreSQL with the vector extension for embedding support. Choose one option:
Option A: Using Docker (Recommended)
```bash
# Start PostgreSQL with pgvector extension
docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 -d \
  pgvector/pgvector:pg17
```

Option B: Manual Installation
- Install PostgreSQL 12+ and the pgvector extension
- See pgvector installation guide
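Once installed, the extension still has to be enabled in the target database (standard pgvector step; database name taken from the examples in this guide):

```bash
# Enable pgvector in the simstudio database
psql -U postgres -d simstudio -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```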
- Set up environment:

```bash
cd apps/sim
cp .env.example .env  # Configure with required variables (DATABASE_URL, BETTER_AUTH_SECRET, BETTER_AUTH_URL)
```

Update your `.env` file with the database URL:

```bash
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
```
- Set up the database:

First, configure the database package environment:
```bash
cd packages/db
cp .env.example .env
```

Update your `packages/db/.env` file with the database URL:

```bash
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
```

Then run the migrations:

```bash
bunx drizzle-kit migrate --config=./drizzle.config.ts
```

- Start the development servers:
Recommended approach - run both servers together (from project root):
```bash
bun run dev:full
```

This starts both the main Next.js application and the realtime socket server required for full functionality.
Alternative - run servers separately:
Next.js app (from project root):
```bash
bun run dev
```

Realtime socket server (from the `apps/sim` directory, in a separate terminal):
```bash
cd apps/sim
bun run dev:sockets
```

Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the `COPILOT_API_KEY` environment variable in your self-hosted `apps/sim/.env` file to that value
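For example (placeholder value):

```bash
# apps/sim/.env
COPILOT_API_KEY=your_copilot_api_key
```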
Key environment variables for self-hosted deployments (see apps/sim/.env.example for full list):
| Variable | Required | Description |
|---|---|---|
| `DATABASE_URL` | Yes | PostgreSQL connection string with pgvector |
| `BETTER_AUTH_SECRET` | Yes | Auth secret (`openssl rand -hex 32`) |
| `BETTER_AUTH_URL` | Yes | Your app URL (e.g., `http://localhost:3000`) |
| `NEXT_PUBLIC_APP_URL` | Yes | Public app URL (same as above) |
| `ENCRYPTION_KEY` | Yes | Encryption key (`openssl rand -hex 32`) |
| `OLLAMA_URL` | No | Ollama server URL (default: `http://localhost:11434`) |
| `VLLM_BASE_URL` | No | vLLM server URL for self-hosted models |
| `COPILOT_API_KEY` | No | API key from sim.ai for Copilot features |
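Put together, a minimal `apps/sim/.env` for local self-hosting might look like this (all values are placeholders; generate the secrets with `openssl rand -hex 32`):

```bash
DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
BETTER_AUTH_SECRET=your_generated_secret
BETTER_AUTH_URL=http://localhost:3000
NEXT_PUBLIC_APP_URL=http://localhost:3000
ENCRYPTION_KEY=your_generated_key
```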
If you're running Ollama on your host machine and Sim in Docker, change OLLAMA_URL from localhost to host.docker.internal:
```bash
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
```

See Using an External Ollama Instance for details.
Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.
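With the Docker setup from Option A, one way to check readiness (container name as used above):

```bash
# Prints "accepting connections" once the database is ready
docker exec simstudio-db pg_isready -U postgres
```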
If ports 3000, 3002, or 5432 are in use, configure alternatives:
```bash
# Custom ports
NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d
```

Tech stack:

- Framework: Next.js (App Router)
- Runtime: Bun
- Database: PostgreSQL with Drizzle ORM
- Authentication: Better Auth
- UI: Shadcn, Tailwind CSS
- State Management: Zustand
- Flow Editor: ReactFlow
- Docs: Fumadocs
- Monorepo: Turborepo
- Realtime: Socket.io
- Background Jobs: Trigger.dev
- Remote Code Execution: E2B
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ by the Sim Team


