A powerful Rails engine for managing AI prompts with version control, A/B testing, and seamless LLM integration. PromptEngine provides a centralized admin interface where teams can create, version, test, and optimize their AI prompts without deploying code changes.
PromptEngine is under active development ahead of its first official release. You can use it today by pointing your Gemfile at the main branch of this repo, but as we make changes we will not guarantee backward compatibility. Use at your own risk for now; a stable release should be ready within the next week.
- Live Demo App
- Documentation
- Version Control: Track every change to your prompts with automatic versioning
- A/B Testing: Test different prompt variations with built-in evaluation tools
- No Deploy Required: Update prompts through the admin UI without code changes
- LLM Agnostic: Works with OpenAI, Anthropic, and other providers
- Type Safety: Automatic parameter detection and validation
- Team Collaboration: Centralized prompt management for your entire team
- Production Ready: Battle-tested with secure API key storage
- Smart Prompt Management: Create and organize prompts with slug-based identification
- Version Control: Automatic versioning with one-click rollback
- Variable Detection: Auto-detects `{{variables}}` and creates typed parameters
- Playground: Test prompts with real AI providers before deploying
- Secure: Encrypted API key storage using Rails encryption
- Modern API: Object-oriented design with direct LLM integration
Add this line to your application's Gemfile:
```ruby
gem "prompt_engine"
```

Or run:

```shell
$ bundle add prompt_engine
```

And then execute:

```shell
$ bundle install
# IMPORTANT: You must install migrations before running db:migrate
$ rails prompt_engine:install:migrations
$ rails db:migrate
$ rails prompt_engine:seed # Optional: adds sample prompts
```

Note: PromptEngine migrations are NOT automatically loaded. You must explicitly run `rails prompt_engine:install:migrations` before running `rails db:migrate`.
In your config/routes.rb:
```ruby
Rails.application.routes.draw do
  mount PromptEngine::Engine => "/prompt_engine"
  # your other routes...
end
```

PromptEngine provides flexible authentication options to secure your admin interface. Since prompt templates may contain sensitive business logic, we strongly recommend configuring authentication for production environments.
```ruby
# config/routes.rb
authenticate :user, ->(u) { u.admin? } do
  mount PromptEngine::Engine => "/prompt_engine"
end
```

```ruby
# config/initializers/prompt_engine.rb
PromptEngine::Engine.middleware.use(Rack::Auth::Basic) do |username, password|
  ActiveSupport::SecurityUtils.secure_compare(
    Rails.application.credentials.prompt_engine_username, username
  ) & ActiveSupport::SecurityUtils.secure_compare(
    Rails.application.credentials.prompt_engine_password, password
  )
end
```

```ruby
# config/initializers/prompt_engine.rb
ActiveSupport.on_load(:prompt_engine_application_controller) do
  before_action :authenticate_admin!

  private

  def authenticate_admin!
    redirect_to main_app.root_path unless current_user&.admin?
  end

  def current_user
    # Your app's current user method
    main_app.current_user
  end
end
```

If using Rails 8's built-in authentication generator:
```ruby
# config/initializers/prompt_engine.rb
ActiveSupport.on_load(:prompt_engine_application_controller) do
  include Authentication # Rails 8's authentication concern
  before_action :require_authentication
end
```

Visit `/prompt_engine` in your browser to access the admin interface where you can:
- Create and manage prompts
- Test prompts in the playground
- View version history
- Monitor prompt performance
```ruby
# Render a prompt with variables (defaults to active prompts only)
rendered = PromptEngine.render("customer-support",
  { customer_name: "John", issue: "Can't login to my account" }
)

# Access rendered content
rendered.content         # => "Hello John, I understand you're having trouble..."
rendered.system_message  # => "You are a helpful customer support agent..."
rendered.model           # => "gpt-4"
rendered.temperature     # => 0.7

# Access parameter values - see docs/VARIABLE_ACCESS.md for details
rendered.parameters                 # => {"customer_name" => "John", "issue" => "Can't login..."}
rendered.parameter(:customer_name)  # => "John"

# Direct integration with OpenAI
client = OpenAI::Client.new(access_token: ENV["OPENAI_API_KEY"])
response = rendered.execute_with(client)

# Or with Anthropic
client = Anthropic::Client.new(access_token: ENV["ANTHROPIC_API_KEY"])
response = rendered.execute_with(client)

# Override model settings at runtime
rendered = PromptEngine.render("email-writer",
  { subject: "Welcome to our platform" },
  options: { model: "gpt-4-turbo", temperature: 0.9 }
)

# Load a specific version
rendered = PromptEngine.render("onboarding-email",
  { user_name: "Sarah" },
  options: { version: 3 }
)

# Render prompts with different statuses (defaults to 'active')
# Useful for testing drafts or accessing archived prompts
rendered = PromptEngine.render("new-feature",
  { feature_name: "AI Assistant" },
  options: { status: "draft" } # Can be 'draft', 'active', or 'archived'
)
```

- Create Prompts: Use the admin UI to create prompts with `{{variables}}`
- Auto-Detection: PromptEngine automatically detects variables and creates parameters
- Version Control: Every save creates a new version automatically
- Test & Deploy: Test in the playground, then use in your application
- Monitor: Track usage and performance through the dashboard
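At its core, the rendering step in this workflow interpolates your variables into the stored template. A minimal, self-contained sketch of that interpolation (an illustration, not the gem's actual code):

```ruby
# Sketch of what PromptEngine.render does with a stored template:
# replace each {{variable}} placeholder with the supplied value.
# Illustrative only; not the gem's real implementation.
def render_template(template, variables)
  template.gsub(/\{\{\s*(\w+)\s*\}\}/) do
    variables.fetch(Regexp.last_match(1).to_sym).to_s
  end
end

template = "Hello {{customer_name}}, sorry to hear about: {{issue}}"
render_template(template, customer_name: "John", issue: "login failure")
# => "Hello John, sorry to hear about: login failure"
```

Using `fetch` (rather than `[]`) makes a missing variable raise immediately, which mirrors the kind of validation the typed parameters provide.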
Renders a prompt template with the given variables.
Parameters:
- `slug` (String): The unique identifier for the prompt
- `variables` (Hash): Variables to interpolate in the prompt (optional positional argument)
- `options:` (Hash): Rendering options (optional keyword argument)
  - `status`: The status to filter by (defaults to 'active')
  - `model`: Override the default model
  - `temperature`: Override the default temperature
  - `max_tokens`: Override the default max tokens
  - `version`: Load a specific version number
Returns: PromptEngine::RenderedPrompt instance
- `content`: The rendered prompt content
- `system_message`: The system message (if any)
- `model`: The AI model to use
- `temperature`: The temperature setting
- `max_tokens`: The max tokens setting
- `to_openai_params`: Convert to OpenAI API format
- `to_ruby_llm_params`: Convert to RubyLLM/Anthropic format
- `execute_with(client)`: Execute with an LLM client
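As a rough illustration of the conversion helpers, `to_openai_params` presumably maps a rendered prompt onto the OpenAI chat-completions request shape. The hand-built sketch below shows that shape; the exact keys are assumptions based on the OpenAI API, not the gem's documented output:

```ruby
# Hand-built sketch of the hash a RenderedPrompt's to_openai_params
# might produce. Field names follow the OpenAI chat completions API;
# they are assumptions, not the gem's guaranteed output.
def to_openai_params(rendered)
  {
    model: rendered[:model],
    temperature: rendered[:temperature],
    max_tokens: rendered[:max_tokens],
    messages: [
      { role: "system", content: rendered[:system_message] },
      { role: "user", content: rendered[:content] }
    ].reject { |m| m[:content].nil? } # omit the system message if absent
  }
end

params = to_openai_params(
  model: "gpt-4", temperature: 0.7, max_tokens: 500,
  system_message: "You are a helpful support agent.",
  content: "Hello John, I understand you're having trouble..."
)
# params can then be passed to an OpenAI client's chat method
```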
We welcome contributions! Here's how you can help:
1. Fork the repository

2. Clone your fork:

   ```shell
   git clone https://github.com/YOUR_USERNAME/prompt_engine.git
   cd prompt_engine
   ```

3. Install dependencies:

   ```shell
   bundle install
   ```

4. Run the setup script:

   ```shell
   bundle exec rails setup
   ```

5. Run the tests:

   ```shell
   bundle exec rspec
   ```

6. Start the development server:

   ```shell
   bundle exec rails server
   ```

7. Create a feature branch:

   ```shell
   git checkout -b feature/your-feature-name
   ```

8. Make your changes and ensure tests pass:

   ```shell
   bundle exec rspec
   bundle exec rubocop
   ```

9. Commit your changes:

   ```shell
   git commit -m "Add your feature"
   ```

10. Push to your fork:

    ```shell
    git push origin feature/your-feature-name
    ```

11. Create a Pull Request
- Write tests for new features
- Follow Rails best practices
- Use conventional commit messages
- Update documentation as needed
- Be respectful in discussions
PromptEngine follows Rails engine conventions with a modular architecture:
- Models: Prompt, PromptVersion, Parameter, EvalSet, TestCase
- Services: VariableDetector, PlaygroundExecutor, PromptRenderer
- Admin UI: Built with Hotwire, Stimulus, and Turbo
- API: Object-oriented design with RenderedPrompt instances
- Proper Eval Sandbox and Testing
- Logging and Observability of Prompt Usage
- Cost tracking and optimization
- Innovent Capital - Pioneering AI innovation in financial services
The gem is available as open source under the terms of the MIT License.
- Live Demo App
- Documentation
- Issue Tracker
- Discussions
- Email Support
Built with ❤️ by Avi.nyc