LiteChat is a modular, extensible, and privacy-focused AI chat application designed for power users, developers, and teams. It supports multiple AI providers, advanced prompt engineering, project-based organization, and powerful developer features like virtual file systems, Git integration, and a comprehensive modding system.
## ✨ Key Features
### 🔒 Privacy-First Architecture
- 100% Client-Side: All data stored locally in your browser using IndexedDB
- No Server Dependencies: Core functionality requires no backend services
- Full Data Control: Export/import your entire configuration or specific data types (conversations, projects, settings, API keys, providers, rules, tags, mods, sync repos, MCP servers, prompt templates, and agents).
### 🤖 Multi-Provider AI Support
- OpenRouter: Access to 300+ models through unified API
- OpenAI: GPT-4x, o3-mini, o4-mini, o3, o3-pro, with reasoning and tool support
- Google Gemini: Gemini Pro models with multimodal capabilities
- Anthropic Claude: Sonnet, Opus, and more
- Local Providers: Ollama, LMStudio, and other OpenAI-compatible APIs
- Advanced Features: Streaming, reasoning, tool execution, and more
### 🌐 Everyone's Favorite Features
- Send text files to any LLM: Even models that claim not to support file uploads
- Multimodal support: If a model supports a file type, you can send that file to it
- Auto title generation: The AI will generate a title for your conversation
- Conversation export: Export your conversation to a file
- Message regeneration: When the model falls on its face, you can regenerate the message
- Conversation Sync: Sync conversations with Git repositories for a poor man's, no-frills sync solution
- Prompt Library: Create, manage, and use reusable prompt templates
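Reusable prompt templates typically substitute named variables into a fixed prompt body. A minimal sketch of how that substitution might work — `renderTemplate` and the `{{name}}` placeholder syntax are illustrative assumptions, not LiteChat's actual template API:

```typescript
// Hypothetical sketch of prompt-template variable substitution.
// The {{name}} placeholder syntax is an assumption for illustration.
function renderTemplate(
  template: string,
  vars: Record<string, string>
): string {
  // Replace each {{name}} placeholder with its value from vars;
  // unknown placeholders are left intact so missing values stay visible.
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match
  );
}
```

Leaving unknown placeholders untouched (rather than replacing them with an empty string) makes it obvious when a template is used with an incomplete variable set.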
### 💻 Power User Features
- Workflow Automation: Create, save, and execute multi-step AI workflows with automated sequences, variable mapping, and intelligent orchestration
- Agents: Create, manage, and use powerful AI agents and their associated tasks.
- Tool System: AI can read/write files, execute Git commands, and more, including tools from MCP servers.
- Text Triggers: Powerful inline commands for prompt automation (e.g., `@.rules.auto;`, `@.tools.activate vfs_read_file;`, `@.params.temp 0.7;`)
- Race: Send the same prompt to multiple models at once and compare the results
- Mermaid Diagrams: Real-time diagram rendering with full Mermaid.js support
- Response editor: edit the response after it has been generated to remove the fluff and save on tokens
- Rules: Add rules to guide the AI's behavior; tags bundle related rules together
- Regenerate with: regenerate the message with a different model
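Text triggers can be combined inline in a normal prompt. A sketch of what a triggered prompt might look like, using the trigger commands listed above:

```text
@.rules.auto; @.tools.activate vfs_read_file; @.params.temp 0.7;
Read src/main.ts from the virtual file system and summarize what it does.
```

Presumably the `@.…;` commands are consumed before the prompt is sent, so the model only sees the remaining text.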
### 🛠️ Developer-Focused Features
- Code Block Enhancements: Filepath syntax, individual downloads, ZIP exports
- Codeblock editor: Edit code block content directly in the browser and use it in follow-up chats!
- Virtual File System: Browser-based filesystem with full CRUD operations
- Git Integration: Clone, commit, push, pull directly in the browser
- Structured Output: Ask the AI to return structured output, such as JSON, a table, or a list (untested ^^')
- Formedible codeblock: LLMs can use the formedible codeblock to create a form that interacts with the user in a deterministic manner, using the [Formedible](https://github.com/DimitriGilbert/Formedible) library.
> If you have 1000 LoC to spare, you can create your own custom codeblock renderer; see [FormedibleBlockRendererModule](src/controls/modules/FormedibleBlockRendererModule.ts) for an example.
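Structured output is only useful if it is validated before use, since a model can return malformed JSON or the wrong shape. A hedged sketch of that validation step — the `TaskList` shape and `parseTaskList` helper are illustrative, not a LiteChat API:

```typescript
// Hypothetical example: validating a model's structured (JSON) output
// before acting on it. The TaskList shape is an assumption.
interface TaskList {
  tasks: string[];
}

function parseTaskList(raw: string): TaskList | null {
  try {
    const data = JSON.parse(raw);
    // Accept only an object with a tasks array of strings.
    if (
      typeof data === "object" &&
      data !== null &&
      Array.isArray((data as any).tasks) &&
      (data as any).tasks.every((t: unknown) => typeof t === "string")
    ) {
      return data as TaskList;
    }
  } catch {
    // Fall through: the response was not valid JSON.
  }
  return null; // Reject anything that doesn't match the expected shape.
}
```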
### 📁 Project Organization
- Hierarchical Projects: Organize conversations in nested project structures
- Per-Project Settings: Custom models, prompts, and configurations
- Rules & Tags: Reusable prompt engineering with organization
### 🔌 MCP (Model Context Protocol) Integration
- HTTP and Stdio MCP Servers: Connect to external MCP servers via HTTP Server-Sent Events, HTTP Stream Transport, and Stdio (via [node ./bin/mcp-bridge.js](./bin/mcp-bridge.js))
- Automatic Tool Discovery: Tools from MCP servers are automatically available to the AI
- Graceful Error Handling: Configurable retry logic with exponential backoff
- Connection Management: Real-time status monitoring and manual retry capabilities
- Secure Authentication: Support for custom headers and API key authentication
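"Exponential backoff" means doubling the wait between retries so a flaky MCP server isn't hammered with reconnect attempts. A minimal sketch of the technique — the function names and default values here are illustrative, not LiteChat's actual implementation:

```typescript
// Illustrative sketch of retry with exponential backoff; names and
// defaults are assumptions, not LiteChat's real API.
function computeBackoffDelay(
  attempt: number, // 0-based retry attempt
  baseMs = 500,    // initial delay
  maxMs = 30_000   // cap so delays don't grow unbounded
): number {
  // Double the delay on each attempt: base * 2^attempt, capped at maxMs.
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Wait before the next attempt: 500ms, 1s, 2s, ...
        await new Promise((r) => setTimeout(r, computeBackoffDelay(attempt)));
      }
    }
  }
  throw lastError;
}
```

Capping the delay keeps the "manual retry" path responsive even after many consecutive failures.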
### ⚙️ Extensibility & Customization
- Modding System: Safe, sandboxed extension API for custom functionality
- Control Modules: Modular UI components with clean separation of concerns
- Event-Driven Architecture: Decoupled communication for maintainability
- Build-Time Configuration: Ship with pre-configured setups for teams/demos
- Custom Themes: Full visual customization with CSS variables
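Since theming is driven by CSS variables, a custom theme can be as small as a handful of overrides. A minimal sketch — the variable names below are hypothetical; inspect the shipped theme for the ones LiteChat actually exposes:

```css
/* Hypothetical theme override; the actual variable names may differ —
   check LiteChat's shipped stylesheet for the real ones. */
:root {
  --background: #0f1117;
  --foreground: #e6e6e6;
  --primary: #7c3aed;
}
```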