Local AI Chat
A React chat application that runs AI locally using Ollama. Open source and completely free.

Watch Demo
Philosophy
This application runs AI locally on your machine using Ollama. It's open source and completely free to use.
- Local: Everything runs on your hardware
- Open Source: Code is transparent and modifiable
- Cost-Free: No subscriptions or API fees
Quick Start
- Install Ollama: download and install it from the official website
  - macOS/Windows/Linux: visit https://ollama.com and download the installer for your operating system
  - Follow the installation instructions for your platform
- Install a model (command shown below)
- Run the application (commands shown below)
- Open http://localhost:3000 in your browser
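The model and application steps typically look like this, assuming a standard npm-based setup with the scripts listed under Development:
ollama pull llama3.1:8b    # download the recommended model
npm install                # install project dependencies
npm start                  # start the development server on http://localhost:3000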
Features
- ✅ Real-time chat with local AI
- ✅ Conversation history
- ✅ Multiple chat sessions
- ✅ Ollama status monitoring
- ✅ Privacy-focused (all data stays local)
Requirements
- Hardware: MacBook Pro or an equivalent computer with sufficient RAM and processing power to run llama3.1:8b locally
- Software:
  - Node.js (v19 or higher)
  - Ollama installed locally
- Recommended: 16GB+ RAM for optimal performance with llama3.1:8b
Supported Models
Any model available through Ollama should work; llama3.1:8b is recommended.
Configuration
Edit .env.local to customize:
REACT_APP_OLLAMA_URL=http://localhost:11434
PORT=3000
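Variables prefixed with REACT_APP_ are read when the development server starts, so restart npm start after editing .env.local. If port 3000 is already taken, the port can also be overridden at launch (macOS/Linux syntax shown; the port number here is just an example):
PORT=3001 npm start        # run the dev server on an alternate port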
Troubleshooting
Ollama Not Connected?
- Ensure ollama serve is running
- Check that port 11434 is not blocked (see the check below)
- Verify models are installed with ollama list
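A quick way to confirm that Ollama is reachable and has models installed is to query its API directly (assuming the default port):
curl http://localhost:11434/api/tags    # should return the installed models as JSON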
Performance Issues?
- Use smaller models like llama3.1:8b
- Close other applications to free up memory
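To see which models are currently loaded and how much memory they are using, recent Ollama versions provide:
ollama ps    # list running models and their memory usage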
Development
npm start # Development server
npm run build # Production build
npm test # Run tests
License
MIT License - See LICENSE file for details.