OllamaTalk is a fully local, cross-platform AI chat application that runs seamlessly on macOS, Windows, Linux, the web, Android, and iOS. All AI processing happens entirely on your device, ensuring a secure and private chat experience without relying on external servers or cloud services. This design gives you complete control over your data while delivering a unified experience across all major platforms.
- macOS
- Windows
- Linux
- Web
- Android
- iOS
- Download and install Ollama from the official download page.
- Browse and download your preferred models from the Ollama Model Hub. Examples: deepseek-r1, llama, mistral, qwen, gemma2, llava, and more.
```shell
ollama serve                              # defaults to http://localhost:11434
OLLAMA_HOST=0.0.0.0:11434 ollama serve    # listen on all interfaces so mobile devices can connect
```
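To point a mobile device at the server you need the host machine's local network IP. A minimal sketch of one way to discover it (the `local_ip` helper is illustrative, not part of OllamaTalk), using Python's standard `socket` module:

```python
import socket

def local_ip() -> str:
    """Best-effort discovery of this machine's LAN IPv4 address.

    "Connecting" a UDP socket sends no packets; it only asks the OS
    which local interface would be used for that route, and we read
    that interface's address back.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works; nothing is sent
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # fallback when no network route is available
    finally:
        s.close()

print(f"Mobile devices should use: http://{local_ip()}:11434")
```

The same address can also be found with `ifconfig`/`ipconfig`; the snippet just automates the lookup.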
Mobile Device Configuration:
When using OllamaTalk on mobile devices (Android/iOS):
- Start the Ollama server with `OLLAMA_HOST=0.0.0.0:11434 ollama serve` so it accepts connections from other devices on the network
- In the OllamaTalk mobile app, navigate to settings
- Enter the server address as `http://<server-ip>:11434`
- Replace `<server-ip>` with your server's local network IP
Network Requirements:
- Server and mobile device must be on the same local network
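Before digging into app settings, it can help to confirm the server port is actually reachable over the network. A small sketch (the `is_reachable` helper is hypothetical, not part of OllamaTalk) using a plain TCP connection attempt:

```python
import socket

def is_reachable(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a placeholder LAN address -- substitute your server's IP:
# print(is_reachable("192.168.1.50"))
```

If this returns False from another machine on the same network, check that the server was started with `OLLAMA_HOST=0.0.0.0:11434` and that no firewall is blocking port 11434.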
- Visit the OllamaTalk Releases page.
- Download the latest version for your platform.
- Open the installed application.
- Connect to your local Ollama server.
- Start chatting with AI.
Note: Ensure that the Ollama server is running before launching the application.
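The app talks to Ollama's documented REST API, so you can verify the server independently of OllamaTalk. A minimal sketch of a non-streaming chat request against the `/api/chat` endpoint, using only Python's standard library (the model name "mistral" is just an example; use any model you have pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # or http://<server-ip>:11434 for a remote server

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming POST request for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete JSON response instead of chunks
    }
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("mistral", "Hello!")
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        reply = json.loads(resp.read())
        print(reply["message"]["content"])  # assistant's answer
except OSError:
    print("Ollama server is not reachable -- is `ollama serve` running?")
```

If this prints a model reply, the server is ready and OllamaTalk should connect without issues.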
This project is licensed under the MIT License.
- Report bugs and suggest features through GitHub Issues.
- Contributions via Pull Requests are welcome!