Quick Start
Get AionUi running in a few steps. For full details see the Complete Installation Guide on the wiki.
1. Check System Requirements
Ensure your system meets the minimum requirements: macOS 10.15+, Windows 10+, or Linux; 4 GB+ RAM recommended; 500 MB+ free disk space.
2. Download & Install
Download AionUi for your platform from GitHub Releases, or on macOS use:
brew install aionui
3. Configure AI Service
Open AionUi and add an AI provider. AionUi includes Gemini CLI out of the box. You can also configure:
- Gemini – Google account or API key
- OpenAI, Claude, Qwen – API keys
- Ollama, LM Studio – Local API endpoint (e.g. http://localhost:11434/v1)
Full details: LLM configuration guide.
4. Start Using
Start chatting. Conversations are saved locally. You can open multiple sessions, use the preview panel for generated files, and try file management or Excel processing.
Optional Next Steps
- Multi-agent mode – Use Claude Code, Codex, Qwen Code, etc. in one GUI
- WebUI – Access AionUi from other devices or over the internet
- MCP tools – Connect Model Context Protocol servers
- Image generation – Set up AI image models
- Scheduled tasks – Automate recurring AI tasks
First-hour checklist
- Send a trivial prompt to confirm the provider responds.
- Open a second chat tab to verify multi-session behavior.
- Drop a small text file into the workspace and ask the model to summarize it; this exercises file handling.
- Skim FAQ for platform-specific gotchas (macOS permissions, proxy settings).
If something fails
- Blank model list – Revisit the LLM configuration; confirm your API keys or the local server URL.
- Install blocked – On macOS, allow the app in Privacy & Security; on Windows, SmartScreen may require an explicit override.
- Timeouts – Check VPN, corporate proxy, or firewall rules blocking provider APIs.
Still stuck? Ask in the Community with your OS, app version, and the exact error text.