LLM Configuration

Set up AI models in AionUi: Gemini, OpenAI, Claude, Qwen, and local models like Ollama and LM Studio. Full details on the Wiki: LLM Configuration.

Supported Platforms

  • Gemini – Google account or API key. AionUi includes built-in Gemini CLI.
  • OpenAI – API key from OpenAI.
  • Claude (Anthropic) – API key.
  • Qwen – API key.
  • Ollama / LM Studio (Custom) – Local API. Select Custom platform and set base URL (e.g. http://localhost:11434/v1 for Ollama).
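Because local runtimes expose an OpenAI-compatible API, a request to them has the same shape as one to OpenAI, just with a different base URL. The sketch below only builds the URL and JSON body (no network call); the base URL and model name ("llama3") are assumptions for illustration, not AionUi defaults.

```python
def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Build the URL and JSON body for an OpenAI-style chat completion."""
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Pointing at a local Ollama server via the Custom platform base URL:
req = chat_request("http://localhost:11434/v1", "llama3", "Hello")
print(req["url"])  # http://localhost:11434/v1/chat/completions
```

Swapping the base URL for a cloud provider's endpoint is the only change needed on the client side, which is what makes the Custom platform setting work for both Ollama and LM Studio.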

Gemini 3 Subscription

AionUi can detect subscribed users and recommend advanced Gemini models when appropriate.

Switching Models

Switch between models without leaving the interface: add multiple providers in Settings, then choose a model per conversation or set one globally.

Choosing a Provider

  • Cloud APIs – Best quality and lowest ops burden; you send data to the vendor under their terms.
  • Local (Ollama / LM Studio) – Higher latency on modest hardware but keeps prompts on-machine; set Custom platform with an OpenAI-compatible base URL such as http://localhost:11434/v1.
  • Mixed – Use fast local models for triage and cloud models for final drafts; swap per session.
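The mixed setup above amounts to a simple routing rule. This sketch is a hypothetical illustration, not AionUi behavior: the 200-character threshold and the provider labels are assumptions you would tune to your own workloads.

```python
def pick_provider(prompt: str, final_draft: bool = False) -> str:
    """Route short triage prompts locally; send final drafts to the cloud."""
    if final_draft or len(prompt) > 200:
        return "cloud"   # e.g. Gemini / OpenAI / Claude
    return "local"       # e.g. Ollama via the Custom platform

print(pick_provider("Summarize this diff"))                        # local
print(pick_provider("Write the release notes", final_draft=True))  # cloud
```

In AionUi the equivalent is done manually, by switching the provider per session rather than through code.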

API Key Hygiene

Store keys in the OS keychain or in environment variables where supported, and rotate any key immediately if it leaks. Never commit keys to git. On team laptops, prefer each vendor's org SSO flow when available.
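Reading the key from the environment keeps it out of config files and source control. A minimal sketch, assuming the common `OPENAI_API_KEY` variable name; the helper itself is illustrative, not an AionUi API.

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment; fail loudly if it is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it in your shell or keychain")
    return key
```

Failing loudly at startup beats a silent fallback to a hard-coded key, which is how keys end up committed.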

Cost & Limits

Each provider bills differently (per token, per seat, or subscription). Watch rate limits when running scheduled tasks or high-volume multi-agent workloads. Local models shift cost to electricity and GPU/CPU time instead.
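For per-token billing, a back-of-the-envelope estimate helps size scheduled or multi-agent workloads before running them. The per-million-token prices below are placeholders, not real vendor pricing; check each provider's current rate card.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price_per_m: float, out_price_per_m: float) -> float:
    """Dollar cost of one request at per-million-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# e.g. 10k input + 2k output tokens at hypothetical $1 / $4 per million:
print(round(estimate_cost(10_000, 2_000, 1.0, 4.0), 4))  # 0.018
```

Multiply by requests per day to compare a cloud bill against the electricity and hardware cost of running the same load on a local model.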

Related