API settings

Set up access to provider API

Overview

Geeps offers four built-in providers:

  • OpenAI (GPT, DALL·E, GPT Image, o-series models) - the default provider. Supports both the Chat Completions and Responses APIs.
  • Anthropic (Claude Opus, Sonnet, and Haiku)
  • Google (Gemini) - via OpenAI compatibility mode
  • OpenRouter - hundreds of models from various providers, including open-source ones. Many of them are free to use!

Custom providers

In addition to this, you can add custom providers that are compatible with the OpenAI Chat Completions API. Click / tap Add Custom Provider at the bottom to add your own provider. Responses API / Messages API support for custom providers is planned for the future.
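"Compatible with the Chat Completions API" means the provider accepts OpenAI-shaped requests at a `/chat/completions` path. A minimal sketch of such a request body (the base URL and model id below are placeholders, not real endpoints):

```python
import json

# Hypothetical base URL for a custom provider; requests are sent to
# BASE_URL + "/chat/completions".
BASE_URL = "https://api.example.com/v1"

# Minimal Chat Completions request body, as defined by the OpenAI API.
request = {
    "model": "my-model",  # placeholder model id
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

print(json.dumps(request))
```

If your provider accepts a body like this, it should work as a custom provider.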

Default provider

Select the default provider that will be used for new conversations. You can also switch between providers by long-pressing (iOS) or clicking and holding (macOS) the compose button.

Provider settings

Configure settings for built-in providers or add custom providers.

API key

The API keys are saved in the keychain and are not synced with iCloud. Most other settings are synced across your devices.

You can enable or disable each provider individually, except for OpenAI, which is enabled by default.

API type (OpenAI only)

For OpenAI you can choose between the Chat Completions API and the Responses API. OpenAI recommends the Responses API for newer models (e.g. GPT-5 and later).
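The two APIs take the same model but differently shaped requests. A sketch of the same prompt in both forms (model id is illustrative):

```python
# Chat Completions: a list of role-tagged messages,
# sent to the /v1/chat/completions endpoint.
chat_completions_request = {
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "Hi"}],
}

# Responses API: an "input" field instead of "messages",
# sent to the /v1/responses endpoint.
responses_request = {
    "model": "gpt-5",
    "input": "Hi",
}

print(list(chat_completions_request), list(responses_request))
```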

Identity

Custom providers: set the display name and symbol to be used for this provider. The symbol appears on the compose button and in context menus.

Built-in providers: you can adjust the display name of the provider, but the symbol is fixed.

Model

The Model section lets you configure the default model to use with this provider. You can load models directly from the API as long as the /v1/models endpoint is supported.

Choose a model from the dropdown, or enter one manually in the text field below. Use the exact name from the provider's API documentation, e.g. gpt-5-chat, claude-sonnet-4-5, or moonshotai/kimi-k2.5 (for OpenRouter).

For OpenRouter you can filter the model list to show only free models.

Default prompt

Choose the default prompt that will be used for all new conversations with this provider. This prompt (also known as a system message or system instructions) is prepended to every chat. You can also replace the prompt for each conversation individually: tap the + button in the Chat view to choose a different prompt for that conversation.

Manage your prompts in the prompt library.
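"Prepended to every chat" means the request's message list starts with your prompt as the system message, ahead of the conversation history. A sketch (prompt text and messages are illustrative):

```python
# Example default prompt, as it might be stored in the prompt library.
default_prompt = "You are a helpful assistant."

# The conversation so far.
user_turns = [{"role": "user", "content": "Summarize this article."}]

# The default prompt is sent as the system message, ahead of the chat history.
messages = [{"role": "system", "content": default_prompt}] + user_turns

print(messages[0]["role"])  # system
```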

Model parameters

Configure the default parameters for the model. You can set temperature, max output tokens, top p, top k (for Anthropic), frequency penalty and presence penalty (for OpenAI Chat Completions).

OpenAI: set the Reasoning effort and Verbosity parameters for newer models (e.g. GPT-5 and later). These are available in the Responses API (and on OpenRouter when used with OpenAI models).

Anthropic: enable Adaptive Thinking and set the level. Supported with Sonnet / Opus 4.6 and later.

These parameters are used for all conversations with this provider unless overridden in the individual chat settings. To override parameters for a specific conversation, tap the + button in the chat view and select "Model parameters".
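The override behavior can be pictured as a simple merge: per-conversation settings win over provider defaults, and anything you don't override keeps its provider-level value (parameter values below are illustrative, not the app's defaults):

```python
# Provider-level defaults (illustrative values).
provider_defaults = {"temperature": 0.7, "max_output_tokens": 1024, "top_p": 1.0}

# A per-conversation override, set via "Model parameters" in the chat view.
chat_overrides = {"temperature": 0.2}

# Overrides win over provider defaults for this conversation only;
# untouched parameters fall back to the provider values.
effective = {**provider_defaults, **chat_overrides}

print(effective["temperature"])  # 0.2
```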

API host

For OpenAI you can override the default API host by entering a custom base URL or a full custom endpoint. This is useful if you are using a proxy or a custom deployment of the OpenAI API (e.g. Azure). For more information, refer to this guide.

A full custom endpoint is only supported for the Chat Completions API.
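The difference between the two forms: a base URL has the request path appended to it, while a full endpoint is used verbatim. A sketch of that composition (URLs are placeholders; this mirrors the behavior described above, not the app's actual code):

```python
def resolve_endpoint(host_override: str) -> str:
    """Illustrative: a base URL gets the Chat Completions path appended;
    a full endpoint (already ending in /chat/completions) is used as-is."""
    trimmed = host_override.rstrip("/")
    if trimmed.endswith("/chat/completions"):
        return trimmed  # full custom endpoint, use verbatim
    return trimmed + "/chat/completions"  # base URL, append the path

print(resolve_endpoint("https://proxy.example.com/v1"))
# https://proxy.example.com/v1/chat/completions
```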

Load models

Use this to load the model list directly from the API.

All built-in providers are supported, but some custom providers may not fully support the /v1/models endpoint. If you encounter issues, let me know which provider you are using and I will try to fix it. In the meantime, you can enter the model manually.

Loaded models are cached locally, so you don't need to load them every time, but you can refresh the list at any time to get the latest models. Make sure you enter the API key before loading models; otherwise you may get an error.
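For reference, an OpenAI-style /v1/models response is a JSON list of model objects, and a loader only needs their id fields. A sketch with example data (the model ids here are illustrative):

```python
import json

# Example shape of a /v1/models response (OpenAI-style, abbreviated).
response_body = json.dumps({
    "object": "list",
    "data": [
        {"id": "gpt-5-chat", "object": "model"},
        {"id": "gpt-4o-mini", "object": "model"},
    ],
})

# Extract just the model ids for the dropdown.
models = [m["id"] for m in json.loads(response_body)["data"]]
print(models)  # ['gpt-5-chat', 'gpt-4o-mini']
```

A custom provider whose /v1/models response deviates from this shape is what causes loading to fail; entering the model name manually sidesteps the endpoint entirely.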

Test API key

Use this to test your API connection and key. If the key is valid, you will see a success message; if not, you will see an error message (refer to the provider's API documentation for an explanation of the error code).

Please note! You need a positive balance on your account for the test to succeed: it sends a short test message to the model and uses a few tokens.
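Behind the scenes a key test is just a tiny request: a valid key returns HTTP 200, while an invalid one typically returns HTTP 401 with an error object. A sketch of interpreting such a result (the error body below is an example of the OpenAI-style shape, not a live response):

```python
import json

# Example error body returned by OpenAI-style APIs for a bad key (HTTP 401).
error_body = json.dumps({
    "error": {
        "message": "Incorrect API key provided",
        "type": "invalid_request_error",
        "code": "invalid_api_key",
    }
})

def describe_result(status: int, body: str) -> str:
    """Turn an HTTP status + body into a user-facing test result."""
    if status == 200:
        return "Key is valid"
    err = json.loads(body).get("error", {})
    return f"Error {status}: {err.get('message', 'unknown')}"

print(describe_result(401, error_body))
# Error 401: Incorrect API key provided
```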