Reasoning
Enable reasoning tokens
These tokens offer insight into the model’s reasoning process, providing a transparent view of its thought steps. Because reasoning tokens are counted as output tokens, they are billed accordingly.
To enable reasoning, specify reasoning_effort with one of the supported values in your API request.
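As a minimal sketch, a chat-completion request body with reasoning enabled might look like this (field names follow the text above; the exact endpoint and authentication are not shown, and the model name is just an example):

```python
import json

# Illustrative request body for an OpenAI-compatible chat completion
# with reasoning enabled via the reasoning_effort field.
payload = {
    "model": "openai/o3-mini",
    "reasoning_effort": "high",  # one of the supported effort values
    "messages": [
        {"role": "user", "content": "How many primes are below 100?"}
    ],
}

body = json.dumps(payload)  # ready to send as the POST body
```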
Notes
- OpenAI does NOT share the actual reasoning tokens. You will not see them in the response.
- Deepseek reasoning models enable reasoning automatically; you don’t need to specify anything in the request.
- When using Deepseek or Anthropic, the reasoning content in the response will be under ‘reasoning_content’.
Reasoning effort values
Anthropic expects a specific number that sets the upper limit of thinking tokens. The limit must be less than the specified max tokens value.
OpenAI models expect one of the following ‘effort’ values:
- low
- medium
- high
Google Gemini expects a specific number when used via Vertex AI, and supports OpenAI’s reasoning effort values via Google AI Studio (its OpenAI-compatible API).
Requesty introduces a new ‘effort’ value, ‘max’, to request the upper limit for models that support budgets.
When using OpenAI via Requesty:
- If the client specifies a standard reasoning effort string, i.e. “low”/“medium”/“high”, Requesty forwards the same value to OpenAI.
- If the client specifies the ‘max’ reasoning effort string, Requesty forwards the value ‘high’ to OpenAI.
- If the client specifies a reasoning budget string (e.g. “10000”), Requesty converts it to an effort, based on the conversion table below.
Conversion table from budget to effort:
- 0-1024 -> “low”
- 1025-8192 -> “medium”
- 8193 or higher -> “high”
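The budget-to-effort mapping above can be sketched as a small helper (the function name is illustrative, not part of Requesty’s API):

```python
def budget_to_effort(budget: int) -> str:
    """Map a reasoning-token budget to an OpenAI effort value,
    per the conversion table above."""
    if budget <= 1024:
        return "low"
    if budget <= 8192:
        return "medium"
    return "high"  # 8193 or higher
```

For example, a client-supplied budget of 10000 would be forwarded to OpenAI as “high”.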
When using Anthropic via Requesty:
- If the client specifies a reasoning effort string (“low”/“medium”/“high”/“max”), Requesty converts it to a budget, based on the conversion table below.
- If the client specifies a reasoning budget string (e.g. “10000”), Requesty passes this value to Anthropic. If the budget is larger than the model’s maximum output tokens, it will automatically be reduced to stay within that token limit.
Conversion table from effort to budget:
- “low” -> 1024
- “medium” -> 8192
- “high” -> 16384
- “max” -> the model’s maximum output tokens minus 1 (e.g. 63999 for Sonnet 3.7 or 4, 31999 for Opus 4)
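A sketch of the Anthropic effort-to-budget mapping (the function name is illustrative, and max_output_tokens stands in for the model’s real output limit):

```python
def effort_to_budget_anthropic(effort: str, max_output_tokens: int) -> int:
    """Map an effort string to an Anthropic thinking-token budget,
    per the conversion table above."""
    if effort == "max":
        # The budget must stay below the max tokens value.
        return max_output_tokens - 1
    return {"low": 1024, "medium": 8192, "high": 16384}[effort]
```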
When using Vertex AI via Requesty:
- If the client specifies a reasoning effort string (“low”/“medium”/“high”/“max”), Requesty converts it to a budget, based on the conversion table below.
- If the client specifies a reasoning budget string (e.g. “10000”), Requesty passes this value to Google. If the budget is larger than the model’s maximum output tokens, it will automatically be reduced to stay within that token limit.
Conversion table from effort to budget:
- “low” -> 1024
- “medium” -> 8192
- “high” -> 24576
- “max” -> the model’s maximum output tokens
This conversion table is consistent with the Google AI Studio documentation.
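The Vertex AI mapping differs from the Anthropic one only in the “high” and “max” rows; as an illustrative sketch:

```python
def effort_to_budget_vertex(effort: str, max_output_tokens: int) -> int:
    """Map an effort string to a Gemini thinking budget on Vertex AI,
    per the conversion table above."""
    if effort == "max":
        return max_output_tokens  # no minus-one adjustment here
    return {"low": 1024, "medium": 8192, "high": 24576}[effort]
```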
When using Google AI Studio via Requesty:
Same as using OpenAI. See above.
Reasoning code example
For these examples, you can use either an OpenAI or an Anthropic reasoning model, such as:
- “openai/o3-mini”
- “anthropic/claude-sonnet-4-0”
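Putting it together, a minimal sketch: build a reasoning request for each model and read the reasoning back out of the response. The response shape shown is an assumption based on the notes above (an OpenAI-compatible ‘choices’ layout with ‘reasoning_content’ for Anthropic/Deepseek); consult Requesty’s API reference for the exact fields.

```python
def build_request(model: str, prompt: str) -> dict:
    # Request body with reasoning enabled (field names from the text above).
    return {
        "model": model,
        "reasoning_effort": "high",
        "messages": [{"role": "user", "content": prompt}],
    }

requests = [
    build_request(model, "Is 97 prime?")
    for model in ("openai/o3-mini", "anthropic/claude-sonnet-4-0")
]

# Assumed response shape for an Anthropic model; OpenAI responses omit
# reasoning_content because OpenAI does not share reasoning tokens.
sample_response = {
    "choices": [
        {
            "message": {
                "content": "Yes, 97 is prime.",
                "reasoning_content": "(model's thought steps)",
            }
        }
    ]
}

message = sample_response["choices"][0]["message"]
reasoning = message.get("reasoning_content")  # None for OpenAI models
```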