Overview
An overview of Requesty's API
Requesty normalizes the schema across models and providers, so you don’t waste time with custom integrations.
Request Structure
Your request body to `/v1/chat/completions` closely follows the OpenAI Chat Completion schema:
- Required Fields:
  - `messages`: An array of message objects with `role` and `content`. Roles can be `user`, `assistant`, `system`, or `tool`.
  - `model`: The model name. If omitted, defaults to the user's or payer's default model. Here is a full list of the supported models.
- Optional Fields:
  - `prompt`: Alternative to `messages` for some providers.
  - `stream`: A boolean to enable Server-Sent Events (SSE) streaming responses.
  - `max_tokens`, `temperature`, `top_p`, etc.: Standard language model parameters.
  - `tools` / `functions`: Allows function calling with a defined schema. See OpenAI's function calling documentation for the structure of these requests.
  - `tool_choice`: Specifies how tool calling should be handled.
  - `response_format`: For structured responses (some models only).
Example Request Body
Here, we also provide a tool (`get_current_weather`) that the model can call if it decides the user's request involves weather data.
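Since the example body itself is not reproduced above, here is a representative sketch as a Python dict, assuming an OpenAI-style schema; the model name and the weather tool's parameters are illustrative assumptions:

```python
# Illustrative request body for POST /v1/chat/completions, including a
# get_current_weather tool the model may choose to call.
request_body = {
    "model": "openai/gpt-4o",  # assumed model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Boston?"},
    ],
    "temperature": 0.7,
    "max_tokens": 512,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "City name, e.g. Boston",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ],
}
```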
Some request fields require a different client method: for example, if you use `response_format`, you'll need to call `client.beta.chat.completions.parse` instead, and you may want to define your structure with Pydantic or Zod.
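A minimal sketch of that structured-output variant, assuming the OpenAI Python SDK and Pydantic; the `WeatherReport` schema, the helper name, and the model identifier are hypothetical:

```python
from pydantic import BaseModel

# Hypothetical schema for the structured response; field names are illustrative.
class WeatherReport(BaseModel):
    location: str
    temperature_c: float
    conditions: str

def get_weather_report(client, question: str) -> WeatherReport:
    """`client` is an OpenAI-compatible SDK client configured for Requesty."""
    # parse() validates the model's JSON output against the WeatherReport schema.
    completion = client.beta.chat.completions.parse(
        model="openai/gpt-4o",  # assumed model identifier
        messages=[{"role": "user", "content": question}],
        response_format=WeatherReport,
    )
    return completion.choices[0].message.parsed
```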
Response Structure
The response is normalized to an OpenAI-style `ChatCompletion` object:
- Streaming: If `stream: true`, responses arrive incrementally as SSE events with `data:` lines. See Streaming for documentation on streaming.
- Function Calls (Tool Calls): If the model decides to call a tool, it will return a `function_call` in the assistant message. You then execute the tool, append the tool's result as a `role: "tool"` message, and send a follow-up request. The LLM will then integrate the tool output into its final answer.
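As a sketch of what the streaming wire format looks like, the helper below reassembles the final text from raw SSE `data:` lines, assuming OpenAI-style chunk objects (the exact chunk shape and the `[DONE]` sentinel are assumptions based on the OpenAI streaming format):

```python
import json

def parse_sse_chunks(raw_lines):
    """Extract content deltas from SSE `data:` lines of a streaming response."""
    text = []
    for line in raw_lines:
        if not line.startswith("data:"):
            continue  # ignore comments, event names, and blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            text.append(delta)
    return "".join(text)
```

In practice an SDK handles this for you; the helper only shows what arrives on the wire when `stream: true` is set.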
Non-Streaming Response Example
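The original example is not reproduced above; below is a representative non-streaming response, written as a Python dict and assuming OpenAI-style fields (the `id`, timestamp, and token counts are illustrative):

```python
# Illustrative non-streaming ChatCompletion response body (values are made up).
response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1720000000,
    "model": "openai/gpt-4o",  # assumed model identifier
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
}
```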
Function Call Example: If the model decides it needs the weather tool:
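A sketch of the assistant message you might get back, shown in the `tool_calls` form used by current OpenAI-style APIs (the call id and arguments are illustrative):

```python
# Illustrative assistant message requesting a tool call.
assistant_message = {
    "role": "assistant",
    "content": None,  # no text yet; the model wants the tool result first
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                # arguments arrive as a JSON-encoded string
                "arguments": '{"location": "Boston", "unit": "celsius"}',
            },
        }
    ],
}
```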
You would then call the `get_current_weather` function externally, get the result, and send it back as:
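A sketch of that follow-up request, assuming OpenAI-style `tool` messages; the `tool_call_id` must match the id from the assistant's tool call (all values here are illustrative):

```python
# Illustrative follow-up request carrying the tool's result back to the model.
followup_request = {
    "model": "openai/gpt-4o",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Boston?"},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": "call_abc123",
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",
                        "arguments": '{"location": "Boston"}',
                    },
                }
            ],
        },
        {
            "role": "tool",
            "tool_call_id": "call_abc123",  # ties the result to the tool call
            "content": '{"temperature": 21, "unit": "celsius", "conditions": "sunny"}',
        },
    ],
}
```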
The next completion will return a final answer integrating the tool’s response.