Chat Completions
Use Chat Completions for text chat, vision input, streaming, JSON output, and tool calling.
```text
POST https://api.rout.my/v1/chat/completions
```

Request

```bash
curl https://api.rout.my/v1/chat/completions \
-H "Authorization: Bearer $ROUTMY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "provider/model-id",
"messages": [
{ "role": "system", "content": "Answer briefly." },
{ "role": "user", "content": "What does this endpoint do?" }
],
"temperature": 0.7,
"max_tokens": 256
}'
```

Request fields
| Field | Type | Required | Notes |
|---|---|---|---|
| model | string | Yes | Exact model ID from /v1/models. |
| messages | array | Yes | Conversation messages with role and content. |
| stream | boolean | No | Set true for Server-Sent Events. |
| temperature | number | No | Sampling temperature. |
| top_p | number | No | Nucleus sampling. |
| max_tokens | integer | No | Maximum generated tokens. |
| max_output_tokens | integer | No | Accepted for Gemini-style clients. |
| stop | string or array | No | Stop sequence or sequences. |
| response_format | object | No | OpenAI-style response format, for example JSON object mode. |
| tools | array | No | OpenAI-style function tools. |
| tool_choice | string or object | No | Controls tool selection. |
| reasoning | object | No | Provider-compatible reasoning controls when supported. |
| metadata | object | No | Client metadata passed through when supported. |
Unknown fields are preserved and forwarded when the upstream provider accepts them.
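The same request can be sent from Python using only the standard library. This is a minimal sketch mirroring the curl example above; the `build_payload` and `chat` helpers are illustrative names, and `ROUTMY_API_KEY` must be set in the environment before calling `chat`:

```python
import json
import os
import urllib.request

API_URL = "https://api.rout.my/v1/chat/completions"

def build_payload(model, user_text, system_text=None):
    """Assemble a minimal Chat Completions request body."""
    messages = []
    if system_text is not None:
        messages.append({"role": "system", "content": system_text})
    messages.append({"role": "user", "content": user_text})
    return {
        "model": model,
        "messages": messages,
        "temperature": 0.7,
        "max_tokens": 256,
    }

def chat(payload):
    """POST the payload; expects ROUTMY_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + os.environ["ROUTMY_API_KEY"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)

# Build (but do not send) the same request shown in the curl example.
payload = build_payload("provider/model-id", "What does this endpoint do?", "Answer briefly.")
```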
Message content
Text-only messages can use a string:
```json
{
"role": "user",
"content": "Summarize this in one paragraph."
}
```

Vision-capable models can receive an array of content parts:

```json
{
"role": "user",
"content": [
{ "type": "text", "text": "Describe this image." },
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.png"
}
}
]
}
```

Data URLs are accepted for image inputs:

```text
data:image/png;base64,...
```
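A data URL for a local image can be produced with a few lines of Python; this sketch wraps the result in an `image_url` content part like the one shown above:

```python
import base64

def to_data_url(image_bytes, mime="image/png"):
    """Base64-encode raw image bytes into a data URL usable as image_url.url."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Example: an image_url content part carrying an inline image
# (the byte string here is a stand-in, not a real PNG).
part = {
    "type": "image_url",
    "image_url": {"url": to_data_url(b"\x89PNG...")},
}
```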
Streaming

Set `stream` to `true` to receive Server-Sent Events:

```json
{
"model": "provider/model-id",
"messages": [{ "role": "user", "content": "Write three short lines." }],
"stream": true
}
```

The stream ends with:

```text
data: [DONE]
```
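On the client side, each `data:` line before the terminator carries a JSON chunk; in OpenAI-style streams the next text fragment sits at `choices[0].delta.content`. A minimal accumulator, written as a sketch under that assumption:

```python
import json

def collect_stream(lines):
    """Accumulate delta text from 'data: ...' SSE lines until [DONE]."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            text.append(delta["content"])
    return "".join(text)

# Example with two content chunks followed by the terminator.
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
```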
Tool calling

Declare function tools in `tools` and control selection with `tool_choice`:

```json
{
"model": "provider/model-id",
"messages": [
{ "role": "user", "content": "What is the status of order 123?" }
],
"tools": [
{
"type": "function",
"function": {
"name": "get_order",
"description": "Look up an order by ID.",
"parameters": {
"type": "object",
"properties": {
"order_id": { "type": "string" }
},
"required": ["order_id"]
}
}
}
],
"tool_choice": "auto"
}
```
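When the model decides to call a tool, the assistant message carries a `tool_calls` array instead of text; the caller runs the function and sends the result back as a `tool` message. A sketch of that loop, assuming OpenAI-style `tool_calls` and a hypothetical local `get_order` implementation:

```python
import json

def get_order(order_id):
    """Hypothetical local implementation of the get_order tool."""
    return {"order_id": order_id, "status": "shipped"}

TOOLS = {"get_order": get_order}

def run_tool_calls(assistant_message):
    """Execute each tool call and build the follow-up tool messages."""
    results = []
    for call in assistant_message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(fn(**args)),
        })
    return results

# Example assistant message, shaped as it might appear in a response.
assistant = {
    "role": "assistant",
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_order", "arguments": "{\"order_id\": \"123\"}"},
    }],
}
```

The `tool` messages returned here are appended to `messages`, after the assistant message, for the follow-up request.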
Image-capable chat models

Some models generate images through the chat endpoint. For those models, include `"image"` in `modalities`:
```json
{
"model": "provider/image-chat-model-id",
"messages": [
{ "role": "user", "content": "Create a square icon of a blue glass cube." }
],
"modalities": ["image", "text"],
"image_config": {
"aspect_ratio": "1:1",
"image_size": "1K"
}
}
```

Generated images can appear on the assistant message in an `images` array with image URLs or data URLs.
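A receiving client can normalize those entries to raw bytes or plain URLs. This sketch assumes each `images` entry follows the same `image_url` part shape used for image input, which is an assumption, not something the spec above guarantees:

```python
import base64

def extract_images(message):
    """Decode data-URL images on an assistant message; pass plain URLs through."""
    out = []
    for img in message.get("images", []):
        url = img["image_url"]["url"]  # assumed entry shape: {"image_url": {"url": ...}}
        if url.startswith("data:"):
            _header, _, payload = url.partition(",")
            out.append(base64.b64decode(payload))
        else:
            out.append(url)  # remote URL left for the caller to fetch
    return out

# Example assistant message with one inline image (payload is a stand-in).
message = {
    "role": "assistant",
    "images": [{"type": "image_url", "image_url": {"url": "data:image/png;base64,YWJj"}}],
}
```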
Response
```json
{
"id": "chatcmpl_abc123",
"object": "chat.completion",
"created": 1744000000,
"model": "provider/model-id",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "This endpoint creates model responses from chat messages."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 28,
"completion_tokens": 12,
"total_tokens": 40
}
}
```
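Reading the result usually means pulling the first choice's text and checking why generation stopped; a minimal helper for the response shape above:

```python
def first_choice_text(response):
    """Return the assistant text and finish reason from the first choice."""
    choice = response["choices"][0]
    return choice["message"]["content"], choice["finish_reason"]

# The example response from above, reduced to the fields the helper reads.
response = {
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "This endpoint creates model responses from chat messages.",
        },
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 28, "completion_tokens": 12, "total_tokens": 40},
}
```

A `finish_reason` of `"stop"` indicates natural completion; other values (for example a length cutoff) signal that the caller may want to retry with a higher `max_tokens`.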