SolidRusT.ai

Chat Completions

Create a chat completion with the specified model.

POST /v1/chat/completions
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID to use (e.g., `vllm-primary`) |
| `messages` | array | Yes | Array of message objects |
| `temperature` | number | No | Sampling temperature (0-2). Default: `1` |
| `max_tokens` | integer | No | Maximum number of tokens to generate |
| `stream` | boolean | No | Enable streaming responses. Default: `false` |
| `top_p` | number | No | Nucleus sampling parameter (0-1) |
| `stop` | string / array | No | Stop sequences |
Each element of `messages` is an object with the following fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `role` | string | Yes | One of `system`, `user`, or `assistant` |
| `content` | string | Yes | The message content |
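A multi-turn conversation is just a `messages` array that grows over time. A minimal sketch (the conversation content here is illustrative):

```python
# Sketch: representing a multi-turn conversation as a messages array.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is machine learning?"},
]

# After each response, append the assistant reply and the next user turn,
# so the model sees the full history on the following request.
messages.append({"role": "assistant", "content": "Machine learning is..."})
messages.append({"role": "user", "content": "Give me an example."})

roles = [m["role"] for m in messages]
```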
```bash
curl https://api.solidrust.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "vllm-primary",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is machine learning?"}
    ],
    "temperature": 0.7,
    "max_tokens": 500
  }'
```
A successful request returns a response like:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1704067200,
  "model": "vllm-primary",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Machine learning is a subset of artificial intelligence..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 150,
    "total_tokens": 175
  }
}
```
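As a rough sketch, the same request can be made from Python's standard library and the reply pulled out of `choices[0].message.content`. The helper names (`build_payload`, `extract_reply`, `chat`) and `API_URL` constant are illustrative, not part of any official SDK:

```python
import json
import urllib.request

API_URL = "https://api.solidrust.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # replace with your real key

def build_payload(messages, model="vllm-primary", **params):
    """Assemble the JSON request body from messages plus optional parameters."""
    return {"model": model, "messages": messages, **params}

def extract_reply(body):
    """Pull the assistant's text out of a (non-streaming) response body."""
    return body["choices"][0]["message"]["content"]

def chat(messages, **params):
    """POST a chat completion request and return the assistant reply text."""
    payload = build_payload(messages, **params)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))
```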

Set `stream: true` to receive responses as server-sent events (SSE).

```bash
curl https://api.solidrust.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "vllm-primary",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```

See Streaming Guide for more details.
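A streamed response arrives as a series of `data: {json}` events. The sketch below assumes the common OpenAI-compatible chunk format (content fragments under `choices[0].delta.content`, stream terminated by `data: [DONE]`), which this page does not spell out; check the Streaming Guide for the exact format:

```python
import json

def stream_text(sse_lines):
    """Yield content fragments from an SSE chat-completion stream.

    Assumes OpenAI-compatible chunks: each event line is 'data: {json}'
    with text fragments under choices[0].delta.content, and the stream
    ends with a literal 'data: [DONE]' event.
    """
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```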

| Code | Description |
|---|---|
| 400 | Invalid request body |
| 401 | Invalid or missing API key |
| 429 | Rate limit exceeded |
| 500 | Server error |

See Error Handling for complete error documentation.
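Of the codes above, 429 and 500 are typically transient, while 400 and 401 indicate a request that will never succeed as-is. One common pattern is exponential backoff on the transient codes only. A sketch, where `APIError` is a hypothetical wrapper (not a type this API defines) carrying the HTTP status:

```python
import time

class APIError(Exception):
    """Hypothetical error type carrying the HTTP status code of a failed call."""
    def __init__(self, status, message=""):
        super().__init__(message or f"HTTP {status}")
        self.status = status

def with_retries(call, max_attempts=4, base_delay=1.0):
    """Retry `call` on 429/500 with exponential backoff; fail fast on 400/401."""
    for attempt in range(max_attempts):
        try:
            return call()
        except APIError as err:
            # Non-retryable status, or retries exhausted: re-raise.
            if err.status not in (429, 500) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```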