# Claude Messages

Call Anthropic's Claude models using the OpenAI-compatible format.

## Endpoint

- Method: `POST`
- URL: `https://1688token.ai/v1/chat/completions`
- Auth: Bearer Token
## Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | ✅ | Claude model name — see supported models |
| `messages` | array | ✅ | Conversation message list |
| `max_tokens` | integer | ✅ | Max output tokens — required for Claude |
| `stream` | boolean | — | Enable streaming. Default: `false` |
| `temperature` | number | — | Randomness, range `0~1`. Default: `1` |
| `top_p` | number | — | Nucleus sampling probability |
| `top_k` | integer | — | Top-K sampling (Claude-specific parameter) |
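Taken together, a request body combining these parameters might look like the following sketch (the prompt text and parameter values are illustrative, not defaults):

```python
import json

# Illustrative request body combining the parameters above.
# max_tokens is mandatory for Claude; temperature stays in Claude's 0~1 range;
# top_k is the Claude-specific extra on top of the OpenAI-compatible fields.
payload = {
    "model": "claude-sonnet-4-6",
    "messages": [{"role": "user", "content": "Name three prime numbers."}],
    "max_tokens": 256,
    "temperature": 0.5,
    "top_p": 0.9,
    "top_k": 40,
}
body = json.dumps(payload)  # what gets POSTed to /v1/chat/completions
print(body)
```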
## Request Examples

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://1688token.ai/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "system", "content": "You are a professional code reviewer."},
        {"role": "user", "content": "Review this Python code and identify issues:\n\ndef add(a, b):\n return a+b"}
    ],
    max_tokens=1024,
    temperature=0.7
)

print(response.choices[0].message.content)
```

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://1688token.ai/v1"
});

const response = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  messages: [
    { role: "system", content: "You are a professional code reviewer." },
    { role: "user", content: "Review this Python code and identify issues:\n\ndef add(a, b):\n return a+b" }
  ],
  max_tokens: 1024,
  temperature: 0.7
});

console.log(response.choices[0].message.content);
```

```bash
curl https://1688token.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "system", "content": "You are a professional code reviewer."},
      {"role": "user", "content": "Review this Python code and identify issues."}
    ],
    "max_tokens": 1024
  }'
```

## Streaming
```python
stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Write a short essay about AI."}],
    max_tokens=1024,
    stream=True
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

## Response Example
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1715000000,
  "model": "claude-sonnet-4-6",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The code is clean and concise, but here are a few suggestions..."
      },
      "finish_reason": "end_turn"
    }
  ],
  "usage": {
    "prompt_tokens": 52,
    "completion_tokens": 128,
    "total_tokens": 180
  }
}
```

## Supported Models
| Model | Context | Description |
|---|---|---|
| `claude-opus-4-7` | 200K | Strongest reasoning, best for complex analysis |
| `claude-sonnet-4-6` | 200K | Balanced performance & speed — recommended |
| `claude-haiku-4-5-20251001` | 200K | Fast & lightweight for high-concurrency |
## Notes

- `max_tokens` is required for Claude models; set it according to your needs
- Claude's `temperature` range is `0~1` (different from GPT's `0~2`)
- `finish_reason` is `end_turn` for Claude (GPT uses `stop`)
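Because `finish_reason` differs between Claude and GPT, code shared across providers may want to normalize it. A minimal sketch (the helper name `is_complete` is hypothetical, not part of any SDK):

```python
def is_complete(finish_reason: str) -> bool:
    # Claude reports "end_turn" where GPT reports "stop"; treat both as a
    # normal completion. Truncation reasons (e.g. "length") return False.
    return finish_reason in ("end_turn", "stop")

print(is_complete("end_turn"))  # True
print(is_complete("stop"))      # True
```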
