Claude Messages

Call Anthropic's Claude models using the OpenAI-compatible format.

Endpoint

  • Method: POST
  • URL: https://1688token.ai/v1/chat/completions
  • Auth: Bearer Token
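Because auth is a standard Bearer token, the endpoint can also be called without an SDK. A minimal sketch using Python's standard library (the `chat` helper and its defaults are illustrative, not part of this service):

```python
import json
import urllib.request

BASE_URL = "https://1688token.ai/v1"

def build_headers(api_key: str) -> dict:
    # Bearer token auth plus a JSON body, as the endpoint spec above requires
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def chat(payload: dict, api_key: str) -> dict:
    # POST the payload to /v1/chat/completions and decode the JSON reply
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers=build_headers(api_key),
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))
```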

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Claude model name — see supported models |
| messages | array | Yes | Conversation message list |
| max_tokens | integer | Yes | Maximum output tokens — required for Claude |
| stream | boolean | No | Enable streaming. Default: false |
| temperature | number | No | Randomness, range 0~1. Default: 1 |
| top_p | number | No | Nucleus sampling probability |
| top_k | integer | No | Top-K sampling (Claude-specific parameter) |
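Since max_tokens is mandatory and Claude's temperature range is narrower than GPT's, it can be worth checking a payload client-side before sending it. A small sketch; the helper name is hypothetical, not part of the API:

```python
def validate_claude_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks valid."""
    problems = []
    # model and messages are required by the chat completions format
    for field in ("model", "messages"):
        if field not in payload:
            problems.append(f"missing required field: {field}")
    # max_tokens is mandatory for Claude models specifically
    if "max_tokens" not in payload:
        problems.append("max_tokens is required for Claude models")
    # Claude accepts temperature in 0~1, not GPT's 0~2
    temp = payload.get("temperature")
    if temp is not None and not (0 <= temp <= 1):
        problems.append("temperature must be in 0~1 for Claude")
    return problems
```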

Request Examples

python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://1688token.ai/v1"
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "system", "content": "You are a professional code reviewer."},
        {"role": "user", "content": "Review this Python code and identify issues:\n\ndef add(a, b):\n    return a+b"}
    ],
    max_tokens=1024,
    temperature=0.7
)

print(response.choices[0].message.content)
javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://1688token.ai/v1"
});

const response = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  messages: [
    { role: "system", content: "You are a professional code reviewer." },
    { role: "user", content: "Review this Python code and identify issues:\n\ndef add(a, b):\n    return a+b" }
  ],
  max_tokens: 1024,
  temperature: 0.7
});

console.log(response.choices[0].message.content);
bash
curl https://1688token.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "system", "content": "You are a professional code reviewer."},
      {"role": "user", "content": "Review this Python code and identify issues."}
    ],
    "max_tokens": 1024
  }'

Streaming

python
stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Write a short essay about AI."}],
    max_tokens=1024,
    stream=True
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
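The same delta-handling pattern can accumulate the chunks into one string instead of printing them. A sketch assuming the SDK chunk shape used above (`choices[0].delta.content`, which is `None` for chunks that carry no text):

```python
def collect_stream(stream) -> str:
    # Concatenate the text deltas of a streaming response into one string,
    # skipping chunks whose delta carries no content.
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)
```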

Response Example

json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1715000000,
  "model": "claude-sonnet-4-6",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The code is clean and concise, but here are a few suggestions..."
      },
      "finish_reason": "end_turn"
    }
  ],
  "usage": {
    "prompt_tokens": 52,
    "completion_tokens": 128,
    "total_tokens": 180
  }
}
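Working from the raw JSON (for example, from the curl call above), the interesting fields can be pulled into one flat summary. A sketch over a plain dict; the field names follow the response example:

```python
def summarize_response(resp: dict) -> dict:
    # Extract the reply text, stop condition, and token usage
    # from a decoded chat completion response.
    choice = resp["choices"][0]
    return {
        "content": choice["message"]["content"],
        "finish_reason": choice["finish_reason"],
        "total_tokens": resp["usage"]["total_tokens"],
    }
```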

Supported Models

| Model | Context | Description |
| --- | --- | --- |
| claude-opus-4-7 | 200K | Strongest reasoning, best for complex analysis |
| claude-sonnet-4-6 | 200K | Balanced performance & speed — recommended |
| claude-haiku-4-5-20251001 | 200K | Fast & lightweight for high-concurrency |

Notes

  • max_tokens is required for Claude models — set it according to your needs
  • Claude's temperature range is 0~1 (different from GPT's 0~2)
  • finish_reason is end_turn for Claude (GPT uses stop)
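Porting code written for GPT therefore needs two small adaptations: rescale temperature and normalize finish_reason. The linear rescale below is an assumed convention based on the ranges in the notes, not an official mapping:

```python
def gpt_to_claude_temperature(t: float) -> float:
    # Linearly map GPT's 0~2 temperature range onto Claude's 0~1
    # (assumed convention), clamping out-of-range inputs.
    return max(0.0, min(1.0, t / 2))

def normalize_finish_reason(reason: str) -> str:
    # Claude reports "end_turn" where GPT reports "stop";
    # pass every other value through unchanged.
    return "stop" if reason == "end_turn" else reason
```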