OpenAI Responses

OpenAI's next-generation unified endpoint for multimodal input, structured output, and tool calling. Compatible with the latest OpenAI SDKs.

Endpoint

  • Method: POST
  • URL: https://1688token.ai/v1/responses
  • Auth: Bearer Token

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | Model name |
| input | string \| array | Yes | Input content: plain text or a multimodal content array |
| instructions | string | No | System instructions (equivalent to a system prompt) |
| max_output_tokens | integer | No | Maximum number of output tokens |
| stream | boolean | No | Enable streaming. Default: false |
| tools | array | No | Tool/function definitions |
| temperature | number | No | Sampling randomness, range 0–2 |
| previous_response_id | string | No | ID of a previous response, used to carry multi-turn context |
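
The `tools` parameter takes an array of function definitions. Below is a minimal sketch of one entry in the flat Responses-API tool schema; the `get_weather` function and its fields are hypothetical, chosen only to illustrate the shape:

```python
# A hypothetical function tool for the `tools` parameter. The Responses API
# uses a flat schema: "name" and "parameters" sit at the top level of the
# tool object (unlike the nested Chat Completions format).
tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }
]

# Passed alongside the other request parameters:
# client.responses.create(model="gpt-4.1-mini", input="...", tools=tools)
```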

Request Examples

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://1688token.ai/v1"
)

response = client.responses.create(
    model="gpt-4.1-mini",
    input="Give a brief overview of 1688token.ai's main features."
)

print(response.output_text)
```
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://1688token.ai/v1"
});

const response = await client.responses.create({
  model: "gpt-4.1-mini",
  input: "Give a brief overview of 1688token.ai's main features."
});

console.log(response.output_text);
```
```bash
curl https://1688token.ai/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "gpt-4.1-mini",
    "input": "Give a brief overview of 1688token.ai'\''s main features."
  }'
```
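
With `stream=True` the SDK returns an event stream instead of a single response object. The helper below is a sketch that prints and collects text as it arrives, assuming the stream yields `response.output_text.delta` events as in the OpenAI SDK:

```python
def stream_answer(client, prompt: str) -> str:
    """Stream a Responses call and return the accumulated text."""
    stream = client.responses.create(
        model="gpt-4.1-mini",
        input=prompt,
        stream=True,
    )
    parts = []
    for event in stream:
        # Text fragments arrive as response.output_text.delta events; other
        # event types (response.created, response.completed, ...) are ignored.
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
            parts.append(event.delta)
    print()
    return "".join(parts)

# Usage (requires a valid key):
# client = OpenAI(api_key="YOUR_API_KEY", base_url="https://1688token.ai/v1")
# stream_answer(client, "Give a brief overview of 1688token.ai's main features.")
```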

Multi-Turn Conversation

Use previous_response_id to link turns, so there is no need to resend the full message history manually:

```python
# Turn 1
first = client.responses.create(
    model="gpt-4.1-mini",
    input="What is machine learning?"
)
print(first.output_text)

# Turn 2 — context carried over automatically
second = client.responses.create(
    model="gpt-4.1-mini",
    input="How does it differ from deep learning?",
    previous_response_id=first.id
)
print(second.output_text)
```

Response Example

```json
{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1715000000,
  "model": "gpt-4.1-mini",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "1688token.ai is an AI model aggregation platform..."
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 20,
    "output_tokens": 45,
    "total_tokens": 65
  }
}
```
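
When calling the endpoint over raw HTTP (as in the curl example), the text lives inside the nested `output` array rather than in a convenience field like the SDK's `output_text`. A sketch of walking that structure, using the example payload above:

```python
import json

# The example response body above, as returned over raw HTTP.
raw = json.loads("""
{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1715000000,
  "model": "gpt-4.1-mini",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {"type": "output_text",
         "text": "1688token.ai is an AI model aggregation platform..."}
      ]
    }
  ],
  "usage": {"input_tokens": 20, "output_tokens": 45, "total_tokens": 65}
}
""")

def extract_text(resp: dict) -> str:
    """Join every output_text part across all assistant messages."""
    parts = []
    for item in resp.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
    return "".join(parts)

print(extract_text(raw))
```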

Supported Models

| Model | Description |
| --- | --- |
| gpt-4.1 | Ultra-long context for large documents |
| gpt-4.1-mini | Lightweight long-context version |
| gpt-4o | Multimodal flagship |
| gpt-4o-mini | Cost-effective everyday model |
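
For the multimodal models above (e.g. gpt-4o), `input` can be an array of messages whose content mixes text and images. A sketch of such an array; the image URL is a placeholder, and the part types follow the Responses `input_text` / `input_image` schema:

```python
# A multimodal `input` array: one user message carrying a text part and an
# image part. The URL below is a placeholder, not a real asset.
multimodal_input = [
    {
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is shown in this image?"},
            {"type": "input_image", "image_url": "https://example.com/photo.jpg"},
        ],
    }
]

# client.responses.create(model="gpt-4o", input=multimodal_input)
```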