
Create Chat Completion Request (OpenAI)

Creates a model response for the given chat conversation.

```http
POST https://api.woagent.net/v1/chat/completions
```

Authorizations


Authorization string header required

Use the following format for authentication: Bearer <your api key>
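For illustration, a minimal request sketch in Python using the requests library. The URL and Bearer header format are as documented above; the WOAGENT_API_KEY environment variable is a placeholder of our own, and the response field layout (choices[0].message.content) is assumed to follow the standard OpenAI schema.

```python
import os
import requests

API_URL = "https://api.woagent.net/v1/chat/completions"

# WOAGENT_API_KEY is a hypothetical environment variable holding your API key.
headers = {
    "Authorization": f"Bearer {os.environ['WOAGENT_API_KEY']}",
    "Content-Type": "application/json",
}

payload = {
    "model": "Qwen/QwQ-32B",
    "messages": [
        {
            "role": "user",
            "content": "What opportunities and challenges will the Chinese large model industry face in 2025?",
        }
    ],
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
resp.raise_for_status()

# Field layout assumed to match the standard OpenAI chat completions schema.
print(resp.json()["choices"][0]["message"]["content"])
```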

Body

model enum<string> default:Qwen/QwQ-32B required

The name of the model to use. To maintain service quality, we make periodic changes to the models provided by this service, including but not limited to bringing models online or taking them offline and adjusting model serving capabilities. Where feasible, we will notify you of such changes through announcements, message pushes, or other appropriate channels.

Available options:

Qwen/Qwen3-30B-A3B-Instruct-2507, Qwen/Qwen3-235B-A22B-Thinking-2507, Qwen/Qwen3-235B-A22B-Instruct-2507, baidu/ERNIE-4.5-300B-A47B, moonshotai/Kimi-K2-Instruct, ascend-tribe/pangu-pro-moe, tencent/Hunyuan-A13B-Instruct, MiniMaxAI/MiniMax-M1-80k, Tongyi-Zhiwen/QwenLong-L1-32B, Qwen/Qwen3-30B-A3B, Qwen/Qwen3-32B, Qwen/Qwen3-14B, Qwen/Qwen3-8B, Qwen/Qwen3-235B-A22B, THUDM/GLM-Z1-32B-0414, THUDM/GLM-4-32B-0414, THUDM/GLM-Z1-Rumination-32B-0414, THUDM/GLM-4-9B-0414, Qwen/QwQ-32B, Pro/deepseek-ai/DeepSeek-R1, Pro/deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-R1, deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-R1-0528-Qwen3-8B, deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, Pro/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, deepseek-ai/DeepSeek-V2.5, Qwen/Qwen2.5-72B-Instruct-128K, Qwen/Qwen2.5-72B-Instruct, Qwen/Qwen2.5-32B-Instruct, Qwen/Qwen2.5-14B-Instruct, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-Coder-32B-Instruct, Qwen/Qwen2.5-Coder-7B-Instruct, Qwen/Qwen2-7B-Instruct, TeleAI/TeleChat2, THUDM/glm-4-9b-chat, Vendor-A/Qwen/Qwen2.5-72B-Instruct, internlm/internlm2_5-7b-chat, Pro/Qwen/Qwen2.5-7B-Instruct, Pro/Qwen/Qwen2-7B-Instruct, Pro/THUDM/glm-4-9b-chat

Example: "Qwen/QwQ-32B"


messages object[] required

A list of messages comprising the conversation so far.

Required array length: 1 - 10 elements

child attributes

role enum<string> default:user required

The role of the message's author. One of: system, user, or assistant.

Available options: user, assistant, system

Example:"user"


content string required

The contents of the message.

Default: "What opportunities and challenges will the Chinese large model industry face in 2025?"

Example: "What opportunities and challenges will the Chinese large model industry face in 2025?"


stream boolean default:false

If set to true, tokens are returned as Server-Sent Events as they become available. The stream terminates with a data: [DONE] message.

Example: false
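A sketch of consuming the streamed response with Python's requests library; it assumes each event arrives as a data: line and that the chunk layout (choices[0].delta.content) follows the standard OpenAI streaming schema.

```python
import json
import os
import requests

API_URL = "https://api.woagent.net/v1/chat/completions"
headers = {"Authorization": f"Bearer {os.environ['WOAGENT_API_KEY']}"}

payload = {
    "model": "Qwen/QwQ-32B",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}

with requests.post(API_URL, headers=headers, json=payload, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        # Server-Sent Events: only lines starting with "data:" carry payloads.
        if not raw or not raw.startswith("data:"):
            continue
        data = raw[len("data:"):].strip()
        if data == "[DONE]":  # the stream terminates with data: [DONE]
            break
        chunk = json.loads(data)
        # delta.content layout assumed to match the OpenAI streaming schema.
        piece = chunk["choices"][0].get("delta", {}).get("content") or ""
        print(piece, end="", flush=True)
```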


max_tokens integer default:512

The maximum number of tokens to generate.

Required range: 1 <= x <= 4096

Example: 512


stop string[] | null default:<|endoftext|>

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

Example: ""


temperature number default:0.7

Controls the degree of randomness in the response; higher values produce more varied output, lower values more deterministic output.

Example: 0.7


top_p number default:0.7

The top_p (nucleus sampling) parameter dynamically adjusts the number of candidate tokens considered for each prediction based on their cumulative probability.

Example: 0.7


top_k number default:50

Restricts sampling to the k most likely tokens at each prediction step.

Example: 50


frequency_penalty number default:0.5

Penalizes tokens according to how often they have already appeared in the generated text, reducing repetition.

Example: 0.5


n integer default:1

Number of generations to return

Example: 1
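A sketch of a request body that sets the sampling controls described above; the values shown are simply the documented defaults, not tuned recommendations.

```python
payload = {
    "model": "Qwen/QwQ-32B",
    "messages": [{"role": "user", "content": "Summarize nucleus sampling in one sentence."}],
    "max_tokens": 512,           # allowed range: 1 <= x <= 4096
    "stop": ["<|endoftext|>"],   # up to 4 stop sequences; not included in the returned text
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "frequency_penalty": 0.5,
    "n": 1,                      # number of generations to return
}
```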


response_format object

An object specifying the format that the model must output.

child attributes

response_format.type string

The type of the response format.

Example: "text"