
Create Message Request (Anthropic)

Creates a model response for the given chat conversation.

POST https://api.woagent.net/messages

Authorizations


Authorization string header required

Use the following format for authentication: Bearer <your api key>

Body


model enum<string> required

The name of the model to use. To maintain service quality, the models offered by this service change periodically, including but not limited to models being brought online or taken offline and adjustments to model capabilities. Where feasible, we will notify you of such changes through announcements, message pushes, or similar channels.

Available options:

  • deepseek-ai/DeepSeek-V3.1
  • Pro/moonshotai/Kimi-K2-Instruct
  • moonshotai/Kimi-K2-Instruct
  • Pro/deepseek-ai/DeepSeek-V3
  • deepseek-ai/DeepSeek-V3
  • moonshotai/Kimi-Dev-72B
  • baidu/ERNIE-4.5-300B-A47B

Example: "Pro/moonshotai/Kimi-K2-Instruct"


messages object[] required

A list of messages comprising the conversation so far.

Required array length: 1 - 10 elements


  • role enum<string> required

    The role of the message's author.

    Available options: user, system, assistant

    Example: "user"

  • content string required

    The contents of the message.

    Example: "What opportunities and challenges will the Chinese large model industry face in 2025?"


max_tokens integer required

The maximum number of tokens to generate before stopping.

Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

Different models have different maximum values for this parameter. See models for details.

Example: 8192
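The required fields above (model, messages, max_tokens) are enough for a minimal request. A sketch of the request body and headers, using the documented example values ("YOUR_API_KEY" is a placeholder):

```python
import json

# Minimal request body for POST https://api.woagent.net/messages,
# combining the required fields documented above.
payload = {
    "model": "Pro/moonshotai/Kimi-K2-Instruct",
    "messages": [
        {
            "role": "user",
            "content": "What opportunities and challenges will the Chinese "
                       "large model industry face in 2025?",
        }
    ],
    "max_tokens": 8192,
}

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

# To actually send the request (requires network access and a valid key):
# import requests
# resp = requests.post("https://api.woagent.net/messages",
#                      headers=headers, data=json.dumps(payload))

print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Remember that the messages array must contain between 1 and 10 elements, per the constraint above.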


system string

System prompt.

A system prompt is a way of providing context and instructions to the model, such as specifying a particular goal or role.


stop_sequences string[]

Custom text sequences that will cause the model to stop generating.

Our models will normally stop when they have naturally completed their turn, which will result in a response stop_reason of "end_turn".

If you want the model to stop generating when it encounters custom strings of text, you can use the stop_sequences parameter. If the model encounters one of the custom sequences, the response stop_reason value will be "stop_sequence" and the response stop_sequence value will contain the matched stop sequence.
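A sketch of supplying stop_sequences and checking which sequence fired. The stop_sequences value is written as a list here, as the plural description suggests; the sample response below is illustrative, not real API output:

```python
# Request body using stop_sequences (assumed to accept a list of strings).
payload = {
    "model": "Pro/moonshotai/Kimi-K2-Instruct",
    "messages": [{"role": "user", "content": "List three colors, then say END."}],
    "max_tokens": 256,
    "stop_sequences": ["END"],
}

def describe_stop(response: dict) -> str:
    """Summarize why generation stopped, per the stop_reason field above."""
    if response.get("stop_reason") == "stop_sequence":
        return f"stopped on custom sequence: {response['stop_sequence']}"
    return f"stopped naturally: {response.get('stop_reason')}"

# Illustrative response fragment, matching the documented behavior.
sample_response = {"stop_reason": "stop_sequence", "stop_sequence": "END"}
print(describe_stop(sample_response))  # stopped on custom sequence: END
```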


stream boolean

If set, tokens are returned as Server-Sent Events as they become available. The stream terminates with data: [DONE].

Example: true
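A sketch of consuming the stream: each event arrives as a "data: ..." SSE line, and the stream ends with data: [DONE]. The event payload shape beyond that terminator is an assumption for illustration:

```python
import json

def iter_stream_events(lines):
    """Yield parsed JSON payloads from SSE lines until data: [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive / comment lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # documented stream terminator
        yield json.loads(data)

# Simulated stream (illustrative payloads, not real API output):
raw = [
    'data: {"delta": {"text": "Hello"}}',
    'data: {"delta": {"text": " world"}}',
    "data: [DONE]",
]
chunks = [e["delta"]["text"] for e in iter_stream_events(raw)]
print("".join(chunks))  # Hello world
```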


temperature number

Determines the degree of randomness in the response.

Required range: 0 <= x <= 2

Example: 0.7


top_p number

The top_p (nucleus) parameter is used to dynamically adjust the number of choices for each predicted token based on the cumulative probabilities.

Required range: 0.1 <= x <= 1

Example: 0.7


top_k number

Sample only from the k most likely next tokens at each step.

Required range: 0 <= x <= 50

Example: 50
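The three sampling parameters above each carry a documented range. A small client-side validator mirroring those constraints (this check is an illustration, not part of the service itself):

```python
# Validate sampling parameters against the documented ranges:
# temperature in [0, 2], top_p in [0.1, 1], top_k in [0, 50].
def validate_sampling(temperature=0.7, top_p=0.7, top_k=50):
    assert 0 <= temperature <= 2, "temperature must be in [0, 2]"
    assert 0.1 <= top_p <= 1, "top_p must be in [0.1, 1]"
    assert 0 <= top_k <= 50, "top_k must be in [0, 50]"
    return {"temperature": temperature, "top_p": top_p, "top_k": top_k}

# Defaults match the documented example values.
params = validate_sampling()
print(params)  # {'temperature': 0.7, 'top_p': 0.7, 'top_k': 50}
```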


tools object[]

Each tool definition includes:

  • name: Name of the tool.

  • description: Optional, but strongly-recommended description of the tool.

  • input_schema: JSON schema for the tool input shape that the model will produce in tool_use output content blocks.

  • tools child attributes

    • name string required

    Name of the tool.

    This is how the tool will be called by the model and in tool_use blocks.

    • input_schema object required

    JSON schema for this tool's input.

    This defines the shape of the input that your tool accepts and that the model will produce.


  • input_schema child attributes

    • input_schema.type enum<string> required

      Available options: object

    • input_schema.properties object
    • input_schema.required string[]
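A sketch of one tool definition following the fields documented above. The tool name and schema properties are made up for illustration:

```python
# Example tool definition: name, description, and an input_schema whose
# type must be "object" (the only allowed value per the enum above).
get_weather_tool = {
    "name": "get_weather",  # hypothetical tool name
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# The tools parameter is an array of such definitions.
payload_fragment = {"tools": [get_weather_tool]}
print(payload_fragment["tools"][0]["name"])  # get_weather
```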

tool_choice object

How the model should use the provided tools. The model can use a specific tool, any available tool, decide by itself, or not use tools at all. By default, the model automatically decides whether to use tools.
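The section above names the available modes but not their wire format. Assuming tool_choice follows the Anthropic Messages API convention, the modes would map to the shapes below; treat this as an assumption until confirmed against the service:

```python
# Assumed tool_choice shapes (Anthropic Messages API convention):
tool_choice_auto = {"type": "auto"}  # model decides whether to use tools (default)
tool_choice_any = {"type": "any"}    # model must use one of the provided tools
tool_choice_tool = {"type": "tool", "name": "get_weather"}  # force a specific
# (hypothetical) tool by name
print(tool_choice_tool["type"])  # tool
```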