DeepSeek

Online API Call
Endpoint
https://api.briskapi.com/model/deepseek
Request Method
POST
Request Authentication (Headers)
Content-Type: application/json
Authorization: Bearer {API key}
Request Parameters
  • model string (required)
    The identifier of the chat generation model to use:
    example: "deepseek-v3", "deepseek-v3-0324", "deepseek-r1", "deepseek-r1-llama-70b", "deepseek-r1-qwen-32b"
  • messages array (required)
    An array of message objects representing the conversation context for multi-turn dialogue. Each object has a role field ("system" for instructions, "user" for user input) and a content field containing the text; the model's replies carry the "assistant" role.
    example: [{"role": "user", "content": "User input content"}]
    messages.role: "user", "system"
  • max_tokens int (optional)
    The maximum number of tokens to include in the generated reply.
    default: 8192
    range: 1 - 8192
  • temperature float64 (optional)
    Controls randomness in the sampling process. Higher values (e.g., 0.8–1.0) make outputs more varied and creative, increasing the chance of different answers to the same prompt. Lower values (e.g., 0.2–0.5) produce more focused, predictable results.
    default: 0.7
    range: 0 - 1
  • top_k int (optional)
    Sets how many of the highest‑probability tokens are considered at each step. Raising this (up to 6) boosts diversity and creativity, while lowering it (minimum 1) makes the model stick more closely to the strongest predictions and reduces variety.
    default: 4
    range: 1 - 6
  • stream bool (optional)
    Whether to stream the response: if true, tokens are returned incrementally as they are generated; if false, the complete response is sent all at once.
    default: false
    values: false, true
  • search bool (optional)
    When true, the model may perform online searches based on the user input; when false, no web search is triggered.
    default: false
    values: false, true
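The parameter constraints above can be enforced client-side before sending a request. The helper below is a hypothetical sketch (not part of the API); it mirrors the documented defaults and ranges:

```python
# Sketch of client-side validation for the documented request parameters.
# build_payload is a hypothetical helper, not part of the API itself.

def build_payload(model, messages, max_tokens=8192, temperature=0.7,
                  top_k=4, stream=False, search=False):
    if not (1 <= max_tokens <= 8192):
        raise ValueError("max_tokens must be in [1, 8192]")
    if not (0 <= temperature <= 1):
        raise ValueError("temperature must be in [0, 1]")
    if not (isinstance(top_k, int) and 1 <= top_k <= 6):
        raise ValueError("top_k must be an integer in [1, 6]")
    for m in messages:
        if m.get("role") not in ("user", "system"):
            raise ValueError("request roles are 'user' or 'system'")
    return {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "top_k": top_k,
        "stream": stream,
        "search": search,
    }
```

Note in particular that top_k must be an integer between 1 and 6; fractional values are rejected.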
Request Example (cURL)
curl https://api.briskapi.com/model/deepseek \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {API key}" \
-d '{
  "model": "deepseek-r1",
  "messages": [
    {
      "content": "I am a Chinese language teacher.",
      "role": "system"
    },
    {
      "content": "hello",
      "role": "user"
    }
  ],
  "max_tokens": 2048,
  "temperature": 1,
  "top_k": 4,
  "stream":false,
  "search": true
}'
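For reference, the same request can be built from Python with only the standard library. This sketch constructs the Request object without sending it; `{API key}` remains a placeholder:

```python
import json
import urllib.request

# Build the same request as the cURL example above.
# API_KEY is a placeholder; substitute your real key before sending.
API_KEY = "{API key}"

payload = {
    "model": "deepseek-r1",
    "messages": [
        {"role": "system", "content": "I am a Chinese language teacher."},
        {"role": "user", "content": "hello"},
    ],
    "max_tokens": 2048,
    "temperature": 1,
    "top_k": 4,
    "stream": False,
    "search": True,
}

req = urllib.request.Request(
    "https://api.briskapi.com/model/deepseek",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# To send: resp = urllib.request.urlopen(req); body = json.loads(resp.read())
```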
Response Example
{
	"choices": [{
		"finish_reason": "stop",
		"index": 0,
		"message": {
			"content": "Okay, the user wrote \"hello\". I should respond in a friendly and welcoming manner. Since they mentioned they're a Chinese language teacher, maybe I can offer help related to teaching Chinese. Let me ask if they need any assistance with lesson planning, resources, or student engagement strategies. Keeping it open-ended to invite specific requests.\n\n\nHello! How can I assist you today? Whether you need help with lesson planning, teaching resources, or creative ways to engage your students in learning Chinese, feel free to ask! 😊 ",
			"role": "assistant"
		}
	}],
	"created": 1745211400,
	"id": "cmpl-f4dc264964eb9782e702ffdfeaec81e0",
	"model": "deepseek-r1",
	"object": "chat.completion",
	"usage": {
		"completion_tokens": 163,
		"prompt_tokens": 10,
		"total_tokens": 173
	}
}
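Extracting the reply text and token usage from a response shaped like the one above can be sketched as follows (the sample JSON is trimmed for brevity):

```python
import json

# Sample response trimmed to the fields used below.
raw = '''{
  "choices": [{"finish_reason": "stop", "index": 0,
               "message": {"role": "assistant",
                           "content": "Hello! How can I assist you today?"}}],
  "created": 1745211400,
  "model": "deepseek-r1",
  "object": "chat.completion",
  "usage": {"completion_tokens": 163, "prompt_tokens": 10, "total_tokens": 173}
}'''

resp = json.loads(raw)
reply = resp["choices"][0]["message"]["content"]
usage = resp["usage"]
# total_tokens is documented as the sum of prompt and completion tokens.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```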
Response Fields
  • id string
    A unique string assigned to this particular response, which you can use for logging, debugging, or auditing purposes.
  • object string
    Indicates the type of object returned. For chat completions, this will always be "chat.completion".
  • created int
    The time the response was generated, given as a Unix timestamp.
  • model string
    The name of the model that produced this response.
  • choices array
    An array of one or more replies generated by the model; each entry is a choice object with the fields below.
  • choices.index int
    The zero-based position of this choice in the choices array.
  • choices.message object
    An object describing the content and role of the message.
    {"role": "assistant", "content":"Hello, I am a teacher"}
  • choices.finish_reason string
    A string explaining why the model stopped generating text.
  • usage object
    An object summarizing token counts for this request.
  • usage.prompt_tokens int
    The number of tokens consumed by your input (the prompt).
  • usage.completion_tokens int
    The number of tokens generated by the model in its reply.
  • usage.total_tokens int
    The sum of prompt_tokens and completion_tokens, representing the total token cost of the request.