DeepSeek V3.2

Model Information

Display Name: DeepSeek V3.2

API Model ID: deepseek-ai/deepseek-v3.2

Category: Text To Text

Description: DeepSeek V3.2 is an advanced large language model with exceptional reasoning capabilities and cost efficiency.

Key Features:

  • 163K+ token context window
  • Function calling and tool use
  • Structured outputs (JSON)
  • Reasoning mode (can be toggled)
  • Prompt caching for cost savings
  • FP4 quantization for efficiency

Capabilities:

  • Advanced reasoning and math
  • Code generation and debugging
  • Function/tool calling
  • JSON mode for structured outputs
  • Multi-turn conversations

Best For:

  • Complex reasoning tasks
  • Code assistance
  • Cost-effective high-volume use
  • Tasks requiring long context

Technical Specs:

  • Quantization: FP4
  • Caching: 50% discount on cached inputs

Context Window: 163,840 tokens

Max Output: 8,192 tokens
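These two limits interact: the prompt plus the requested completion must fit in the 163,840-token context window, and the completion itself is capped at 8,192 tokens. A minimal client-side check, sketched in Python (the function name is illustrative, and actual token counts would come from your tokenizer, not from this snippet):

```python
# Limits taken from this page; the check itself is an illustrative sketch.
CONTEXT_WINDOW = 163_840
MAX_OUTPUT = 8_192

def fits(prompt_tokens: int, max_tokens: int = MAX_OUTPUT) -> bool:
    """True if the request respects both the output cap and the context window."""
    return max_tokens <= MAX_OUTPUT and prompt_tokens + max_tokens <= CONTEXT_WINDOW

print(fits(100_000))          # a 100K-token prompt leaves room for full output
print(fits(160_000, 8_192))   # prompt + completion would exceed the window
```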

How to Use This Model

To use DeepSeek V3.2 via the HInow.ai API, use the model ID: deepseek-ai/deepseek-v3.2

API Request Example (Chat/Text)


POST https://api.hinow.ai/v1/chat/completions
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

{
  "model": "deepseek-ai/deepseek-v3.2",
  "messages": [
    {"role": "user", "content": "Your message here"}
  ]
}
              
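The same call can be built from Python; a minimal sketch assuming the OpenAI-compatible endpoint shown above (the `build_request` helper is illustrative and not part of any official SDK; it only constructs the headers and JSON body from the curl example):

```python
import json
import os

# Endpoint and model ID as documented on this page.
API_URL = "https://api.hinow.ai/v1/chat/completions"
MODEL_ID = "deepseek-ai/deepseek-v3.2"

def build_request(message: str) -> tuple[dict, dict]:
    """Return (headers, payload) matching the request example above."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('HINOW_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": message}],
    }
    return headers, payload

headers, payload = build_request("Your message here")
print(json.dumps(payload, indent=2))
```

POST the payload with any HTTP client (e.g. `requests.post(API_URL, headers=headers, json=payload)`); the key is read from the `HINOW_API_KEY` environment variable rather than hard-coded.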

Pricing

  • Input: $0.37 per 1M tokens
  • Output: $0.75 per 1M tokens
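At these rates, per-request cost is simple arithmetic. A sketch that also applies the 50% cached-input discount mentioned in the technical specs (the function name, argument names, and the assumption that the discount applies per cached input token are illustrative; check your billing dashboard for authoritative figures):

```python
# Published rates in USD per 1M tokens, plus the documented 50% discount
# on cached inputs. The estimator itself is an illustrative sketch.
INPUT_PER_M = 0.37
OUTPUT_PER_M = 0.75
CACHED_DISCOUNT = 0.5

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate USD cost; cached_tokens is the cached portion of input_tokens."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * INPUT_PER_M / 1_000_000
        + cached_tokens * INPUT_PER_M * CACHED_DISCOUNT / 1_000_000
        + output_tokens * OUTPUT_PER_M / 1_000_000
    )

# 100K input tokens (half of them cached) plus 8K of output:
print(f"${estimate_cost(100_000, 8_000, cached_tokens=50_000):.5f}")
```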

Available Parameters

  • temperature: Controls randomness (0-2). Default: 0.7 (Options: 0, 0.3, 0.5, 0.7, 1.0, 1.5, 2.0)
  • top_p: Nucleus sampling (0-1). Default: 0.9 (Options: 0.1, 0.5, 0.7, 0.9, 0.95, 1.0)
  • max_tokens: Max tokens to generate (1-8192) (Options: 256, 512, 1024, 2048, 4096, 8192)
  • repetition_penalty: Reduce repetition (0.01-5). Default: 1 (Options: 1.0, 1.1, 1.2, 1.5, 2.0)
  • response_format: Output format (Options: text, json_object, json_schema)
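The documented ranges above can be checked client-side before sending a request. A hedged sketch (the range constants are copied from the list above; the validator itself is illustrative and not part of the API):

```python
# Parameter ranges as documented on this page; the validator is a sketch.
RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "max_tokens": (1, 8192),
    "repetition_penalty": (0.01, 5.0),
}
ALLOWED_FORMATS = {"text", "json_object", "json_schema"}

def validate_params(params: dict) -> list[str]:
    """Return a list of problems; an empty list means the params look valid."""
    problems = []
    for name, (lo, hi) in RANGES.items():
        if name in params and not (lo <= params[name] <= hi):
            problems.append(f"{name}={params[name]} outside [{lo}, {hi}]")
    fmt = params.get("response_format")
    if fmt is not None and fmt not in ALLOWED_FORMATS:
        problems.append(f"unsupported response_format: {fmt}")
    return problems

print(validate_params({"temperature": 0.7, "max_tokens": 8192}))  # []
print(validate_params({"temperature": 3.0, "response_format": "xml"}))
```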

Quick Reference

To use this model, set: "model": "deepseek-ai/deepseek-v3.2"

Featured: Yes

Documentation: https://hinow.ai/models/deepseek-ai/deepseek-v3.2

API Endpoint: https://api.hinow.ai/v1


Code Examples

curl -X POST https://api.hinow.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $HINOW_API_KEY" \
  -d '{
    "model": "deepseek-ai/deepseek-v3.2",
    "messages": [
      {"role": "user", "content": "Hello! How are you?"}
    ],
    "temperature": 0,
    "top_p": 0.1,
    "max_tokens": 256,
    "repetition_penalty": 1.0,
    "response_format": "text"
  }'