DSV4 · DeepSeek · Synthesized · Open Weights

DeepSeek V4

1T-parameter MoE open-weights model with a synthesized ~70% accounting-capability estimate, at roughly 1/50th of GPT-5.4's cost.

Accounting overall

68.8%

Input / Output

$0.30 / $0.90 per MTok

Context

1M

Speed

~70 tok/s

Released

2026-03

Cutoff

2025-12
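The listed pricing ($0.30 input / $0.90 output per MTok) translates directly into workload cost. A minimal sketch, where the monthly token volumes are illustrative placeholders rather than figures from the card:

```python
# DeepSeek V4 listed pricing (USD per million tokens).
INPUT_PER_MTOK = 0.30
OUTPUT_PER_MTOK = 0.90

def monthly_cost(input_mtok: float, output_mtok: float) -> float:
    """USD cost for a month's volume, given in millions of tokens."""
    return input_mtok * INPUT_PER_MTOK + output_mtok * OUTPUT_PER_MTOK

# Example: 500M input tokens + 100M output tokens per month.
print(f"${monthly_cost(500, 100):.2f}")  # $240.00
```

At the card's stated ~50x cost ratio, the same workload on GPT-5.4 input pricing alone would be roughly fifty times the input portion of this bill.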

01 · Accounting Task Breakdown

Eight accounting-task categories borrowed from DualEntry's 101-task benchmark. Scores are measured where published and synthesized from adjacent benchmarks otherwise.

| Category | Score |
| --- | --- |
| Transaction Classification | 83.0% |
| Journal Entry | 82.0% |
| Accounts Payable | 72.0% |
| Accounts Receivable | 71.0% |
| Bank Reconciliation | 68.0% |
| Financial Reporting | 54.0% |
| Month-End Close | 42.0% |
| Accounting Knowledge | 78.0% |
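The headline 68.8% "Accounting overall" figure is consistent with a simple unweighted mean of the eight category scores. This aggregation is a plausible reading, not one the card confirms:

```python
# Category scores from the breakdown above.
scores = {
    "Transaction Classification": 83.0,
    "Journal Entry": 82.0,
    "Accounts Payable": 72.0,
    "Accounts Receivable": 71.0,
    "Bank Reconciliation": 68.0,
    "Financial Reporting": 54.0,
    "Month-End Close": 42.0,
    "Accounting Knowledge": 78.0,
}

# Unweighted mean reproduces the headline number.
overall = sum(scores.values()) / len(scores)
print(round(overall, 1))  # 68.8
```

Note the spread: the mean hides a 41-point gap between the strongest category (Transaction Classification, 83.0%) and the weakest (Month-End Close, 42.0%).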
02 · Research

DeepSeek V4 is a ~1-trillion-parameter Mixture-of-Experts model (~37B active parameters) with a 1M-token context window. Public reporting places its overall capability near GPT-5 class at roughly 1/50th the input-token cost ($0.30/MTok). DualEntry has not published a measured score for it on their accounting benchmark, so our score is **synthesized** from adjacent benchmarks (90% HumanEval, ~80% SWE-bench Verified), its position on the Artificial Analysis intelligence index, and editorial judgment; we estimate ~70% on the DualEntry 101-task suite.
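A purely illustrative sketch of the kind of blend behind the synthesized ~70% estimate. The equal weighting and the domain-transfer discount are hypothetical choices made here for illustration; only the benchmark inputs (90% HumanEval, ~80% SWE-bench Verified) come from the text above:

```python
# Adjacent-benchmark scores cited in the research note.
adjacent = {"HumanEval": 90.0, "SWE-bench Verified": 80.0}

# Equal-weight proxy for general capability (hypothetical weighting).
proxy = sum(adjacent.values()) / len(adjacent)  # 85.0

# Hypothetical discount: accounting tasks tend to score below coding
# benchmarks for the same model. The 0.82 factor is illustrative only.
TRANSFER_DISCOUNT = 0.82
estimate = proxy * TRANSFER_DISCOUNT
print(round(estimate, 1))  # 69.7, reported as ~70%
```

Any measured DualEntry number should replace this estimate outright rather than be blended with it.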

The open-weights release matters materially for data-residency-sensitive finance teams: self-hosting DeepSeek V4 in a controlled VPC avoids the data-sharing concerns of hosted frontier APIs while landing in the same rough capability band.

Synthesized score — update when DualEntry publishes a measured number.
