
deepseek-ai/DeepSeek-V3

DeepSeek-V3 is a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts the Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.

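The model is served behind a hosted API (see the pricing and context-length details below). The following is a minimal sketch of calling it through an OpenAI-compatible chat completions endpoint; the base URL, environment variable name, and exact model identifier are placeholders, not details taken from this page.

```python
# Minimal sketch: query DeepSeek-V3 via an OpenAI-compatible endpoint.
# The base URL and env var below are hypothetical placeholders.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",      # hypothetical endpoint
    api_key=os.environ["PROVIDER_API_KEY"],     # hypothetical env var
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    messages=[
        {"role": "user",
         "content": "Summarize Mixture-of-Experts routing in two sentences."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```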

Visibility: Public
Pricing: $0.49 / $0.89 per million tokens (input / output)
Context length: 65,536 tokens
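To make the listed rates concrete, here is a small sketch of estimating the cost of a single request from its token counts; the example token counts are made-up values, not figures from this page.

```python
# Estimate request cost at the listed rates:
# $0.49 per million input tokens, $0.89 per million output tokens.
INPUT_PRICE_PER_MTOK = 0.49
OUTPUT_PRICE_PER_MTOK = 0.89

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${request_cost(10_000, 2_000):.4f}")  # -> $0.0067
```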

Version: 4c1f24cc10a2a1894304c7ab52edd9710c047571

Updated: 2025-01-03T23:03:07+00:00