DeepSeek's new models are so efficient they'll run on a toaster ... by which we mean Huawei's NPUs

Posted in Industry News

Now available in preview, DeepSeek V4 cuts inference costs to a fraction of R1

Chinese AI darling DeepSeek is back with a new open-weights large language model that promises performance rivaling the best proprietary American LLMs. Perhaps more importantly, it claims to dramatically cut inference costs, and it extends support for Huawei's Ascend family of AI accelerators.…
