Ironically, this is a great case study in the value of Chinese models. They’ve released a number of models that are on par with Claude’s latest under “open weight” licenses, which would allow you to run them yourself if you wanted to, or to hire some other third party to provide API access. In that case, it wouldn’t matter what the original company’s “usage policy” says.
There are a couple of Western open models that aren’t bad either, but they tend to be aimed at smaller and simpler use cases than Claude.
What models exactly? And what kind of hardware do you need to run them? Also, are there any GitHub repos that replicate Claude projects?
The one currently making the headlines is Kimi K2.6; on the benchmarks it’s just short of Opus 4.7. It’s a trillion-parameter model, so it won’t run on a desktop computer, but it’s something a company could run on reasonably buildable servers for its own use.
For local use, I’ve been finding Qwen3.6’s 35B-parameter model uncannily good. Gemma4 (one of the Western ones) is also good. These models won’t do the sort of heavy lifting that Opus can, but you don’t need that heavy lifting for every task.
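On the hardware question: a rough lower bound on memory is just parameter count times bytes per weight at your chosen quantization. This is a minimal back-of-the-envelope sketch (standard arithmetic, not specific to any runtime; it deliberately ignores KV cache, activations, and runtime overhead, which all add more on top):

```python
def approx_weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Rough memory needed just to hold the weights, in GB.

    Ignores KV cache, activations, and runtime overhead, so treat
    the result as a lower bound on what you actually need.
    """
    bytes_per_weight = bits_per_weight / 8
    return n_params * bytes_per_weight / 1e9

# A 35B-parameter model at 4-bit quantization fits on a beefy desktop GPU
# or in unified memory:
print(f"35B @ 4-bit: ~{approx_weight_memory_gb(35e9, 4):.0f} GB")   # ~18 GB

# A trillion-parameter model at the same quantization needs server-class
# hardware, which is why it's out of reach for desktop use:
print(f"1T  @ 4-bit: ~{approx_weight_memory_gb(1e12, 4):.0f} GB")   # ~500 GB
```

The same arithmetic explains the gap in the thread: the 35B local models fit in consumer memory budgets, while the trillion-parameter class needs hundreds of gigabytes even heavily quantized.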
They are not as capable as Opus, and that sadly matters.
Kimi K2.6 is close to Opus. It beats Opus 4.6 on the benchmarks, so if Opus 4.6 was sufficient for your needs, Kimi K2.6 should be on par.
If you literally can’t access Opus because Anthropic cut you off I suspect that matters more than a slight difference in benchmarks.
Quick, another fix of the LLMs! Let’s not think about what the downtime means for the industry.