7 comments

  • sho 7 minutes ago
    I am nowhere near as concerned by this as I was a year ago, when I was expecting the axe to fall at any moment before the Chinese labs achieved some sort of escape velocity. I now think it's too late: all the cats are out of all the bags, there's no moat except maybe a temporal one of a few months, the genie is out of the bottle.

    There is no secret sauce the US labs have that the Chinese ones don't, or won't have soon enough. DeepSeek 4 and Kimi 2.5 are not quite Claude 4.5/GPT 5.5, but there's no fundamental principle missing - they are strong evidence that there's no real advantage the "frontier" labs possess that isn't related to scale, which they will gain in time (if they even need to). The RL post-training techniques that work are widely known and easily copied. All DeepSeek is really lacking is data, which they're getting - and the harder Anthropic/the USG makes it to access Claude in China, the more of that precious data they'll get!

    I used to sort of entertain the "fast take-off breakaway" scenario as being plausible but not really anymore. The only genuine moat the frontier labs have is their product take-up, which isn't nothing, far from it, but it's not some unbreakable technological wall. Too late guys - it might have been too late for quite some time.

  • terrib1e 32 minutes ago
    No mention of open weights anywhere in the piece, which is weird. Qwen, Llama, DeepSeek are months behind frontier, not years. If you're a European startup worried about getting cut off from Anthropic's API in 2027, the real question is what the open-weight frontier looks like then. Probably pretty capable. That undercuts most of the doom scenario.

    Also, he concedes Mythos-level capabilities will be cheap next year, then handwaves it with "you need the best AI, not good-enough AI." For most use cases, frontier minus six months is fine.

    • sholladay 4 minutes ago
      Open models are pretty good at this point but the problem is that they are limited by the tooling and infrastructure that surrounds them. For example, the last time I tried to set up web search with an open model, the experience was pretty bad.
    • rTX5CMRXIfFG 5 minutes ago
      Affordability of hardware that can run local LLMs is a real factor, too. Not sure when RAM prices are going down, but with everything that's happening and can happen in the world right now, it doesn't look like they'll drop in the near or medium term.
      • wahnfrieden 2 minutes ago
        No one is going to run models that are comparable to frontier locally without spending enormous sums for use at scale or in large orgs. Even with cheap RAM, you will still need a very large budget for frontier-level capability.

        Open models that are competitive with frontier will be used on shared hosts.

    • wahnfrieden 8 minutes ago
      Llama is not months behind GPT 5.5 Pro. I don't think Qwen or DeepSeek are either.
  • nl 5 minutes ago
    Quote:

    > “The two AI superpowers are going to start talking. We’re going to set up a protocol in terms of how do we go forward with best practices for AI to make sure nonstate actors don’t get a hold of these models,” Bessent told Joe Kernen on Thursday, on the sidelines of President Donald Trump’s two-day meeting in Beijing with Chinese President Xi Jinping.

    https://www.cnbc.com/2026/05/14/us-china-ai-rules-bessent-us...

    OpenAI is already talking openly about gated access to their models (see this OpenAI podcast episode for example: https://openai.com/podcast/#oai-podcast-episode-16)

    Separately there's also a very active effort to stop open weight releases.

    Taken together, that's dangerous for anyone who thinks access to frontier intelligence is important.

  • coderenegade 23 minutes ago
    The distillation risk has been brewing for a while now. In a very real sense, the model is the data, so if the data is locked down because of how valuable it is, it was only a matter of time before fully open access to the models was revoked.

    There's also an additional economic concern that rarely gets mentioned: because no one has cracked continual learning, keeping models up to date and filling in gaps in performance requires retraining on an ever-growing dataset. Granted, you aren't starting from scratch each time, but the scaling required just to stay relevant looks daunting.

    I don't know where any of this goes on a societal level, but I've believed since the release of DeepSeek R1 that access to frontier models would eventually be locked up behind contracts, since the only moats protecting the models themselves are purely artificial. It remains to be seen how effective China is at pushing the envelope, and whether they are interested in providing unfettered access. And on top of that, it remains to be seen how well these models actually turn out to scale in the long run.

  • evdubs 17 minutes ago
    What's the likelihood that universities eventually become open model providers?
  • eth0up 34 minutes ago
    Damn. I predicted this last year and got thrashed for it.

    Glad to see others catching on.

  • zelon88 42 minutes ago
    > And it doesn’t stop with the security questions: the Trump administration’s signature style of international engagement is to wield American leverage as a bundle. Deadlocks in trade negotiations are broken by threatening to withhold intelligence, tech deals are stalled by reference to food safety standards. And so I don’t know when a U.S. administration would choose to leverage its seemingly inevitable predeployment authority over frontier models to secure its broader interests, but I’m sure it would in due time. That means that even if we do everything ‘right’ on the security and economic side, frontier access is still fundamentally contingent as long as there’ll be divergences between governments’ strategic interests.

    The Trump Administration telling the very neo-fascist oligarchs who bought him an election and bought him a ballroom to play nice with their toys? At the expense of rampant capitalism? Lol.

    He already showed us the limit of his comprehension of the topic when he signed EO 14179, limiting states' ability to regulate AI.

    Trump doesn't swing for perfect pitches. He is a madman, a lunatic, and a true moron. Do not give this man any credit. I would be shocked if he could tell you the time on an analog clock.