11 comments

  • momojo 15 minutes ago
    This reminds me of Antirez's "Don't fall into the anti-AI hype" [0]

    In a sentence: These foundation models are really good at optimizing these extremely high-level, extremely well-defined problem spaces (i.e., multiply matrices faster). In Antirez's case, it's "make Redis faster".

    There have been two reactions: "Oh, it would never work for me" and "I have seen months of my life accomplished in an hour", and I think they're both right. I think we should be excited for Antirez (who has since been popping off [1]), and I think the rest of us should rest easy knowing that LLMs can't (and maybe were never meant to) tackle the tacit-knowledge-filled, human-system-centric, ambiguously-defined-problem-space jobs most mortals work.

    [0] https://antirez.com/news/158 [1] https://antirez.com/news/164

    • dinfinity 1 minute ago
      > I think the rest of us should rest easy knowing that LLMs can't [...]

      What if (when?) (AI-assisted) research moves AI beyond LLMs? Do you think that can't happen?

  • alecco 55 minutes ago
    Are Googlers themselves happy using the Gemini coding agent instead of Claude Code or Codex? (no snark, I'm really asking)
    • dekhn 9 minutes ago
      If you mean specifically the Gemini VS Code Extension: it's terrible compared to Claude Code or Codex. I don't know how they can get away with it. Just constant timeouts, weird failure modes, having to start a new chat to switch modes... but I don't think any of that is specific to Gemini the model; it seems to be the extension.

      As for actual solutions to problems, ignoring the VS Code extension aspect, I find all three premier models to be excellent coding agents for my purposes.

    • carbocation 53 minutes ago
      Last month, Steve Yegge suggested that they are not: https://xcancel.com/Steve_Yegge/status/2043747998740689171
      • NitpickLawyer 39 minutes ago
        > He says the problem is that they can't use Claude Code because it's the enemy, and Gemini has never been good enough to capture people's workflows like Claude has, so basically agentic coding just never really took off inside Google. They're all just plodding along, completely oblivious to what's happening out there right now.

        This is a bunch of gabagoo. Wrong on so many layers, it's not even worth reading further.

        a) Google has agentic coding in both Antigravity and CLI forms. While it's not at the level of CC + Opus, it's still decent.

        b) Google has its own versions of models trained on internal code.

        c) Google has Claude in Vertex, and they can most definitely set it up in secure zones (like they do for their clients), so they'd be able to use Claude (at cost) within their own projects.

      • stormbeard 36 minutes ago
        Demis Hassabis chimed in on that thread and called it what it is: clickbait.
        • typs 23 minutes ago
          I’m not so sure. From talking to some of my own friends at Google, they feel that the Antigravity/Gemini models are handicapping them and would much rather be using Claude Code (which only DeepMind gets to use).
          • beanard 6 minutes ago
            Sure, but there's a cavernous distance between "google = john deere" and "darn, I have to use Gemini".
      • FrustratedMonky 7 minutes ago
        There is value in "eating your own dog food".

        If internal staff aren't happy with the tools they build, that should typically drive improvements to those tools.

      • PunchTornado 42 minutes ago
        This couldn't be further from the truth
    • nine_k 41 minutes ago
      The point of dogfooding is exactly that: if we're unhappy, we're the ones who can improve it.
    • jensensbutton 39 minutes ago
      Note that coding is not the only use of Gemini or any of these models, and it's also not what this article is talking about. Gemini may not be the best coding agent while still being very good at other things.
    • j2kun 10 minutes ago
      I for one can't tell the difference between Claude and Gemini for coding. And the internal agent tooling is many times faster than Claude Code in my experience.
    • PunchTornado 42 minutes ago
      Codex?
  • pingou 57 minutes ago
    AI improving itself (or at least the architecture it runs on): the singularity is near, as they say.

    Do we have other examples of AI being used to improve the LLMs, apart from the creation of synthetic data and the testing of the models?

    • NitpickLawyer 45 minutes ago
      > Do we have other examples of AI being used to improve the LLMs

      Yes. Last year when they revealed AlphaEvolve, they described using a previous Gemini model to improve kernels that were then used to train this generation of models, netting a 1% faster training run. Not much, but still.
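
      A minimal sketch of that evolve-and-benchmark loop, in Python (the names and the toy cost function here are mine, not DeepMind's; the real system compiles candidate kernels and times them on accelerators):

        import random

        # Toy stand-in: the "kernel" is just a tile-size parameter and the
        # cost curve is invented, with 128 pretended to be optimal.
        def benchmark(tile: int) -> float:
            return abs(tile - 128) + random.random()

        # Stand-in for "ask the LLM to propose a variant of the best kernel".
        def mutate(tile: int) -> int:
            return max(1, tile + random.choice([-32, -16, 16, 32]))

        def evolve(seed: int, generations: int = 200) -> int:
            best, best_cost = seed, benchmark(seed)
            for _ in range(generations):
                cand = mutate(best)
                cost = benchmark(cand)
                if cost < best_cost:  # keep only measured improvements
                    best, best_cost = cand, cost
            return best

        print(evolve(seed=512))  # typically converges near 128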

    • mkw5053 51 minutes ago
      I feel like the most viral lately is https://github.com/karpathy/autoresearch
    • dinfinity 35 minutes ago
      > AI improving itself

      This is the thing to look for in 2027, imho. All the big AI labs have big projects working on research agents, including specifically on improving AI (duh), and I expect a lot of that to get out of the experimental phase this year.

      Next year they actually get to do a lot of work, and I think we will see the first big effective architectural change co-invented by AI.

      • pjmlp 18 minutes ago
        And then in 2028 we will be selling ice cream at the beach.
    • lewtun 44 minutes ago
      Shameless plug: https://huggingface.co/spaces/smolagents/ml-intern

    It’s a simple harness around Opus, but with tight integration with Hugging Face infra, so the agent can read papers, test code, and launch experiments.

  • dandaka 10 minutes ago
    How many times do we have to hear about Erdős problems? :) Each one sounds like a great achievement for humanity at first, but they keep coming back!
  • brkn 34 minutes ago
    I would be interested to see how exactly the agent helped: how it was used, where it led to the given improvement, and how long it would have taken a human to arrive at the same solution.
    • j2kun 10 minutes ago
      The blog post has many links to papers and preprints discussing this exact question.
  • baq 56 minutes ago
    RSI (recursive self-improvement) is here at both the hardware and the software level. Sprinkle in a couple of algorithmic breakthroughs and the results are nigh unimaginable.
  • maxothex 57 minutes ago
    What I'm most curious about is how this translates to messy, real-world codebases without well-defined metrics. Most production software isn't chip design or kernel optimization - it's business logic with unclear success criteria. The infrastructure story is impressive, but I'd love to see how they handle domains where the evaluation function itself is ambiguous.
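
    For concreteness: the whole search loop hinges on a cheap, scalar, machine-checkable fitness function existing. A sketch of the contrast in Python (hypothetical signatures, not any real API):

      from typing import Callable

      # What the agent loop needs: candidate source in, scalar score out.
      Fitness = Callable[[str], float]

      def kernel_fitness(src: str) -> float:
          """Well-defined: compile the candidate, run a benchmark, return
          wall-clock seconds (lower is better). Cheap and unambiguous."""
          ...

      def business_logic_fitness(src: str) -> float:
          """Ambiguous: 'is this checkout flow better?' has no cheap scalar
          answer a machine can compute, so there is nothing to optimize
          against."""
          raise NotImplementedError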
  • marcus_ai 1 hour ago
    [flagged]
  • kadam2576 1 hour ago
    [flagged]
    • stalfie 1 hour ago
      Well, even if the evaluation infrastructure is something humans could have had access to before, and the agent's key "skill" is just that it's a more patient and scalable worker, I would still argue that this "comes from the agent".

      Humans get bored, impatient, or run out of time, and so often give up at what they perceive to be a decent "local minimum". Early verification harnesses using GPT-4 to optimize robot reward functions succeeded quite well on the fact that the LLM just kept going (link and a sketch below). As long as it is too boring for a human to use the same evaluation infrastructure, this is still an agent skill.

      https://arxiv.org/abs/2310.12931
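
      To make the "it just kept going" point concrete, a toy model in Python (the stand-in functions are mine; the paper trains real policies in simulation):

        import random

        # Stand-in for "the LLM writes a candidate reward function".
        def propose() -> float:
            return random.gauss(0.0, 1.0)

        # Stand-in for "train a policy with it and measure the result".
        def score(candidate: float) -> float:
            return candidate

        def patient_search(budget: int) -> float:
            best = float("-inf")
            for _ in range(budget):  # no boredom, no "good enough" early exit
                best = max(best, score(propose()))
            return best

        print(patient_search(10))       # roughly a human's patience
        print(patient_search(100_000))  # a tireless agent; the max keeps creeping up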