The ambient-agents premise lands and is thought-provoking.
But the more you read the article, the more the point is lost. The prescriptions it gives aren't really ambient, are they?
CLI: a good command-line interface makes it easy for an agent loop to interact with your system and saves tokens.
Specs: Declarative configs, schemas, manifests. Artifacts that state the desired outcome, not the steps.
Reconciliation loops (as popularized by Kubernetes): you declare the target state, let the system continuously converge toward it. Detect if something drifts.
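That reconciliation pattern is easy to sketch. Here's a minimal, hypothetical in-memory version (the dict stands in for a real cluster or API, and all the names are made up for illustration):

```python
def reconcile(desired, fetch_actual, apply_change):
    """One pass of a reconciliation loop: compare desired vs. actual
    state and apply only the keys that have drifted."""
    actual = fetch_actual()
    for key, want in desired.items():
        if actual.get(key) != want:
            apply_change(key, want)  # converge this key toward the target

# Toy "system": a dict standing in for a cluster or external API.
state = {"replicas": 1}
desired = {"replicas": 3, "image": "app:v2"}

reconcile(desired, lambda: dict(state), lambda k, v: state.__setitem__(k, v))
print(state)  # {'replicas': 3, 'image': 'app:v2'}
```

Run it periodically (or on change events) and the system keeps converging toward the declared state, which is the "detect if something drifts" part.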
The gripe seems to be specifically with being able to chat with the AI. Yes, ideally the AI just knows to do stuff. But the chat interface is also the reason every Jack and Joe has ChatGPT in their pocket. It's also just growing pains.
> Give an agent the right interfaces and it becomes less conversational and more ambient. It no longer needs to constantly ask, explain, summarize, and negotiate. It can stay in the background, react to changes, and make steady progress with less supervision and less noise. That is closer to Weiser’s vision: calm technology, but for machines.
I tend to agree quite a bit.
I created an ambient background agent for my projects that does just that.
It is there, in the background, constantly analysing my code and opening PRs to make it better.
The hard part is finding a definition of "better", and for now it is whatever makes the linter and type checker happy.
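That "linter and type checker happy" gate could look something like this. It's a hypothetical sketch, not the commenter's actual setup; `ruff` and `mypy` here are stand-in commands for whatever tooling the project really uses:

```python
import subprocess

# Stand-in check commands; swap in whatever the project actually runs.
LINT_CHECKS = [["ruff", "check", "."], ["mypy", "."]]

def passes_checks(commands=LINT_CHECKS) -> bool:
    """A change qualifies for a PR only if every check exits 0
    on the modified tree."""
    return all(
        subprocess.run(cmd, capture_output=True).returncode == 0
        for cmd in commands
    )
```

An agent loop would then only open a PR when `passes_checks()` is true, which is exactly why this proxy for "better" is so narrow: it rewards anything that silences the tools.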
Just take a look at the pull requests and issues opened on a repository that's popular with LLM agents to understand how well that works.
If there's one takeaway, it's that these agents need more, not less, oversight. I don't agree at all with the "just remove a few tools and you can remove the human from the loop" approach. It just reduces the blast radius in case the agent gets it wrong, not the fact that it gets it wrong.
I'd pay more for deterministic, explainable, and fast software without agents. The value of computers is that they do tasks repeatably, reliably, and at blinding speed.
> Agentic management software is all the hype today: What started with Moltbot and OpenClaw now has a lot of competition: ZeroClaw, Hermes, AutoGPT etc.
Moltbot is OpenClaw, and AutoGPT came significantly earlier. I couldn't read past the first paragraph; I've entirely lost trust in whatever or whoever wrote it.
It’s marketing. They’re selling some change management solution, so obviously they advocate for showing AI agents only changes, rather than the full context.
But overall the agent is a pleasure to use.
I crafted the AI loop to do exactly what I would be doing manually.
Out of 10 PRs, 6 to 7 get merged. The others simply get closed.
This stuff is negative value.
Doesn’t mean it’s a good idea, though.