> The specification must contain a non-ambiguous formal grammar that can be parsed easily. A page can then be tested against the standard and rejected or accepted as compliant. Pages that don't conform with the specification won't be rendered. It is explicitly forbidden for clients to accept any page that doesn't conform with the specification.
This is what XHTML was, and it was a complete disaster. There's a reason almost nobody serves XHTML with the application/xhtml+xml MIME type, and that reason is that getting a “parser error” (this is what browsers still do! try it!) is always worse than getting a page that 99% works.[0] I strongly believe that rejecting the robustness principle is a fatal mistake for a web-replacement project. The fact that horribly broken old sites can stay online and stay readable is a huge part of the web's value. Without that, it's not really “the web”, spiritually or otherwise.
[0] It's particularly “cool” how they simply do not work in the Internet Archive's Wayback machine. The page can be retrieved, but nobody can read it.
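To make the two failure modes concrete, here's a minimal sketch using Python's stdlib parsers as stand-ins for draconian XML handling vs. tag-soup recovery (the sample markup and class name are my own invention, not anything from the article):

```python
# Draconian XML parsing (the XHTML model): one unclosed tag is fatal.
import xml.etree.ElementTree as ET
from html.parser import HTMLParser

broken = "<html><body><p>hello<p>world</body></html>"  # second <p> never closed

try:
    ET.fromstring(broken)
except ET.ParseError as e:
    print("parser error:", e)  # the reader gets nothing at all

# Lenient HTML parsing (the robustness principle): recover what you can.
class TextDump(HTMLParser):
    def handle_data(self, data):
        print("recovered:", data)

TextDump().feed(broken)  # still yields "hello" and "world"
```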
XHTML failed in an era when writers (even normies) were writing some HTML of their own, and they couldn't be trusted to close their tags properly. XHTML also assumed writers would be personally invested in semantic markup, like distinguishing e.g. the italics of book titles from the italics of emphasis.
Today, when writers are using visual editors (or Markdown), few are writing their own HTML any more. A web standard requiring compliance would work differently today.
Markdown sux and so do visual editors. I think visual editors were just invented to make it so cut-and-paste never quite works right. There's been a conceptual problem with the whole idea ever since MS Word, and the industry has never dealt with it.
> XHTML failed in an era when writers (even normies) were writing some HTML of their own
I'd say it was a minority of writers who were handcrafting XHTML. And everyone, whether handcrafting or using tools, could validate their compliance using a browser, which made it very easy to adjust your tools or your handcrafted code. We are now in a situation where there is no schema for HTML.
I, for one, am very much in favor of forking the web with a document format with a schema. It really seems like a small and simple change to me.
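A rough sketch of what a schema buys you, assuming lxml and a toy schema I made up purely for illustration (nothing like this is specified anywhere):

```python
# Toy example: a document format with a schema makes "valid" decidable.
from lxml import etree  # third-party: pip install lxml

# A tiny invented schema: a doc is a title followed by one or more paragraphs.
schema = etree.XMLSchema(etree.XML(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="doc">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="p" type="xs:string" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""))

good = etree.XML(b"<doc><title>Hi</title><p>Body text.</p></doc>")
bad  = etree.XML(b"<doc><p>No title here.</p></doc>")

print(schema.validate(good))  # True  -> a conforming client may render it
print(schema.validate(bad))   # False -> per the proposal, it must be rejected
```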
Note that when I say "writing their own HTML", I don't mean handcrafting a whole webpage. I mean that people were writing i or b tags in their Wordpress editors or in online comment boxes, because back then such text fields did not have visual editors and would accept raw tags. Under XHTML, if the writer did not close tags properly, such input would have broken the whole page, so obviously back then such a standard was DOA.
Those cases were easy to fix by running e.g. htmltidy on the UGC.
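For the curious, a sketch of that fix via the pytidylib bindings (assumes libtidy is installed; the sample input is made up):

```python
# Run user-generated markup through HTML Tidy before publishing, so one
# commenter's unclosed tags can't break a strict (XHTML-style) page.
from tidylib import tidy_fragment  # third-party: pip install pytidylib

ugc = "This is <b>bold and <i>italic"  # tags the commenter never closed

fixed, errors = tidy_fragment(ugc, options={"output-xhtml": 1})
print(fixed)   # well-formed again, roughly: This is <b>bold and <i>italic</i></b>
print(errors)  # tidy's warnings about the missing end tags
```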
Honestly I don't think it was killed by one thing, or by anything. Just no platform really cared; it wasn't a win for anyone and was occasionally a loss.
Agreed. There may be some situations where I want to ensure 100% correctness. I'm thinking of life-or-death scenarios (which, if so, should maybe use a different protocol anyway). However, checking the sports score or looking at cat memes isn't that.
No scripting is a tell: it's about wanting other people to accommodate their concerns about running a complex browser, not about solving a real problem.
If it did somehow happen that a good deal of interesting content was published using the standard, the most popular client would probably be nonconforming, ignoring the rule to not render ambiguous content.
Web browsers turned into application engines because it was a path to get usable software on PCs without having to deal with Microsoft. IE6 stayed broken forever for a reason.
Now, they enable applications to exist without going through app store gateways.
A new document-only protocol aligned with the Web's original intention would be very useful simply for security reasons. I liked Gemini because, by design, a Gemini document is not executable in any way; there are no popups, plugins, or even cookies. All this is out of the box without having to manage settings, and Gemini documents are very readable without an app at all.
But replacing the modern browser, rather than being another option, will actually lock people in further than they already are: open protocols require apps, and apps are now all behind a gateway on users' primary computing device, the phone.
It probably won't matter in a few years as the Web will likely be equally locked down soon, though.
> One of the problems with the Web is that as soon as a monopolistic entity can build a mechanism to extract revenue from it, there will be an incentive to capture the standard and change it for their own benefit. In the particular case of the Web, this has resulted in a standard that grows out of control in complexity, so it increases the barrier of entry for new browsers and reduces the competition.
Maybe I'm just stupid, but I don't really know what the author is talking about here. What parts of the standard? HTTP? HTML? DOM APIs? What?
Why not try to define a strict subset of the current specs that would target ease of implementation and graceful degradation? I'd rather have many different clients compatible with a "web-lite" spec that is enough to navigate 95% of websites; sites would then have an incentive to officially support that subset if it becomes popular enough.
I think at least part of the reason for this is acknowledging that the web isn't much of a web any longer. You've got three or four vendors that serve the vast majority of all internet traffic. And it's not happenstance that those same vendors now control something which was originally meant to be democratic.
Most of this document reads to me like that's the problem they're trying to solve, not just Chrome's huge market share, so simply not targeting it doesn't serve their purpose.
This sentence highlights the reason why these efforts fail despite any original good intentions:
"as soon as a monopolistic entity can build a mechanism to extract revenue from it, there will be an incentive to capture the standard and change it to for their own benefit"
Personally I'd love a simple, semantically versioned subset of the web. The required traction and buy-in from existing key players (browser vendors, web hosting platforms, etc.) makes it largely a non-starter, though. I'd love to be wrong.
Instead of "forking", it may be more prudent to extend or revive something more like Gopher, so you don't constantly get baraged by incompatible sites (like you would in a forked web)
> A page can then be tested against the standard and rejected or accepted as compliant. Pages that don't conform with the specification won't be rendered. It is explicitly forbidden for clients to accept any page that doesn't conform with the specification.
it's as if nothing was learned from the XHTML debacle
I think XHTML failed because it didn't give web devs any new capabilities, so most didn't feel the need to learn it and do the extra work of getting their tags correct.
Then HTML5 came along, providing all kinds of shiny goodies and saying not to bother with the tags. In the end, a more rigid standard would have been nice. (Though this is mostly about the skin-deep part of the standards.)
History explains why HTML is now a living standard: https://whatwg.org/faq (Ctrl+F Living and keep reading).
> A published version of the standard NEVER, EVER, EVER, EVER changes.
WhatWG does have per-commit snapshots of the standard. They're just not semantically versioned because it is a living standard.
I think what the author wants is something like Gemini instead of HTML, but that has its own set of problems. My plea for Dillo would be to instead just support a text/markdown mime-type natively and we can try for adoption in more browsers.
> The objective is not to create a feature-by-feature clone of the Web, but to create a specification that allows humans to exchange knowledge, notes, and other forms of information without the imposed requirement of having to run a full blown VM to read it.
Markdown in browsers fits your objective! The only gotcha is CommonMark extensions, and they can work with sub-type declarations in the MIME type.
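As a toy illustration of that plea, here's what serving text/markdown could look like on the server side; the port and handler name are my own invention:

```python
# Serve .md files with the text/markdown MIME type and let the client
# decide how to render them. A browser with native Markdown support (the
# plea above) would render these; today's browsers just show the source.
from http.server import HTTPServer, SimpleHTTPRequestHandler

class MarkdownHandler(SimpleHTTPRequestHandler):
    # Extend the default extension map so .md isn't served as octet-stream.
    extensions_map = {**SimpleHTTPRequestHandler.extensions_map,
                      ".md": "text/markdown"}

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), MarkdownHandler).serve_forever()
```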
I think original web standards were solving a completely different problem: sharing information.
The modern Internet is 45% appearances and 50% search traffic optimizations. For better or worse, we lost all usable registries of websites, and we lost websites built without regard for appearance or traffic. The information-focused Web is pretty much dead.
Maybe these ideas did not scale and did not monetize that well, but we will never really know what an information-focused version of the Internet would have looked like, because evolution took it elsewhere. Unless we try building another one with different principles and limitations at the core.
I agree. Even where blogging and sharing information is still around, it is strongly linked with brand-building, monetization, and engagement-maxxing. Look at all the old Wordpress bloggers who switch to Substack in order to have some eyeballs on their posts, and then inevitably begin conforming to its ethos willingly or unwillingly.
For me, the information-sharing part of the internet now is the shadow libraries. I can get access to all (well, still not quite all) journals and university-press publications from the last century? Awesome. Vastly more informative than some blogger who nowadays is probably trying to monetize my attention.
The only sort of problem this might solve is the insanely low barrier of entry that the Web has in 2026. The Web was arguably a better (albeit imperfect) place when it was dominated by geeks and kids who could learn to use it faster than their elders. It was a club in a sense. Today it's a club where everyone on the planet is invited, meaning it's no longer a club. I know that sounds great to a lot of people, but I don't agree that systems become better with more participation and fewer criteria for that participation.
Even so, those who want to share and access information can already do that via the Web. Nobody has to use scripting. Nobody has to use The Google as their search. Nobody has to rely on an LLM. If there is demand for simple webpages that are free of scripting, they can be built and shared today. Because of this, the proposal comes off as very out of touch and deep within the HN bubble. Strict grammar for declaring documents is merely a fetish: if there's no scripting, there's no reason for a document to break in some silly way.
I mostly agree with the article - I believe the differentiation should be between documents and applications.
While HTML serves its purpose, especially for documents, the modern web is a giant mess of that legacy, combined with unfriendly ergonomics and glue/hacks built on top just so we as developers can have better DX for creating complex software.
Building a browser means having to deal with all that legacy, whether we like it or not, so most of the browser market got captured by the big players who have enough manpower to cover all those edge cases. That also means we have to deal with whatever technical choices or bloat they make, causing an infinite stream of issues, from memory usage, to size, to limitations that don't make sense in 2026 but are still there because someone 20 years ago decided to write them like that. As I deal with mobile webviews a lot in my daily work, I unfortunately had to get familiar with quite a few gotchas and edge cases, and some are just... absurd in this day and age.
I believe we need a separation between an application layer and the document layer, and especially between the UI language and the actual application code. Script tags serve their purpose, but again, they are a hacky solution with their own bag of tricks, and those tricks impact all of the software built upon them.
Now, a bit of a shameless plug: I've been working on something to fill that gap, at least for myself and hopefully for others who encounter the same issue. It's called Hypen (https://hypen.space) and it's a DSL for building apps that work natively on all platforms, with strict separation of code/UI/state, and support for as many languages and platforms as I can maintain, not "just javascript". While currently it's focused on streaming UI, it's built with Rust and WASM at its core and will soon allow fully "compilable" apps.
While it may not be the future of software, once you get into building something like that, it becomes obvious that the way we are building now is at best wrong, and at worst Kafkaesque.
The purpose should also be defined; it should answer the question "why". Also, what's broken with scripting, and what alternatives are proposed? What's the end state (with an example usage of the new web)?
I am generally interested in approaches to cut down the complexity of fundamental web technologies. Creating a browser from scratch shouldn't be impossible or a trillion-dollar experiment. But...
> No scripting
How will it be possible to go back? The average e-commerce presence usually relies heavily on JS. I haven't checked in a long time whether any relevant sites work without JS. I think going back to more basic approaches could even improve user experience, as many usage patterns would probably converge and simply look and function as intended. But the whole web world is so fixated on solving everything with JS that this seems like targeting the highest-resistance target you can find. Don't get me wrong, I hate this situation and we must not have a single language that dominates everything.
I also don't believe that enthusiasts will create a significant shift. They can surely provide the fundamentals, but if there isn't a huge mainstream impact, it will not change anything.
Can't say I hate the HTML 5 spec. It resolves the ambiguities that made previous HTML specs insufficient to make a working web browser.
The standards that make my life miserable at times are the secondary standards like GDPR and WCAG as well as the de facto "standard" systems we are forced to participate in such as Cloudflare, the advertising economy, etc.
It's easy to say "WebUSB is bloat" and I'd certainly say PWA is something that could only come out of the mind that brought us Kubernetes, but lately I've been building biosignals applications and what should my choice be: write fragile GUI applications for the desktop that look like they came out of a lab and crash from memory leaks or spend 1/5 the time to make web applications that look like they belong in the cockpit of a Gundam and "just work"?
>I'd certainly say PWA is something that could only come out of the mind that brought us Kubernetes
How so? PWAs are awesome! Democratizing for users. Democratizing for developers. They work well for the right class of apps. They would go much further if there weren't forces actively resisting them. Think of all the Electron-type apps out there. Now imagine if the average Joe could just install them from the web with 2 clicks.
(Regular ole bookmarks get you a decent percent of the way but clearly something extra than that was needed.)
Seems like somebody is not accepting that every successful project will grow and become unwieldy like this. This is all the legacy backwards compatibility of every iterated idea that you now have to support.
Ah yes, another "If I Were King" blog post. For an example of how it will turn out, look at how many JavaScript frameworks have been built to replace an overly complicated, unwieldy previous one.
In real democracies the populists (Facebook, TikTok, Chrome) always win, because that's what the masses want.
Is Friedrich Merz a populist? Was Angela Merkel a populist? This theory seems to have considerable limits.
"as soon as a monopolistic entity can build a mechanism to extract revenue from it, there will be an incentive to capture the standard and change it to for their own benefit"
Personally I'd love a simple semantic versioned subset of the web. The required traction and buy-in from existing key players (browser vendors, web hosting platforms etc) makes it largely a non-starter though. I'd love to be wrong though.
Instead of "forking", it may be more prudent to extend or revive something more like Gopher, so you don't constantly get baraged by incompatible sites (like you would in a forked web)
- Peer-to-peer (no DNS)
- Built-in requirement for open-source apps (no closed-source servers; no servers at all)
- End-to-end encryption, no TLS or "Certificate Authorities"
- Keep HTML, CSS and JS
- Built-in encrypted data storage
Kinda like https://veilid.com/
You can certainly make something with it, but I can't imagine most people finding a use for it.
Edit: actually it looks like w3m was ‘95 and Dillo was ‘99.
Gemini protocol?
It would be great to differentiate between "static" and "dynamic" pages based upon scripting, IMO.
oh and also https://xkcd.com/927/