this post was submitted on 07 Oct 2024
201 points (96.7% liked)
Firefox
17898 readers
56 users here now
A place to discuss the news and latest developments on the open-source browser Firefox
founded 4 years ago
Fair points. Talking of revolution was indeed a bit vague.
Perhaps I am just more conservative in temperament. I focus on the value in keeping things and improving them. Software lends itself to iterative development, where the result can still end up being revolutionary. So my intuition is that if there's a problem with HTTP, then let's solve that problem rather than throwing the whole thing out and losing all its accrued value: in this case, three decades of web archives and the accumulated skills of everyone who makes it work.
Sure, HTTP is suboptimal, and as a sometime web developer I can see that HTML is verbose and ugly, mostly because SGML was fashionable back when it was designed. Even the domain name system suffers from original sin: the TLDs should come first, not last!
Human culture is messy. Throwing things out is risky and even reckless given that the alternative is all but certain not to work out as imagined. Much safer to build upon and improve things than to destroy them.
It's one month later and I am back to reply:
I don't want to replace HTTP, or the web. But I also absolutely don't want to build anything more complex than what we have today. In other words: keep the web for what it does now, but wouldn't an isolated, container-based app platform, served efficiently through a browser, be a good thing for everyone?
Five years ago I was writing Rust code compiled to WebAssembly and then struggling to get it to run in a browser. I did that because I couldn't write an efficient enough version of the algorithm I was following in JavaScript, probably on account of most things being objects. I got it running eventually with decent enough performance, but it wasn't fun gluing all that mess together. I think if there were a better delivery platform for WASM built into browsers, and maybe eventually mobile platforms, it would probably beat today's approach of cross-platform apps served via HTTP.
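For a sense of how little glue the WASM runtime itself actually needs (as opposed to the toolchain pain described above), here is a minimal sketch: a tiny module, hand-assembled as raw bytes so it doesn't depend on any Rust build step, instantiated with the standard `WebAssembly` API.

```javascript
// A hand-assembled, minimal WASM module exporting add(a, b) -> a + b.
// The byte layout (magic, version, type/function/export/code sections)
// is written out literally so no compiler toolchain is needed.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The synchronous API is fine for a tiny module like this; for real
// modules browsers prefer WebAssembly.instantiateStreaming(fetch(...)).
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);

console.log(instance.exports.add(2, 3)); // 5
```

The glue mess in practice comes from passing anything richer than numbers across the boundary, which is exactly where a better built-in delivery platform would help.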
This seems to be the argument that the web was designed for documents and that we should stop trying to shoehorn apps into documents. Hard to disagree at this point, especially when the app in question is, say, a graphics tool or a game. I still think that, in the case of more document-adjacent applications, a website built with best-practice progressive enhancement is about as elegant a solution as is imaginable: an app which can gracefully degrade to a stateless document and metamorphose back into an app, depending on system resources and connectivity, all built on open source, open standards, and accessibility. That was, IMO, the promise of the web fulfilled: the separation of content from presentation, and presentation from functionality. Unfortunately, only a tiny minority of websites ever achieved this. Hardly any web developers had the deep skill set needed to pull it off.
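The "degrade to a document, enhance back into an app" idea can be sketched as a tiny decision function. This is a toy of my own invention, not any standard API; the mode names and inputs are made up for illustration:

```javascript
// Hypothetical enhancement levels for a document-adjacent app:
//   "static"   -> server-rendered document only (no script required)
//   "enhanced" -> light client-side niceties layered on top
//   "app"      -> full client-side application experience
function pickMode({ hasJs, online, saveData }) {
  if (!hasJs) return "static";      // no script: the document must stand alone
  if (!online || saveData) return "enhanced"; // degrade gracefully, stay light
  return "app";
}

console.log(pickMode({ hasJs: true, online: true, saveData: false })); // "app"
console.log(pickMode({ hasJs: false, online: true, saveData: false })); // "static"
```

The point of the pattern is that every mode is a complete experience, so losing connectivity or scripting never leaves the user with a broken page.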
I was once skeptical of WASM on the grounds that it's effectively closed-source software, tantamount to DRM. But people reply that there's functionally not much difference between a WASM blob and a blob of minified JS, and that WASM's sandbox can be locked down. So I guess I accept that WASM is now the best the web can hope for.
I'm personally of the opinion that it wasn't so much a lack of talent that prevented graceful fallback from being adopted, but simply the amount of extra effort needed to implement it properly.
In my opinion, to do it properly you can't make any assumptions about the browser your app is running on; you should never base anything on the reported user-agent string. Instead, you need to test for each individual JavaScript, HTML, or sometimes even CSS feature, and design the experience around having a fallback for when that one piece of functionality isn't present. Otherwise you create a brand-new problem where, for example, a forked Firefox browser with a custom user-agent string doesn't get recognized despite having the feature set to provide the full experience, and its users get screwed over.
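The probing itself needs no library. `hasFeature` below is a name I made up, but the pattern, checking for the capability itself rather than sniffing the user-agent string, is the standard one:

```javascript
// Check that a dotted path of properties actually exists on a root
// object (e.g. "navigator.serviceWorker" on window), instead of
// guessing capabilities from the user agent string.
function hasFeature(root, path) {
  let obj = root;
  for (const key of path.split(".")) {
    if (obj == null || !(key in Object(obj))) return false;
    obj = obj[key];
  }
  return true;
}

// Usage sketch with a stand-in environment object; in a browser the
// root would be window, and you'd branch to a fallback on false.
const env = { fetch: () => {}, navigator: { serviceWorker: {} } };
console.log(hasFeature(env, "navigator.serviceWorker")); // true
console.log(hasFeature(env, "navigator.bluetooth"));     // false
```

A forked or renamed browser passes these checks exactly when it really has the feature, which is the whole point.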
But yeah, that approach is incredibly cumbersome and time-consuming to code and test for. Even with libraries that help detect the capabilities of the browser, you still need to implement granular fallbacks that work for your particular application, and that's a lot of extra work.
Add to that the fact that devs in this field are already burdened with supporting layouts and designs that must scale responsively to everything from a phone screen to a 100-inch TV, and it quickly becomes nearly impossible to finish any project on a realistic timeline. Doing it that way is a monumental task, and realistically it mainly benefits people who use NoScript or similar, which is not a lot of people.
Actually, it doesn't just benefit "geeks who use NoScript". The original audience for accessibility was disabled users, which is why some of the best websites ever made are for government agencies. But sure, they don't count for much when there's a deadline to keep.

I know what you're talking about: progressive enhancement and respecting WCAG etc. is time-consuming, and time is money. I was in the meetings. But it's also just hard, for the reasons you describe, and few developers have ever been able to do it. Maybe precisely because the skill set straddles different domains: not just programming but also UX, graphic design, and information architecture. The first web developers were tinkerers, and many of them came from the world of print. Now they're all just IT guys who see everything as an app, even when it's in essence a document.