Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Safari 16.4 Is An Admission

If you're a web developer not living under a rock, you probably saw last week's big Safari 16.4 reveal. There's much to cheer, but we need to talk about why this mega-release is happening now, and what it means for the future.

But first, the list!

WebKit's Roaring Twenties

Apple's summary combines dozens of minor fixes with several big-ticket items. Here's an overview of the most notable features, prefixed with the year they shipped in Chromium:

A number of improvements look promising, but remain exclusive to macOS and iPadOS:

The lack of iOS support for the Fullscreen API on <canvas> elements continues to harm game makers; likewise, the lack of AVIF and AV1 support holds back media and streaming businesses.

Regardless, Safari 16.4 is astonishingly dense with delayed features, inadvertently emphasising just how far behind WebKit has remained for many years, and how effective the Blink Launch Process has been in allowing Chromium to ship responsibly while consensus was withheld in standards bodies by Apple.

The requirements of that process accelerated Apple's catch-up implementations by mandating proof of developer enthusiasm for features, extensive test suites, and accurate specifications. This collateral put the catch-up process on rails for Apple.

The intentional, responsible leadership of Blink was no accident, but to see it rewarded so definitively is gratifying.

The size of the release was expected in some corners, owing to the torrent of WebKit blog posts over the last few weeks:

This is a lot, particularly considering that Apple has upped the pace of new releases to once every eight weeks (or thereabouts) over the past year and a half.

Good Things Come In Sixes

Leading browsers moved to a six-week update cadence by 2011 at the latest, routinely delivering fixes at a quick clip. It took another decade for Apple to finally adopt modern browser engineering and deployment practices.

Starting in September 2021, Safari moved to an eight-week cadence. This is a sea change all its own.

Before Safari 15, Apple only delivered two substantial releases per year, a pattern that had been stable since 2016:

For a decade, two releases per year meant that progress on WebKit bugs was a roulette that developers lost by default.

In even leaner years (2012-2015), a single Fall release was all we could expect. This excruciating cadence affected Safari along with every other iOS browser forced to put its badge on Apple's sub-par product.

Contrast Apple's manufactured scarcity around bug-fix information with the open bug tracking and reliable cadence of delivery from leading browsers. Cupertino manages the actual work of Safari engineers through an Apple-internal system ("Radar"), making public bug reports a sort of parallel track. Once an issue is imported to a private Radar bug it's more likely to get developer attention, but this also obscures progress from view.

This lack of transparency is by design.

It provides Apple deniability while simultaneously setting low expectations, which are easier to meet. Developers facing showstopping bugs end up in a bind. Without competitive recourse, they can't even recommend a different browser because they'll all be at least as broken as Safari.

Given the dire state of WebKit, and the challenges contributors face helping to plug the gaps, these heartbreaks have induced a learned helplessness in much of the web community. So little improved, for so long, that some assumed it never would.

But here we are, with six releases a year and WebKit accelerating the pace at which it's closing the (large) gap.

What Changed?

Many big-ticket items are missing from this release — iOS fullscreen API for <canvas>, Paint Worklets, true PWA installation APIs for competing browsers, Offscreen Canvas for WebGL, Device APIs (if only for installed web apps), etc. — but the pace is now blistering.

This is the power of just the threat of competition.

Apple's lawyers have offered claims in court and in regulatory filings defending App Store rapaciousness because, in their telling, iOS browsers provide an alternative. If developers don't like the generous offer to take only 30% of revenue, there's always Cupertino's highly capable browser to fall back on.

The only problem is that regulators ask follow-up questions like "is it?" and "what do developers think?"

Which they did.

TL;DR: it wasn't, and developers had lots to say.

This is, as they say, a bad look.

And so Apple hedged, slowly at first, but ever faster as 2021 bled into 2022 and the momentum of additional staffing began to pay dividends.

Headcount Is Destiny

Apple had the resources needed to build a world-beating browser for more than a decade. The choice to ship a slower, less secure, less capable engine was precisely that: a choice.

Starting in 2021, Apple made a different choice, opening up dozens of Safari team positions. From the 2023 perspective of pervasive tech layoffs, this might look like the same exuberant hiring Apple's competitors recently engaged in, but recall that Cupertino had maintained extreme discipline about Safari staffing for nearly two decades. Feast or famine, Safari wouldn't grow, and Apple wouldn't put significant new resourcing into WebKit, no matter how far it fell behind.

The decision to hire aggressively, including some "big gets" in standards-land, indicates more is afoot, and the reason isn't that Tim lost his cool. No, this is a strategy shift. New problems needed new (old) solutions.

Apple undoubtedly hopes that a less egregiously incompetent Safari will blunt the intensity of arguments for iOS engine choice. Combined with (previously winning) security scaremongering, reduced developer pressure might allow Cupertino to wriggle out of needing to compete worldwide, allowing it to ring-fence progress to markets too small to justify browser development resources (e.g., just the EU).

Increased investment also does double duty in the uncertain near future. In scenarios where Safari is exposed to real competition, a more capable engine provides fewer reasons for web developers to recommend other browsers. It takes time to board up the windows before a storm, and if competition is truly coming, this burst of energy looks like a belated attempt to batten the hatches.

It's critical to Apple that narrative discipline with both developers and regulators is maintained. Dilatory attempts at catch-up only work if developers tell each other that these changes are an inevitable outcome of Apple's long-standing commitment to the web (remember the first iPhone!?!). An easily distracted tech press will help spread the idea that this was always part of the plan; nobody is making Cupertino do anything it doesn't want to do, nevermind the frantic regulatory filings and legal briefings.

But what if developers see behind the veil? What if they begin to reflect and internalise Apple's abandonment of web apps after iOS 1.0 as an exercise of market power that held the web back for more than a decade?

That might lead developers to demand competition. Apple might not be able to ring-fence browser choice to a few geographies. The web might threaten Cupertino's ability to extract rents in precisely the way Apple represented in court that it already does.

Early Innings

Rumours of engine ports are afoot. The plain language of the EU's DMA is set to allow true browser choice on iOS. But the regulatory landscape is not at all settled. Apple might still prevent progress from spreading. It might yet sue its way to curtailing the potential size and scope of the market that will allow for the web to actually compete, and if it succeeds in that, no amount of fast catch-up in the next few quarters will pose a true threat to native.

Consider the omissions:

Depending on the class of app, any of these can be a deal-breaker, and if Apple isn't facing ongoing, effective competition it can just reassign headcount to other, "more critical" projects when the threat blows over. It wouldn't be the first time.

So, this isn't done. Not by a long shot.

Safari 16.4 is an admission that competition is effective and that Apple is spooked, but it isn't an answer. Only genuine browser choice will ensure the taps stay open.


  1. Apple's standards engineers have a long and inglorious history of stalling tactics in standards bodies to delay progress on important APIs, like Declarative Shadow DOM (DSD).

    The idea behind DSD was not new, and the intensity of developer demand had only increased since Dimitri's 2015 sketch. A 2017 attempt to revive it was shot down in 2018 by Apple engineers without evidence or data.

    Throughout this period, Apple would engage sparsely in conversations, sometimes only weighing in at biannual face-to-face meetings. It was gobsmacking to watch them argue that features were unnecessary directly to the developers in the room who were personally telling them otherwise. This was disheartening because a key goal of any proposal was to gain support from iOS. In a world where nobody else could ship-and-let-live, and where Mozilla could not muster an opinion (it did not ship Web Components until late 2018), any whiff of disinterest from Apple was sufficient to kill progress.

    The phrase "stop-energy" is often misused, but the dampening effect of Apple on the progress of Web Components after 2015-2016's burst of V1 design energy was palpable. After that, the only Web Components features that launched in leading-edge browsers were those that an engineer and PM were willing to accept could only reach part of the developer base.

    I cannot stress enough how effectively this slowed progress on Web Components. The pantomime of regular face-to-face meetings continued, but Apple just stopped shipping. What had been a grudging willingness to engage on new features became a stalemate.

    But needs must.

    In early 2020, after months of background conversations and research, Mason Freed posted a new set of design alternatives, which included extensive performance research. The conclusion was overwhelming: not only was Declarative Shadow DOM now in heavy demand by the community, but it would also make websites much faster.

    The proposal looked shockingly like those sketched in years past. In a world where <template> existed and Shadow DOM V1 had shipped, the design space for Declarative Shadow DOM alternatives was not large; we just needed to pick one.

    An updated proposal was presented to the Web Components Community Group in March 2020; Apple objected on spurious grounds, offering no constructive counter.[2]

    Residual questions revolved around security implications of changing parser behaviour, but these were also straightforward. The first draft of Mason's Explainer even calls out why the proposal is less invasive than a whole new element.

    Recall that Web Components and the <template> element themselves were large parser behaviour changes; the semantics for <template> even required changes to the long-settled grammar of XML (long story, don't ask). A drumbeat of (and proposals for) new elements and attributes post-HTML5 also represent identical security risks, and yet we barrel forward with them. These have notably included <picture>, <portal> (proposed), <fencedframe> (proposed), <dialog>, <selectmenu> (proposed), and <img srcset>.

    The addition of <template shadowroot="open"> would, indeed, change parser behaviour, but not in ways that were unknowably large or unprecedented. Chromium's usage data, along with the HTTP Archive crawl HAR file corpus, provided ample evidence about the prevalence of patterns that might cause issues. None were detected.
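    For readers who haven't followed the saga, here is a minimal sketch of the shape of the design under discussion (shown with the shadowroot attribute debated at the time; the spec later settled on shadowrootmode):

    ```html
    <!-- Declarative Shadow DOM sketch: the parser attaches a shadow root to
         <host-element> straight from markup; no JavaScript is needed for
         first render. -->
    <host-element>
      <template shadowroot="open">
        <style>
          /* Scoped to the shadow root. */
          slot { color: darkgreen; }
        </style>
        <slot></slot>
      </template>
      <!-- Light DOM content projected into the <slot> above. -->
      Hello from the light DOM!
    </host-element>
    ```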

    And yet, at TPAC 2020, Apple's representatives continued to press the line that large security issues remained. This was all considered at length. Google's security teams audited the colossal volume of user-generated content Google hosts for problems and did not find significant concerns. And yet, Apple continued to apply stop-energy.

    The feature eventually shipped with heavy developer backing as part of Chromium 90 in April 2021 but without consensus. Apple persistently repeated objections that had already been answered with patient explication and evidence.

    Cupertino is now implementing this same design, and Safari will support DSD soon.

    This has not been the worst case of Apple deflection and delay — looking at you, Push Notifications — but serves as an exemplar of the high-stakes games that Apple (and, to a lesser extent, Mozilla) have forced problem solvers to play over their dozen years of engine disinvestment.

    Even in Chromium, DSD was delayed by several quarters. Because of the Apple Browser Ban, cross-OS availability was further postponed by two years. The fact that Apple will ship DSD without changes and without counterproposals across the long arc of obstruction implies claims of caution were, at best, overstated.

    The only folks to bring data to the party were Googlers and web developers. Nothing new was learned through groundless objection. No new understanding was derived from the delay. Apple did no research about the supposed risks. It has yet to argue why it's safe now, but wasn't then.

    So let's call it what it was: concern trolling.

    Uncritical acceptance of the high-quality design it had long delayed is an admission, of sorts. It shows an ennui about meeting developer and user needs (until pressed), paired with great skill at deflection.

    The playbook is simple:

    • Use opaque standards processes to make it look like occasional attendance at a F2F meeting is the same thing as good-faith co-engineering.
    • "Just ask questions" when overstretched or uninterested in the problem.
    • Spread FUD about the security or privacy of a meticulously-vetted design.
    • When all else fails, say you will formally object and then claim that others are "shipping whatever they want" and "not following standards" when they carefully launch a specced and tested design you were long consulted about, but withheld good faith engagement to improve.

    The last step works because only insiders can distinguish between legitimate critiques and standards-process jockeying. Hanging the first-mover risk around the neck of those working to solve problems is nearly cost-free when you can also prevent designs from moving forward in standards, paired with a market veto (thanks to anti-competitive shenanigans).

    Play this dynamic out over dozens of features across a decade, and you'll better understand why Chromium participants get exercised about responsibility theatre by various Apple engineers. Understood in context, it decodes as delay and deflection from using standards bodies to help actually solve problems.

    Cupertino has paid no price for deploying these smoke screens, thanks to the Apple Browser Ban and a lack of curiosity in the press. Without those shields, Apple engineers would have had to offer convincing arguments from data for why their positions were correct. Instead, they have whatabouted for over three years, only to suddenly implement proposals they recently opposed when the piercing gaze of regulators finally fell on WebKit.[3] ↩︎

  2. The presence or absence of a counterproposal when objecting to a design is a primary indicator of seriousness within a standards discussion. All parties will have been able to examine proposals before any meeting, and in groups that operate by consensus, blocking objections are understood to be used sparingly by serious parties.

    It's normal for disagreements to surface over proposed designs, but engaged and collaborative counter-parties will offer soft concerns – "we won't block on this, but we think it could be improved..." – or through the offer to bring a counterproposal. The benefits of a concrete counter are large. It demonstrates good faith in working to solve the problem and signals a willingness to ship the offered design. Threats to veto, or never implement a specific proposal, are just not done in the genteel world of web standards.

    Over the past decade, making veto threats while offering neither data nor a counterproposal has become a hallmark of Apple's web standards footprint. It's a bad look, but it continues because nobody in those rooms wants to risk pissing off Cupertino. Your narrator considered a direct accounting of just the consequences of these tactics a potentially career-ending move; that's how serious the stakes are.

    The true power of a monopoly in standards is silence — the ability to get away with things others blanch at because they fear you'll hold an even larger group of hostages next time. ↩︎

  3. Apple has rolled out the same playbook in dozens of areas over the last decade, and we can learn a few things from this experience.

    First, Apple corporate does not care about the web, no matter how much the individuals who work on WebKit (deeply) care. Cupertino's artificial bandwidth constraints on WebKit engineering ensured that it implemented only when pressured.

    That means that external pressure must be maintained. Cupertino must fear losing market share for doing a lousy job. That's a feeling that hasn't been felt near the intersection of I-280 and CA Route 85 in a few years. For the web to deliver for users, gatekeepers must sleep poorly.

    Lastly, Apple had the capacity and resources to deliver a richer web for a decade but simply declined. This was a choice — a question of will, not of design correctness or security or privacy.

    Safari 16.4 is evidence, an admission that better was possible, and the delaying tactics were a sort of gaslighting. Apple disrespects the legitimate needs of web developers when allowed, so it must not be.

    Lack of competition was the primary reason Apple feared no consequence for failing to deliver. Apple's protectionism towards Safari's participation-prize under-achievement hasn't withstood even the faintest whiff of future challengers, which should be an enduring lesson: no vendor must ever be allowed to deny true and effective browser competition. ↩︎

The Market for Lemons

For most of the past decade, I have spent a considerable fraction of my professional life consulting with teams building on the web.

It is not going well.

Not only are new services being built to a self-defeatingly low UX and performance standard, existing experiences are pervasively re-developed on unspeakably slow, JS-taxed stacks. At a business level, this is a disaster, raising the question: "why are new teams buying into stacks that have failed so often before?"

In other words, "why is this market so inefficient?"

George Akerlof's most famous paper introduced economists to the idea that information asymmetries distort markets and reduce the quality of goods because sellers with more information can pass off low-quality products as more valuable than informed buyers appraise them to be. (PDF, summary)

Customers that can't assess the quality of products pay too much for poor quality goods, creating a disincentive for high-quality products to emerge while working against their success when they do. For many years, this effect has dominated the frontend technology market. Partisans for slow, complex frameworks have successfully marketed lemons as the hot new thing, despite the pervasive failures in their wake, crowding out higher-quality options in the process.[1]

These technologies were initially pitched on the back of "better user experiences", but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born.[2] Transplanted into the wider web, these new stacks have proven to be expensive duds.

The complexity merchants knew their environments weren't typical, but sold their highly specialised tools to folks shopping for general purpose solutions anyway. They understood most sites lack latency budgeting, dedicated performance teams, hawkish management reviews, ship gates to prevent regressions, and end-to-end measurements of critical user journeys. They grasped that massive investment in controlling complexity is the only way to scale JS-driven frontends, but warned none of their customers.

They also knew that their choices were hard to replicate. Few can afford to build and maintain 3+ versions of a site ("desktop", "mobile", and "lite"), and vanishingly few web experiences feature long sessions and login-gated content.[3]

Armed with this knowledge, they kept the caveats to themselves.

What Did They Know And When Did They Know It?

This information asymmetry persists; the worst actors still haven't levelled with their communities about what it takes to operate complex JS stacks at scale. They did not signpost the delicate balance of engineering constraints that allowed their products to adopt this new, slow, and complicated tech. Why? For the same reason used car dealers don't talk up average monthly repair costs.

The market for lemons depends on customers having less information than those selling shoddy products. Some who hyped these stacks early on were earnestly ignorant, which is forgivable when recognition of error leads to changes in behaviour. But that's not what the most popular frameworks of the last decade did.

As time passed, and the results continued to underwhelm, an initial lack of clarity was revealed to be intentional omission. These omissions have been material to both users and developers. Extensive evidence of these failures was provided directly to their marketeers, often by me. At some point (certainly by 2017) the omissions veered into intentional prevarication.

Faced with the dawning realisation that this tech mostly made things worse, not better, the JS-industrial-complex pulled an Exxon.

They could have copped to an honest error, admitted that these technologies require vast infrastructure to operate; that they are unscalable in the hands of all but the most sophisticated teams. They did the opposite, doubling down, breathlessly announcing vapourware year after year to forestall critical thinking about fundamental design flaws. They also worked behind the scenes to marginalise those who pointed out the disturbing results and extraordinary costs.

Credit where it's due, the complexity merchants have been incredibly effective in one regard: top-shelf marketing discipline.

Over the last ten years, they have worked overtime to make frontend an evidence-free zone. The hucksters knew that discussions about performance tradeoffs would not end with teams investing more in their technology, so boosterism and misdirection were aggressively substituted for evidence and debate. Like a curtain of Halon descending to put out the fire of engineering dialogue, they blanketed the discourse with toxic positivity. Those who dared speak up were branded "negative" and "haters", no matter how much data they lugged in tow.

Sandy Foundations

It was, of course, bullshit.

Astonishingly, gobsmackingly effective bullshit, but nonsense nonetheless. There was a point to it, though. Playing for time allowed the bullshitters to punt introspection of the always-wrong assumptions they'd built their entire technical edifice on:

In time, these misapprehensions would become cursed articles of faith.

All of this was falsified by 2016, but nobody wanted to turn on the house lights while the JS party was in full swing. Not the developers being showered with shiny tools and boffo praise for replacing "legacy" HTML and CSS that performed fine. Not the scoundrels peddling foul JavaScript elixirs and potions. Not the managers that craved a check to write and a rewrite to take credit for in lieu of critical thinking about user needs and market research.

Consider the narrative Crazy Ivans that led to this point.

By 2013 the trashfuture was here, just not evenly distributed yet. Undeterred, the complexity merchants spent a decade selling <a href='/2022/12/performance-baseline-2023/'>inequality-exacerbating technology</a> as a cure-all tonic.

It's challenging to summarise a vast discourse over the span of a decade, particularly one as dense with jargon and acronyms as that which led to today's status quo of overpriced failure. These are not quotes, but vignettes of distinct epochs in our tortured journey:

It's the Steamed Hams of technology pitches.

Like Chalmers, teams and managers often acquiesce to the contradictions embedded in the stacked rationalisations. Together, the community invented dozens of reasons to look the other way, from the theoretically plausible to the fully imaginary.

But even as the complexity merchant's well-intentioned victims meekly recite the koans of trickle-down UX — it can work this time, if only we try it hard enough! — the evidence mounts that "modern" web development is, in the main, an expensive failure.

The baroque and insular terminology of the in-group is a clue. Its functional purpose (outside of signalling) is to obscure furious plate-spinning. The tech isn't working, but admitting as much would shrink the market for lemons.

You'd be forgiven for thinking the verbiage was designed to obfuscate. Little comfort, then, that folks selling new approaches must now wade through waist-deep jargon excrement to argue for the next increment of complexity.

The most recent turn is as predictable as it is bilious. Today's most successful complexity merchants have never backed down, never apologised, and never come clean about what they knew about the level of expense involved in keeping SPA-oriented technologies in check. But they expect you'll follow them down the next dark alley anyway:

An admission against interest.

And why not? The industry has been down to clown for so long it's hard to get in the door if you aren't wearing a red nose.

The substitution of heroic developer narratives for user success happened imperceptibly. Admitting it was a mistake would embarrass the good and the great alike. Once the lemon sellers embedded the data-light idea that improved "Developer Experience" ("DX") leads to better user outcomes, improving "DX" became an end unto itself. Many who knew better felt forced to play along.

The long lead time for falsifying trickle-down UX was a feature, not a bug; they don't need you to succeed, only to keep buying.

As marketing goes, the "DX" bait-and-switch is brilliant, but the tech isn't delivering for anyone but developers.[4] The highest goal of the complexity merchants is to put brands on showcase microsites and to make acqui-hiring failing startups easier. Performance and success of the resulting products is merely a nice-to-have.

Denouement

You'd think there would be data, that we would be awash in case studies and blog posts attributing product success to adoption of SPAs and heavy frameworks in an incontrovertible way.

And yet, after more than a decade of JS hot air, the framework-centric pitch is still phrased in speculative terms because there's no there there. The complexity merchants can't cop to the fact that management competence and lower complexity — not baroque technology — are determinative of product and end-user success.

The simmering, widespread failure of SPA-premised approaches has belatedly forced the JS colporteurs to adapt their pitches. In each iteration, they must accept a smaller rhetorical lane to explain why this stack is still the future.

The excuses are running out.

At long last, the journey has culminated with the rollout of Core Web Vitals. It finally provides an objective quality measurement that prospective customers can use to assess frontend architectures.

It's no coincidence the final turn away from the SPA justification has happened just as buyers can see a linkage between the stacks they've bought and the monetary outcomes they already value; namely SEO. The objective buyer, circa 2023, will understand heavy JS stacks as a regrettable legacy, one that teams who have hollowed out their HTML and CSS skill bases will pay for dearly in years to come.

No doubt, many folks who know their JS-first stacks are slow will do as Akerlof predicts, and obfuscate for as long as possible. The market for lemons is, indeed, mostly a resale market, and the excesses of our lost decade will not be flushed from the ecosystem quickly. Beware tools pitching "100 on Lighthouse" without checking the real-world Core Web Vitals results.
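For buyers who want to check, here is a minimal sketch of pulling field (real-user) Core Web Vitals for an origin, assuming the public Chrome UX Report (CrUX) API; the API key, helper name, and choice of metrics are illustrative:

```js
// Sketch: query field Core Web Vitals (CrUX) for an origin.
// Assumes the public Chrome UX Report API; the key and helper name are
// hypothetical, and error handling is minimal.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function fieldVitals(origin, apiKey) {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ origin, formFactor: 'PHONE' }),
  });
  if (!res.ok) throw new Error(`CrUX query failed: ${res.status}`);
  const { record } = await res.json();
  // p75 is the value the Core Web Vitals thresholds are judged against.
  return {
    lcp: record.metrics.largest_contentful_paint?.percentiles.p75,
    cls: record.metrics.cumulative_layout_shift?.percentiles.p75,
    fid: record.metrics.first_input_delay?.percentiles.p75,
  };
}

// Usage: compare a framework's showcase site against what real users on
// real phones actually experience, not its lab-only Lighthouse score.
// fieldVitals('https://example.com', 'YOUR_API_KEY').then(console.log);
```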

Shrinkage

A subtle aspect of Akerlof's theory is that markets in which lemons dominate eventually shrink. I've warned for years that the mobile web is under threat from within, and the depressing data I've cited about users moving to apps and away from terrible web experiences is in complete alignment with the theory.

When websites feel like worse experiences to the folks who write the checks, why should anyone expect them to spend a lot on them? And when websites stop being where accurate information and useful services are, will anyone still believe there's a future in web development?

The lost decade we've suffered at the hands of lemon purveyors isn't just a local product travesty; it's also an ecosystem-level risk. Forget AI putting web developers out of jobs; JS-heavy web stacks have been shrinking the future market for your services for years.

As Stiglitz memorably quipped:

Adam Smith's invisible hand — the idea that free markets lead to efficiency as if guided by unseen forces — is invisible, at least in part, because it is not there.

But dreams die hard.

I'm already hearing laments from folks who have been responsible citizens of framework-landia lo these many years. Oppressed as they were by the lemon vendors, they worry about babies being thrown out with the bathwater, and I empathise. But for the sake of users, and for the new opportunities for the web that will open up when experiences finally improve, I say "chuck those tubs".

Chuck 'em hard, and post the photos of the unrepentant bastards that sold this nonsense behind the cash register.

Anti JavaScript JavaScript Club

We lost a decade to smooth talkers and hollow marketeering; folks who failed the most basic test of intellectual honesty: signposting known unknowns. Instead of engaging honestly with the emerging evidence, they sold lemons and shrunk the market for better solutions. Furiously playing catch-up to stay one step ahead of market rejection, frontend's anguished, belated return to quality has been hindered at every step by those who would stand to lose if their false premises and hollow promises were to be fully re-evaluated.

Toxic mimicry and recalcitrant ignorance must not be rewarded.

Vendors' random walk through frontend choices may eventually lead them to be right twice a day, but that's not a reason to keep following their lead. No, we need to move our attention back to the folks that have been right all along. The people who never gave up on semantic markup, CSS, and progressive enhancement for most sites. The people who, when slinging JS, have treated it as special-occasion food. The tools and communities whose culture puts the user ahead of the developer and holds evidence of doing better for users in the highest regard.[1:1]

It's not healing, and it won't be enough to nurse the web back to health, but tossing the Vercels and the Facebooks out of polite conversation is, at least, a start.

Deepest thanks to Bruce Lawson, Heydon Pickering, Frances Berriman, and Taylor Hunt for their thoughtful feedback on drafts of this post.


  1. You wouldn't know it from today's frontend discourse, but the modern era has never been without high-quality alternatives to React, Angular, Ember, and other legacy desktop-era frameworks.

    In a bazaar dominated by lemon vendors, many tools and communities have been respectful of today's mostly-mobile users at the expense of their own marketability. These are today's honest brokers and they deserve your attention far more than whatever solution to a problem created by React that the React community is on about this week.

    This has included JS frameworks with an emphasis on speed and low overhead vs. the Cadillac comfort of first-class IE8 support:

    It's possible to make slow sites with any of these tools, but the ethos of these communities is that what's good for users is essential, and what's good for developers is nice-to-have — even as they compete furiously for developer attention. This uncompromising focus on real quality is what has been muffled by the blanket the complexity merchants have thrown over today's frontend discourse.

    Similarly, the SPA orthodoxy that precipitated the market for frontend lemons has been challenged both by the continued success of "legacy" tools like WordPress, as well as a new crop of HTML-first systems that provide JS-friendly authoring but output that's largely HTML and CSS:

    The key thing about the tools that work more often than not is that they start with simple output. The difficulty in managing what you've explicitly added based on need, vs. what you've been bequeathed by an inscrutable Rube Goldberg-esque framework, is an order of magnitude in difference. Teams that adopt tools with simpler default output start with simpler problems that tend to have better-understood solutions. ↩︎ ↩︎

  2. Organisations that manage their systems (not the other way around) can succeed with any set of tools. They might pick some elegant ones and some awkward ones, but the sine qua non of their success isn't what they pick up, it's how they hold it.

    Recall that Facebook became a multi-billion dollar, globe-striding colossus using PHP and C++.

    The differences between FB and your applications are likely legion. This is why it's fundamentally lazy and wrong for TLs and PMs to accept any sort of argument along the lines of "X scales, FB uses it".

    Pigs can fly; it's only a matter of how much force you apply — but if you aren't willing to fund creation of a large enough trebuchet, it's unlikely that porcine UI will take wing in your organisation. ↩︎

    I hinted last year at an under-developed model for how we can evolve our discussion around web performance to take account of the larger factors that distinguish different kinds of sites.

    While it doesn't account for many corner-cases, and is insufficient on its own to describe multi-modal experiences like WordPress (a content-producing editor for a small fraction of important users vs. a shallow content-consumption reader experience for most), I wind up thinking about the total latency incurred in a user's session divided by the number of interactions. This raises a follow-on question: what's an interaction? Elsewhere, I've defined it as "turns through the interaction loop", but it can be more easily described as "taps or clicks that involve your code doing work". This helpfully excludes scrolling, but includes navigations.
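    Expressed as a back-of-the-envelope calculation (a sketch of that intuition; all names here are illustrative, not a real API):

    ```js
    // Session-depth-weighted cost of up-front JS (illustrative names only).
    // "Interactions" = taps/clicks that run your code, plus navigations;
    // scrolling is excluded.
    function perInteractionLatency(session) {
      const interactionCount = session.interactions.length;
      const totalLatencyMs =
        session.startupLatencyMs + // the up-front bundle cost, paid once
        session.interactions.reduce((sum, i) => sum + i.latencyMs, 0);
      return totalLatencyMs / interactionCount;
    }

    // A long, logged-in session amortises a heavy bundle across many
    // interactions; a two-tap content or commerce session never earns the
    // up-front cost back.
    ```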

    ANYWAY, all of this nets out a session-depth weighted intuition about when and where heavyweight frameworks make sense to load up-front:

    Sites with shorter average sessions can afford less JS up-front.

    Social media sites that gate content behind a login (and can use the login process to pre-load bundles), and which have tons of data about session depth — not to mention ML-based per-user bundling, staffed performance teams, ship gates to prevent regressions, and the funding to build and maintain at least 3 different versions of the site — can afford to make fundamentally different choices about how much to load up-front and for which users.

    The rest of us, trying to serve all users from a single codebase, need to prefer conservative choices that align with our management capacity to keep complexity in check. ↩︎

  4. The "DX" fixation hasn't even worked for developers, if we're being honest. Teams I work with suffer eye-watering build times, shockingly poor code velocity, mysterious performance cliffs, and some poor sod stuck in a broom closet that nobody bothers, lest the webs stop packing.

    And yet, these same teams are happy to tell me they couldn't live without the new ball-and-chain.

    One group, after weeks of debugging a particularly gnarly set of issues brought on by their preposterously inefficient "CSS-in-JS" solution, combined with React's penchant for terrible data flow management, actually said to me that they were so glad they'd moved everything to hooks because it was "so much cleaner" and that "CSS-in-JS" was great because "now they could reason about it"; nevermind the weeks they'd just lost to the combination of dirtier callstacks and the harder-to-reason-about runtime implications of heisenbug styling.

    Nothing about the lived experience of web development has meaningfully improved, except perhaps for TypeScript adding structure to large codebases. And yet, here we are. Celebrating failure as success while parroting narratives about developer productivity that have no data to back them up.

    Sunk-cost fallacy rules all we survey. ↩︎
