Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

The "Developer Experience" Bait-and-Switch

TL;DR: we cannot continue to use as much JavaScript as is now normal and expect the web to flourish. At the same time, most developers experience no constraint on their use of JS...until it's too late. Lightweight, effective tools are here, but we're stuck in a rhetorical rut. We need to reset our conversation about "developer experience" to factor in the asymmetric cost of JS.

JavaScript is the web's CO2. We need some of it, but too much puts the entire ecosystem at risk. Those who emit the most are furthest from suffering the consequences — until the ecosystem collapses. The web will not succeed in the markets and form-factors where computing is headed unless we get JS emissions under control.

Against this backdrop, there's something peculiar about the discourse surrounding JS-oriented development: a rhetorical substitution of developer value for user value. Here's a straw-man composite from several recent conversations:

"These tools let us move faster. Because we can iterate faster we're delivering better experiences. If performance is a problem, we can do progressive enhancement through Server-Side Rendering."

This argument substitutes good intentions and developer value ("moving faster", "less complexity") for questions about the lived experience of users. It also tends to do so without evidence. We're meant to take it on faith that it will all work out if only the well-intentioned people are never questioned about the trajectory of the outcomes.

Often, and unfortunately, this substitution is offered to shield the preferences of those in a position to benefit at the expense of folks who can least afford to deal with the repercussions. Polluters very much prefer conversations that don't focus on the costs of emissions.

The backdrop to this argument is a set of nominally shared values, user value and developer value chief among them, to which folks assign different weights.

The "developer experience" bait-and-switch works by appealing to the listener's parochial interests as developers or managers, claiming supremacy in one category in order to remove others from the conversation. The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated agreement is that developers share all of the same goals with the same intensity as end users and even managers. This is not true.

Shifting the conversation away from actual user experiences to team-level advantages enables a culture in which the folks who receive focus and attention are developers, rather than end-users or the business. It naturally follows that teams can then substitute tools for goals.

This has predictable consequences, particularly when developers, through their privileged positions as expensive-knowers-of-things-about-computers, are allowed to externalize costs. And they do. Few teams I've encountered have actionable metrics associated with the real experiences of their users. I can count on one hand the number of teams I've worked with who have goals that allow them to block launches for latency regressions, including Google products. Nearly all developers in modern frontend shops do not experience performance constraints until it's too late. The brakes aren't applied until performance is so poor that it actively hurts the business.

If one views the web as a way to address a fixed market of existing, wealthy web users, then it's reasonable to bias towards richness and lower production costs. If, on the other hand, our primary challenge is in growing the web along with the growth of computing overall, the ability to reasonably access content bumps up in priority. If you believe the web's future to be at risk due to the unusability of most web experiences for most users, then discussion of developer comfort that isn't tied to demonstrable gains for marginalized users is at best misguided.

Competition between these forces is as old as debates about imagemaps vs. tables for layout. What's new is JavaScript; or rather, the amount we're applying to solve our problems:

Median mobile sites have gone from ~50KB of JS in 2011 to more than 350KB today. That unzips to roughly 2MB of script.

I've previously outlined why JavaScript is the most expensive way to accomplish anything in a browser. This has been coupled with an attempt to lean on evolving facts about computing (it's all going to mobile, mostly to Android, and not high-end devices). My hope is that anyone who connects these ideas will come to understand that we can't afford to continue on as we have. We must budget. We must cap-and-trade JS. There is no other way to fix what we have now broken with script — we simply must use less of it.
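
To make "budget" concrete: a cap has to be enforced somewhere a launch can be blocked, which in practice means CI. Here is a minimal sketch, assuming a Node-based build; the 170KB gzipped ceiling and the `dist` output directory are placeholders, not recommendations. Derive your own numbers from the devices and networks you intend to serve.

```ts
// budget-check.ts — a sketch of a CI gate on script weight, assuming a
// Node-based build. The 170KB gzipped ceiling and the "dist" directory
// are placeholders; substitute values derived from your own targets.
import { readdirSync, readFileSync } from "fs";
import { join } from "path";
import { gzipSync } from "zlib";

const BUDGET_GZIP_BYTES = 170 * 1024; // hypothetical per-page JS budget
const distDir = process.argv[2] ?? "dist";

// Sum the over-the-wire (gzipped) size of every script the build emits.
const totalGzipped = readdirSync(distDir)
  .filter((file) => file.endsWith(".js"))
  .reduce(
    (sum, file) => sum + gzipSync(readFileSync(join(distDir, file))).length,
    0
  );

console.log(`JS payload (gzipped): ${(totalGzipped / 1024).toFixed(1)}KB`);
if (totalGzipped > BUDGET_GZIP_BYTES) {
  console.error(
    `Over budget by ${((totalGzipped - BUDGET_GZIP_BYTES) / 1024).toFixed(1)}KB`
  );
  process.exit(1); // fail the build: regressions should block launches
}
```

A budget that can't fail a build is a suggestion, not a budget.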

There have been positive signs that this message has taken root in certain quarters, but it has not generally changed the dynamic. Despite the heroic efforts of Polymer, Preact, Svelte, Ionic, and Vue to create companion "starter kits" or "CLI" tools that provide the structure necessary to send less JS by default, as many (or more) JS-heavy performance disasters cross my desk in an average month as in previous years.

Still, framework marketing continues unmodified. The landing pages of popular tools talk about "speed" without context. Relatively few folks bring WebPageTest (WPT) traces to arguments. Appeals to "Developer Experience" are made without context. Which set of users do we intend to serve? All? Or the wealthy few? It is apparently possible to present performance arguments to the JavaScript community in 2018 — a time when it has never been easier to collect and publish traces — without traces against the global baseline or an explanation of why that baseline is inappropriate. The bait-and-switch still works, and that's a hell of a problem.

Perhaps my arguments have not been effective because I hold to a policy of not posting analyses without a site owner's consent. This leaves me as open to critique by Hitchens's Razor as my dataless interlocutors. The evidence has never been easier to gather, and the aggregates paint a chilling picture. But aggregates aren't specific, citable incidents. Video of a single slow-loading page lands in a visceral way; abstract graphs don't.

And the examples are there, many of them causing material, negative business impact. A decent hedge-fund strategy would be to run a private WPT instance and track JS bloat and TTI (Time to Interactive) for commercial-intent sites — and then short firms that regress because they just rewrote everything in The One True Framework. Seeing the evidence instills terror, yet I've been hamstrung, able to do little more than roughly sketch the unfolding disaster while working behind the scenes with teams.
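
The tracking half of that strategy is easy enough to sketch against WebPageTest's HTTP API. The instance hostname below is hypothetical, and the exact result field names are assumptions that vary across WPT versions; verify them against your own instance's JSON before trusting the numbers.

```ts
// wpt-watch.ts — a rough sketch of tracking JS bloat and interactivity via
// a private WebPageTest instance (requires Node 18+ for global fetch).
// The runtest.php submission and JSON polling flow follow WPT's public API;
// result field names (breakdown, FirstInteractive) are assumptions to check.
const WPT = "https://wpt.example.com"; // hypothetical private instance
const API_KEY = process.env.WPT_API_KEY ?? "";

async function submit(url: string): Promise<string> {
  const res = await fetch(
    `${WPT}/runtest.php?url=${encodeURIComponent(url)}&f=json&k=${API_KEY}`
  );
  const { data } = await res.json();
  return data.jsonUrl; // poll this URL for results
}

async function awaitResult(jsonUrl: string): Promise<any> {
  for (;;) {
    const body = await (await fetch(jsonUrl)).json();
    if (body.statusCode === 200) return body.data; // test complete
    await new Promise((resolve) => setTimeout(resolve, 15_000));
  }
}

const result = await awaitResult(await submit("https://example.com/"));
const firstView = result.median.firstView;
console.log("JS bytes:", firstView.breakdown?.js?.bytes);
console.log("First Interactive (ms):", firstView.FirstInteractive);
```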

There is, however, one exception to my rule: the public sector. Specifically, public sector sites in countries where I pay taxes. Today, that's the US and the UK, although I suspect I could be talked into a broader exception.

So I'm going to start posting and dissecting a lot more traces of public sector work, but the goal isn't to mock or shame the fine folks doing hard work for too little pay. Rather, it's to demonstrate what "modern frontend" is doing to the accessibility of the web — not in the traditional "a11y" sense, but in the "is going to this site reasonable for its intended users?" sense. That is, I will be talking about this as a proxy for the data I can't share.

Luckily, the brilliant folks at the USDS and the UK's Government Digital Service have been cleaning up many of the worst examples of government-procurement-gone-wild. My goal isn't to detract from this extraordinary achievement:

Just wanted to send my 💌 and 🙏 to the lovely souls at @gdsteam.

I spend a lot of time despairing at what Silicon Valley thinks is acceptable and y'all are a beacon on a hill, showing what's possible and what inclusion *really* means: www.webpagetest.org/result/180827_FR_7ca373cd8e9e200d531c63fa03a14809/ee

My hope, instead, is that by showing specific outcomes and the overwhelming volume of these examples, it will become possible to talk more specifically about what's wrong, using and pervasively citing data. I hope that by talking about what it means to build well when trying to serve everybody, we can show businesses how far short of the mark they're falling — and why the common root causes in JS-centric development are so toxic. And if the analysis manages to help clean up some public sector services, so much the better; we're all paying for it anyway.

An old version of Code.gov loading on a fast connection on an iPhone 8 vs. an Android Go device

This wasn't Plan A, but neither was the Chrome Dev Summit talk in '16 that got everyone so upset.

I don't like that this is where we are as a community and as a platform. I hate that this continues to estrange me from the JS community. We need tools. We need frameworks. But we need to judge them by whether or not they deliver a better developer experience without fundamentally impairing the user experience.

We must get to a place where tools don't smother experiences in JS by default. Frameworks and tooling need to explain clearly, in small words, with reproducible instructions how they deliver under budget, how much room is left after their take, and what devices and networks their tools are appropriate for.
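Some of the plumbing for this exists already. webpack, for example, ships performance hints that can turn a size ceiling into a hard build failure. A minimal sketch follows; the 130KB ceiling is picked purely for illustration, and note that webpack measures emitted, uncompressed bytes, so set the number against your pre-compression budget.

```ts
// webpack.config.ts — webpack's built-in performance hints, set to fail
// the build instead of merely warning. The 130KB ceiling is a placeholder.
import type { Configuration } from "webpack";

const config: Configuration = {
  performance: {
    hints: "error",                // turn budget overruns into build failures
    maxEntrypointSize: 130 * 1024, // bytes any entrypoint may pull in up front
    maxAssetSize: 130 * 1024,      // bytes for any single emitted asset
  },
};

export default config;
```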

This will mean that many popular tools are relegated to prototyping. That's OK.

We're now on Plan D...or E. But the crisis is real and it isn't inevitable. It is not exogenous. We made it, and we can fix it. To get it fixed, we need to confront the "developer experience" bait-and-switch.

Tools that tax the poorest users to benefit wealthy developers are bunk. To do better, we need to move the conversation to an evidence-based footing. I wish the arguments folks made against my positions were data-driven. There's so much room for them to be! Perhaps a firm is doing market analysis and only cares about ever reaching users who make more than $100K USD/yr or who are in enterprise settings. Perhaps research will demonstrate that interactivity isn't as valuable as getting bits on screen (the usual SSR argument). Or, more likely, that acknowledgement (bits on screen) buys a larger-than-anticipated amount of perceptual padding (perhaps due to scanning). Perhaps the global network landscape is shifting so dramatically that the budget for client-side JS runtime has increased. Perhaps the median CPU improvement that doesn't look set to materialize until 2021 at the earliest will happen much earlier; i.e., maybe the current baseline is wrong!

But we aren't having that conversation. And we aren't going to have it until we identify, call-out, and end the "developer experience" bait-and-switch.

Thanks and apologies to Ade Oshineye, Ojan Vafai, Frances Berriman, Dion Almaer, Addy Osmani, Gray Norton, and Philip Walton for their feedback on drafts of this post.

Effective Standards Work, Part 2: Threading the Needle

The web standards process fails us too often. This series explores the forces at work, how we're improving the situation, and how you can shape new features more effectively.

"Part 1: The Lay of The Land" discussed persistent challenges in standards and forces that give rise to misunderstandings. It also described the ecosystem dynamics that make change difficult, even before considering the varying firm-level strategies of browser vendors.

Essential Ingredients

Making progress on new features is extraordinarily challenging in this environment. However, armed with a clear understanding of the situation, it's possible to chart a narrow but reliable path forward. The necessary ingredients for solving problems on the web platform include the participation of both web developers and implementers, evidence about the problem being solved, and a venue that allows designs to fail and iterate before they're locked down.

It's attractive to think that design can (or should) happen within a formal Working Group. A well-functioning WG should include both developers and implementers, after all. Those groups often have face-to-face meetings, and the path toward standardisation is shortest in those venues! But it doesn't work; not often enough to be useful, anyway.

Starting your journey there leads to pain and failure. Why? The deck is stacked against design-in-committee, both structurally and procedurally.

Structurally, it is the job of a Working Group to evaluate proposals for inclusion in a specification. The basis for inclusion in nearly all standards I know of is not rigorous or scientific. Evidence is not (yet) a compelling argument. The norms of standards organisations are set, largely, by social cohesion amongst those working to improve the systems they maintain. The older the specification and the more stable the composition of the group, the harder it is for new ideas and people to enter with credibility.

A further difficulty for non-implementers (in another universe, "customers") within these groups is the information asymmetry inherent in the producer/consumer relationship. Implementers feel a responsibility to resist designs they feel would be detrimental to either their architecture or their competitive position. New ideas have to enter this environment roughly "done" to even get on the agenda.

Procedurally, it's the responsibility of chairs and the overall group to make progress towards the promised deliverables. Working Group charters typically set out a scoped set of deliverables and a timetable, and while there's lots of play built into these things, groups that don't continue to produce new versions on a regular basis are considered problematic. Problematic groups tend to lose the organisational support they require to continue.

Failure and iteration are the lifeblood of good design, but these groups are geared for success. They aggressively filter out new ideas to preserve their ability to ship new versions of specs. Once something is locked into a WG agenda, it's "in". This is inherently anti-iteration.

If you've never been to a functioning standards meeting, it's easy to imagine languid intellectual salons wherein brilliant ideas spring forth unbidden and perfect consensus is forged in a blinding flash. Nothing could be further from the real experience. Instead, the time available to cover updates and get into nuances of proposed changes can easily eat all of the scheduled time. And this is expensive time! Even when participants don't have to travel to meet, high-profile groups are comically busy. Recall that the most in-demand members of the group (chairs, engineers from the most consequential firms) are doing this as a part-time commitment. Standards work is time away from the day-job, so making the time and expense count matters. Before anyone gets into the room, everyone knows what the important topics will be, and if precious time is taken from resolving those issues — particularly to explore "half baked" ideas — influential folks (and the teams they represent) will be upset. Not a recipe for agreement.

The idea that a public, agenda-driven, minuted, chaired forum with a full docket and a room full of powerful decision-makers primed to say "no" is where your best design work will happen is barmy. Policies aren't dreamt up in open session at Parliament, Congress, or the UN; rather they're presented and voted on, possibly with minor amendments.

Note: There are many dysfunctional standards groups; they tend to have lighter agendas or a great deal of make-work. Those groups are unlikely to be well-attended by busy implementers. Groups that can't keep implementer interest aren't worth investing time in.

This insight is why the Chrome team now insists on doing design work in "incubation" forums. These can be embedded into a WG's formal process (as at TC39), or in separate forums which are feeders for formal, chartered groups (e.g. WICG or RICG).

Design → Iterate → Ship & Standardise

What I've learned over the past decade of trying to evolve the web platform is a frustratingly short list, given the amount of pain involved in extracting each insight.

These lessons all derive from one overriding goal: ship the right thing.

All too often we've seen designs (cough AppCache cough) that could have been improved by listening to available feedback. Design processes without web developers involved tend to fail because they can't error-correct. Implementers most acutely feel the constraints of their own systems, not web developer reality. Without the voices of web developers, designs tend towards easy-to-build — rather than fit-for-purpose. Group-think too often takes hold, as those represented share the same perspective, making change and iteration harder.

Similarly, design efforts without implementers present are missing the constraints that lead to successful design. Proposals without this grounding are easily written off. It's tempting to get a group together to design future APIs in a vacuum, but without implementers critical mass never forms.

So how can you shape the future of the platform as a web developer?

The first thing to understand is that browser engineers want to solve important problems, but they might not know which problems are worth their time. Making progress with implementers is often a function of helping them understand the positive impact of solving a problem. They don't feel it, so you may need to sell it!

Building this understanding is a social process. Available, objective evidence can be an important tool, but so are stories. Getting these in front of a sympathetic audience within a browser team is perhaps harder. Thankfully, functional browser engine teams now staff sizable outreach and Developer Relations groups (oh hai, @ChromiumDev, @mozappsdev, @MSEdgeDev, and Jonathan!). Similarly, if you happen to work for a top-1k web property, your team may already have a connection to a browser's partnerships team. Those teams can route thoughtful questions to the right engineers.

Other models for early collaboration involve sideline conversations at industry gatherings, e.g. TPAC or BlinkOn. Special-purpose vehicles like W3C Workshops are somewhat harder to organize, but browser engineers are willing to join them. I can't speak for other vendors, but Chromies are also willing to travel for ad-hoc gatherings to do early design work. Andrew Betts masterfully orchestrated such an event while at the FT, kicking off what became Service Workers. You might not have Andrew's wealth of connections, but odds are you know someone who does. Remember, at the start this is about individuals. It takes effort to find "your people", but it's far from impossible!

Next, recognize that the design, development, iteration, and eventual standardisation phases take time. Sometimes a lot of time. As a web developer, it's unlikely that you'll be able to sustain professional interest in such a process, as there's no practical way it can bear fruit in time for your current (or even next) project. This is not a personal failing; it's just how the gearing works. You have information that browser teams don't, but less leverage and time. Setting them on a better course helps the next person and, if you're doing this as your profession, may eventually help you too. Don't feel guilty about needing to drop out of the process at some point.

It has gotten ever easier to stay engaged as designs iterate. After initial meetings, early designs are sketched up and frequently posted to GitHub, where you can provide comments. Forums like WICG let you provide direct design feedback during development — a very intentional shift by the Chrome and Edge teams to give developers a louder voice while designs are still malleable.

Further along the process, Chrome is now running a series of "Origin Trials", an idea the Chrome team borrowed from Jacob Rossi at MSFT. Origin Trials allow developers to test new features on live sites and shape their evolution. Teams running these trials actively solicit feedback and frequently change them in response.
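Mechanically, participating is cheap: you register your origin for a trial, receive a token, and serve it back with your pages. Here is a minimal sketch assuming a Node server; the token is a placeholder you'd obtain from a trial's registration page, while the Origin-Trial response header (or an equivalent <meta http-equiv="origin-trial"> tag) is the mechanism Chrome checks.

```ts
// A sketch of opting an origin into an Origin Trial. Everything here is a
// placeholder except the Origin-Trial header name, which is what Chrome
// inspects when deciding to enable an experimental feature for an origin.
import { createServer } from "http";

const ORIGIN_TRIAL_TOKEN = process.env.ORIGIN_TRIAL_TOKEN ?? "<your token>";

createServer((_req, res) => {
  // Attach the trial token so browsers participating in the trial
  // enable the feature for this origin.
  res.setHeader("Origin-Trial", ORIGIN_TRIAL_TOKEN);
  res.setHeader("Content-Type", "text/html");
  res.end("<!doctype html><title>origin trial demo</title>");
}).listen(8080);
```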

Astute readers will note none of this involves joining a Working Group or keeping up with busy mailing lists. Affecting the trajectory of the web platform has never been easier, assuming you know which side of the amplifier to approach.

"Ship The Right Thing"

These relatively new opportunities for participation outside formal processes have been intentionally constructed to give developers and evidence a larger role in the design process. We've supported their creation because they help us to separate open design and iteration from standardisation, allowing each process to assist the community in improving features at the point where they are most effective.

These processes aren't perfect by any stretch, and it would be an epic overstatement to suggest that the broader browser and standards communities have embraced design via incubation outside of formal Working Groups. Maintenance work is a particularly thorny topic. Regardless, the Chrome team has gathered compelling evidence that this is a better way to work.

Prizing collaboration, iteration, and evidence has enabled us to shape process to support those values. Incubation and related processes let us be more responsive to developers while simultaneously increasing confidence that features shipped to Stable meaningfully address problems worth solving. Hopefully this series will help you shape the future with fewer misunderstandings. After all, we all want to see the right thing ship.


I've been drafting and re-drafting versions of this post for almost 4 years. In that time I've promised a dozen or more people that I had a post in process that talked about these issues, but for some of the reasons I cited at the beginning, it has never seemed a good time to hit "Publish". To those folks, my apologies for the delay.

There's a meta-critique of formal standards and the de facto exclusionary processes used to create them. This series didn't deal with it deeply, because doing so would require a long digression into the laws surrounding antitrust and competition. Suffice to say, I have a deep personal interest in bringing more voices into developing the future of the web platform, and the changes to Chrome's approach to standards discussed above have been made with an explicit eye towards broader diversity, inclusion, and a greater role for evidence.

Deep thanks to Andrew Betts, Bruce Lawson, Chris Wilson, and Mariko Kosaka for reviewing drafts of this series and correcting many of the errors within.


  1. Sometimes an implementer will say "that can't possibly work" and then another one will show up with a working prototype that does exactly what the first claimed was impossible. In this situation, it's fine to discount folks who have been proven wrong.

    It's hard from the outside to know why they were wrong, but they were. Sometimes, they might have even known a feature was possible but just wanted to do less work. Even if that's not the case, it's also reasonable to discount their (personal) future claims of impossibility. Implementers should be very, very careful when they claim something is impossible.
