The “Developer Experience” Bait-and-Switch

TL;DR: we cannot continue to use as much JavaScript as is now “normal” and expect the web to flourish. At the same time, most developers experience no constraint on their use of JS…until it’s too late. “JS neutral” (or negative) tools are here, but we’re stuck in a rhetorical rut. We need to reset our conversation about “developer experience” to factor in the asymmetric cost of JS.

JavaScript is the web’s CO2. We need some of it, but too much puts the entire ecosystem at risk. Those who emit the most are furthest from suffering the consequences — until the ecosystem collapses. The web will not succeed in the markets and form-factors where computing is headed unless we get JS emissions under control.

Against this grim backdrop, there’s something peculiar about conversations regarding the costs of JS-oriented development: a rhetorical substitution of developer value for user value. Here’s a straw-man composite from several recent conversations:

These tools let us move faster. Because we can iterate faster we’re delivering better experiences. If performance is a problem, we can do progressive enhancement through Server-Side Rendering.

This argument substitutes good intentions and developer value (“moving faster”, “less complexity”) for questions about the lived experience of users. It also tends to do so without evidence. We’re meant to take it on faith that it will all work out if only the well-intentioned people are never questioned about the trajectory of the outcomes.

Most unfortunately, this substitution is frequently offered to shield the preferences of those in a position to benefit at the expense of folks who can least afford to deal with the repercussions. Polluters very much prefer conversations that don’t focus on the costs of emissions.

The backdrop to this argument is a set of nominally shared values to which folks assign different weights:

  • Universality and accessibility
  • Fidelity and richness
  • Competitive (or superior) cost to produce

The “developer experience” bait-and-switch works by appealing to listeners’ parochial interests as developers or managers, claiming supremacy in one category in order to remove the others from the conversation. The swap is executed by implying that by making things better for developers, users will eventually benefit equivalently. The unstated assumption is that developers share all of the same goals, with the same intensity, as end users and even managers. This is not true.

Shifting the conversation away from actual user experiences to team-level advantages enables a culture in which the folks who receive focus and attention are developers, rather than end-users or the business. It naturally follows that teams can then substitute tools for goals.

This has predictable consequences, particularly when developers, through their privileged positions as expensive-knowers-of-things-about-computers, are allowed to externalize costs. And they do. Few teams I’ve encountered have actionable metrics associated with the real experiences of their users. I can count on one hand the number of teams I’ve worked with who have goals that allow them to block launches for latency regressions, including Google products. Nearly all developers in modern frontend shops do not experience performance constraints until it’s too late. The brakes aren’t applied until performance is so poor that it actively hurts the business.

If one views the web as a way to address a fixed market of existing, wealthy web users, then it’s reasonable to bias towards richness and lower production costs. If, on the other hand, our primary challenge is in growing the web along with the growth of computing overall, the ability to reasonably access content bumps up in priority. If you believe the web’s future to be at risk due to the unusability of most web experiences for most users, then discussion of developer comfort that isn’t tied to demonstrable gains for marginalized users is at best misguided.

Competition between these forces is as old as debates about imagemaps vs. tables for layout. What’s new is JavaScript; or rather, the amount we’re applying to solve our problems:

Median mobile sites have gone from ~50KB of JS in 2011 to more than 350KB today. That unzips to roughly 2MB of script (an implied compression ratio of nearly 6:1).

I’ve previously outlined why JavaScript is the most expensive way to accomplish anything in a browser. This has been coupled with an attempt to lean on evolving facts about computing (it’s all going to mobile, mostly to Android, and not high-end devices). My hope is that anyone who connects these ideas will come to understand that we can’t afford to continue on as we have. We must budget. We must cap-and-trade JS. There is no other way to fix what we have now broken with script — we simply need to use less of it.
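Budgets can be made mechanical. As a minimal sketch of what a cap looks like in practice (assuming a webpack-based build; the 130KB figure is illustrative, not a recommendation), webpack’s built-in `performance` option can fail any build that exceeds a script-size ceiling:

```js
// webpack.config.js: a minimal sketch of an enforced JS budget.
// The cap below is illustrative; derive yours from the devices and
// networks you actually intend to serve.
const BUDGET_BYTES = 130 * 1024;

module.exports = {
  entry: './src/index.js',
  performance: {
    hints: 'error',                  // fail the build instead of warning
    maxEntrypointSize: BUDGET_BYTES, // bytes allowed on the critical path
    maxAssetSize: BUDGET_BYTES,      // bytes allowed for any single asset
  },
};
```

Making the budget a build error rather than a dashboard is what turns “we should ship less JS” into a constraint teams feel before launch instead of after.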

There have been positive signs that this message has taken root in certain quarters, but it has not generally changed the dynamic. Despite the heroic efforts of Polymer, Preact, Svelte, Ionic, and Vue to create companion “starter kits” or “CLI” tools that provide the structure necessary to send less JS by default, as many (or more) JS-heavy performance disasters cross my desk in an average month as in previous years.

And still framework marketing continues unmodified. The landing pages of popular tools talk about “speed” without context. Relatively few folks bring WebPageTest (“WPT”) traces to arguments. Appeals to “Developer Experience” are made without qualification. Which set of users do we intend to serve? All? Or the wealthy few? It is apparently possible to present performance arguments to the JavaScript community in 2018 — a time when it has never been easier to collect and publish traces — without traces against the global baseline or an explanation of why that baseline is inappropriate. The bait-and-switch still works, and that’s a hell of a problem.

Perhaps my arguments have not been effective because I hold to a policy of not posting analyses without the site owner’s consent. This leaves me as open to critique by Hitchens’s Razor as my dataless interlocutors. The evidence has never been easier to gather, and the aggregates paint a chilling picture. But aggregates aren’t specific, citable incidents. Video of a single slow-loading page lands in a visceral way; abstract graphs don’t.

And the examples are there, many of them causing material, negative business impact. A decent hedge-fund strategy would be to run a private WPT instance, track JS bloat and TTI for commercial-intent sites, and then short firms that regress because they just rewrote everything in The One True Framework. Seeing the evidence instills terror, yet I’ve been hamstrung, able to do little more than roughly sketch the unfolding disaster while working behind the scenes with teams.
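To show how low the bar for gathering that evidence is, here is a rough sketch of such a tracker (assuming the `webpagetest` npm package and an API key; the result field names `breakdown.js.bytes` and `FirstInteractive`, and the location string, vary by WPT version and test configuration, so treat them as assumptions to verify against your own instance):

```js
// track-bloat.js: a rough sketch of longitudinal JS-weight/TTI tracking
// against a WebPageTest instance. Field names differ across WPT versions
// and configurations; verify them against your instance's JSON results.
const WebPageTest = require('webpagetest');

const wpt = new WebPageTest('www.webpagetest.org', process.env.WPT_API_KEY);
const sites = ['https://example.com']; // hypothetical watchlist

for (const url of sites) {
  wpt.runTest(
    url,
    {
      // Instance-specific location string; an assumption. Query your
      // instance's available locations before relying on it.
      location: 'Dulles_MotoG4:Chrome',
      connectivity: '3G',
      pollResults: 10, // poll every 10 seconds until the test completes
      timeout: 600,
    },
    (err, result) => {
      if (err) return console.error(url, err);
      const fv = result.data.median.firstView;
      console.log(url, {
        jsBytes: fv.breakdown?.js?.bytes, // script bytes transferred
        tti: fv.FirstInteractive, // a TTI proxy, when the agent reports it
      });
    }
  );
}
```

Run this daily and two columns of output (script bytes and an interactivity metric) are enough to spot a big-bang framework rewrite in the trend line.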

There is, however, one exception to my rule: the public sector. Specifically, public-sector sites in countries where I pay taxes. Today, that’s the US and the UK, although I suspect I could be talked into a more blanket exception.

So I’m going to start posting and dissecting a lot more traces of public sector work, but the goal isn’t to mock or shame the fine folks doing hard work for too little pay. Rather, it’s to demonstrate what “modern frontend” is doing to the accessibility of the web — not in the traditional “a11y” sense, but in the “is going to this site reasonable for its intended users?” sense. That is, I will be talking about this as a proxy for the data I can’t share.

Luckily, the brilliant folks at the USDS and the UK’s Government Digital Service have been cleaning up many of the worst examples of government-procurement-gone-wild. My goal isn’t to detract from this extraordinary achievement.

My hope, instead, is that by showing specific outcomes, and the overwhelming volume of these examples, it will become possible to talk more specifically about what’s wrong, using and pervasively citing data. I hope that by talking about what it means to build well when trying to serve everybody, we can show businesses how far short of the mark they’re falling — and why the common root causes in JS-centric development are so toxic. And if the analysis manages to help clean up some public-sector services, so much the better; we’re all paying for them anyway.

This isn’t Plan A, but neither was the CDS talk in ’16 that got everyone so upset. I don’t like that this is where we are as a community and as a platform. I hate that this continues to estrange me from the JS community. We need tools. We need frameworks. But we need to judge them by whether or not they deliver a better developer experience without fundamentally impairing the user experience. We must get to JS-neutral (or, my preference, Time-to-Interactive-neutral or -negative) tooling. Frameworks and tooling need to explain clearly, in small words, with reproducible instructions, how they deliver under-budget experiences, how much room is left after their budgetary cost, and which devices and networks their tools are appropriate for. This will mean that many popular tools are relegated to prototyping. That’s OK.

This is very much Plan D…or E. But the crisis is real and it isn’t inevitable. It is not exogenous. We made it, and we can fix it.

To get this fixed, we need to confront the “developer experience” bait-and-switch. Tools that impose costs on the poorest users in order to comfort wealthy developers are bunk. To do better, we need to move the conversation to an evidence-based footing. I wish the arguments folks made against my positions were data-driven. There are so many openings! Perhaps a firm has done market analysis and only cares about reaching users who make more than $100K USD/yr or who are in enterprise settings. Perhaps research will demonstrate that interactivity isn’t as valuable as getting bits on screen (the usual SSR argument). Or, more likely, that acknowledgement (bits on screen) buys a larger-than-anticipated amount of perceptual padding (perhaps due to scanning). Perhaps the global network landscape is shifting so dramatically that the budget for client-side JS runtime has increased. Perhaps the median CPU improvement that doesn’t look set to materialize until 2021 at the earliest will happen much earlier; i.e., maybe the current baseline is wrong!

But we aren’t having that conversation. And we aren’t going to have it until we identify, call-out, and end the “developer experience” bait-and-switch.

Thanks and apologies to Ade Oshineye, Ojan Vafai, Frances Berriman, Dion Almaer, Addy Osmani, Gray Norton and Philip Walton for their feedback on drafts of this post.

16 Comments

  1. Posted September 11, 2018 at 1:56 pm | Permalink

    Your video doesn’t load for me.

    (Appreciate the article)

  2. Posted September 11, 2018 at 2:15 pm | Permalink

    Hey Bill,

    Sorry about that. You can see the source side-by-side here: https://www.webpagetest.org/video/compare.php?tests=180831_25_3a7e0326e5deb329b760f6241b3a87f5-r%3A1-c%3A0%2C180831_Q2_36cc2c3f96e11252eb47d7ce521891bf-r%3A1-c%3A0&thumbSize=200&ival=500&end=full

  3. Brisn
    Posted September 11, 2018 at 2:56 pm | Permalink

    awesome

  4. Posted September 11, 2018 at 9:11 pm | Permalink

    This is a good argument to pay attention to code bloat, UX, TTI, and perhaps scope creep.

    I don’t understand why this was aimed at JS like it was.

    Bloat isn’t the fault of any particular 3-50k framework.

    The fact that our current JS ecosystem is increasingly modular is both the cause of and the solution to this problem.

    Lots of people lazily include all of lodash or moment. This is typically easy to remedy: ideally, you’d import only what you need of these libraries. (Btw, plenty of native and 3rd-party alternatives exist for moment.)
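    For example, a minimal illustration (`onScroll` here stands in for whatever handler the app actually wires up):

    ```js
    // Pulls all of lodash (~70KB minified) into the bundle:
    import _ from 'lodash';
    _.debounce(onScroll, 100);

    // Pulls in only the one method actually used:
    import debounce from 'lodash/debounce';
    debounce(onScroll, 100);
    ```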

    NPM’s vast module ecosystem makes replacing old bloated libraries an easier proposition than in many other environments.

    As an example, I wrote an alternative to bluebird in 300-400 lines (functional-promises – fpromises.io). It’s a drop-in replacement for 80% of the API, and 90% smaller.

    This is how the ecosystem evolves. Needs get met with better suited options.

    It’s unfortunate devs tend to wait until problems blow up. Ultimately this is a bug in human nature, not JavaScript.

  5. Andrew Whitworth
    Posted September 12, 2018 at 4:51 am | Permalink

    I worry that there don’t tend to be a lot of people looking out for the well-being of developers, so many developers are forced to focus on themselves. The relationship between developers and all other teams in an organization tends to be adversarial: Managers want tighter deadlines, Product Management wants more features, QA wants fewer regressions, Sales makes all sorts of promises that Developers are on the hook to deliver, Customer Support wants fewer bugs and more fixes, etc. The only people left to really worry about improving developer experience are the developers themselves. This, as you point out, leads to a really warped focus on the self and less on the customer and the marketplace. We’ve still got a lot to learn about how developers and the rest of the world interact, and conversations like this one are priceless parts of that.

  6. Chris Rosenau
    Posted September 12, 2018 at 6:54 am | Permalink

    I see your point, yet there are many different cases where this just doesn’t apply.

    When developers produce a website, they do so for their intended audience. If they are developing an e-commerce website, they don’t develop for the poor. They develop for an audience that can afford their product and that audience in most cases has the technology to see the website just fine.

    If I were a developer in Ghana, for example, I would know my audience. That audience is about 90% cell-phone users. Maybe 20-30% have smartphones. Thus I would develop only for mobile devices, and I would use almost no JavaScript.

    Now if you are a corporation who wants the entire world to view your website, well then I imagine more of them might be listening to your point of view. Yet that is already where things are headed. Most sites are now mobile friendly. Google has AMP, which strips down what is delivered to mobile devices. As you say, there are new frameworks like Vue.

    If I were developing for the government, then I would need to go even further, because I would need to address disabilities.

    Actually if there is a huge problem, it isn’t JavaScript, it is the total lack of consideration of people with disabilities.

    Also this whole drive to make everything work on a cell phone is just a ridiculous idea. We might as well go back to the origin of the World Wide Web and just have everything be text. Remember libraries and internet cafes? These still have a use in many places in the world. Or maybe we should go back to paper. Paper actually has a bigger screen size than stupid cell phones and requires zero energy to use.

  7. Arthur
    Posted September 12, 2018 at 8:46 am | Permalink

    The only problem with picking public-sector websites as examples is that, however hedged, it will feed the narrative of “private good, public bad” when, as you say, it’s an equal problem in the private sector.

    Can you use examples where public-sector sites were bad but have now been improved? That’s a much more positive take.

  8. Posted September 12, 2018 at 8:58 am | Permalink

    Arthur:

    I share that concern, and I think it’s a big part of why I haven’t leaned into the public-sector exception until now, several years past the point when this became clear to me as an ecosystem-wide crisis.

    Something I’ve been mulling over is a variant on a 90-day disclosure policy: https://en.wikipedia.org/wiki/Project_Zero#Bug_finding_and_reporting

    E.g., if I privately report to you that your site is suuuuuper slow, I’d then reserve the right to note as much publicly 90 days later or whenever it gets significantly faster, whichever comes first. That would allow me to treat public and private sector slowness the same.

    I do still worry that will raise the antagonism level to a place where folks won’t want to partner on getting issues resolved, but it would increase the available data set I can talk about.

    WDYT?

  9. Chad
    Posted September 12, 2018 at 10:22 am | Permalink

    http://www.webpagetest.org/video/compare.php?tests=180202_67_ae21f1a34a7f570599edae125e1c292a%2C180202_RZ_4b410646d168afd7a5738f5325790254%2C180202_YB_3bf4cd9abf749f0a891451bac3d1f579%2C180202_VF_84ae556dc7c51e405d5fccccccb383ae&thumbSize=200&ival=16.67&end=visual

  10. Posted September 12, 2018 at 10:50 am | Permalink

    Thanks for the trace, Chad! It raises some questions (particularly as I can’t re-run these tests since the script isn’t available and I presume the experiments have been taken down):

    • Why a desktop-class browser and CPU?
    • Why is a Cable connection the right baseline?
    • Does LinkedIn care about mobile users on these stacks? Is LinkedIn expecting growth to come from primarily desktop-oriented users?
    • What causes hero images to be so frequently delayed?
    • The time to deliver https://www.linkedin.com/sailfish/api/feed/sailfishFeedUpdates?q=feed seems pretty variable. How was that controlled for, as it seems to be critical-path?

    Thanks again for adding context. I wish the original post had included this link so that the conversation could have been data-driven at the time.

  11. Dave
    Posted September 14, 2018 at 1:30 pm | Permalink

    Calling out developers for such issues is as much nonsense as calling out developers for the proliferation of loot boxes in modern video games (which I hear of quite a bit).

    The goals and means of any product are driven by its management, and that is where the responsibility to the customer lies. Developers only have a “privileged position” if their management allows it to be so.

    Calling out JavaScript specifically as a culprit is also nonsense, especially when comparing it to CO2 emissions. I don’t have much choice about the air I breathe, but I sure have a choice when it comes to what web content I consume. Websites that are not accessed due to their bloat don’t make up part of any ecosystem.

  12. Mikael Gramont
    Posted September 15, 2018 at 1:19 am | Permalink

    Chris Rosenau,

    “When developers produce a website, they do so for their intended audience”
    That’s a pretty bold statement. I would argue that there are plenty of companies out there who commission websites from agencies and who don’t have any idea the whole performance debate even exists. Agencies rush to deliver a product and if the client sees it perform well on their desktop, they move on. And the web holds one more website that is slow for everyone but recent desktop computers.

    “If they are developing an e-commerce website, they don’t develop for the poor”
    I had never heard this argument. I suspect you wrote that quickly so I won’t comment on the idea itself, which is kind of shocking, but the argument itself doesn’t hold water. Speed is not just about money/social status, it’s also about connectivity. And because of that, I’ll refer you to this 10-year-old article: https://blog.gigaspaces.com/amazon-found-every-100ms-of-latency-cost-them-1-in-sales/

    “Most sites are now mobile friendly”
    No they’re not, that is the very reason Alex is writing articles like these!

    “If I were developing for the government, then I would need to go even further, because I would need to address disabilities.
    Actually if there is a huge problem, it isn’t JavaScript, it is the total lack of consideration of people with disabilities.”
    I don’t mean to pick on you, but you go from basically saying “only government websites need to worry about a11y” to “we should care more about a11y”.

    “Also this whole drive to make everything work on a cell phone is just a ridiculous idea”
    You’re not even arguing it’s impossible to make things work on cell phones, you’re saying you don’t see why we should. I think we’ve been past that debate for a good 5 years now.

  13. Bridget M Stewart
    Posted September 16, 2018 at 6:29 am | Permalink

    Thank you for advocating as you are. Like you, I love what JavaScript is capable of providing, but I am not pleased with the resulting bloat of solutions designed for “developer experience”.

    Chris Rosenau (above) makes some interesting statements. I work for a company whose aim is to sell its products worldwide. So, the need for mobile speed on our site is real – and we fail, horribly. We are working to repair this, all the while being sensitive to accessibility as well. Developers need to advocate continually to our leaders and the businesses in which we work to provide visitors to our digital properties the best experience possible — and DEFINE for them what that is. A good experience is not shiny, over-the-top interactivity.

    Unfortunately, I think a lot of designers and developers got so excited about the really cool stuff that could be done that they forgot how to edit their work to make sure the really cool stuff was applied intelligently, at the right time, instead of to ALL. THE. THINGS.

    As for going back to paper, Chris Rosenau, sure. Why not? If you look at where we are headed in language today, emojis are taking over. We are not that far away from returning to hieroglyphics on cave walls. :-)

  14. Posted September 17, 2018 at 1:19 pm | Permalink

    For the record, Tom Dale and I discussed Chad’s traces and the questions above. Thread here: https://twitter.com/slightlylate/status/1040270680179335168

    ISTM that the tests are noisy on a large critical-path factor and seem mostly to measure the effect of loading script in parallel versus serially; unsurprisingly, parallel is faster.

  15. Ken Brown
    Posted September 18, 2018 at 5:25 am | Permalink

    Another metric: how much basic site functionality can be achieved with scripting disabled?
    Plenty of sites do not even display any text with JS disabled, use JS-only links, etc.; i.e., not even basic site navigation or reading of content is possible.

  16. Sam
    Posted September 18, 2018 at 10:30 pm | Permalink

    I really didn’t expect to agree with this article… But I do. I don’t see why you’re squeamish about naming and shaming. Web sites are inherently public, and the tools used to access them come with the tools to profile them. Honestly, I needed to be reminded how bad things are for some people.

    I’m surprised you left out tracking. Though IME it’s used reasonably in the public sector, go to a news site without an ad blocker and it’s not the framework that hits you, it’s the 50 ad trackers.
