Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

The Performance Inequality Gap, 2024

Frontend is haunted by 2013's Great Branch Mispredict

The global device and network situation continues to evolve, and this series is an effort to provide an up-to-date understanding for working web developers. So what's changed since last year? And how much HTML, CSS, and (particularly) JavaScript can a new project afford?

The Budget, 2024

In a departure from previous years, two sets of baseline numbers are presented for first loads of under five seconds on 75th-percentile (P75) devices and networks[1]: one set for JavaScript-heavy content, and another for markup-centric stacks.

Budget @ P75    Markup-based                JS-based
                Total    Markup   JS        Total    Markup   JS
3 seconds       1.4MiB   1.3MiB   75KiB     730KiB   365KiB   365KiB
5 seconds       2.5MiB   2.4MiB   100KiB    1.3MiB   650KiB   650KiB

This data was available via last year's update, but was somewhat buried. Going forward, I'll produce both as top-line guidance. The usual caveats apply:

Global baselines matter because many teams have low performance management maturity, and today's popular frameworks – including some that market performance as a feature – fail to ward against catastrophic results.

Until and unless teams have better data about their audience, the global baseline budget should be enforced.

This isn't charity; it's how products stay functional, accessible, and reliable in a market awash in bullshit. Limits help teams steer away from complexity and towards tools that generate simpler output that's easier to manage and repair.

JavaScript-Heavy

Since at least 2015, building JavaScript-first websites has been a predictably terrible idea, yet most of the sites I trace on a daily basis remain mired in script.[2] For these sites, we have to factor in the heavy cost of running JavaScript on the client when describing how much content we can afford.

HTML, CSS, images, and fonts can all be parsed and run at near wire speeds on low-end hardware, but JavaScript is at least three times more expensive, byte-for-byte.

Most sites, even those that aspire to be "lived in", are generally experienced through short sessions, which means they can't justify much in the way of up-front code. First impressions always matter.

Most sorts of sites have shallow sessions, making up-front script costs hard to justify.

Targeting the slower of our two representative devices, and opening only two connections over a P75 network, we can afford ~1.3MiB of compressed content to get interactive in five seconds, split roughly evenly between JavaScript and everything else (~650KiB each).

If we set the target at a more reasonable three seconds, the budget shrinks to ~730KiB, with no more than 365KiB of compressed JavaScript.

Similarly, if we keep the five second target but open five TLS connections, the budget falls to ~1MiB. Sites trying to load in three seconds but which open five connections can afford only ~460KiB total, leaving only ~230KiB for script.
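To make the shape of this arithmetic concrete, here's a deliberately simplified sketch, not the estimator's actual model: connection setup burns a few RTTs, the remaining time buys wire bytes, and each JavaScript byte is charged the ~3× processing surcharge discussed below. The per-connection setup cost and the even markup/JS split are assumptions, so expect the real tool's numbers to differ.

```javascript
// A toy model of the budget arithmetic, not the estimator's actual internals.
// RTTS_PER_CONNECTION and the content splits are assumptions for illustration.

const RTTS_PER_CONNECTION = 3; // DNS + TCP + TLS setup, roughly (assumed)
const JS_COST_MULTIPLIER = 3;  // JS costs ~3x as much as markup, byte-for-byte

// KiB of compressed content affordable within `targetSeconds`.
function budgetKiB({ targetSeconds, downlinkMbps, rttMs, connections, jsFraction }) {
  // Time burned setting up connections before content flows.
  const setupSeconds = (connections * RTTS_PER_CONNECTION * rttMs) / 1000;
  const transferSeconds = Math.max(0, targetSeconds - setupSeconds);

  // Raw wire capacity (KiB) for the remaining time.
  const wireKiB = (downlinkMbps * 1e6 / 8 / 1024) * transferSeconds;

  // Charge each JS byte a CPU surcharge via a blended per-byte cost.
  const blendedCost = jsFraction * JS_COST_MULTIPLIER + (1 - jsFraction);
  return wireKiB / blendedCost;
}

const p75 = { downlinkMbps: 7.2, rttMs: 94 }; // this year's network baseline

const jsHeavy5s = budgetKiB({ ...p75, targetSeconds: 5, connections: 2, jsFraction: 0.5 });
const jsHeavy3s = budgetKiB({ ...p75, targetSeconds: 3, connections: 2, jsFraction: 0.5 });
const fiveConns = budgetKiB({ ...p75, targetSeconds: 5, connections: 5, jsFraction: 0.5 });
```

Even this crude model reproduces the qualitative story: tighter deadlines and extra connections both carve large chunks out of the content budget, and script-heavy compositions shrink it further still.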

Markup-Heavy

Sites largely comprised of HTML and CSS can afford a lot more, although CSS complexity and poorly-loaded fonts can still slow things down. Conservatively, to load in five seconds over two connections, we should try to keep content under 2.5MiB, of which no more than 100KiB should be JavaScript.

To hit the three-second first-load target, we should aim for a maximum transfer of 1.4MiB, with no more than 75KiB of JavaScript.

These are generous targets. The blog you're reading loads in ~1.2 seconds over a single connection on the target device and network profile. It consumes 120KiB of critical path resources to become interactive, only 8KiB of which is script.

Calculate Your Own

As in years past, you can use the interactive estimator to understand how connections and devices impact budgets. The tool has been updated to let you select between JavaScript-heavy and JavaScript-light content composition, and it defaults to the updated network and device baseline (see below).

Tap to try the interactive version.

It's straightforward to understand the number of critical path network connections and to eyeball the content composition from DevTools or WebPageTest. Armed with that information, it's possible to use this estimator to quickly understand what sort of first-load experience users at the margins can expect. Give it a try!

Situation Report

These recommendations are not context-free, and folks can reasonably disagree.

Indeed, many critiques are possible. The five-second first-load target[1:1] is arbitrary. A sample population comprised of all internet users may be inappropriate for some services (in both directions). And a methodology of "informed reckons" leaves much to be desired; the methodological critiques write themselves.

The rest of this post works to present the thinking behind the estimates, both to spark more informed points of departure and to contextualise the low-key freakout taking place as INP begins to put a price on JavaScript externalities.

Another aim of this series is to build empathy. Developers are clearly out of touch with market ground-truth. Building an understanding of the differences between the experiences of wealthy and working-class users can make the privilege bubble's one-way mirror perceptible from the inside.[3]

Mobile

The "i" in iPhone stands for "inequality".

Premium devices are largely absent in markets with billions of users thanks to the chasm of global wealth inequality. India's iOS share has surged to an all-time high of 7% on the back of last-generation and refurbished devices. That's a market of 1.43 billion people where Apple doesn't even crack the top five in terms of shipments.

The Latin American (LATAM) region, home to more than 600 million people and nearly 200 million smartphones, shows a similar market composition:

In <abbr>LATAM</abbr>, iPhones make up less than 6% of total device shipments.

Everywhere wealth is unequally distributed, the haves read about it in Apple News over 5G while the have-nots struggle to get reliable 4G coverage for their Androids. In country after country (PDF) the embedded inequality of our societies sorts ownership of devices by price. This, in turn, sorts by brand.

This matters because the properties of devices define what we can deliver. In the U.S., the term "smartphone dependence" has been coined to describe folks without other ways to access the increasing fraction of essential services only available through the internet. Unsurprisingly, those who can't afford other internet-connected devices, or a fixed broadband subscription, are also likely to buy less expensive smartphones:


As smartphone ownership and use grow, the frontends we deliver remain mediated by the properties of those devices. The inequality between the high-end and low-end is only growing, even in wealthy countries. What we choose to do in response defines what it means to practice UX engineering ethically.

Device Performance

Extending the SoC performance-by-price series with 2023's data, the picture remains ugly:

Geekbench 5 single-core scores for 'fastest iPhone', 'fastest Android', 'budget', and 'low-end' segments.

Not only have fruity phones extended their single-core CPU performance lead over contemporary high-end Androids to a four year advantage, the performance-per-dollar curve remains unfavourable to Android buyers.

At the time of publication, the cheapest iPhone 15 Pro (the only device with the A17 Pro chip) is $999 MSRP, while the S23 (using the Snapdragon 8 Gen 2) can be had for $860 from Samsung. This nets out to 2.32 points per dollar for the iPhone, but only 1.6 points per dollar for the S23.

Meanwhile, a $175 (new, unlocked) Samsung A24 scores a more reasonable 3.1 points per dollar on single-core performance, but is more than 4.25× slower than the leading contemporary iPhone.

The delta between the fastest iPhones and moderately priced new devices rose from 1,522 points last year to 1,774 today.

Put another way, the performance gap between wealthy users and budget shoppers grew more this year (252 points) than the gains from improved chips delivered at the low end (174 points). Inequality is growing faster than the bottom-end can improve. This is particularly depressing because single-core performance tends to determine the responsiveness of web app workloads.

A less pronounced version of the same story continues to play out in multi-core performance:

Round and round we go: Android ecosystem <abbr>SoC</abbr>s are improving, but the Performance Inequality Gap continues to grow. Even the fastest Androids are 18 months (or more) behind equivalently priced iOS-ecosystem devices.

Recent advances in high-end Android multi-core performance have closed the previous three-year gap to 18 months. Meanwhile, budget segment devices have finally started to see improvement (as this series predicted), thanks to hand-me-down architecture and process node improvements. That's where the good news ends.

The multi-core performance gap between i-devices and budget Androids grew considerably, with the score delta rising from 4,318 points last year to 4,936 points in 2023.

Looking forward, we can expect high-end Androids to at least stop falling further behind owing to a new focus on performance by Qualcomm's Snapdragon 8 gen 3 and MediaTek's Dimensity 9300 offerings. This change is long, long overdue and will take years to filter down into positive outcomes for the rest of the ecosystem. Until that happens, the gap in experience for the wealthy versus the rest will not close.

iPhone owners experience a different world than high-end Android buyers, and live galaxies apart from the bulk of the market. No matter how you slice it, the performance inequality gap is growing for CPU-bound workloads like JavaScript-heavy web apps.

Networks

As ever, 2023 re-confirmed an essential product truth: when experiences are slow, users engage less. Doing a good job in an uneven network environment requires thinking about connection availability and engineering for resilience. It's always better to avoid testing the radio gods than to spend weeks or months appeasing them after the damage is done.

5G network deployment continues apace, but as with the arrival of 4G, it is happening unevenly and in ways and places that exacerbate (rather than lessen) performance inequality.[4]

Data on mobile network evolution is sketchy,[5] and the largest error bars in this series' analysis continue to reside in this section. Regardless, we can look to industry summaries like the GSMA's report on "The Mobile Economy 2023" (PDF) for a directional understanding that we can triangulate with other data points to develop a strong intuition.

For instance, GSMA predicts that 5G will only comprise half of connections by 2030. Meanwhile, McKinsey predicts that high-quality 5G (networks that use 6GHz bands) will only cover a quarter of the world's population by 2030. Regulatory roadblocks are still being cleared.

As we said in 2021, "4G is a miracle, 5G is a mirage."

This doesn't mean that 4G is one thing, or that it's deployed evenly, or even that the available spectrum will remain stable within a single generation of radio technology. For example, India's network environment has continued to evolve since the Reliance Jio revolution that drove 4G into the mainstream and pushed the price of a mobile megabyte down by ~90% on every subcontinental carrier.

Speedtest.net's recent data shows dramatic gains, for example, and analysts credit this to improved infrastructure density, expanded spectrum, and back-haul improvements related to the 5G rollout — 4G users are getting better experiences than they did last year because of 5G's role in reducing contention.

India's speed test medians are moving quickly, but variance is orders-of-magnitude wide, with 5G penetration below 25% in the most populous areas.

These gains are easy to miss looking only at headline "4G vs. 5G" coverage. Improvements arrive unevenly, with the "big" story unfolding slowly. These effects reward us for looking at P75+, not just means or medians, and intentionally updating priors on a regular basis.

Events can turn our intuitions on their heads, too. Japan is famously well connected. I've personally experienced rock-solid 4G through entire Tokyo subway journeys, more than 40m underground and with no hiccups. And yet the network environment has been largely unchanged by the introduction of 5G. Because carriers provisioned more than adequately in the 4G era, the new technology isn't having the same pent-up-demand impact. But despite merely consistent performance, quality of service is distributed across users in a much more egalitarian way:

Japan's network environment isn't the fastest, but is much more evenly distributed.

Fleet device composition has big effects, owing to differences in signal-processing compute availability and spectrum compatibility. At a population level, these influences play out slowly as devices age out, but still have impressively positive impacts:

Device impact on network performance is visible in Opensignal's iPhone dataset.

As inequality grows, averages and "generation" tags can become illusory and misleading. Our own experiences are no guide; we've got to keep our hands in the data to understand the texture of the world.

So, with all of that as prelude, what can we say about where the mobile network baseline should be set? In a departure from years prior, I'm going to use a unified network estimate (see below). You'll have to read on for what it is! But it won't be based on the sort of numbers that folks explicitly running speed tests see; those aren't real life.

Market Factors

The market forces this series previewed in 2017 have played out in roughly a straight line: smartphone penetration in emerging markets is approaching saturation, ensuring a growing fraction of purchases are made by upgrade shoppers. Those who upgrade see more value in their phones and save to buy better second and third devices. Combined with the emergence and growth of the "ultra premium" segment, average selling prices (ASPs) have risen.

2022 and 2023 have established an inflection point in this regard, with worldwide average selling prices jumping to more than $430, up from $300-$350 for much of the prior decade. Some price appreciation has been due to transient impacts of the U.S./China trade wars, but most of it appears driven by iOS ASPs, which peaked above $1,000 for the first time in 2023. Android ASPs, meanwhile, continued a gradual rise to nearly $300, up from $250 five years ago.


A weak market for handsets in 2023, plus stable sales for iOS, had a notable impact on prices. IDC expects global average prices to fall back below $400 by 2027 as Android volumes increase from an unusually soft 2023.

Counterpoint data shows declining sales in both 2022 and 2023.
Shipment growth in late 2023 and beyond is coming from emerging markets like the Middle East and Africa. Samsung's A-series mid-tier is doing particularly well.

Despite falling sales, distribution of Android versus iOS sales remains largely unchanged:

Android sales reliably constitute 80-85% of worldwide volume.
Even in rich nations like Australia and <a href='https://www.statista.com/statistics/262179/market-share-held-by-mobile-operating-systems-in-the-united-kingdom/'>the U.K.</a>, iPhones account for less than half of sales. Predictably, they are over-represented in analytics and logs owing to wealth-related factors, including superior network access and performance hysteresis.

Smartphone replacement rates have remained roughly in line with previous years, although we should expect higher device longevity going forward. Survey reports and market analysts continue to estimate average replacement at 3-4 years, depending on segment. Premium devices last longer, and a higher fraction of devices may be older in wealthy geographies. Combined with discretionary spending pressure and inflationary impacts on household budgets, consumer intent to spend on electronics has taken a hit, which will be felt as extended device lifetimes until conditions improve. Increasing demand for refurbished devices also adds to observable device aging.

The data paints a substantially similar picture to previous years: the web is experienced on devices that are slower and older than those carried by affluent developers and corporate directors whose purchasing decisions are not impacted by transitory inflation.

To serve users effectively, we must do extra work to live as our customers do.

Test Device Recommendations

Re-using last year's P75 device calculus, our estimate is based on a device sold new, unlocked for the mid-2020 to mid-2021 global ASP of ~$350-375.

Representative examples from that time period include the Samsung Galaxy A51 and the Pixel 4a. Neither model featured 5G,[6] and we cannot expect 5G to play a significant role in worldwide baselines for at least the next several years.[4:1]

The A51 featured eight slow cores (4x2.3 GHz Cortex-A73 and 4x1.7 GHz Cortex-A53) on a 10nm process:

Geekbench 6 scores for the Galaxy A51 versus today's leading device.

The Pixel 4a's slow, eight-core big.LITTLE configuration was fabricated on an 8nm process:

Google spent more on the <abbr>SoC</abbr> for the Pixel 4a and enjoyed a later launch date, boosting performance relative to the A51.

Pixels have never sold well, and Google's focus on strong SoC performance per dollar was sadly not replicated across the Android ecosystem, forcing us to use the A51 as our stand-in.

Devices within the envelope of our attention are 15-25% as fast as those carried by programmers and their bosses — even in wealthy markets.

The A51 may be slightly faster than last year's test-device recommendation, the Galaxy A50, but the picture is muddy:

Geekbench 5 shows almost no improvement between the A50 and the A51.
Geekbench 6 shows the same story within the margin of error. The low-end is stagnant, and still <a href='https://www.statista.com/statistics/934471/smartphone-shipments-by-price-category-worldwide/' target='_new'>30% of worldwide volume</a>.

If you're building a test lab today, refurbished A51s can be had for ~$150. Even better, the newer Nokia G100 can be had for as little as $100, and it's faithful to the sluggish original in nearly every respect.[7]

If your test bench is based on last year's recommended A50 or Nokia G11, I do not recommend upgrading in 2024. The absolute gains are so slight that the difference will be hard to feel, and bench stability has a value all its own. Looking forward, we can also predict that our bench performance will be stable until 2025.

Claims about how "performant" modern frontend tools are have to be evaluated in this slow, stagnant context.

Desktop

It's a bit easier to understand the Desktop situation because the Edge telemetry I have access to provides statistically significant insight into 85+% of the market.

Device Performance

The TL;DR for desktop performance is that Edge telemetry puts ~45% of devices in a "low-end" bucket, meaning they have <= 4 cores or <= 4GB of RAM.

Device Tier   Fleet %   Definition
Low-end       45%       <= 4 cores, or <= 4GB RAM
Medium        48%       HDD (not SSD), or 4-16GB RAM, or 4-8 cores
High          7%        SSD, and > 8 cores, and > 16GB RAM
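The bucketing above can be read as a cascade. Here's a hypothetical sketch: the tier names and cutoffs come straight from the table, but the function itself, and the order in which the overlapping definitions resolve, are my assumptions.

```javascript
// Classify a device into Edge-telemetry-style tiers. Cutoffs mirror the table
// above; the cascade order (high, then low, then medium) is an assumption
// about how the overlapping definitions resolve.

function deviceTier({ cores, ramGB, ssd }) {
  if (ssd && cores > 8 && ramGB > 16) return "high"; // SSD + >8 cores + >16GB
  if (cores <= 4 || ramGB <= 4) return "low";        // either condition suffices
  return "medium";                                   // HDD, 4-16GB, or 4-8 cores
}
```

Note how wide the "low" net is: a machine with a fast SSD and plenty of cores still lands there if it ships with 4GB of RAM, which matches the telemetry's ~45% low-end share.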

20% of users are on HDDs (not SSDs) and nearly all of those users also have low (and slow) cores.

You might be tempted to dismiss this data because it doesn't include Macs, which are faster than the PC cohort. Recall, however, that the snapshot also excludes ChromeOS.

ChromeOS share has veered wildly in recent years, representing 50%-200% of Mac shipments in a given quarter. In '21 and '22, ChromeOS shipments regularly doubled Mac sales. Despite post-pandemic mean reversion, according to IDC ChromeOS devices outsold Macs ~5.7M to ~4.7M in 2023 Q2. The trend reversed in Q3, with Macs almost doubling ChromeOS sales, but slow ChromeOS devices aren't going away and, from a population perspective, more than offset Macs today. Analysts also predict growth in the low end of the market as educational institutions begin to refresh their past purchases.

Networks

Desktop-attached networks continue to improve, notably in the U.S. Regulatory intervention and subsidies have done much to spur enhancements in access to U.S. fixed broadband, although disparities in access remain and the gains may not persist.

This suggests that it's time to also bump our baseline for desktop tests beyond the 5Mbps/1Mbps/28ms configuration that WebPageTest.org's "Cable" profile has defaulted to for desktop tests.

How far should we bump it? Publicly available data is unclear, and I've come to find out that Edge's telemetry lacks good network observation statistics (doh!).

But the comedy of omissions doesn't end there: Windows telemetry doesn't capture a proxy for network quality, I no longer have access to Chrome's data, the population-level telemetry available from CrUX is unhelpful, and telcos li...er...sorry, "market their products in accordance with local laws and advertising standards."

All of this makes it difficult to construct an estimate.

One option is to use a population-level assessment of medians from something like the Speedtest.net data and then construct a histogram from median speeds. This is both time-consuming and error-prone, as population-level data varies widely across the world. Emerging markets with high mobile internet use and dense populations can feature poor fixed-line broadband penetration compared with Western markets.

Another option is to mathematically hand-wave using the best evidence we can get. This might allow us to reconstruct probable P75 and P90 values if we know something about the historical distribution of connections. From there, we can gut-check using other spot data. To do this, we need to assume some data set is representative, a fraught decision all its own.[8] Biting the bullet, we could start from the Speedtest.net global survey data, which currently fails to provide anything but medians (P50):

Speedtest.net's global median values are unhelpful on their own, both because they represent users who are testing for speed (and not organic throughput) and because they don't give us a fuller understanding of the distribution.

After many attempted Stupid Math Tricks with poorly fitting curves (bandwidth seems to be a funky cousin of log-normal), I've decided to wing it and beg for help: instead of trying to be clever, I'm leaning on Cloudflare Radar's P25/P50/P75 distributions for populous, openly-connected countries with >= ~50M internet users. It's cheeky, but a weighted average of the P75 download speeds (3/4ths of all connections are faster) should get us in the ballpark. We can then use the usual 5:1 downlink:uplink ratio to come up with an uplink estimate, and derive a weighted average for the P75 RTT from Cloudflare's data as well. Because Cloudflare doesn't distinguish mobile from desktop connections, this may be an overly conservative estimate, but it's still more permissive than what we had been pegged to in years past:

National P75 Downlink and RTT

Country                    P75 Downlink (Mbps)   P75 RTT (ms)
India                      4                     114
USA                        11                    58
Indonesia                  5                     81
Brazil                     8                     71
Nigeria                    3                     201
Pakistan                   3                     166
Bangladesh                 5                     114
Japan                      17                    42
Mexico                     7                     75
Egypt                      4                     100
Germany                    16                    36
Turkey                     7                     74
Philippines                7                     72
Vietnam                    7                     72
United Kingdom             16                    37
South Korea                24                    26
Population Weighted Avg.   7.2                   94

We therefore update our P75 link estimate to 7.2Mbps down, 1.4Mbps up, and 94ms RTT.

This is a mild crime against statistics, not least because it averages unlike quantities and fails to sift mobile from desktop, but all the other methods available at the time of writing are just as bad. Regardless, this new baseline is half again as much link capacity as last year, showing measurable improvement in networks worldwide.

If you or your company are able to generate a credible worldwide latency estimate in the higher percentiles for next year's update, please get in touch.
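For the curious, the weighted-average method is simple enough to sketch. The downlink and RTT figures below come from the table; the weights are rough, illustrative internet-user counts (in millions) that I'm supplying for the example, not the post's actual inputs, so the outputs won't reproduce the 7.2Mbps/94ms headline exactly.

```javascript
// Population-weighted average of per-country P75 figures (sketch).

function weightedAverage(rows, key) {
  const totalWeight = rows.reduce((sum, r) => sum + r.weight, 0);
  const weightedSum = rows.reduce((sum, r) => sum + r.weight * r[key], 0);
  return weightedSum / totalWeight;
}

// Subset of the table; weights are assumed internet-user counts (millions).
const countries = [
  { name: "India",   weight: 880, downlinkMbps: 4,  rttMs: 114 },
  { name: "USA",     weight: 310, downlinkMbps: 11, rttMs: 58 },
  { name: "Japan",   weight: 115, downlinkMbps: 17, rttMs: 42 },
  { name: "Nigeria", weight: 100, downlinkMbps: 3,  rttMs: 201 },
];

const p75Down = weightedAverage(countries, "downlinkMbps");
const p75Up = p75Down / 5; // the usual 5:1 downlink:uplink ratio
const p75Rtt = weightedAverage(countries, "rttMs");
```

Swapping in better weights (or adding the remaining rows) is the whole game; the method itself is just a weighted mean.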

Market Factors

The forces that shape the PC population have been largely fixed for many years. Since 2010, volumes have been on a slow downward glide path, shrinking from ~350MM per year a decade ago to ~260MM in 2018. The pandemic buying spree of 2021 pushed volumes above 300MM per year for the first time in eight years, with the vast majority of those devices being sold at low-end price points — think ~$300 Chromebooks rather than M1 MacBooks.

Lest we assume low-end means "short-lived", recent announcements regarding software support for these devices will considerably extend their impact. This low-end cohort will filter through the device population for years to come, pulling our performance budgets down, even as renewed process improvement is unlocking improved power efficiency and performance at the high end of the first-sale market. This won't be as pronounced as the diffusion of $100 smartphones has been in emerging markets, but the longer life-span of desktops is already a factor in our model.

Test Device Recommendations

Per our methodology from last year, which assumes a 5-8 year replacement cycle for a PC, we update our target date to late 2017 or early 2018, but leave the average selling price fixed between $600-700. Eventually we'll need to take the past couple of years of gyrations in inflation and supply chains into account when making an estimate, but not this year.

So what did $650, give or take, buy in late 2017 or early 2018?

One option was a naff-looking tower from Dell, optimistically pitched at gamers, with a CPU that scores poorly versus a modern phone but blessedly includes 8GB of RAM.

In laptops (the larger segment), ~$650 bought the Lenovo Yoga 720 (12"), with a 2-core (4-thread) Core i3-7100U and 4GB of RAM. Versions with more RAM and a faster chip were available, but cost considerably more than our budget. This was not a fast box. Here's a device with that CPU compared to a modern phone; not pretty:

The phones of wealthy developers absolutely smoke the baseline PC.

It's considerably faster than some devices still being sold to schools, though.

What does this mean for our target devices? There's wild variation in performance per dollar below $600 which will only increase as inflation-affected cohorts grow to represent a larger fraction of the fleet. Intel's move off of 14nm (finally!) also means that gains are starting to arrive at the low end, but in an uneven way. General advice is therefore hard to issue. That said, we can triangulate based on what we know about the market:

My recommendation, then, for someone setting up a new lab today is not to spend more than $350 on a new test device. Consider laptops with chips like the N4120, N4500, or N5105. Test devices should also have no more than 8GB of RAM, and preferably 4GB. The 2021 HP 14 is a fine proxy. The updated ~$375 version will do in a pinch, but try to spend less if you can. Test devices should preferably score no higher than 1,000 in single-core Geekbench 6 tests, a line the HP 14's N4120 easily ducks, clocking in at just over 350.

Takeaways

There's a lot of good news embedded in this year's update. Devices and networks have finally started to get faster (as predicted), pulling budgets upwards.

At the same time, the community remains in denial about the disastrous consequences of an over-reliance on JavaScript. This paints a picture of path dependence — frontend isn't moving on from approaches that hurt users, even as the costs shift back onto teams that have been degrading life for users at the margins.

We can anticipate continued improvement in devices, while network gains will level out as the uneven deployment of 5G stumbles forward. Regardless, the gap between the digital haves and have-nots continues to grow. Those least able to afford fast devices are suffering regressive taxation from developers high on DX fumes.

It's no mystery why folks in the privilege bubble are not building with empathy or humility when nobody calls them to account. What's mysterious is that anybody pays them to do it.

The PM and EM disciplines have utterly failed, neglecting to put business constraints on the enthusiasms of developers. This burden is falling, instead, on users and their browsers. Browsers have had to step in as the experience guardians of last resort, indicating a market-wide botching of the job in technology management ranks and an industry-scale principal-agent issue amongst developers.

Instead of cabining the FP crowd's proclivities for the benefit of the business, managers meekly repeat bullshit like "you can't hire for fundamentals" while bussing in loads of React bootcampers. It is not too much to ask that managers run bake-offs and hire for skills in platform fundamentals that serve businesses better over time. The alternative is continued failure, even for fellow privilege bubble dwellers.

Case in point: this post was partially drafted on airplane wifi, and I can assure you that wealthy folks also experience RTTs north of 500ms and channel capacity in the single-digit Mbps.

Even the wealthiest users step into the wider world sometimes. Are these EMs and PMs really happy to lose that business?

Wealthy users are going to experience networks with properties that are even worse than the 'bad' networks offered to the Next Billion Users. At an altitude of 40k feet and a ground speed of 580 MPH somewhere over Alberta, Canada, your correspondent's bandwidth is scarce, lopsided, and laggy.

Of course, any trend that can't continue won't, and INP's impact is already being felt. The great JavaScript merry-go-round may grind to a stop, but the momentum of consistently bad choices is formidable. Like passengers on a cruise ship ramming a boardwalk at flank speed, JavaScript regret is dawning far too late. As the good ship Scripting shudders and lists on the remains of the ferris wheel, it's not exactly clear how to get off, but the choices that led us here are becoming visible, if only through their negative consequences.

The Great Branch Mispredict

We got to a place where performance has been a constant problem in large part because a tribe of programmers convinced themselves that it wasn't and wouldn't be. The circa '13 narrative asserted that:

It was all bullshit, and many of us spotted it a mile away.

The problem is now visible and demands a solution, but the answers will be largely social, not technical. User-centered values must contest the airtime previously taken by failed trickle-down DX mantras. Only when the dominant story changes will better architectures and tools win.

How deep was the branch? And how many cycles will the fault cost us? If CPUs and networks continue to improve at the rate of the past two years, and INP finally forces a reckoning, the answer might be as little as a decade. I fear we will not be so lucky; an entire generation has been trained to ignore reality, to prize tribalism rather than engineering rigor, and to devalue fundamentals. Those folks may not find the next couple of years to their liking.

Frontend's hangover from the JavaScript party is gonna suck.


  1. The five-second first-load target is arbitrary, and has always been higher than I would prefer. Five seconds on a modern computer is an eternity, but in 2016 I was talked down from my preferred three-second target by Googlers who despaired that "nobody" could hit that mark on the devices and networks of that era.

    This series continues to report budgets with that target, but keen readers will see that I'm also providing three-second numbers. The interactive estimation tool was also updated this year to provide the ability to configure the budget target.

    If you've got thoughts about how this should be set in future, or how it could be handled better, please get in touch. ↩︎ ↩︎

  2. Frontend developers are cursed to program The Devil's Computer. Web apps execute on slow devices we don't spec or provision, on runtimes we can barely reason about, lashed to disks and OSes taxed by malware and equally invasive security software, over networks with the variability of carrier pigeons.

    It's vexing, then, that contemporary web development practice has decided that the way to deliver great experiences is to lean into client CPUs and mobile networks, the most unreliable, unscalable properties of any stack.

    And yet, here we are in 2024, with Reactors somehow still anointed to decree how and where code should run, despite a decade of failure to predict the obvious, or even adapt to the world as it has been. The mobile web overtook desktop eight years ago, and the best time to call bullshit on JS-first development was when we could first see the trends clearly.

    The second best time is now. ↩︎

  3. Engineering is design under constraint, with the goal to develop products that serve users and society.

    The opposite of engineering is bullshit: substituting fairy tales for inquiry and evidence.

    For the frontend to earn and keep its stripes as an engineering discipline, frontenders need to internalise the envelope of what's possible on most devices. ↩︎

  4. For at least a decade to come, 5G will continue to deliver unevenly depending on factors including building materials, tower buildout, supported frequencies, device density, radio processing power, and weather. Yes, weather (PDF).

    Even with all of those caveats, 5G networks aren't the limiting factor in wealthy geographies; devices are. It will take years for the deployed base to be fully replaced with 5G-capable handsets, and we should expect the diffusion to be "lumpy", with wealthy markets seeing 5G device saturation at nearly all price points well in advance of less affluent countries where capital availability for 5G network roll-outs will dominate. ↩︎ ↩︎

  5. Ookla! Opensignal! Cloudflare! Akamai! I beseech thee, hear my plea and take pity, oh mighty data collectors.

    Whilst you report medians and averages (sometimes interchangeably, though I cannot speculate why), you've stopped publishing usable histogram information about the global situation, making the reports nearly useless for anything but telco marketing. Opensignal has stopped reporting meaningful 4G data at all, endangering any attempt at making sense.

    Please, I beg of you, publish P50, P75, P90, and P95 results for each of your market reports! And about the global situation! Or reach out directly and share what you can in confidence so I can generate better guidance for web developers. ↩︎

  6. Both the benchmark A51 and Pixel 4a devices were eventually sold in 5G variants (A51 5G, Pixel 4a 5G), but at a price of $500 brand-new, unlocked at launch, making them more than 40% above the price of the base models and well above our 2020-2021 ASP of $350-$375. ↩︎

  7. Samsung's lineup is not uniform worldwide, with many devices being region-specific. The closest modern (Western) Samsung device to the A51 is the Samsung A23 5G, which scores in the range of the Pixel 4a. As a result of the high CPU score and 5G modem, it's hard to recommend it — or any other current Samsung model — as a lab replacement. Try the Nokia G100 instead. ↩︎

  8. The idea that any of the publicly available data sets is globally representative should set off alarms.

    The obvious problems include (but are not limited to):

    • geographic differences in service availability and/or deployed infrastructure,
    • differences in market penetration of observation platforms (e.g., was a system properly localised? Equally advertised?), and
    • mandated legal gaps in coverage.

    Of all the hand-waving we're doing to construct an estimate, this is the biggest leap and one of the hardest to triangulate against. ↩︎

Why Are Tech Reporters Sleeping On The Biggest App Store Story?

Browsers are the most likely disruptor of the mobile duopoly but you'd never know it reading Wired or The Verge.

The tech news is chockablock[1] with antitrust rumblings and slow-motion happenings. Eagle-eyed press coverage, regulatory reports, and legal discovery have comprehensively documented the shady dealings of Apple and Google's app stores. Pressure for change has built to an unsustainable level. Something's gotta give.

This is the backdrop to the biggest app store story nobody is writing about: on pain of steep fines, gatekeepers are opening up to competing browsers. This, in turn, will enable competitors to replace app stores with directories of Progressive Web Apps. Capable browsers that expose web app installation and powerful features to developers can kickstart app portability, breaking open the mobile duopoly.

But you'd never know it reading Wired or The Verge.

With shockingly few exceptions, coverage of app store regulation assumes the answer to crummy, extractive native app stores is other native app stores. This unexamined framing shapes hundreds of pieces covering regulatory events, including by web-friendly authors. The tech press almost universally fails to mention the web as a substitute for native apps and fails to inform readers of its potential to disrupt app stores.

As Cory Doctorow observed:

"An app is just a web-page wrapped in enough IP to make it a crime to defend yourself against corporate predation."

The implication is clear: browsers unchained can do to mobile what the web did to desktop, where more than 70% of daily "jobs to be done" happen on the web.

Replacing mobile app stores will look different than the web's path to desktop centrality, but the enablers are waiting in the wings. It has gone largely unreported that Progressive Web Apps (PWAs) have been held back by Apple and Google denying competing browsers access to essential APIs.[2]

Thankfully, regulators haven't been waiting on the press to explain the situation. Recent interventions into mobile ecosystems include requirements to repair browser choice, and the analysis backing those regulations takes into account the web's role as a potential competitor (e.g., Japan's JFTC (pdf)).

Regulators seem to understand that:

Apple and Google saw what the web did to desktop, and they've laid roadblocks to the competitive forces that would let history repeat on smartphones.

The Buried Lede

The web's potential to disrupt mobile is evident to regulators, advocates, and developers. So why does the tech news fail to explain the situation?

Consider just one of the many antitrust events of recent months. It was covered by The Verge, Mac Rumors, Apple Insider, and more.

None of the linked articles note browser competition's potential to upend app stores. Browsers unshackled have the potential to free businesses from build-it-twice proprietary ecosystems, end rapacious app store taxes, pave the way for new OS entrants — all without the valid security concerns side-loading introduces.

Lest you think this an isolated incident, this article on the impact of the EU's DMA lacks any hint of the web's potential to unseat app stores. You can repeat this trick with any DMA story from the past year. Or spot-check coverage of the NTIA's February report.

Reporters are "covering" these stories in the lightest sense of the word. Barrels of virtual ink have been spilt documenting unfair app store terms, conditions, and competition. And yet.

Disruption Disrupted

In an industry obsessed with "disruption," why is this David vs. Goliath story going untold? Some theories, in no particular order.

First, Mozilla isn't advocating for a web that can challenge native apps, and none of the other major browser vendors are telling the story either. Apple and Google have no interest in seeing their lucrative proprietary platforms supplanted, and Microsoft (your narrator's employer) famously lacks sustained mobile focus.

Next, it's hard to overlook that tech reporters live like wealthy people, iPhones and all. From that vantage point, it's often news that the web is significantly more capable on other OSes (never mind that they spend much of every day working in a desktop browser). It's hard to report on the potential of something you can't see for yourself.

Also, this might all be Greek. Reporters and editors aren't software engineers, so the potential of browser competition can remain understandably opaque. Stories that mention "alternative app stores" generally fail to note that these stores may not be as safe, that OS restrictions on features won't disappear just because of a different distribution mechanism, or that the security track records of the existing duopolist app stores are sketchy at best. Under these conditions, it's asking a lot to expect details-based discussion of alternatives, given the many technical wrinkles. Hopefully, someone can walk them through it.

Further, market contestability theory has only recently become a big part of the tech news beat. Regulators have been writing reports to convey their understanding of the market, and to shape effective legislation that will unchain the web, but smart folks unversed in both antitrust and browser minutiae might need help to pick up what regulators are putting down.

Lastly, it hasn't happened yet. Yes, Progressive Web Apps have been around for a few years, but they haven't had an impact on the iPhones that reporters and their circles almost universally carry. It's much easier to get folks to cover stories that directly affect them, and this is one that, so far, largely hasn't.

Green Shoots

The seeds of web-based app store dislocation have already been sown, but the chicken-and-egg question at the heart of platform competition looms.

On the technology side, Apple has been enormously successful at denying essential capabilities to the web through a strategy of compelled monoculture combined with strategic foot-dragging.


As an example, the eight-year delay in implementing Push Notifications for the web[3] kept many businesses from giving the web a second thought. If they couldn't re-engage users at the same rates as native apps, the web might as well not exist on phones. This logic has played out on a loop over the last decade, category-by-category, with gatekeepers preventing competing browsers from bringing capabilities to web apps that would let them supplant app stores[2:1] while simultaneously keeping them from being discovered through existing stores.

Proper browser choice could upend this situation, finally allowing the web to provide "table stakes" features in a compelling way. For the first time, developers could bring the modern web's full power to wealthy mobile users, enabling the "write once, test everywhere" vision, and cut out the app store middleman — all without sacrificing essential app features or undermining security.

Sunsetting the 30% tax requires a compelling alternative, and Apple's simultaneous underfunding of Safari and compelled adoption of its underpowered engine have interlocked to keep the web out of the game. No wonder Apple is massively funding lobbyists, lawyers, and astroturf groups to keep engine diversity at bay while belatedly battening the hatches.

On the business side, managers think about "mobile" as a category. Rather than digging into the texture of iOS, Android, and the differing web features available on each, businesses tend to bulk accept or reject the app store model. One sub-segment of "mobile" growing the ability to route around highway robbery Ts & Cs is tantalising, but not enough to change the game; the web, like other metaplatforms, is only a disruptive force when pervasive and capable.[4]

A prohibition on store discovery for web apps has buttressed Apple's denial of essential features to browsers:

Even if developers overcome the ridiculous hurdles that Apple's shoddy browser engine throws up, they're still <a href='https://developer.apple.com/app-store/review/guidelines/#2.4'>prevented by Apple policy</a> from making interoperable web apps discoverable where users look for them.

Google's answer to web apps in Play is a dog's breakfast, but it does at least exist for developers willing to put in the effort, or for teams savvy enough to reach for PWA Builder.

Recent developments also point to a competitive future for capable web apps.

First, browser engine choice should become a reality on iOS in the EU in 2024, thanks to the plain language of the DMA. Apple will, of course, attempt to delay the entry of competing browsers through as-yet-unknown strategies, but the clock is ticking. Once browsers can enable capable web apps with easier distribution, the logic of the app store loses a bit of its lustre.

Work is also underway to give competing browsers a chance to facilitate PWAs that can install other PWAs. Web App Stores would then become a real possibility through browsers that support them, and we should expect that regulatory and legislative interventions will facilitate this in the near future. Freed from the need to police security (browsers have that covered) and handle distribution (websites update themselves), PWA app stores like store.app can become honest-to-goodness app management surfaces that can safely facilitate discovery and sync.

PWA app stores like Appscope and store.app exist, but they're hobbled by gatekeepers that have denied competing browsers access to APIs that could turn PWA directories into real contenders.

It's no surprise that Apple and Google have kept private the APIs needed to make this better future possible. They built the necessary infrastructure for the web to disrupt native, then kept it to themselves. This potential has remained locked away within organisations politically hamstrung by native app store agendas. But all of that is about to change.

This raises the question: where's the coverage? This is the most exciting moment in more than 15 years for the web vs. native story, but the tech press is whiffing it.

A New Hope

2024 will be packed to the gills with app store and browser news, from implementation of the DMA, to the UK's renewed push into mobile browsers and cloud gaming, to new legislation arriving in many jurisdictions, to the first attempts at shipping iOS ports of Blink and Gecko browsers. Each event is a chance to inform the public about the already-raging battle for the future of the phone.

It's still possible to reframe these events and provide better context. We need a fuller discussion about what it will mean for mobile OSes to have competing native app stores when the underlying OSes are foundationally insecure. There are also existing examples of ecosystems with this sort of choice (e.g., China), and more needs to be written about the implications for users and developers. Instead of nirvana, the insecure status quo of today's mobile OSes, combined with (even more) absentee app store purveyors, turns side-loading into an alternative form of lock-in, with a kicker of added insecurity for users. With such a foundation, the tech-buying public could understand why a browser's superior sandboxing, web search's better discovery, and frictionless links are better than dodgy curation side-deals and "beware of dog" sign security.

The more that folks understand the stakes, the more likely tech will genuinely change for the better. And isn't that what public interest journalism is for?

Thanks to Charlie, Stuart Langridge, and Frances Berriman for feedback on drafts of this post.


  1. Antitrust is now a significant tech beat, and recent events frequently include browser choice angles because regulators keep writing regulations that will enhance it. This beat is only getting more intense, giving the tech press ample column inches to explain the status quo more deeply and educate readers on the most important issues.

    In just the last two months:

    All but one of the 19 links above are from just the last 60 days, a period which includes a holiday break in the US and Europe. With the EU's DMA coming into force in March and the CMA back on the job, browser antitrust enforcement is only accelerating. It sure would be great if reporters could occasionally connect these dots. ↩︎

  2. The stories of how Apple and Google have kept browsers from becoming real app stores differ greatly in their details, but the effects have been nearly identical: only their browsers could offer installation of web apps, and those browsers have done shockingly little to support web developers who want to depend on the browser as the platform.

    The ways that Apple has undermined browser-based stores are relatively well known: no equivalent to PWA install or "Smart Banners" for the web, no way for sites to suppress promotion of native apps, no ability for competing browsers to trigger homescreen installation until just this year, etc. etc. The decade-long build-up of Apple's many and varied attacks on the web as a platform is a story that's both tired and under-told.

    Google's malfeasance has gotten substantially less airtime, even among web developers – never mind the tech press.

    The story picks up in 2017, two years after the release of PWAs and Push Notifications in Chrome. At the time, the PWA install flow was something of a poorly practised parlour trick: installation used an unreliable homescreen shortcut API that failed on many devices with OEM-customised launchers. The shortcut API also came laden with baggage that prevented effective uninstall and cross-device sync.

    To improve this situation, "WebAPKs" were developed. This new method of installation allows for deep integration with the OS, similar to the Application Identity Proxy feature that Windows lets browsers provide for PWAs, with one notable exception: on Android, only Chrome gets to use the WebAPK system.

    Without getting into the weeds, suffice it to say many non-Chrome browsers requested access. Only Google could meaningfully provide this essential capability across the Android ecosystem. So important were WebAPKs that Samsung gave up begging and reverse-engineered the system for their browser on Samsung devices. This only worked on Samsung phones, where Suwon's engineers could count on device services and system keys not available elsewhere. That hasn't helped other browsers, and it certainly isn't an answer to an ecosystem-level challenge.

    Without WebAPK API access, competing browsers can't innovate on PWA install UI and can't meaningfully offer PWA app stores. Instead, the ecosystem has been left to limp along at the excruciating pace of Chrome's PWA UI development.

    Sure, Chrome's PWA support has been a damn sight better than Safari's, but that's just damning with faint praise. Both Apple and Google have done their part to quietly engineer a decade of unchallenged native app dominance. Neither can be trusted as exclusive stewards of web competitiveness. Breaking the lock on the doors holding back real PWA installation competition will be a litmus test for the effectiveness of regulation now in-flight. ↩︎ ↩︎

  3. Push Notifications were, without exaggeration, the single most requested mobile Safari feature in the eight years between Chromium browsers shipping and Apple's 2023 capitulation.

    It's unedifying to recount all of the ways Apple prevented competing iOS browsers from implementing Push while publicly gaslighting developers who requested this business-critical feature. Over and over and over again. It's also unhelpful to fixate on the runarounds that Apple privately gave companies with enough clout to somehow find an Apple rep to harangue directly. So, let's call it water under the bridge. Apple shipped, so we're good, right?

    Right?

    I regret to inform you, dear reader, that it is not, in fact, "good".

    Despite most of a decade to study up on the problem space, and nearly 15 years of experience with Push, Apple's implementation is anything but complete.

    The first few releases exposed APIs that hinted at important functionality that was broken or missing: features as core as closing notifications, or updating their text when new data arrives. The implementation of Push that Apple shipped could not allow a chat app to show only the latest message, or a summary. Instead, Apple's broken system leaves a stream of notifications in the tray for every message.
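    For readers unfamiliar with the mechanism: the standard way a chat app collapses notifications is the Notification `tag` option, where showing a new notification with the same tag replaces the previous one instead of stacking another in the tray. A minimal sketch of that pattern (the helper name and payload shape here are illustrative, not from any real app):

    ```javascript
    // Sketch: collapsing chat push notifications via the standard
    // Notification `tag` option. Reusing the same tag means the new
    // notification replaces the prior one rather than piling up.
    // `chatNotificationOptions` and the payload shape are hypothetical.
    function chatNotificationOptions(latestMessage, unreadCount) {
      return {
        body: unreadCount > 1
          ? `${latestMessage} (+${unreadCount - 1} more)`
          : latestMessage,
        tag: 'chat-thread', // stable tag: replace, don't stack
        renotify: true,     // still alert the user on replacement
      };
    }

    // In a service worker, the push handler would apply it like so:
    // self.addEventListener('push', (event) => {
    //   const { message, unread } = event.data.json();
    //   event.waitUntil(self.registration.showNotification(
    //     'Chat', chatNotificationOptions(message, unread)));
    // });
    ```

    When a browser's implementation doesn't honour tag-based replacement (or doesn't let a worker close stale notifications), no amount of app-side cleverness can stop the tray from flooding.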

    Many important features didn't work. Some still don't. And the pathetic set of customisations provided for notifications is a sick, sad joke.

    Web developers have once again been left to dig through the wreckage to understand just how badly Apple's cough "minimalist" cough implementation is compromised. And boy howdy, is it bad.

    Apple's implementation might have passed surface-level tests (gotta drive up that score!), but it's unusable for serious products. It's possible to draw many possible conclusions from this terrible showing, but even the relative charity of Hanlon's Razor is damning.

    Nothing about this would be worse than any other under-funded, trailing-edge browser over the past three decades (which is to say, a bloody huge problem), except for Apple's well-funded, aggressive, belligerent ongoing protest to every regulatory attempt to allow true browser choice for iPhone owners.

    In the year 2024, you can have any iOS browser you like. You can even set them as default. They might even have APIs that look like they'll solve important product needs, but as long as they're forced to rely on Apple's shit-show implementation, the web can't ever be a competitive platform.

    When Apple gets to define the web's potential, the winner will always be native, and through it, Apple's bottom line. ↩︎

  4. The muting effect of Apple's abuse of monopoly over wealthy users to kneecap the web's capabilities is aided by the self-censorship of web developers. The values of the web are a mirror world to native, where developers are feted for adopting bleeding-edge APIs. On the web, features aren't "available" until 90+% of all users have access to them. Because iOS is at least 20% of the pie, web developers don't go near features Apple fails to support. Which is a lot.

    caniuse.com's "Browser Score" is one way to understand the scale of the gap in features that Apple has forced on all iOS browsers.
    The Web Platform Tests dashboard highlights 'Browser Specific Failures', which only measure failures in tests for features the browser claims to support. Not only are iOS browsers held back by Apple's shockingly poor feature support, but the features that _are_ available are broken so often that many businesses feel they have no option but to retreat to native APIs that Apple doesn't break on a whim, forcing the logic of the app store on them if they want to reach valuable users.

    Apple's pocket veto over the web is no accident, and its abuse of that power is no bug.

    Native app stores can only take an outsized cut if the web remains weak and developers stay dependent on proprietary APIs to access commodity capabilities. A prohibition on capable engines prevents feature parity, suppressing competition. A feature-poor, unreliable open web is essential to prevent the dam from breaking.

    Why, then, have competing browser makers played along? Why aren't Google, Mozilla, Microsoft, and Opera on the ramparts, waving the flag of engine choice? Why do they silently lend their brands to Apple's campaign against the web? Why don't they rename their iOS browsers to "Chrome Lite" or "Firefox Lite" until genuine choice is possible? Why don't they ask users to write their representatives or sign petitions for effective browser choice? It's not like they shrink from it for other worthy causes.

    I'm shocked but not surprised by the tardiness of browser bosses to seize the initiative. Instead of standing up to unfair terms, they've rolled over time and time again. It makes a perverse sort of sense.

    More than 30 years have passed since we last saw effective tech regulation. The careers of those at the top have been forged under the unforgiving terms of late-stage, might-makes-right capitalism, rather than the logic of open markets and standards. Today's bosses didn't rise by sticking their necks above the parapets to argue virtue and principle. At best, they kept the open web dream alive by quietly nurturing the potential of open technology, hoping the situation would change.

    Now it has, and yet they cower.

    Organisations that value conflict aversion and "the web's lane is desktop" thinking get as much of it as they care to afford. ↩︎

  5. Recall that Apple won an upset victory in March after litigating the meaning of the word "may" and arguing that the CMA wasn't wrong to find after multiple years of investigations that Apple were (to paraphrase) inveterate shitheels, but rather that the CMA waited too long (six months) to bring an action which might have had teeth.

    Yes, you're reading that right; Apple's actual argument to the Competition Appeal Tribunal amounted to a mashup of rugged, free-market fundamentalist "but mah regulatory certainty!", performative fainting into strategically placed couches, and feigned ignorance about issues it knows it'll have to address in other jurisdictions.

    Thankfully, the Court of Appeals was not to be taken for fools. Given the harsh (by British standards) language of the reversal, we can hope a chastened Competition Appeal Tribunal will roll over less readily in future. ↩︎

  6. If you're getting the sense that legalistic hair-splitting is what Apple spends its billion-dollar-per-year legal budget on because it has neither the facts nor real benefits to society on its side, wait 'till you hear about some of the stuff it filed with Japan's Fair Trade Commission!

    A clear strategy is being deployed. Apple:

    • First claims there's no there there (pdf). When that fails...
    • Claims competitors that it has expressly ham-strung are credible substitutes. When that fails...
    • Claims security would suffer if reasonable competition were allowed. Rending of garments is performed while prophets of doom recycle the script that the sky will fall if competing browsers are allowed (which would, in turn, expand the web's capabilities). Many treatments of this script fill the inboxes of regulators worldwide. When those bodies investigate, e.g. the history of iOS's forced-web-monoculture insecurity, and inevitably reject these farcical arguments, Apple...
    • Uses any and every procedural hurdle to prevent intervention in the market it has broken.

    The modern administrative state indulges firms with "as much due process as money can buy", and Apple knows it, viciously contesting microscopic points. When bluster fails, huffingly implemented, legalistic, hair-splitting "fixes" are deployed on the slowest possible time scale. This strategy buys years of delay, and it's everywhere: browser and mail app defaults, payment alternatives, engine choice, and right-to-repair. Even charging cable standardisation took years longer than it should have thanks to stall tactics. This maximalist, joined-up legal and lobbying strategy works to exhaust regulators and bamboozle legislators. Delay favours the monopolist.

    A firm that can transform the economy of an entire nation just by paying a bit of the tax it owes won't even notice a line item for lawyers to argue the most outlandish things at every opportunity. Apple (correctly) calculates that regulators are gun-shy about punishing them for delay tactics, so engagement with process is a win by default. Compelling $1600/hr white-shoe associates to make ludicrous, unsupportable claims is a de facto win when delay brings in billions. Regulators are too politically cowed and legally ham-strung to do more, and Apple plays process like a fiddle. ↩︎
