What has perhaps earned proponents of JSON-files-that-point-to-JPGs less scorn, however, are attempts to affiliate their technologies with the web when, in fact, the two are technically unrelated by design. The politics of blockchain proponents have led them to explicitly reject the foundational protocols and technical underpinnings of the web. "web3" tech can't be an evolution of the web because it was designed to stand apart.
What is the web proper?
Cribbing from Dieter Bohn's definition, the web is the set of HTML documents (and subresources they reference) currently reachable via links.
To be on the web is to be linked to and linked from — very literally, connected by edges in a great graph. And the core of that connection? DNS, the "Domain Name System" that allows servers running at opaque and forgettable IP addresses to be found at friendlier names, like infrequently.org.
DNS underpins URLs. URLs and links make the web possible. Without these indirections, the web would never have escaped the lab.
These systems matter because the web is for humans, and humans have feeble wetware that doesn't operate naturally on long strings of semi-random numbers and characters. This matters to claims of decentralisation because, underneath DNS, the systems that delivered this very page you're reading to your screen are, in fact, distributed and decentralised.
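The indirection DNS provides is visible in the anatomy of any URL. A minimal sketch using Python's standard library (the URL below is illustrative, not a real page) shows the layers: a scheme, a DNS name that stands in for whatever IP address currently serves the site, and a path within it:

```python
from urllib.parse import urlparse

# A URL bundles several indirections. The hostname ("infrequently.org")
# stands in for an IP address that may change over time without breaking
# any link that points at the name — that's what makes links durable.
url = "https://infrequently.org/example-post/"
parts = urlparse(url)

print(parts.scheme)    # "https"  — how to fetch it
print(parts.hostname)  # "infrequently.org" — resolved via DNS at fetch time
print(parts.path)      # "/example-post/" — where on that host
```

Swap the server behind the name and every existing link keeps working; address content by raw IP (or by hash) and that resilience disappears.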
Naming is centralising.
"web3" partisans often cast a return to nameless, unmemorable addresses as a revolution when their systems either rely on the same centralising mechanisms or seek to re-create them under new (less transparent, equally rent-seeking) management. As a technical matter, browsers are capable of implementing content-addressed networking, thanks to Web Packages, without doing violence to the web's guarantees of safety in the process. Still, it turns out demand for services of this sort hasn't been great, in part because of legitimate privacy concerns.
"web3" proponents variously dismiss and (sometimes) claim to solve privacy concerns, but the technical reality is less hospitable: content-addressed data must be either fully public or rely on obscurity.
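The tension is easy to see in miniature. In a content-addressed system, a blob's name is derived from its bytes, which is exactly what makes verification trivial and private distribution hard. A generic sketch of the idea (not any particular "web3" protocol):

```python
import hashlib

def content_address(data: bytes) -> str:
    # In a content-addressed store, the "name" of a blob is derived from
    # the blob itself, so any peer can verify what it received.
    return hashlib.sha256(data).hexdigest()

page = b"<html>hello</html>"
addr = content_address(page)

# Verification is trivial for anyone holding the bytes...
assert content_address(page) == addr

# ...which is why the bytes must be fully public (or merely obscured):
# nothing in the address encodes an authorised reader, and any change to
# the content yields an entirely different address.
tampered = b"<html>hello!</html>"
assert content_address(tampered) != addr
```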
Accessing "web3"-hosted files is less private because the architecture of decentralisation chosen by "web3" systems eschews mechanisms that build trust in the transport layer. A fully public, immutable ledger of content, offered by servers you don't control and can't attribute or verify, over links you can't trust, is hardly a recipe for privacy. One could imagine blockchain-based solutions to some of these problems, but this isn't the focus of "web3" boosters today.
Without DNS-backed systems like TLS there's little guarantee that content consumption will prevent tracking by parties even more unknowable than in the "web 2.0" mess that "web3" advocates decry.
Hanlon's Razor demands we treat these errors and omissions as sincere, if misguided.
What's less excusable is the appropriation of the term "web" for things including (but not limited to):
Crypto project "standards"
Despite forceful assertions that these systems represent the next evolution of "the web", they technically have no connection to it.
This takes doing! The web is vastly capable, and browsers today are in the business of providing access to nearly every significant subsystem of modern commodity computers. If "web3" were truly an evolution of the web, surely there would be some technical linkage... and yet.
Having rejected the foundational protocols of the web, these systems sail a parallel plane, connecting only over "bridges" and "gateways" which, in turn, give those who run the gateways incredible centralised power.
Browsers aren't going to engineer this stuff into the web's liberally licensed core because the cryptocurrency community hasn't done the necessary licensing and standards groundwork. Closing these gaps would take intricate toil: concrete proposals, compatible licensing, and demonstrations of competent governance. Some of that is possible. But the community waving the banner of "web3" isn't showing up and isn't doing that work.
What this amounts to, then, is web-washing.
The term "web3" is a transparent attempt to associate technologies diametrically opposed to the web with its success; an effort to launder the reputation of systems that have most effectively served as vehicles for money laundering, fraud, and the acceleration of ransomware using the good name of a system that I help maintain.
Perhaps this play to appropriate the value of the web is what it smells like: a desperate move by bag-holders to lure in a new tranche of suckers, allowing them to clear speculative positions. Or perhaps it's honest confusion. Technically speaking, whatever it is, it isn't the web or any iteration of it.
The worst versions of this stuff use profligate, world-burning designs that represent a threat to the species. There's work happening in some communities to address those challenges, and that's good (if overdue). Even so, if every technology jockeying for a spot under the "web3" banner evolves beyond proof-of-work blockchains, these systems will still not be part of the web because they were designed not to be.
That could change. Durable links could be forged, but I see little work in that direction today. For instance, systems like IPFS could be made to host Web Packages which would (at least for public content) create a web-centric reason to integrate the protocol into browsers. Until that sort of work is done, folks using the "web3" coinage unironically are either grifters or dupes. Have pity, but don't let this nonsense slide.
"web3" ain't the web, and the VCs talking their own book don't get the last word, no matter how much dirty money they throw at it.
Experts tend to treat Apple's arguments with disdain, but this skepticism is expressed in technical terms that can obscure deeper issues. Apple's response to the U.S. House Antitrust Subcommittee is its fullest statement on the subject, and it provides a helpful, less technical framing for discussing how browser engine choice relates to power over software distribution:
4. Does Apple restrict, in any way, the ability of competing web browsers to deploy their own web browsing engines when running on Apple's operating system? If yes, please describe any restrictions that Apple imposes and all the reasons for doing so. If no, please explain why not.
The purpose of this rule is to protect user privacy and security. Nefarious websites have analysed other web browser engines and found flaws that have not been disclosed, and exploit those flaws when a user goes to a particular website to silently violate user privacy or security. This presents an acute danger to users, considering the vast amount of private and sensitive data that is typically accessed on a mobile device.
By requiring apps to use WebKit, Apple can rapidly and accurately address exploits across our entire user base and most effectively secure their privacy and security. Also, allowing other web browser engines could put users at risk if developers abandon their apps or fail to address a security flaw quickly. By requiring use of WebKit, Apple can provide security updates to all our users quickly and accurately, no matter which browser they decide to download from the App Store.
WebKit is an open-source web engine that allows Apple to enable improvements contributed by third parties. Instead of having to supply an entirely separate browser engine (with the significant privacy and security issues this creates), third parties can contribute relevant changes to the WebKit project for incorporation into the WebKit engine.
Let's address these claims from most easily falsified to most contested.
The open source nature of WebKit is indisputable as a legal technicality. Anyone who cares to download and fork the code can do so. To the extent they are both skilled in browser construction and have the freedom to distribute modified binaries, WebKit's source code can serve as the basis for new engines. Anyone can fork WebKit and improve it, but they cannot ship enhancements to iOS users of their products.
Apple asserts this is fine because WebKit's openness extends to open governance regarding feature additions. It must know this is misleading.
Presumably, Apple's counsel included this specious filigree to distract from the reality that Apple rarely accepts outside changes that push the state of the art forward. Here I speak from experience.
From 2008 to 2013, the Chromium project was based on WebKit, and a growing team of Chrome engineers began to contribute heavily "upstream." I helped lead the team that developed Web Components. Our difficulty in trying to develop these features in WebKit cannot be overstated. The eventual Blink fork was precipitated by an insurmountable difficulty in doing precisely what Apple suggested to Congress: contributing new features to WebKit.
The differing near-term objectives of browser teams often make potential additions contentious, and only competition has been shown to reliably drive consensus. Every team has more than enough to do, and time spent even considering new features can be seen as a distraction. Project owners fiercely guard the integrity of their codebases. Until and unless they become convinced of the utility of a feature, "no" is the usual response. If there is no competition to force the issue, it can also be the final answer.
Browser engines are large projects, necessitating governance through senior engineer code review. There tend to be very few experts empowered to review each area relative to the number of engineers contributing code.
It's inevitable that managers will communicate disinterest in continuing collaboration if they find their most senior engineers spending a great deal of time reviewing code for features they have no interest in and will disable ("flag off") in their own products. The pace of code reviews needed to finish a feature in this state can taper off or dry up completely, frustrating collaborators on both sides.
When browsers provide their own engines (an "integrated browser"), it's possible to disagree in standards venues, return to one's corner, and deliver one's best design to developers (responsibly, one hopes). Developers can then provide feedback and lobby other vendors to adopt (or re-design) these features. This process can be messy and slow, but it never creates a political blockage for developing new capabilities for the web.
WebKit, by contrast, has in recent years gone so far as to publicly, pre-emptively "decline to implement" a veritable truckload of features that some vendors feel are essential and would be willing to ship in their products.
The signal to parties who might contribute code for these features could scarcely be clearer: your patch is unlikely to be accepted into WebKit.
Suppose by some miracle a "controversial" feature is merged into WebKit. This is no guarantee that iOS browsers will gain access to it. Features in this state have lingered behind flags for years, ensuring they are not available in either Safari or competing iOS browsers.
When priority disagreements inevitably arise, competing iOS browsers cannot reliably demonstrate a feature is safe or well received by web developers by contributing to WebKit. Potential sponsors of this work won't dare the expense of an attempt. Apple's opacity and history of challenging collaboration have done more than enough to discourage ambitious participants.
Other mechanisms for extending features of third party browsers may be possible (in some areas, with low fidelity; more on that below), but contributions to WebKit are not a viable path for a majority of potential additions.
It is shocking, but unsurprising, that Apple felt compelled to mislead Congress on these points. The facts are not in their favour, but few legislative staffers have enough context to see through debates about browser internals.
The most convincing argument in Apple's 2019 response to the U.S. House Judiciary Committee is rooted in security. Apple argues it bans other engines from iOS because:
Nefarious websites have analysed other web browser engines and found flaws that have not been disclosed, and exploit those flaws when a user goes to a particular website to silently violate user privacy or security.
As a result of this threat landscape, responsible browser vendors work to put untrusted code (everything downloaded from the web) in "sandboxes"; restricted execution environments that are given fewer privileges than regular programs. Modern browsers layer protections on top of OS-level sandboxes, bolstering the default configuration with further limits on "renderer" processes.
The incredibly powerful devices Apple sells provide more than enough resources to raise such software defences, yet iOS users are years behind in receiving them and can't access them by switching browser. Apple's under-investment in security combines with its uniquely anti-competitive policies to ensure these gaps cannot be filled, no matter how conscientious iOS users are about their digital hygiene.
Leading browsers are also adopting more robust processes for closing the "patch gap". Since all engines contain latent security bugs, precautions to insulate users from partial failure (e.g., sandboxing) and the velocity with which fixes reach end-user devices are paramount in determining the security posture of modern browsers. Apple's rather large patch gap serves as an argument in favour of engine choice, all else being equal. Cupertino's industry-lagging pace in adding additional layers of defence does not inspire confidence, either.
This brings us to the final link in the chain of structural security mitigations: the speed of delivering updates to end-users. A fix landing in an engine's source repository has no impact on its own; only when fixes are rolled into new binaries, and those binaries are delivered to users' devices, do patches become protections.
Apple's reply hints at the way its model for delivering fixes differs from all of its competitors:
[...] By requiring apps to use WebKit, Apple can rapidly and accurately address exploits across our entire user base and most effectively secure their privacy and security.
By requiring use of WebKit, Apple can provide security updates to all our users quickly and accurately, no matter which browser they decide to download from the App Store.
Aside from Chrome OS (and not for much longer), I'm aware of no modern browser that continues the medieval practice of requiring users to download and install updates to their operating system to apply browser patches. Lest Chrome OS's status quo seem a defence of iOS, know that the cost to end-users of these updates in terms of time and effort is night-and-day, thanks to near-instant, transparent updates on restart. If only my (significantly faster) iOS devices updated this transparently and quickly!
Lower-friction updates lead to faster patch application, keeping users safer, and Chrome OS is miles ahead of iOS in this regard.
All other browsers update "out of band" from the OS, including the WebView system component on Android. The result is that, for users with equivalent connectivity and disk space, out-of-band patches are installed on devices significantly faster.
This makes intuitive sense: iOS update downloads are large and installing them can disrupt using a device for as much as a half hour. Users are understandably hesitant to incur these interruptions. Browser updates delivered out-of-band can be smaller and faster to apply, often without explicit user intervention. In many cases, simply restarting the browser delivers improved security updates.
Differences in uptake rates matter because it's only by updating a program on users' devices that fixes can begin to protect them. iOS's high-friction engine updates are a double strike against its security posture, albeit one Cupertino has attempted to spin as a positive.
The philosophical differences underlying software update mechanisms run deep. All other projects have learned through long experience to treat operating systems as soft targets that must be defended by the browser, rather than as the ultimate source of user defence. To the extent that the OS is trustworthy, that's a "nice to have" property that can add additional protection, but it is not treated as a fundamental protection in and of itself. Browser engineers outside the WebKit and Safari projects are habituated to thinking of OS components as systems not designed for handling unsafe third-party input. Mediating layers are therefore built to insulate the OS from malicious sites.
Apple, by contrast, tends to rely on OS components directly, leaning on fixes within the OS to repair issues which other projects can patch at a higher level. Apple's insistence on treating the OS as a single, hermetic unit slows the pace of fixes reaching users, and results in reduced flexibility in delivering features to web developers. While iOS has decent baseline protections, being unable to layer on extra levels of security is a poor trade.
This arrangement is, however, maximally efficient for Apple in terms of staffing. But is HR cost efficiency for Apple the most important feature of a web engine? And shouldn't users be able to choose engines that are willing to spend more on engineering to prevent latent OS issues from becoming security problems? By maintaining a thin artifice of perfect security, Apple's iOS monoculture renders itself brittle in the face of new threats, leaving users without the benefits of the layered paranoia that the most secure browsers running on the best OSes can provide. As we'll see in a moment, Apple's claim to keep users safe when using alternative browsers by fusing engine updates to the OS is, at best, contested.
Instead of raising the security floor, Apple has set a cap while breeding a monoculture that ensures all iOS browsers are vulnerable to identical attacks, no matter whose icon is on the home screen.
Given Apple's response to Congress, it seems Cupertino is unfamiliar with the way iOS browsers other than Safari are constructed. Because it forbids integrated browsers, developers have no choice but to use Apple's own APIs to construct message-passing mechanisms between the privileged Browser Process and Renderer Processes sandboxed by Apple's WebKit framework.
These message-passing systems make it possible for WebKit-based browsers to add a limited subset of new features, even within the confines of Apple's WebKit binary. With this freedom comes the exact sort of liabilities that Apple insists it protects users from by fixing the full set of features firmly at the trailing edge.
To drive the point home: alternative browsers can include security issues every bit as severe as those Apple nominally guards against because of the side-channels provided by Apple's own WebKit framework. Any capability or data entrusted to the browser process can, in theory, be put at risk by these additional features.
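The shape of the risk can be sketched generically. The class and handler names below are invented for illustration; they model the kind of allowlisted message bridge iOS browsers layer atop Apple's WebKit framework, where every handler registered on the privileged side becomes reachable from (potentially compromised) renderer-side code:

```python
# Toy model: a privileged browser process exposing message handlers to a
# sandboxed renderer. Each entry in `handlers` is attack surface — a
# renderer compromise can invoke any of them with attacker-chosen args.

class PrivilegedProcess:
    def __init__(self):
        self.handlers = {
            "open_tab": lambda url: f"opened {url}",
            # An over-broad handler puts privileged data one message away
            # from untrusted web content:
            "read_clipboard": lambda: "clipboard contents",
        }

    def on_message(self, name, *args):
        handler = self.handlers.get(name)
        if handler is None:
            raise PermissionError(f"unknown message: {name}")
        return handler(*args)

browser = PrivilegedProcess()
print(browser.on_message("open_tab", "https://example.com"))
```

Every feature added this way re-implements, through a side channel, work an integrated engine would do behind a battle-tested process boundary.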
More troublingly, these features are built in a way that is different to the mechanisms used by browser teams on every other platform. Any browser that delivers a feature to other platforms, then tries to bring it to iOS through script extensions, has doubled the security analysis and attack surface area.
None of this is theoretical; needing to re-develop features through a straw, using less secure, more poorly tested and analysed mechanisms, has led to serious security issues in alternative iOS browsers. Apple's policy, far from insulating responsible WebKit browsers from security issues, is a veritable bug farm for the projects wrenched between the impoverished feature set of Apple's WebKit and the features they can securely deliver with high fidelity on every other platform.
This is, of course, a serious problem for Apple's argument as to why it should be exclusively responsible for delivering updates to browser engines on iOS.
Apple cautions against poor browser vendor behaviour in its response, and it deserves special mention:
[...] Also, allowing other web browser engines could put users at risk if developers abandon their apps or fail to address a security flaw quickly.
Ignoring the extent to which WebKit represents precisely this scenario to vendors who would dearly love to deliver stronger protections to their users on iOS, the justification for Apple's security ceiling has a (very weak) point: browsers are a serious business, and doing a poor job has bad consequences. One must wonder, of course, how Apple treats applications with persistent security issues that aren't browsers. Are they un-published from the App Store? And if so, isn't that a reasonable precedent here?
Whatever the precedent, Apple is absolutely correct that browsers shouldn't be distributed without commitments to maintenance, and that vendors who fail to keep the pace with security patches shouldn't be allowed to degrade the security posture of end-users. Fortunately, these are terms that nearly every reputable browser developer can easily agree to.
Indeed, reputable browser vendors would very likely be willing to sign up to terms that only allow use of the (currently proprietary and private) APIs that Apple uses to create sandboxed renderer processes for WebKit if their patch and CVE-fix rates matched some reasonable baseline. Apple's recently-added Browser Entitlement provides a perfect way to further contain the risk: only browsers that can be set as the system default could be allowed to bring alternative engines. Such a solution preserves Apple's floor on abandonware and embedded WebViews without capping the potential for improved experiences.
There are many options for managing the clearly identifiable case of abandonware browsers, assuming Apple managers are genuinely interested in solutions rather than sandbagging the pace of browser progress. Setting high standards has broad support.
The history of this unstated policy is long, winding, and less enlightening than a description of the status quo:
So what does the prohibition on JITs actually accomplish?
Allowing other engines would mean providing access to the currently-private APIs that allow the creation of sandboxed subprocesses.
Blessing Safari as the only app allowed to mint sandboxed subprocesses, while preventing others from doing so, is clearly unfair. This one-sided situation has persisted because the details of sandboxing and process creation have been obscured by a blanket prohibition on alternative engines. Should Apple choose (or be required) to allow higher-quality engines, this private API should surely be made public, even if it's restricted to browsers.
Similarly, skimping on RAM in thousand-dollar phones seems a weak reason to deny users access to faster, safer browsers. The Chromium project has a history of strengthening the default sandboxes provided by OSes (including Apple's), and would no doubt love to try its hand at improving Apple's security floor qua ceiling.
The relative problems with JITs — very much including Apple's — are, if anything, an argument for opening the field to vendors willing to put in the work that Apple has not in order to protect users. If the net result is that Cupertino sells safer devices while accepting a slightly lower margin (or an even more eye-watering price) on its super-premium devices, what's the harm? And isn't that something the market should sort out?
High-modernism may mean never having to admit you're wrong, but it doesn't prevent the errors that functional markets would discipline. You do learn about them, just at the greatest of delays.
Apple may genuinely believe it is improving security by preventing other engines, not just padding its bottom line. For instance, beyond the abandonware problem, what of threats from "legitimate" browsers that abuse JIT privileges? Or vendors that drag their heels in responding to security issues?
No OS vendor wants third parties exposing users to risks it feels helpless to mitigate. Removing browsers from users' devices is an existing option, but it would be a drastic step that raises serious governance questions about the power Apple wields (and on whose behalf).
As middle-ground policy options go, Apple is far from helpless.
It has already created a bright line between browsers and other apps that embed WebViews, thanks to the Browser Entitlement, and could continue to require the latter use Apple's system-provided WebKit.
For browsers slow to fix security bugs, there are also options short of disallowing other engines and their JITs. Every engine on the market today also contains a non-JITing mode. Apple could require that vendors submit both JITful and JITless builds for each version they wish to publish and could, as a matter of policy and with warning, update user devices with non-JITing versions of these browsers should users be exposed to widespread attack through vendor negligence.
In the process of opening up the necessary private APIs to build truly competitive browsers, Apple can set design quality standards. For example, if Apple's engine uses a now-private mechanism to ensure that code pages are not both writeable and executable, it could require other engines adopt the same techniques. Apple could further compel vendors to aggressively adopt protections from new hardware capabilities (e.g. Control Flow Integrity (pdf)) as it releases them.
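The W^X ("write XOR execute") invariant mentioned above can be stated as a one-line policy check. The flag constants below are illustrative stand-ins mirroring POSIX mmap-style protection bits, not Apple's actual API:

```python
# W^X policy sketch: no memory page may be simultaneously writable and
# executable. A JIT must write code to a writable page, then remap it
# read+execute before running it.
PROT_READ, PROT_WRITE, PROT_EXEC = 0x1, 0x2, 0x4

def wx_allowed(prot: int) -> bool:
    # Reject any mapping that combines WRITE and EXEC permissions.
    return (prot & PROT_WRITE == 0) or (prot & PROT_EXEC == 0)

assert wx_allowed(PROT_READ | PROT_WRITE)      # data page: fine
assert wx_allowed(PROT_READ | PROT_EXEC)       # code page: fine
assert not wx_allowed(PROT_WRITE | PROT_EXEC)  # W+X page: rejected
```

A platform vendor could require competing engines to demonstrate exactly this kind of invariant as a condition of entry, rather than banning them outright.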
Lastly, Apple can mandate all code loaded into sandboxed renderer processes be published as open source, along with build configurations, so that Apple can verify the supply chain integrity of browsers granted these capabilities.
Apple can maintain protections for users in the face of competition. Hiding behind security concerns to deny its users access to better, safer, faster browsers is indefensible.
A final argument, made by others (but not by Apple, which surely knows better), is that:
Diversity in browser engines is desirable because, without competition, there is little reason for engines to keep improving.
Apple's restrictions on iOS ensure that a heavily used engine maintains a codebase separate from Blink/Chromium, whose use in other browsers continues to grow.
Therefore, Apple's policies are — despite their clear restrictions on engine choice — promoting the cause of engine diversity.
This is a slap-dash line of reasoning along several axes.
First, it fails to account for the different sorts of diversity that are possible within the browser ecosystem. Over the years, developers have suffered mightily under the thumb of entirely unwanted engine diversity in the form of trailing-edge browsers; most notably Internet Explorer 6.
The point of diversity and competition is to propel the leading edge forward by allowing multiple teams to explore alternative approaches to common problems. Competition at the frontier enables the market and competitive spirits to push innovation forward. What isn't beneficial is unused diversity potential. That is, browsers occupying market share but failing to meaningfully advance the state of the art.
The solution to this sort of deadweight diversity has been market pressure. Should a browser fall far enough behind, and for long enough, developers will begin to suggest (and eventually require) that users adopt more modern options to access their services at the highest fidelity.
This is a beneficial market mechanism (despite its unseemly aspects) because it creates pressure on browsers to keep pace with user and developer needs. The threat of developers encouraging users to "vote with their feet" also helps ensure that no party can set a hard cap on the web's capabilities over time. This is essential to ensure that oligopolists cannot weaponise a feature gap to tax all software.
Taxation of software occurs through re-privatisation of low-level, standards-based features and APIs. By restricting use of previously-free features (e.g. Bluetooth, USB, Serial, MIDI, and HID) to proprietary frameworks and distribution channels, a successful would-be monopolist can extract outsized rents on any application that requires even one of these features. Impoverishing the commons through delay and obstruction is, over time, indistinguishable from active demolition.
Apple's playbook is in line with this diagnosis, preserving the commons as a historical curiosity at best. Having blockaded every road to upgrading the web, Apple has made it impossible for an open platform to keep pace with Apple's own modern-but-proprietary options. The game is simple once pointed out, but hard to see at first because it depends on consistent inaction. This sort of deadweight loss is hard to spot on short time horizons. Disallowing competitive engines that might upgrade the carrying capacity of freely available alternatives may have been accidental at the introduction of iOS, but its value to Apple now can scarcely be overstated. After all, it's hard to extract ruinous taxes from a restive population with straightforward emigration options. No wonder Cupertino continues to perform new showings of the "web apps are a credible alternative on iOS!" pantomime.
In this understanding, the web helps maintain a fair market for software services. Web standards and open source web engines combine to create an interoperable commons across closed operating systems upon which services can be built without taxation; but only to the extent that commons remains capable enough to meet user needs over time. Continuing to bring previously proprietary features into the commons is the core mechanism by which this progress is delivered. Push notifications may have been new and shiny in 2011 but, a decade later, there's no reason to think that a developer should pay an ongoing tax for a feature that is offered by every major OS and device. The same goes for access to a phone's menagerie of sensors, as well as more efficient codecs.
The sorts of diversity we have come to value in the web ecosystem exist exclusively at the leading edge.
Intense disputes about the best ways to standardise a use-case or feature are a strong sign of a healthy dynamic. It's rancid, however, when a single vendor can prevent progress across a wide swathe of domains that are critical to delivering better experiences, and suffer no market consequence.
Apple has cut the fuel lines of progress by requiring use of WebKit in every iOS browser; choice without competition, distinction without difference. Users can have any sort of web they like, so long as it's as trailing-edge as Apple's.
Yet this sort of participation-prize diversity is exactly what purported defenders of Apple's policies would have us believe is healthy for the web.
It's a curious argument, admitting Apple's engine is deeply sub-par to defend an ongoing failure to compete. Apple is not wanting for the funds and talent to build a competitive product, it simply chooses not to. Apple's 2+ trillion dollar market cap is paired with nearly $200 billion in cash on hand. One could produce a competitive browser for the spare change in Cupertino's Eames lounges.
Claims that foot-dragging must be protected because otherwise capable engines might win share are not much of a defence. To excuse poor performance this way is to suggest that Apple does not possess the talent, skill, and resources ever to construct a competitive engine. I, at least, think better of Apple's engineering acumen than these nominal defenders do.
Would WebKit really disappear if Apple were to allow other engines onto iOS? We have a natural experiment in Safari for macOS. It continues to enjoy a high share of that browser market despite stiff competition from browsers that include higher-quality engines. Why are Apple's defenders so certain that this won't be the iOS result?
And what is the worst-case scenario, exactly? That Safari loses share such that Apple must respond by funding the WebKit team adequately? That the Safari team feels compelled to switch to another open source rendering engine (e.g. Gecko or Blink), preserving their ability to fork down the road, just as they did with KHTML, and as the Blink project did with WebKit?
None of these are world-ending scenarios, nor must they result in a reduction of constructive, leading-edge diversity. Edge, Brave, Opera, and Samsung Internet consistently innovate on privacy and other features without creating undue drag on core developer interests. Should the Chromium project become an unwelcome host for this sort of work, all of these organisations could credibly consider a fork, adding another new branch to the lineage of browser engines.
It's not a foregone conclusion that the world's most valuable tech firm should produce the lowest-quality browser engine. Developers would likely take Apple's side if coercion over engine choice weren't paired with a failure to keep pace on even basic features.
The point of diversity at the leading edge is progress through competition. The point of diversity amid laggards is the freedom to replace them — that's the market at work.
Apple's policies against browser choice were, at some point, relatively well grounded in the low resource limits of early smartphones. But those days are long gone. Sadly, the legacy of a closed choice, made back when WebKit was still a leader in many areas, is an industry-wide hangover. We accepted a bad deal because the situation seemed convivial, and ignored those who warned it was a portent of a more closed, more extractive future for software.
If only we had listened.
Thanks to Chris Palmer and Eric Lawrence for their thoughtful comments on drafts of this post. Thanks also to Frances for putting up with me writing this post on holiday.
As we shall see, it would be better for Apple if their "supporters" would stop inventing straw man arguments as they tend to undermine, rather than bolster, Cupertino's side. ↩︎
Browser engines all have a form of selective exclusion of code that is technically available within the codebase but, for one reason or another, is disabled in a particular environment. These switches are known variously as "flags," "command line switches," or "runtime-enabled features."
New features that are not ready for prime time may be developed for months "behind a flag" and only selectively enabled for small populations of developers or users before being made available to all by default. Many mechanisms have existed for controlling the availability of features guarded by flags. Still, the key thing to know is that not all code in a browser engine's source repository represents features that web developers can use. Only the set that is flagged on by default can affect the programmable surface that web developers experience.
Because the eventual producer of a binary can enable some flags but not others, even if an open source project agrees to include code for a feature, restrictions on engine binaries can prevent an alternative browser from offering features whose code the system binary already contains.
Flags, and Apple's policies towards them over the years, are enough of a reason to reject Apple's feint towards open source as an outlet for unmet web developer needs on iOS. ↩︎
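The flag mechanism described above can be reduced to a toy model. This is an illustrative sketch only — the flag names and functions are invented, not taken from any real engine — but it captures the key point: code in the repository is not the same thing as API surface developers can use.

```python
# Minimal model of "runtime-enabled features": code ships in the binary,
# but only flags that are on by default (or force-enabled by whoever
# produces the binary) reach the surface web developers can program against.
# Flag names here are invented for illustration.

DEFAULT_FLAGS = {
    "fetch": True,           # shipped and enabled by default
    "new-fancy-api": False,  # present in the codebase, but behind a flag
}

def exposed_apis(overrides=None):
    """Return the set of APIs visible for a given binary configuration."""
    flags = {**DEFAULT_FLAGS, **(overrides or {})}
    return {name for name, enabled in flags.items() if enabled}

# A default build exposes only flagged-on features...
assert exposed_apis() == {"fetch"}
# ...while the binary's producer may choose (or be forbidden by platform
# policy) to enable more.
assert "new-fancy-api" in exposed_apis({"new-fancy-api": True})
```

The asymmetry in the last line is the crux: whoever controls the binary's configuration, not the open source repository, decides what developers actually get.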
It's perverse that the wealthy users Apple sells its powerful devices to — the very folks who can most easily dedicate the extra CPU and RAM necessary to enable multiple layers of protection — are prevented from doing so by Apple's policies that are, ostensibly, designed to improve security. ↩︎
JIT and sandbox creation are technically separate concerns (and could be managed by policy independently), but insofar as folks impute a reason to Apple for allowing its engine to use this technique, sandboxing is often offered as a reason. ↩︎
Joining a new team has surfaced just how much I've relied on a few lenses to explain the incredible opportunities and challenges of platform work. This post is the second in an emergent series towards a broader model for organisational and manager maturity in platform work, the first being last year's Platform Adjacency Theory. That article sets out a temporal model that focuses on trust in platforms. That trust has a few dimensions:
Trust in reach. Does the platform deliver access to the users an app or service caters to? Will reach continue to expand at the rate computing does?
Trust in capabilities. Can the platform enable the core use-cases of most apps in a category?
Trust in governance. Often phrased as fear of lock-in, the goal of governance is to marry stability in the tax rate of a platform with API stability and reach.
These traits are primarily developer-facing for a simple reason: while the products that bring platforms to market have features and benefits, the real draw comes from safely facilitating trade on a scale the platform vendor can't possibly bootstrap on their own.
Search engines, for example, can't afford to fund producing even a tiny sliver of the content they index. As platforms, they have to facilitate interactions between consumers and producers outside their walls — and continue to do so on reasonably non-extractive terms.
Thinking about OSes and browsers gives us the same essential flavour: to make a larger market for the underlying product (some OS, browsers in general), the platform facilitates a vast range of apps and services by maximising developer reach from a single codebase at a low incremental cost. Those services and apps convince users to obtain the underlying products. This is the core loop at the heart of software platforms:
Cycles around the loop take time, and the momentum added or lost in one turn of the loop creates or destroys opportunity for the whole ecosystem at each successive step. Ecosystems are complex systems and grow and shrink through multi-party interplay.
Making progress through intertemporal effects is maddening to product-focused managers who are used to direct build ⇒ launch ⇒ iterate cycles. They treat ecosystems as static and immutable because, on the timescales they operate, that is apparently true. The lens of Pace Layering reveals the disconnect:
Products that include platforms iterate their product features on the commerce or fashion timescale, while platform work is the slower, higher-leverage movement of infrastructure and governance. Features added in a release for end-users have impact in the short run, while features added for developers may add cumulative momentum to the flywheel many releases later as developers pick up the new features and build new types of apps that, in turn, attract new users.
This creates a predictable bias in managers towards product-only work. Iterating on features around an ecosystem becomes favoured, even when changing the game (rather than learning to play it incrementally better) would best serve their interests. In extreme versions, product-only work leads to strip-mining ecosystems for short-term product advantage, undermining long-term prospects. Late-stage capitalism loves this sort of play.
The second common bias is viewing ecosystems that can't be fully mediated as somebody else's problem or as immovable. Collective action problems in open ecosystem management are abundant. Managers without much experience or comfort in complex spaces tend to lean on learned helplessness about platform evolution. "Standards are slow" and "we need to meet developers where they are" are the reasonable-sounding refrains of folks who misunderstand their jobs as platform maintainers to be about opportunities one can unlock in a single annual OKR cycle. The upside for organisations willing to be patient and intentional is that nearly all your competitors will mess this up.
Failure to manage platform work at the appropriate time-scale is so ingrained that savvy platform managers can telegraph their strategies, safe in the knowledge they'll look like mad people.
One might as well be playing cricket in an American park; the actions will look familiar to passers-by, but the long game will remain opaque. They won't be looking hard enough, long enough to discern how to play — let alone win.
Successful platforms can extract unreasonably high taxes in many ways, but they all feature the same mechanism: using a developer's investments in one moment to extract higher rents later. A few examples:
IP licensing fees that escalate, either over time or with scale.
Platform controls put in place for safety or other benefits re-purposed for rent extraction (e.g. payment system taxes, pay-for-ranking in directories, etc.).
Use of leverage to prevent suppliers from facilitating platform competitors on equal terms.
Platforms are also in competition over these taxes. One of the web's best properties is that, through a complex arrangement of open IP licensing and broad distribution, it structurally imposes significantly lower taxes on developers (ceteris paribus). ↩︎
How Apple, Facebook, and Google Broke the Mobile Browser Market by Silently Undermining User Choice
At first glance, the market for mobile browsers looks roughly functional. The 85% global-share OS (Android) has historically facilitated browser choice and diversity in browser engines. Engine diversity is essential, as it is the mechanism that causes competition to deliver better performance, capability, privacy, security, and user controls. More on that when we get to iOS.
Tech pundits and policymakers form expectations of browsers on the desktop and think about mobile browser competition the same way. To recap:
Users can freely choose desktop browsers with differing features, search engines, privacy features, security properties, and underlying engines.
Browsers update quickly, either through integrated auto-update mechanisms or via fast OS updates (e.g., ChromeOS).
Browsers bundled with desktop OSes represent the minority of browser usage, indicating a healthy market for replacements.
Popular native apps usually open links in users' chosen browsers and don't undermine the default behaviour of link clicks.
Each point highlights a different aspect of ecosystem health. Together, these properties show how functioning markets work: clear and meaningful user choice creates competitive pressure that improves products over time. Users select higher quality products in the dimensions they care about most, driving quality and progress.
The mobile ecosystem appears to retain these properties, but the resemblance is only skin deep. Understanding how mobile OSes undermine browser choice requires a nuanced understanding of OS and browser technology. It's no wonder that few commenters are connecting the dots.
How bad is the situation? It may surprise you to learn that until late last year only Safari could be the default browser on iOS. It may further disorient you to know that competing vendors are still prevented from delivering their own engines on iOS. Meanwhile, on Android, the #2 and #3 sources of web traffic do not respect browser choice. Users can have any browser with any engine they like, but it's unlikely to be used. The Play Store is little more than a Potemkin Village of browser choice; a vibrant facade to hide the rot.
Registering to handle link taps is only half the battle. For a browser to serve as the user's agent, it must also receive navigations. Google's Search App and Facebook's various apps for Android undermine these choices in slightly different ways. This reduces the effectiveness of the privacy and security choices users entrust to their browsers. Developers also suffer higher costs and reduced opportunities to escape Google, Facebook, and Apple's walled gardens.
Web engineers frequently refer to browsers as "User Agents", a nod to their unique role as interpreters of developer intent that give users the final say over how the web is experienced. A silent erosion in the effectiveness of browser choice has transferred this power away from users, re-depositing it with dominant platforms and app publishers. To understand how this sell-out happened (quite literally) under our noses, we must look closely at how mobile and desktop differ.
The predominant desktop situation is relatively straightforward:
Browsers handle links, and non-browsers defer loading http and https URLs to the system, which in turn invokes the default browser. This flow is the central transaction that gives links power and utility. If any of the players involved (OSes, browsers, or referring apps) violate aspects of the contract, user choice in browsers becomes less effective.
"What, then, is a 'browser'?" you might ask? I've got a long blog post brewing on this, but jumping to the end, an operable definition is:
A browser is an application that can register with an OS to handle http and https navigations by default.
No matter how an OS technically facilitates user choice, it's this ability to choose that defines browsers as a class. How often links lead users to their preferred browser controls the meaningfulness of this choice.
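The dispatch contract above can be sketched as a toy model — the class and browser names below are invented for illustration, not drawn from any OS's actual APIs — showing the one property that matters: http(s) navigations from non-browser apps land in whichever browser the user registered as default.

```python
# Toy model of desktop link dispatch: non-browser apps hand http(s)
# URLs to the OS, which forwards them to the registered default browser.
# Names are illustrative only.

from urllib.parse import urlparse

REGISTERED_BROWSERS = {"Firefox", "Brave", "Edge", "Chrome", "Safari"}

class System:
    def __init__(self, default_browser):
        assert default_browser in REGISTERED_BROWSERS
        self.default_browser = default_browser

    def open_url(self, url):
        """Called by any non-browser app when a user taps a link."""
        scheme = urlparse(url).scheme
        if scheme in ("http", "https"):
            return self.default_browser  # user choice is honoured
        raise ValueError(f"no handler registered for scheme: {scheme}")

system = System(default_browser="Firefox")
assert system.open_url("https://infrequently.org") == "Firefox"
```

Everything that follows in this post is a catalogue of ways mobile apps and OSes quietly break this contract.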
The history of mobile computing starts from an incredibly resource-constrained point. First-generation iOS and Android smartphones were slow single-core, memory-impoverished affairs, leading mobile OSes to learn new tricks to facilitate responsive computing. Android and iOS adopted heuristics to kill and reclaim RAM used by non-foreground apps when resource pressure intensified.
This background task killing behaviour created unique problems for link-heavy apps. Launching the user's browser placed linking apps in the background, increasing friction in returning to the sending app, as browser UI did not provide affordances for returning to referring applications. Being put in the background also increases the likelihood of being killed. Returning to the source app while in this state can feel excruciating. It can take seconds to re-start the original app and restore the UI state, an experience that gets worse on low-end devices that are most likely to evict apps in the first place.
Engagement-thirsty apps began including "In-App Browsers" ("IABs") to address these challenges. Contrary to any plain-language understanding of "a browser", these IABs cannot generally be installed as the default handler for links, even when OSes support browser choice. Instead, they load content referred by their hosting native app in system-provided WebViews.
The benefits to apps that adopt WebView-based IABs are numerous:
WebViews are system components designed for use within other apps. They do not place embedders in the background where the system may kill them to reclaim resources. This reduces friction and commensurately increases "engagement" metrics.
As they are now "the browser", they can provide UI that makes returning to the host application easier than continuing on the web.
Apps can customise UI to add deeper integrations, e.g., "pinning" images from a hosted page to Pinterest.
On Android today and early iOS versions, WebViews allow embedders to observe and modify all network traffic (regardless of encryption). Apps can also monitor user input, the resulting DOM, and system auto-filled credentials.
To the extent that users are comfortable with apps not remembering their previously-stored passwords, login state, privacy preferences, extensions, or accessibility configurations, this can be a win-win.
Conversely, the web feels broken when any one of those conditions is not met.
Android's own developer documentation for WebView is blunt about its intended role: "A View that displays web pages. … In most cases, we recommend using a standard web browser, like Chrome, to deliver content to the user."
WebViews have a long history in mobile OSes, filling several roles:
Rendering HTML on behalf of the first-party application developer.
Displaying cooperating, second-party content like ads.
Providing the core of browsers, whose job is to display third-party content. The original Android Browser used early-Android's system WebView, for instance.
The power dynamics of these situations are starkly different, even though "web technology" is used to render content in each case.
The use of a "raw" WebView is entirely appropriate for first and second-party content. Here, native apps are doing work related to their core function; storage and tracking of user data are squarely within the four corners of the app's natural responsibilities. Furthermore, the content developer can collaborate with the app developer, so any limits the WebView presents are unlikely to be unwelcome or unreasonably immutable. In these scenarios, WebViews faithfully facilitate content rather than breaking it.
All bets are off regarding WebViews and third-party content, however. To understand why, it helps to know that WebViews are not browsers.
WebViews contain core browser features, along with hooks that allow embedders to "light up" many more. However, producing a complete and competitive WebView-based browser requires additional UI and glue code. In particular, features that require permission-gated access to privileged services need explicit support from embedders to work as specified.
Examples include PWA installation and home screen shortcuts for sites.
Few (if any) WebView browsers implement all of these features, even when their underlying WebViews support bindings for them.
The situation is even more acute in WebView IABs, which tend not to fully support features from these categories even when they appear available to developers via script. Worse, debugging from these IABs is challenging, compounded by a lack of awareness about how much traffic may come from these sources.
How can that be? Web developers are accustomed to real browsers in the desktop mould. Standard tools, analytics packages, and feature availability dashboards do not make mention of IABs, and the largest WebView IAB promulgators (Facebook, Pinterest, Snap, etc.) have invested almost nothing in clarifying the situation.
It's vital to understand that neither users nor developers chose Facebook, Pinterest, or Google Go as a browser. The flow that WebView IABs present denies users agency over their choices, and technical limits imposed by them often prevent developers from opening content in real browsers.
No documentation is available for third-party web developers from any of the largest WebView IAB (ab)users. This absence mirrors the scandalous free-riding of these app publishers regarding browser feature support, which is perhaps not surprising. It is, however, all the more egregious for the subtlety and scale of breakage.
If Facebook, the third largest "browser"-maker for Android, employs a single developer relations engineer or doc writer to cover these issues, I'm unaware of it. Meanwhile, forums are full of melancholy posts recounting myriad ways these submarine renderers break features that work in other browsers.
Having been given "the first 80%" of a browser, with development and distribution of critical components subsidised by OS vendors, WebView IABs near-universally fail to keep up their end of the bargain with either users or developers. First-party webdevs can collaborate with their app-development colleagues to build custom access for any exotic feature supported by the OS. Second-party developers expect less (ads are generally not given broad feature access). But third-party developers? They are as helpless as users are to understand why an otherwise browser-presenting environment appears subtly, yet profoundly, broken.
There are still users browsing with a Chrome 37 engine (7 years ago), not because they don't update their browsers but because it's Facebook Mobile Browser on Android 5 using a webview. Facebook does NOT honor user browser choice leaving that user with an old engine.
These same app publishers request (and heavily use) features within real browsers that they do not enable for others, even when they have been spotted the bulk of the work. Perhaps browser and platform vendors should consider denying these apps access to capabilities they undermine for others.
The consequences of WebView IABs on developers are noteworthy, but it's the impacts on users that inspire confusion and rage.
Consider again the desktop reference scenario:
Clicking links from apps transfers control to an external browser which dutifully applies the user's stored preferences and accumulated state. Login credentials for example.com are not forgotten when a link is followed from an email. The same unified experience ensures that saved addresses and payment information are readily available. Most importantly, accessibility settings and privacy preferences are consistently applied.
By contrast, WebView IABs fracture user state, storing it in silos within each hosting application, creating a continuous partial amnesia.
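The "partial amnesia" claim can be made concrete with a toy model (class and app names invented for illustration): a real browser keeps one profile across every link it handles, while each WebView-hosting app gets — and can read — its own silo.

```python
# Toy model of state fragmentation. A default browser keeps one shared
# state store; every WebView IAB host app maintains (and can inspect)
# its own silo. Names are illustrative only.

class DefaultBrowser:
    def __init__(self):
        self.state = {}  # one jar of logins/prefs across all navigation

    def visit(self, site, login=None):
        if login:
            self.state[site] = login
        return self.state.get(site)

class WebViewIAB:
    silos = {}  # one isolated store per hosting app

    def __init__(self, host_app):
        self.state = WebViewIAB.silos.setdefault(host_app, {})

    def visit(self, site, login=None):
        if login:
            self.state[site] = login
        return self.state.get(site)

browser = DefaultBrowser()
browser.visit("example.com", login="alice")
assert browser.visit("example.com") == "alice"  # state persists

WebViewIAB("facebook").visit("example.com", login="alice")
assert WebViewIAB("pinterest").visit("example.com") is None  # amnesia
assert WebViewIAB("facebook").visit("example.com") == "alice"  # host holds it
```

The final assertion is the privacy kicker: the state isn't merely fragmented, it lives inside the hosting app rather than under the user's control.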
The confusion that reliably results is the consequence of an inversion of the power relationship between apps and websites.
Does any user expect that everything one does on any website loaded from a link in the Facebook app, Instagram, or Google Go can be fully monitored by those apps? That all passwords shared and the full scope of sites visited from the first page can potentially be recorded and tracked? To be clear, there's no record of these apps using this extraordinary access in overtly hostile ways, but even the unintended side-effects reduce user control over data and security.
Capturing onward navigations is not objectionable in programs that also offer themselves as browsers. The WebView IAB sleight of hand is to act as a browser when users least expect it, while never copping to the power and privacy responsibilities that browsers accept.
To address this challenge, Apple introduced SFSafariViewController and Google followed suit with the inartfully-named Chrome Custom Tabs protocol and helper library. Both systems allow native app developers to skip the drudge work of building a WebView IAB system and instead work with the OS to invoke the user's default browser to load web pages within the context of the host app. Similarly to WebView IABs, CCT and SFSVC address background eviction and lost app state. However, because they invoke the user's actual browser, they also prevent user confusion whilst delivering the complete set of features supported by proper browsers.
These solutions come at the cost of some flexibility for app developers who lose access to read network traffic between users and third-party sites. They also cannot inspect and change page content trivially, removing the ability to add new, non-standard features to their IABs. Counterbalancing these concerns, CCT and SFSVC restore user choice and ensure developers access to the complete set of browser features.
Well, it is. At least in the default configuration. Despite the clunky inclusion of "Chrome" in the name, the CCT library and protocol are browser-agnostic. A well-behaved CCT-invoking app (e.g., Twitter for Android) will open URLs in the CCT-provided IAB-alike UI via Firefox, Brave, Samsung Internet, Edge, or Chrome, whichever is the system default browser.
@slightlylate I recently was talking to my Dad about the Web and asked what browser he uses and he showed me what he does: He searches for the Web site in the Google search widget and then just uses the results page Chrome tab as his entire browser. His default browser is not set to Chrome.
Who would do this, you might ask? None other than Google's own Search App; you know, the one that comes on every reputable Android device via the ubiquitous home screen search widget.
Known as the "Android Google Search App" ("AGSA", or "AGA"), this humble text input is the source of a truly shocking amount of web traffic; traffic that all goes to Chrome, no matter the user's choice of browser.
There were justifiable reasons to add code like this. Early in the life of the CCT protocol, before support was widespread, many browsers exhibited showstopper bugs. 2021 is far advanced from those early days, however, and so the primary effect of calling to Chrome is to distort the market for browsers and undermine user choice. This behaviour subverts user privacy, undermines the ecosystem benefits of engine diversity, and makes it hard for alternative browsers to compete on a level playing field.
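The difference between a well-behaved CCT invocation and AGSA's behaviour fits in a few lines. A sketch (function and browser names invented for illustration; this is not the actual CCT API):

```python
# Toy model of Custom Tab resolution. A well-behaved invocation
# resolves to the user's default browser; a hardcoded override
# (AGSA-style) ignores that choice. Names are illustrative only.

def resolve_custom_tab(default_browser, hardcoded=None):
    """Return the browser whose engine renders the Custom Tab."""
    return hardcoded if hardcoded else default_browser

# Twitter-style invocation: user choice is honoured.
assert resolve_custom_tab("Firefox") == "Firefox"

# AGSA-style invocation: the user's default is ignored in favour
# of Chrome, no matter what they chose.
assert resolve_custom_tab("Firefox", hardcoded="Chrome") == "Chrome"
```

The fix proposed below amounts to deleting the `hardcoded` branch: the protocol already does the right thing by default.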
This situation is admittedly better than the wholesale neutering of important developer-facing features by WebView IABs, but a Hobson's Choice nonetheless.
Google can (and should) revert to the system default of affirmatively respecting user choice in browsers by deleting the offending choice override. Given that AGSA uses CCT to load web pages rather than a WebView, this is nearly trivial today. CCT's core design is sound and has enormous potential if made mandatory in place of WebView IABs by the Android and Play teams.
There's reason to worry that this is unlikely.
Instead of addressing frequent developer requests for features in the CCT library, the Chrome for Android team has invested heavily in the "WebLayer" project. You can think of WebLayer like a WebView-with-batteries-included, repairing issues related to missing features but continuing to fracture state and user choice.
There is a positive case for WebLayer: as a replacement for WebViews in the context of browsers, it's a major step forward. In the context of IABs, however, WebLayer looks set to entrench user-hostile patterns further.
This subversion of choice extends a dispiriting trend in search apps that fancy themselves browsers without even attempting to earn a user's business as a browser.
In addition to Google Go, the Google app for iOS and Microsoft's Bing app for Android both capture outbound links in WebView IABs, subverting both browser choice and feature availability for developers. If there's any mercy, it's that their relatively lower usage somewhat limits their impact on the overall ecosystem. Adopting WebLayer will not meaningfully improve the user experience or privacy of these amnesiac browsing experiences.
Google and Apple have the chance to lead, to show they aren't hostile to users, and remove the permission structure for lousy behaviour that less scrupulous players exploit. More on that in a moment.
Imagine if automakers could only use one government-mandated engine model across all cars and trucks. Different tires and upholstery only go so far. If the engine is underpowered, many tasks might not be possible, rendering whole vehicle classes pointless. If the engine is particularly polluting, choosing a different model of car can't help to reduce harmful emissions. That's the situation iOS creates for browser makers and the browser-downloading public, and the only recourse is to buy a phone running a different OS.
iOS matters because wealthy users carry iPhones. It's really as simple as that. Even when Apple's products fail to gain a numerical majority of users in a market, the margin contribution of iOS users can dominate all other business considerations.
From at least 2012, Apple has deigned to allow "competing browsers" in its App Store. Those applications could not be browsers in any meaningful sense as they could not supplant Safari as the default handler of http/https links. The long charade of choice without effect finally ended with the release of iOS 14.2 in late 2020, bringing iOS into line with every other significant OS in supporting alternative browsers.
But Apple has taken explicit and extensive care to ensure that this choice is only ever skin deep on iOS. Browsers on Windows, Linux, ChromeOS, Android, and macOS can be Integrated Browsers. iOS, meanwhile, restricts browsers to shells over the system-provided WebView.
Unlike WebView browsers on other OSes, Apple locks down these components in ways that prevent competition in additional areas, including restrictions on network stacks that block improved performance, new protocols, or increased privacy. These restrictions make some sense in the context of WebView IABs, but extending them to browsers only serves to deflect pressure from Apple to improve their browser.
Perhaps it would be reasonable for iOS to foreclose competition from integrated browsers and insist on uniquely constrained WebViews. Such policies would represent a different view of what computing should be if native apps were required to live within similar limits. However, Apple is happy to provide a much wider variety of features to unsafe native applications so long as they comply with the coercive terms of its App Store.
Apple forestalls this bottom-line threat by keeping the web on iOS from gaining reasonable feature parity. Outlawing integrated browser choice leaves only Apple's own, farcically under-powered, Safari/WebKit browser/engine...and there's precious little that other WebView browsers can do to improve the situation at a deep level.
Pointing to a site of serial developer mistreatment to justify other developer-hostile App Store policies takes next-level chutzpah.
Developer anger only hints at the underlying structural rot. 25+ years of integrated browser competition has driven waves of security, capability, and performance improvements. Competition has been so effective in delivering these benefits that browsers now represent most computing time on OSes with meaningful and integrated browser choice.
Hollowing out browser choice while simultaneously starving Safari and WebKit of resources has, somewhat miraculously, put the genie back in the bottle. Privacy, security, performance, and feature evolution all suffer when the competition is less vibrant — and that's how Apple likes it.
A vexing issue for commentators regarding Apple's behaviour in this area is that of "market definition". What observers should understand is that, in the market for browsers, the costs that a browser vendor can inflict on web developers extend far beyond the market penetration for their specific product.
When browsers with more than ~10% share fail to add a feature or exhibit nasty bugs, web developers must pay attention and work around these limitations. In the case of outright missing APIs, entire classes of content may simply be viewed as unworkable. The cost of these capability gaps is steep. When the web cannot deliver experiences that iOS native apps can (a very long list), businesses must build entirely different apps using Apple's proprietary tools. These apps, not coincidentally, can only be distributed via Apple's high-tax App Store.
A lack of meaningful user choice in browsers leads directly to higher costs for users and developers across the entire digital ecosystem even if they don't use Apple's products. The permission structure Apple's norm-eroding policies have constructed has served to justify some of the worst privacy and choice-undermining behaviour of tech giants. Apple's leadership in the race to the bottom has inspired a burgeoning field of fast-followers.
Beyond direct harms, interested parties should not consider browser choice as somehow orthogonal to other objectionable App Store policies; they are part and parcel of an architecture of control that tilts commerce into coercive, centralising App Stores. No matter how Apple wants to define the market, its actions distort and undermine competition.
Here's a quick summary of the systems and variations we've seen thus far, as well as their impacts on user choice:
Integrated browsers: Maximises the impact of choice.
WebView browsers: Reduces diversity in engines; problematic when the only option (iOS).
WebView IABs: Undermines user choice, reduces engine diversity, and directly harms developers through lower monetisation and feature availability (e.g., Facebook, Google Go).
Chrome Custom Tabs (CCT): WebView IAB replacement; preserves choice by default (e.g., Twitter). Problematic when configured to ignore user preferences (e.g., AGA).
WebLayer: Like WebView with better feature support. Beneficial when used in place of WebViews for browsers. Problematic when used as a replacement for WebView IABs.
SFSafariViewController: Similar to CCT in spirit, but fails to support multiple browsers.
Proposals to repair this profoundly broken situation must centre first on the effectiveness of browser choice. Some policymakers have suggested returning to browser choice ballots; however, these will not be effective in a world where user choice is undermined no matter which browser users choose. Interventions to encourage informed browser choice cannot have a positive effect until the impact of those choices can be assured.
Thankfully, repairing the integrity of browser choice in the mobile ecosystem can be accomplished with relatively small interventions. We only need to ensure that integrated browsers are universally available and that when third-party content is displayed, user choice of browser is respected.
Repairing the IAB situation will likely require multiple steps, given the extreme delay in new Android OS revisions gaining a foothold in the market. Thankfully, many fixes don't need OS updates:
Google should update the CCT system to respect browser choice when loading third-party content and require CCT-using apps to adopt this new behaviour within six months.
Verification of first-party content for use with specific engines is possible thanks to the Digital Asset Links infrastructure that underpins Trusted Web Activities, the official mechanism for putting web apps in the Play Store.
AGSA and Google Go should respect user choice via CCT.
Android's WebView and WebLayer should be updated with code to detect a new HTTP header value, sent with top-level documents, that causes the URL to be opened in the user's default browser (or a CCT for that browser) instead.
These systems update out-of-band every six weeks on 90+% of devices, delivering quick relief.
Such an opt-out mechanism preserves WebViews for first-party and second-party use-cases (those sites will simply not set the new header) while giving third-parties a fighting chance at being rendered in the user's default browser.
Apps that are themselves browsers (can be registered as default http/https handlers) would be exempt, preserving the ability to build WebView browsers. "Browserness" can be cheaply verified via an app's manifest.
Google should provide access to all private APIs currently reserved to Chrome, including but not limited to the ability to install web applications to the system (a.k.a. "WebAPKs").
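The Digital Asset Links verification mentioned above boils down to a JSON file served from a well-known path. A minimal sketch follows; the package name and certificate fingerprint are placeholders, and a real verifier fetches the file over HTTPS rather than reading it from disk:

```shell
# A site publishes /.well-known/assetlinks.json naming the Android apps
# (by package and signing-cert fingerprint) allowed to treat its content
# as first-party. Values below are illustrative only.
mkdir -p /tmp/dal-demo/.well-known
cat > /tmp/dal-demo/.well-known/assetlinks.json <<'EOF'
[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.app",
    "sha256_cert_fingerprints": ["AA:BB:CC:00:11:22:..."]
  }
}]
EOF

# A verifier grants first-party treatment only when the calling app's
# package and signing certificate match an entry in this file.
grep -q 'delegate_permission/common.handle_all_urls' \
  /tmp/dal-demo/.well-known/assetlinks.json && echo "first-party claim present"
```

Because the site, not the app, controls this file, an app cannot unilaterally claim someone else's content as first-party.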
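The header-based opt-out, plus the browser exemption, could be sketched as below. The header name `X-Open-In-Default-Browser` is invented here purely for illustration; the proposal only requires that WebView and WebLayer recognise some agreed-upon header on top-level documents:

```shell
# Hypothetical response from a third-party site that opts out of being
# rendered in embedded IABs. Header name is made up for this sketch.
response_headers='HTTP/1.1 200 OK
Content-Type: text/html
X-Open-In-Default-Browser: 1'

# Apps that are themselves browsers (registered http/https handlers in
# their manifests) would be exempt from the bounce.
app_is_browser=false

# The WebView-side decision reduces to a header check plus the exemption.
if printf '%s\n' "$response_headers" | grep -qi '^X-Open-In-Default-Browser:' \
   && [ "$app_is_browser" = false ]; then
  echo "open URL in user's default browser"
else
  echo "render in embedded WebView"
fi
```

Because the exemption keys off manifest-declared http/https handling, "browserness" remains cheap to verify statically, as noted above.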
Future releases of Android should bolster these improvements by creating system-wide opt-out of WebView and WebLayer IABs.
Play policy enforcement of rules regarding CCT, WebView, and WebLayer respect for user and developer choice will also be necessary. Such enforcement is not challenging for Google, given its existing binary analysis infrastructure.
Together, these small changes can redress the worst anti-web, anti-user, anti-developer, and anti-choice behaviour of Google and Facebook regarding Android browsers, putting users back in control of their data and privacy along the way.
The mobile web is a pale shadow of its potential because the vehicle of progress that has delivered consistent gains for two decades has silently been eroded to benefit native app platforms and developers. These attacks on the commons have at their core a shared disrespect for the sanctity of user choice, substituting the agenda of app and OS developers for mediation by a user's champion.
This power inversion has been as corrosive as it has been silent, but it is not too late. OSes and app developers that wish to take responsibility can start today to repair their own rotten, choice-undermining behaviour and put users back in control of their browsing, their data, and their digital lives.
Windows 10, for example, includes several features (taskbar search box, lock screen links) that disrespect a user's choice of default browser. This sort of shortcut-taking in the competition for user attention has a long and discouraging history, but until relatively recently was viewed as "out of bounds". Mobile has shifted the Overton Window.
A decade of degraded norms around browser choice by mobile OSes has made these sorts of unreasonable tie-ins less exceptional. The workaday confusion of following links on mobile helps to create a permission structure that enables ever-more bad behaviour. The Hobbesian logic of power-begets-success is fundamentally escalatory, forcing those without a priori privilege into a paranoid mode and undercutting attempts to differentiate products in a market on their merits.
Fixing mobile won't be sufficient to unwind desktop's increasingly negative dark patterns, of course. But that's no reason to delay. Centering users' choices on their most personal devices can do much to reset the expectations of PMs and managers across the industry as to which tactics are, in fact, above-board. ↩︎
It's less clear why Mozilla is MIA in at least making noise about the situation. Their organisation has a front-row seat to the downsides of undermined user choice. The inability to project the benefits of their engine into the lives of their mobile users materially harms their future business and differentiation prospects.
It seems unlikely (if plausible) that the Firefox OS experience has so thoroughly burned management that there is no scope for mobile risk-taking, even if constrained to jawboning or blog posts.
If any organisation can credibly, independently connect the dots, it should be the Mozilla Foundation. One hopes they do. ↩︎
The history, competitive pressures, and norms of Android app developers caused many smaller apps to capture clicks (and user data), failing to send navigations onward.
A shortlist of notable apps that undermine user choice via IABs would include:
Microsoft Bing Search
Some apps that previously (ab)used WebViews for IABs in the pre-CCT era switched over to that choice-respecting mechanism, notably Twitter. ↩︎
This definition of "a browser" may sit uncomfortably with folks accustomed to the impoverished set of choices Apple made possible on iOS until late last year. In particular, folks will undoubtedly note that "alternative browsers" were available in the App Store much earlier, including a Chrome-branded app since at least 2012.
Not all applications that can load web pages are browsers. Only apps that can become the user's agent in browsing the web are. Until nine months ago, iOS only supported Safari as a proper browser. "Alternative browsers" could only traverse link space when users began browsing within them. They were impotent to support users more broadly, unable to consistently assist users, modulate harmful aspects of content, or project user preferences into sites. Without the ability to catch all navigations sent to the OS, users who downloaded these programs suffered frequent computing amnesia. User preferences were only respected if users started browsing from within a specific app. Incidental navigations, however, were subject to Apple's monopoly on link handling and whatever choices Safari projected.
In this way, iOS undermined choice and competition. OSes that prevent users from freely picking their agent in navigating the web most of the time cannot, therefore, be said to support browser choice — no matter how many directed-browsing apps they allow to list in their stores. ↩︎
Problems related to background task killing can, of course, be avoided by building a web app instead of a native one. When users remain in a browser across sites, there's no heavy process switch between pages. Developers tried this path for a while but quickly found themselves at an impossible feature disadvantage. Lack of Push Notifications alone proved business-defining, and Apple's App Store policies explicitly forbid web apps in its store.
To be discovered where users are looking for apps and access business-critical features, mobile platforms effectively forced all developers into app stores. A strong insinuation that things would not go well for them in app stores if they used web technologies (via private channels, naturally) reliably accompanied this Sophie's choice.
Platforms played these user-and-developer hostile games in mobile's early days to dig a moat of OS-exclusive apps. Exclusives create friction for users considering a switch to a different OS. Platform owners know that the cost of re-developing apps for each OS means that, when independent software vendors invest heavily in one proprietary system, those developers become less likely to deliver quality experiences on a competitor's system.
App developers only have so many hours in the day, and it costs enormous amounts, both initially and in an ongoing way, to re-build features for each additional platform. The web is a portable applications platform, and portability is a bug to proprietary platform owners. The combination of engine neglect, feature gap expansion, and app store policies against web participation — explicit and implied — proved a shockingly effective "fix".
The story of feature-gap coercion and "app store lottery" games illuminate the backdrop of a new normal that none of us should accept. ↩︎
Many have adroitly covered the perspective and ethical distortions within social media firms caused by the relentless pursuit of "north star" metrics. There's little new I can add.
I can, however, confirm some uncharitable takes of their detractors are directionally correct. One cannot engage with engineers and PMs from these organisations for a decade without learning something about their team's values.
The blinkered pursuit of growth via "make number go up"-OKRs creates blind spots that are managed as exogenous crises. The health of the ecosystems around them is unfailingly subordinate to questions of competitive positioning. The hermetically circular logic of "we're changing the world for the better" does create incentives to undermine user autonomy, safety, and choice.
The jury is no longer out. Change is possible, but it will not come from within. But "unintended consequences!" special pleading weighs heavily. To improve this situation, folks must understand it in sufficient depth to mandate maximally effective, competition-and-choice-enhancing interventions that carry the lightest footprint.
In the long list of dangerous, anti-competitive, opacity-increasing ills of modern tech products, the hollowing out of browser choice may seem small-time. Issues of content recommendation radicalisation, "persuasive design" dark patterns, source-of-funds ads opacity, and buried data collection controls surely deserve more attention. However, it would be a missed opportunity not to put users back in control of this aspect of their digital lives whilst the opportunity presents itself. ↩︎
Social apps strip-mining ecosystems they didn't build for their benefit while deflecting responsibility for downside consequences?
Why is this not a game-over problem for Facebook's desktop website?
If it's necessary to keep users within a browser that Facebook owns end-to-end, why not simply allow Facebook's native apps to be browsers? It's a simple Android manifest change that would put them back into line with the norms and expectations of the broader web community and allow them to compete for users' browsing time on the up-and-up. Not doing so suggests they have something to hide and may be ashamed of this browser that, by their calculations, keeps users safer.
The need for more information to protect users may be real, but undermining choice for all is a remedy that, at least with the information that's public thus far, seems very tough to justify. ↩︎
iOS didn't support browser choice at the time of SFSafariViewController's introduction and appeared only to have acquiesced to minimal (and initially broken) browser choice under regulatory duress. It is hardly surprising, then, that Apple hasn't updated SFSafariViewController to work with other default browsers the way CCT does.
For reasons that seem to boil down to Great Power calculations and myopic leadership focus on desktop, none of the major browser vendors has publicly challenged these rules or the specious, easily-debunked arguments offered to support them.
Commenters forwarding these claims, as a rule, do not understand browser architecture. Any modern browser can suffer attacks against the privileged "parent" process, JIT or not. These "sandbox escapes" are not less likely for the mandated use of WebKit; indeed, by failing to expose APIs for sandboxed process creation, Apple prevents others from bringing stronger protections to users. iOS's security track record, patch velocity, and update latency for its required-use engine are not best-in-class.
User security would be meaningfully improved were Apple to allow integrated browsers that demonstrated an Apple-esque-or-better patch velocity. Such a policy is not hard to formulate, and the ability for apps running on top of the OS to update without slow, painful-for-users update processes would meaningfully improve patch rates versus today's OS-update-locked cadence for WebKit.
Some commenters claim that browsers might begin to provide features that some users deem (without evidence) unnecessary or unsafe if alternative engines were allowed. These claims are doubly misinformed.
Misdirection about JITs and per-feature security posture is technically wanting but ably serves to distract from iOS's deeper restrictions. Capable integrated browsers need access to a suite of undocumented APIs and capabilities Apple currently reserves to Safari, including the ability to create processes, set tighter sandboxing boundaries, and efficiently decode alternative media formats. Opening these APIs to competing integrated browsers would pave the way to safer, faster, more capable computing for iPhone owners.
Others have argued on Apple's behalf that if engine competition were allowed, Chromium's (Open Source) Blink engine would become ubiquitous on iOS, depriving the ecosystem of diversity in engines. This argument is seemingly offered with a straight face to defend the very policies that have prevented effective engine diversity to date. Mozilla ported Gecko twice, but was never allowed to bring its benefits to iOS users. In addition to being self-defeating regarding engine choice, this fear also seems to ignore the best available comparison points. Safari is the default browser for macOS and has maintained a healthy 40-50% share for many years, despite healthy competition from other integrated browsers (Chrome, Firefox, Opera, Edge, etc.). Such an outcome is at least as likely on iOS.
Sitting under all of these arguments is, I suspect, a more salient concern for Apple's executives: resisting increases to the RAM in the iPhone's Bill of Materials. In the coerced status quo, Apple can drive device margins by provisioning relatively little in the way of (expensive) RAM components while still supporting multitasking. A vital aspect of this penny-pinching is to maximise sharing of "code pages" between programs. If alternative browsers suddenly began bringing their own engines, code page sharing would not be as effective, requiring more RAM in Apple's devices to provide good multitasking experiences. More RAM could help deliver increased safety and choice to users, but would negatively impact Apple's bottom line.
Undermining user choice in browsers has, in this way, returned significant benefits — to AAPL shareholders, anyway. ↩︎
Engine developers possess outsized ability within standards bodies to deny new features and designs the ability to become standards in the first place. The Catch-22 is easy to spot once you know to look for it, but casual observers are often unacquainted with the way feature development on the web works.
In a nutshell, it's often the case that features are shipped by browsers ahead of final, formal inclusion in web standards. Specifications are documents that describe the working of a system. Some specifications are ratified by Standards Development Organisations (SDOs) like the World Wide Web Consortium (W3C) or Internet Engineering Task Force (IETF) as "web standards". Thanks to wide implementation and unambiguous IP licensing, standards increase market confidence and adoption of designs. But no new feature's specification begins life as a standard.
Market testing of proposed standards ("running code" in IETF-speak) is essential for the progress of any platform, and pejorative claims that a feature in this state is "proprietary" are misleading. This bleeds into active deception when invoked by other vendors who neither propose alternatives to solve developer challenges nor participate in shaping proposals in open collaboration.
Withholding engagement, then claiming that someone else is proceeding unilaterally — when your input would remove the stain — is a rhetorical Möbius strip. ↩︎
Git Worktrees appear to solve a set of challenges I encounter when working on this blog:
Maintenance branches for 11ty and other dependencies come and go with some frequency.
Writing new posts on parallel branches isn't fluid when switching frequently.
If I incidentally mix some build upgrades into a content PR, it can be difficult to extract and re-apply if developed in a single checkout.
Worktrees hold the promise of parallel working branch directories without separate backing checkouts. Tutorials I've found seemed to elide some critical steps, or required deeper Git knowledge than I suspect is common (I certainly didn't have it!).
After squinting at man pages for more time than I'd care to admit and making many mistakes along the way, here is a short recipe for setting up worktrees for a blog repo that, in theory, already exists at github.com/example/workit:
```shell
##
# Make a directory to hold all branches, including main
##
$ cd /projects/
$ mkdir workit
$ cd workit
$ pwd
# /projects/workit
```
```shell
##
# Next, make a "bare" checkout into `.bare/`
# (repo URL taken from the example above)
##
$ git clone --bare https://github.com/example/workit.git .bare
```
```shell
##
# Tell Git that's where the goodies are via a `.git`
# file that points to it
##
$ echo "gitdir: ./.bare" > .git
```
```shell
##
# *Update* (2021-09-18): OPTIONAL
#
# If your repo is going to make use of Git LFS, at
# this point you should stop and edit `.bare/config`
# so that the `[remote "origin"]` section reads as:
#
#   [remote "origin"]
#     url = firstname.lastname@example.org:example/workit.git
#     fetch = +refs/heads/*:refs/remotes/origin/*
#
# This ensures that new worktrees do not attempt to
# re-upload every resource on first push.
##
```
```shell
##
# Now we can use worktrees.
#
# Start by checking out main; will fetch repo history
# and may therefore be slow.
##
$ git worktree add main
# Preparing worktree (checking out 'main')
# ...
# Filtering content: 100% (1226/1226), 331.65 MiB | 1.17 MiB/s, done.
# HEAD is now at e74bc877 do stuff, also things
```
```shell
##
# From here on out, adding new branches will be fast
##
$ git worktree add test
# Preparing worktree (new branch 'test')
# Checking out files: 100% (2216/2216), done.
# HEAD is now at e74bc877 do stuff, also things
```
```shell
##
# Our directory structure should now look like
##
$ ls -la
# total 4
# drwxr-xr-x 1 slightlyoff eng  38 Jul  7 23:11 .
# drwxr-xr-x 1 slightlyoff eng 964 Jul  7 23:04 ..
# drwxr-xr-x 1 slightlyoff eng 144 Jul  7 23:05 .bare
# -rw-r--r-- 1 slightlyoff eng  16 Jul  7 23:05 .git
# drwxr-xr-x 1 slightlyoff eng 340 Jul  7 23:11 main
# drwxr-xr-x 1 slightlyoff eng 340 Jul  7 23:05 test
```
```shell
##
# We can work in `test` and `main` independently now
##
```
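To sanity-check the recipe end-to-end, and to show the `git worktree list` and `git worktree remove` housekeeping commands you'll want later, here's a self-contained rehearsal against a throwaway local repo (all paths and names are stand-ins, not the real blog repo):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# Stand-in upstream with a single commit, in place of GitHub.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=you@example.org -c user.name=you \
  commit -q --allow-empty -m "initial"

# The bare-checkout-plus-.git-file layout from the recipe above.
mkdir "$tmp/workit" && cd "$tmp/workit"
git clone -q --bare "$tmp/upstream" .bare
echo "gitdir: ./.bare" > .git

# Parallel branch directories: the default branch, plus a new one.
branch=$(git symbolic-ref --short HEAD)
git worktree add -q "$branch"
git worktree add -q -b test test

git worktree list   # .bare marked (bare), plus one line per worktree

# When a branch is done, drop its directory and metadata in one step.
git worktree remove test
```

`git worktree remove` deletes both the directory and the bookkeeping under `.bare/worktrees/`; if you ever delete a worktree directory by hand instead, `git worktree prune` cleans up the leftover metadata.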