Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Reckoning: Part 1 — The Landscape

Instead of an omnibus mega-post, this investigation into JavaScript-first frontend culture and how it broke US public services has been released in four parts.


When you live in the shadow of a slow-moving crisis, it's natural to tell people about it. At volume. Doubly so when engineers can cheaply and easily address the root causes with minor tweaks. As things worsen, it's also hard not to build empathy for Cassandra.

In late 2011, I moved to London, where the Chrome team was beginning to build Google's first "real" browser for Android.[1] The system default Android Browser had, up until that point, been based on the system WebView, locking its rate of progress to the glacial pace of device replacement.[2]

In a world where the Nexus 4's 2GB of RAM and 32-bit, 4-core CPU were the high-end, the memory savings the Android Browser achieved by reusing WebView code mattered immensely.[3] Those limits presented enormous challenges for Chromium's safer (but memory-hungry) multi-process sandboxing. Android wasn't just spicy Linux; it was an entirely new ballgame.

Even then, it was clear the iPhone wasn't a fluke. Mobile was clearly on track to be the dominant form-factor, and we needed to adapt. Fast.[4]

Browsers made that turn, and by 2014, we had made enough progress to consider how the web could participate in mobile's app-based model. This work culminated in 2015's introduction of PWAs and Push Notifications.

Disturbing patterns emerged as we worked with folks building on this new platform. A surprisingly high fraction of them brought slow, desktop-oriented JavaScript frameworks with them to the mobile web. These modern, mobile-first projects neither needed nor could afford the extra bloat frameworks included to paper over the problems of legacy desktop browsers. Web developers needed to adapt the way browser developers had, but consistently failed to hit the mark.

By 2016, frontend practice had fully lapsed into wish-thinking. Alarms were pulled, claxons sounded, but nothing changed.

It could not have come at a worse time.

By then, explosive growth at the low end was baked into the cake. Billions of feature-phone users had begun to trade up. Different brands endlessly reproduced 2016's mid-tier Androids under a dizzying array of names. The only constants were the middling specs and ever-cheaper prices. Specs that would set punters back $300 in 2016, sold for only $100 a few years later, opening up the internet to hundreds of millions along the way. The battle between the web and apps as the dominant platform was well and truly on.

Geekbench 5 single-core scores for 'fastest iPhone', 'fastest Android', 'budget', and 'low-end' segments.

Nearly all growth in smartphone sales volume since the mid '10s occurred in the 'budget' and 'low-end' categories.

But the low-end revolution barely registered in web development circles. Frontenders poured JavaScript into the mobile web at the same rate as desktop, destroying any hope of a good experience for folks on a budget.

Median JavaScript bytes for Mobile and Desktop sites.
As this blog has covered at length, median device specs were largely stagnant between 2014 and 2022. Meanwhile, web developers made sure the "i" in "iPhone" stood for "inequality."

Prices at the high end accelerated, yet average selling prices remained stuck between $300 and $350. The only way the emergence of the $1K phone didn't bump the average up was the explosive growth at the low end. To keep the average selling price at $325, three $100 low-end phones needed to sell for each $1K iPhone, which is exactly what happened.
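A quick sketch of that weighted-average arithmetic, using only the figures quoted above:

```ts
// Blended average selling price: three $100 phones per $1,000 flagship.
const sales = [
  { price: 1000, units: 1 }, // flagship
  { price: 100, units: 3 }, // low-end
];
const revenue = sales.reduce((sum, s) => sum + s.price * s.units, 0); // 1300
const units = sales.reduce((sum, s) => sum + s.units, 0); // 4
console.log(revenue / units); // 325, right at the observed ASP
```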

And yet, the march of JavaScript-first, framework-centric dogma continued, no matter how incompatible it was with the new reality. Predictably, tools sold on the promise they would deliver "app-like experiences" did anything but.[5]

Billions of cheap phones that always have up-to-date browsers found their CPUs and networks clogged with bloated scripts designed to work around platform warts they don't have.

Environmental Factors

In 2019, Code for America published the first national-level survey of online access to benefits programs, which are built and operated by each state. The follow-up 2023 study provides important new data on the spread of digital access to benefits services.

One valuable artefact from CFA's 2019 research is a post by Dustin Palmer, documenting the missed opportunity among many online benefits portals to design for the coming mobile-first reality that was already the status quo in the rest of the world.

Worldwide mobile browsing surpassed desktop browsing sometime in 2016.

US browsing exhibited the same trend, slightly delayed, owing to comparatively high desktop and laptop ownership vs emerging markets.

Moving these systems online only reduces administrative burdens in a contingent sense; if portals fail to work well on phones, smartphone-dependent folks are predictably excluded:

28% of US adults in households with less than $30K/yr income are smartphone-dependent, falling to only 19% for families making 30-70K/yr.

But poor design isn't the only potential administrative burden for smartphone-dependent users.[6]

The networks and devices folks use to access public support aren't latest-generation or top-of-the-line. They're squarely in the tail of the device price, age, and network performance distributions. Those are the overlapping conditions where the consistently falsified assumptions of frontend's lost decade have played out disastrously.

California is a rich mix of urban and hard-to-reach rural areas. Some of the poorest residents are in the least connected areas, ensuring they will struggle to use bloated sites.

It would be tragic if public sector services adopted the JavaScript-heavy stacks that frontend influencers have popularised. Framework-based, "full-stack" development is now the default in Silicon Valley, but should obviously be avoided in universal services. Unwieldy and expensive stacks that have caused agony in the commercial context could never be introduced to the public sector with any hope of success.

Right?

Next: Object Lesson: a look at California's digital benefits services.

Thanks to Marco Rogers and Frances Berriman for their encouragement in making this piece a series and for their thoughtful feedback on drafts.


  1. A "real browser", as the Chrome team understood the term circa 2012, included:

    • Chromium's memory-hungry multi-process architecture which dramatically improved security and stability
    • Winning JavaScript performance using our own V8 engine
    • The Chromium network stack, including support for SPDY and experiments like WebRTC
    • Updates that were not locked to OS versions
    ↩︎
  2. Of course, the Chrome team had wanted to build a proper mobile browser sooner, but Android was a paranoid fiefdom separate from Google's engineering culture and systems. And the Android team were intensely suspicious of the web, verging on outright hostility at times.

    But internal Google teams kept hitting the limits of what the Android Browser could do, including Search. And when Search says "jump", the only workable response is "how high?"

    WebKit-based though it was (as was Chrome), OS-locked features presented a familiar problem, one the Chrome team had solved with auto-update and Chrome Frame. A deal was eventually struck, and when Chrome for Android was delivered, the system WebView also became a Chromium-based, multi-process, sandboxed, auto-updating system. For most, that was job done.

    This made a certain sort of sense. From the perspective of Google's upper management, Android's role was to put a search box in front of everyone. If letting Andy et al. play around with an unproven Java-based app model was the price, OK. If that didn't work, the web would still be there. If it did, then Google could go from accepting someone else's platform to having one it owned outright.[7] Win/win.

    Anyone trying to suggest a more web-friendly path for Android got shut down hard. The Android team always had legitimate system health concerns that they could use as cudgels, and they wielded them with abandon.

    The launch of PWAs in 2015 was an outcome Android saw coming a mile away and worked hard to prevent. But that's a story for another day. ↩︎

  3. Android devices were already being spec'd with more RAM than contemporary iPhones, thanks to Dalvik's chonkiness. This, in turn, forced many OEMs to cut corners in other areas, including slower CPUs.

    This effect has been a silent looming factor in the past decade's divergence in CPU performance between top-end Android and iPhones. Not only did Android OEMs have to pay a distinct profit margin to Qualcomm for their chips, but they also had to dip into the Bill Of Materials (BOM) budget to afford more memory to keep things working well, leaving less for the CPU.

    Conversely, Apple's relative skimpiness on memory and burning desire to keep BOM costs low for parts it doesn't manufacture are reasons to oppose browser engine choice. If real browsers were allowed, end users might expect phones with decent specs. Apple keeps that in check, in part, by maximising code page reuse across browsers and apps that are forced to use the system WebView.

    That might dig into margins ever so slightly, and we can't have that, can we? ↩︎

  4. It took browsers that were originally architected in a desktop-only world many years to digest the radically different hardware mobile brought. Not only were CPU speeds and memory budgets cut dramatically — never mind the need to port to ARM, including JS engine JITs that were heavily optimised for x86 — but networks suddenly became intermittent and variable-latency.

    There were also upsides. Where GPUs had been rare on the desktop, every phone had a GPU. Mobile CPUs were slow enough that what had felt like a leisurely walk away from CPU-based rendering on desktop became an absolute necessity on phones. Similar stories played out across input devices, sensors, and storage.

    It's no exaggeration to say that the transition to mobile force-evolved browsers in a compressed time frame. If only websites had made the same transition. ↩︎

  5. Let's take a minute to unpack what the JavaScript framework claims of "app-like experiences" were meant to convey.

    These were code words for more responsive UI, building on the Ajax momentum of the mid-noughties. Many boosters claimed this explicitly and built popular tools to support these specific architectures.

    As we wander through the burning wreckage of public services that adopted these technologies, remember one thing: they were supposed to make UIs better. ↩︎

  6. When confronted with nearly unusable results from tools sold on the idea that they make sites easier, better, and faster to use, many technologists offer variants of "but at least it's online!" and "it's fast enough for most people". The most insipid version implies causality, constructing a strawman but-for defense: "but these sites might not have even been built without these frameworks."[9]

    These points can be both true and immaterial at the same time. It isn't necessary for poor performance to entirely exclude folks at the margins for it to be a significant disincentive to accessing services.

    We know this because it has been proven and continually reconfirmed in commercial and lab settings. ↩︎

  7. The web is unattractive to every Big Tech company in a hurry, even the ones that owe their existence to it.

    The web's joint custody arrangement rankles. The standards process is inscrutable and frustrating to PMs and engineering managers who have only ever had to build technology inside one company's walls. Playing on hard mode is unappealing to high-achievers who are used to running up the score.

    And then there's the technical prejudice. The web's languages offend "serious" computer scientists. In the bullshit hierarchy of programming language snobbery, everyone looks down on JavaScript, HTML, and CSS (in that order).

    The web's overwhelmingly successful languages present a paradox: for the comfort of the snob, they must simultaneously be unserious toys beneath the elevated palates of "generalists" and also Gordian Knots too hard for anyone to possibly wield effectively. This dual posture justifies treating frontend as a less-than discipline, and browsers as anything but a serious application platform.

    This isn't universal, but it is common, particularly in Google's C++/Java-pilled upper ranks.[8] Endless budgetary space for projects like the Android Framework, Dart, and Flutter was the result. ↩︎

  8. Someday I'll write up the tale of how Google so thoroughly devalued frontend work that it couldn't even retain the unbelievably good web folks it hired in the mid-'00s. Their inevitable departures after years of being condescended to went hand-in-hand with an inability to hire replacements.

    Suffice to say, by the mid '10s, things were bad. So bad an exec finally noticed. This created a bit of space to fix it. A team of volunteers answered the call, and for more than a year we met to rework recruiting processes and collateral, interview loop structures, interview questions, and promotion ladder criteria.

    The hope was that folks who work in the cramped confines of someone else's computer could finally be recognised for their achievements. And for a few years, Google's frontends got markedly better.

    I'm told the mean has reasserted itself. Prejudice is an insidious thing. ↩︎

  9. The but-for defense for underperforming frontend frameworks requires us to ignore both the 20 years of web development practice that preceded these tools and the higher OpEx and CapEx costs associated with React-based stacks.

    Managers sometimes offer a hireability argument, suggesting they need to adopt these universally more expensive and harder-to-operate tools because they need to be able to hire. This was always nonsense, but never more so than in 2024. Some of the best, most talented frontenders I know are looking for work and would leap at the chance to do good things in an organisation that puts user experience first.

    Others sometimes offer the idea that it would be too hard to retrain their teams. Often, these are engineering groups comprised of folks who recently retrained from other stacks to the new React hotness or who graduated boot camps armed only with these tools. The idea that either cohort cannot learn anything else is as inane as it is self-limiting.

    Frontenders can learn any framework and are constantly retraining just to stay on the treadmill. The idea that there are savings to be had in "following the herd" into Next.js or similar JS-first development cul-de-sacs has to meet an evidentiary burden that I have rarely seen teams clear.

    Managers who want to avoid these messes have options.

    First, they can crib Kellan's tests for new technologies. Extra points for digesting Glyph's thoughts on "innovation tokens."

    Next, they should identify the critical user journeys in their products. Technology choices are always situated in product constraints, but until the critical user journeys are enunciated, the selection of any specific architecture is likely to be wrong.

    Lastly, they should always run bakeoffs. Once critical user journeys are outlined and agreed, bakeoffs can provide teams with essential data about how different technology options will perform under those conditions. For frontend technologies, that means evaluating them under representative market conditions.
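    As a concrete sketch, a bakeoff harness can be tiny. The following uses Puppeteer (an assumption, not a prescription) to time one critical journey under a budget-device profile; the URL and throttling numbers are hypothetical stand-ins for real market data:

    ```ts
    // Minimal bakeoff harness sketch; assumes `puppeteer` is installed.
    import puppeteer from "puppeteer";

    async function measure(url: string): Promise<number> {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.emulateCPUThrottling(4); // ~4x slower than a dev machine
      await page.emulateNetworkConditions({
        download: (400 * 1024) / 8, // bytes/sec, roughly 400kbit/s
        upload: (400 * 1024) / 8,
        latency: 400, // added round-trip ms
      });
      const start = Date.now();
      await page.goto(url, { waitUntil: "networkidle0" });
      const elapsed = Date.now() - start;
      await browser.close();
      return elapsed;
    }

    // Run the same critical user journey against each candidate prototype.
    measure("https://prototype-a.example/signup").then((ms) =>
      console.log(`prototype-a signup: ${ms}ms`),
    );
    ```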

    And yes, there's almost always time to do several small prototypes. It's a damn sight cheaper than the months (or years) of painful remediation work. I'm sick to death of having to hand-hold teams whose products are suffocating under unusably large piles of cruft, slowly nursing their code-bases back to something like health as their management belatedly learns the value of knowing their systems deeply.

    Managers who do honest, user-focused bakeoffs for their frontend choices can avoid adding their teams to the dozens I've consulted with who adopted extremely popular, fundamentally inappropriate technologies that have had disastrous effects on their businesses and team velocity. Discarding popular stacks from consideration through evidence isn't a career risk; it's literally the reason to hire engineers and engineering leaders in the first place. ↩︎

Misfire

We're in a bad place when even the W3C TAG falls for Apple's privacy schtick.

The W3C Technical Architecture Group[1] is out with a blog post and an updated Finding regarding Google's recent announcement that it will not be imminently removing third-party cookies.

The current TAG members are competent technologists who have a long history of nuanced advice that looks past the shouting to get at the technical bedrock of complex situations. The TAG also plays a uniquely helpful role in boiling down the guidance it issues into actionable principles that developers can easily follow.

All of which makes these pronouncements seem like weak tea. To grok why, we need to walk through the threat model, look at the technology options, and try to understand the limits of technical interventions.

But before that, I should stipulate my personal position on third-party cookies: they aren't great!

They should be removed from browsers when replacements are good and ready, and Google's climbdown isn't helpful. That said, we have seen nothing of the hinted-at alternatives, so the jury's out on what the impact will be in practice.[2]

So why am I disappointed in the TAG, given that my position is essentially what they wrote? Because it failed to acknowledge the limited and contingent upside of removing third-party cookies, or the thorny issues we're left with after they're gone.

Unmasking The Problem

So, what do third-party cookies do? And how do they relate to the privacy threat model?

Like a lot of web technology, third-party cookies have both positive and negative uses. Owing to a historical lack of platform-level identity APIs, they form the backbone of nearly every large Single Sign-On (SSO) system. Thankfully, replacements have been developed and are being iterated on.
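One such replacement is the Federated Credential Management (FedCM) API. A minimal relying-party sketch might look like this; the provider config URL and client ID are hypothetical:

```ts
// FedCM sign-in sketch. The IdP URL and client ID are made up, and the
// `identity` member may be absent from older TypeScript DOM typings,
// hence the type assertion.
async function signIn(): Promise<Credential | null> {
  return navigator.credentials.get({
    identity: {
      providers: [
        {
          configURL: "https://idp.example/fedcm.json",
          clientId: "rp-client-id",
        },
      ],
    },
  } as CredentialRequestOptions);
}
```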

Unfortunately, some browsers have unilaterally removed them without developing such replacements, disrupting sign-in flows across the web, harming users and pushing businesses toward native mobile apps. That's bad, as native apps face no limits on communicating with third parties and are generally worse for tracking. They're not even subject to pro-user interventions like browser extensions. The TAG should have called out this aspect of the current debate in its Finding, encouraging vendors to adopt APIs that will make the transition smoother.

The creepy uses of third-party cookies relate to advertising. Third-party cookies provide ad networks and data brokers the ability to silently reidentify users as they browse the web. Some build "shadow profiles", and most target ads based on sites users visit. This targeting is at the core of the debate around third-party cookies.

Adtech companies like to claim targeting based on these dossiers allows them to put ads in front of users most likely to buy, reducing wasted ad spending. The industry even has a shorthand: "right people, right time, right place."

Despite the bold claims and a consensus that "targeting works," there's reason to believe pervasive surveillance doesn't deliver, and that even when it does, it isn't more effective.

Assuming the social utility of targeted ads is low — likely much lower than adtech firms claim — shouldn't we support the TAG's finding? Sadly, no. The TAG missed a critical opportunity to call for legislative fixes to the technically unfixable problems it failed to enumerate.

Privacy isn't just about collection, it's about correlation across time. Adtech can and will migrate to the server-side, meaning publishers will become active participants in tracking, funneling data back to ad networks directly from their own logs. Targeting pipelines will still work, with the largest adtech vendors consolidating market share in the process.
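A sketch of what that server-side funnel can look like; every name and URL here is hypothetical:

```ts
// A first-party endpoint that relays visits to an ad network from the
// publisher's own servers. The browser never talks to the tracker
// directly, so cookie controls never come into play. Node 18+ assumed
// for the global fetch.
import { createServer } from "node:http";

createServer(async (req, res) => {
  const event = {
    path: req.url,
    // A durable identifier harvested via signup or checkout.
    emailHash: req.headers["x-account-hash"] ?? null,
    ts: Date.now(),
  };
  await fetch("https://collect.adnetwork.example/events", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(event),
  });
  res.end("ok");
}).listen(8080);
```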

This is why "give us your email address for 30% discount" popups and account signup forms are suddenly everywhere. Email addresses are stable, long-lived reidentifiers. Overt mechanisms like this are already replacing third-party cookies. Make no mistake: post-removal, tracking will continue for as long as reidentification has perceived positive economic value. The only way to change that equation is legislation; anything else is a band-aid.

Pulling tracking out of the shadows is good, but a limited and contingent good. Users have a terrible time recognising and mitigating risk on the multi-month time-scales where privacy invasions play out. There's virtually no way to control or predict where collected data will end up in most jurisdictions, and long-term collection gets cheaper by the day.

Once correlates are established, or "consent" is given to process data in ways that facilitate unmasking, re-identification becomes trivial. It only takes giving a phone number to one delivery company, or an email address to one e-commerce site to suddenly light up a shadow profile, linking a vast amount of previously un-attributed browsing to a user. Clearing caches can reset things for a little while, but any tracking vendor that can observe a large proportion of browsing will eventually be able to join things back up.

Removal of third-party cookies can temporarily disrupt this reidentification while collection funnels are rebuilt to use "first party" data, but that's not going to improve the situation over the long haul. The problem isn't just what's being collected now, it's the ocean of dormant data that was previously slurped up.[3] The only way to avoid pervasive collection and reidentification over the long term is to change the economics of correlation.

The TAG surely understands the only way to make that happen is for more jurisdictions to pass privacy laws worth a damn. It should say so.

Fire And Movement

The goal of tracking is to pick users out of crowds, or at least bucket them into small unique clusters. As I explained on Mastodon, this boils down to bits of entropy, and those bits are everywhere. From screen resolution and pixel density, to the intrinsic properties of the networks, to extensions, to language and accessibility settings that folks rely on to make browsing liveable. Every attribute that is even subtly different can be a building block for silent reidentification; A.K.A., "fingerprinting."[4]
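To make that concrete, here's a toy model; every per-attribute bit count below is an invented, illustrative assumption, not a measurement:

```ts
// Toy model of fingerprinting entropy. Real attributes are partially
// correlated, but the additive intuition holds: small leaks compound.
const bitsPerAttribute: Record<string, number> = {
  screenAndPixelDensity: 5, // all values invented for illustration
  timezoneAndLanguage: 4,
  installedFonts: 7,
  extensionBehaviours: 5,
  accessibilitySettings: 3,
  networkCharacteristics: 4,
  gpuAndCanvasQuirks: 5,
};
const totalBits = Object.values(bitsPerAttribute).reduce((a, b) => a + b, 0);
// 2^33 ≈ 8.6 billion: ~33 bits is enough to single out anyone on Earth.
console.log(`${totalBits} bits narrows the crowd to 1 in ${2 ** totalBits}`);
```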

In jurisdictions where laws allow collected data to remain the property of the collector, the risks posed by data-at-rest are only slightly attenuated by narrowing the funnel through which collection takes place.

It's possible to imagine computing that isn't fingerprintable, but that isn't what anyone is selling. For complex reasons, even the most cautious use of commodity computers is likely to be uniquely identifiable with enough time. This means that the question to answer isn't "do we think tracking is bad?", it's "given that we can't technically eliminate it, how can we rebuild privacy?". The TAG's new Finding doesn't wrestle with that question, doing the community a disservice in the process.

The most third-party cookie removal can deliver is temporary disruption. That disruption will affect distasteful collectors, costing them money in the short run. Many think of this as a win, I suspect because they fail to think through the longer-term consequences. The predictable effect will be a recalibration and entrenchment of surveillance methods. It will not put the panopticon out of business; only laws can do that.

For a preview of what this will look like, think back on Apple's "App Tracking Transparency" kayfabe, which did not visibly dent Facebook's long-term profits.

So this is not a solution to privacy, it's fire-and-movement tactics against corporate enemies. Because of the deep technical challenges in defeating fingerprinting[4:1], even the most outspoken vendors have given up, introducing "nutrition labels" to shift responsibility for privacy onto consumers.

If the best vertically-integrated native ecosystems can do is to shift blame, the TAG should call out posturing about ineffective changes and push for real solutions. Vendors should loudly lobby for stronger laws that can truly change the game and the TAG should join those calls. The TAG should also advocate for the web, rather than playing into technically ungrounded fearmongering by folks trying to lock users into proprietary native apps whilst simultaneously depriving users of more private browsers.

Finding A Way Forward

The most generous take I can muster is that the TAG's work is half-done. Calling on vendors to drop third-party cookies has the virtue of being technical and actionable, properties I believe all TAG writing should embody. But having looked deeply at the situation, the TAG should have also called on browser vendors to support further reform along several axes — particularly vendors that also make native OSes.

First, if the TAG is serious about preventing tracking and improving the web ecosystem, it should call on all OS vendors to prohibit the use of "in-app browsers" when displaying third-party content within native apps.

It is not sufficient to prevent JavaScript injection because the largest native apps can simply convince the sites to include their scripts directly. For browser-based tracking attenuation to be effective, these side-doors must be closed. Firms grandstanding about browser privacy features without ensuring users can reliably enjoy the protections of their browser need to do better. The TAG is uniquely positioned to call for an end to this erosion of privacy and the web ecosystem.

Next, the TAG should have outlined the limits of technical approaches to attenuating data collection. It should also call on browser vendors to adopt scale-based interventions (rather than absolutism) in mitigating high-entropy API use.[5] The TAG should go first in moving past debates that don't acknowledge the impossibility of removing all reidentification, and encourage vendors to do the same. The privacy puzzle will not be solved by the purchase of a new phone, and the TAG should be clarion about what will end our privacy nightmare: privacy laws worth a damn.

Lastly, the TAG should highlight discrepancies between privacy marketing and the failure of vendors to push for strong privacy laws and enforcement. Because the threat model of privacy intrusion renders solely technical interventions ineffective on long timeframes, this is the rare case in which the TAG should push past providing technical advice.

The TAG's role is to explain complex things with rigor and signpost credible ways forward. It has not done that yet regarding third-party cookies, but it's not too late.


  1. Praise, as well as concern, in this post is specific to today's TAG, not the output of the group while I served. I surely got a lot of things wrong, and the current TAG is providing a lot of value. My hope here is that it can extend this good work by expanding its new Finding. ↩︎

  2. Also, James Roswell can go suck eggs. ↩︎

  3. It's neither here nor there, but the TAG also failed in these posts to encourage users and developers to move their use of digital technology into real browsers and out of native apps which invasively track and fingerprint users to a degree web adtech vendors only fantasize about.

    A balanced finding would call on Apple to stop stonewalling the technologies needed to bring users to safer waters, including PWA installation prompts. ↩︎

  4. As part of the drafting of the 2015 finding on Unsanctioned Web Tracking, the then-TAG (myself included) spent a great deal of time working through the details of potential fingerprinting vectors. What we came to realise was that only the Tor Browser had done the work to credibly analyse fingerprinting vectors and produce a coherent threat model. To the best of my knowledge, that remains true today.

    Other vendors continue to publish gussied-up marketing documents and stroppy blog posts that purport to cover the same ground, but consistently fail to do so. It's truly objectionable that those same vendors also prevent users from choosing disciplined privacy-focused browsers.

    To understand the difference, we can do a small thought experiment, enumerating what would be necessary to sand off currently-identifiable attributes of individual users. Because only 31 or 32 bits are needed to uniquely identify anybody (often less), we want a high safety factor. This means bundling users into very large crowds by removing distinct observable properties. To sand off variations between users, a truly private browser might:

    • Run the entire browser in a VM in order to:
      • Cap the number of CPU cores, frequency, and centralise on a single instruction set (e.g., emulating ARM when running on x86). Will likely result in a 2-5x slowdown.
      • Ensure (high) fixed latency for all disk access.
      • Set a uniform (low) cap on total memory.
    • Disable hardware acceleration for all graphics and media.
    • Disable JIT. Will slow JavaScript by 3-10x.
    • Only allow a fixed set of fonts, screen sizes, pixel densities, gamuts, and refresh rates; no more resizing browsers with a mouse. The web will be pixelated and drab, and animations will feel choppy.
    • Remove most accessibility settings.
    • Remove the ability to install extensions.
    • Eliminate direct typing and touch-based interactions, as those can leak timing information that's unique.
    • Run all traffic through Tor or similarly high-latency VPN egress nodes.
    • Disable all reidentifying APIs (no more web-based video conferencing!)

    Only the Tor project is shipping a browser anything like this today, and it's how you can tell that most of what passes for "privacy" features in other browsers are anti-annoyance and anti-creep-factor interventions; they matter, but won't end the digital panopticon. ↩︎ ↩︎

  5. It's not a problem that sign-in flows need third-party cookies today, but it is a problem that they're used for pervasive tracking.

    Likewise, the privacy problems inherent in email collection or camera access or filesystem folders aren't absolute, they're related to scale of use. There are important use-cases that demand these features, and computers aren't going to stop supporting them. This means the debate is only whether or not users can use the web to meet those needs. Folks who push an absolutist line are, in effect, working against the web's success. This is anti-user, as the alternatives are generally much more invasive native apps.

    Privacy problems arise at scale and across time. Browsers should be doing more to discourage high-quality reidentification across cache clearing and in ways that escalate with risk. The first site you grant camera access isn't the issue; it's the 10th. Similarly, speed bumps should be put in place for use of reidentifying APIs on sites across cache clearing where possible.

    The TAG can be instrumental in calling for this sort of change in approach. ↩︎
