The Phony Balance Benchmark

There’s a palpable tension in my shoulders as I tap this out — I know already that this post will draw cringe-worthy responses, name-calling, and all the rest. But on we plod.

A friend pointed out to me a peculiar feature of a conference Program Committee they were serving on: that it was part of the PC’s role to keep a lookout for strong minority/female speakers and encourage them to submit to the open CFP.

Soooooooo much has been written on these points, but in my (biased) view Frances covers it well:

Discrimination is a problem. I, personally, don’t give a monkey’s how many women or whoever are in our industry, as long as everyone who wanted to be here could and had free opportunity to do so, but sadly that is not the case and as such our community is not representative of all those that could be here if discrimination, from stereotyping roles to outright sexism/racism/agism/*ism, was not present. As such, we have a duty to address the problems that disable people’s opportunities.

I found myself reflecting on the PCs I’ve served on over the years and the many styles they’ve embodied. There’s a particular style to the O’Reilly-run conferences that is distinctive, largely for the scale involved. OSCON is hundreds of talks across dozens of rooms. Velocity EU isn’t that much smaller. The level of curation that each PC applies is also hugely variable: Steve Souders is incredibly hands-on and detail-oriented, whereas I have no idea who actually heads up the OSCON PC any more. It doesn’t seem to functionally matter. But in both cases, despite the huge differences in style and approach, the box into which ORA puts the PCs creates the illusion of responsibility for a balance which, as Frances accurately notes, isn’t a particularly local concern.

The pressure itself is unmistakable. The overt urges to over-select for some trait, the conversations that happen among PC members or in comments in the review tools (at least when it isn’t anonymized feedback against anonymized submissions, my favorite kind), the cultural need to be seen to be “doing something” at a point far, far removed from any actual leverage: it is nearly overwhelming. And the risks of “doing something” are enormous. Nobody at a tech conference wants to be a token of anything other than sparkling technical achievement.

The risks to conference organizers run counter, however: we’ve seen over and over again that the angry mob demonstrates little faculty with math. And that mob can sink a conference. The mob’s leaders don’t appear to acknowledge that some populations are structurally under-represented in computing (conflating it with generic “STEM” representation levels, even after correction), or that, this being the case, bludgeoning PCs and organizers into over-representation may present as many pitfalls as positives.

What to do?

The conclusion that smacked me upside the head today is that conference organizers must nip this in the bud: demonstrate action at the root cause to defuse the tension that would otherwise bubble into inappropriate selection pressure and math-challenged “advocacy”. The solution is the same in both cases: credibly commit to donating a percentage of revenue (not profit) to efforts that teach more girls and other under-represented minorities how to be engineers.

We can’t retroactively fix the selection pressures that got us to the current terrible state. But we can clear the path for the next generation, and we can set a better example of being decent humans to each other while our overly-white, overly-male generation of engineers recedes into anecdote. Conferences that commit, auditably and credibly, to doing this, along with setting codes of conduct and taking their responsibilities seriously in other regards, are doing the only things that can truly be argued to be for the good of our discipline. We must work to create the conditions of equal opportunity in our field, not merely affect the outward appearance of a discipline that has it.

It’s time to defang the phony arguments about “balance”, redirect our focus to the areas where we can have more effect, and judge the results by the wellbeing of our peers and those who seek to join their ranks.

Update: I elided any discussion of the effectiveness of the organizations I’m suggesting conferences should be supporting. That was intentional. There’s quite a lot of variability in effectiveness and, in most cases, not even much measurement. The process of bringing interested youngsters into CS is fraught with hurdles, and the right mental model is a “sales funnel” in which we lose potential conversions at many steps. Equal opportunity must be present at each step in the funnel for it to be truly achieved. I’d hope that conferences and others supporting the cause of equality in technology take a data-driven view of how to best deploy their support. But the failure of most charitable organizations to characterize their results is a long digression for another time.

For Jo

This was originally drafted as a response to [Jo Rabin’s blog post] discussing a meetup the W3C TAG hosted last month. For some reason, I was having difficulty adding comments there.

Hi Jo,

Thanks for the thoughtful commentary, and for the engaging chat at the meetup. Your post mirrors some of my own thinking about what the TAG can be good for.

I can’t speak for everyone on the TAG, but like me, most of the new folks who have joined have backgrounds as web developers. For the last several months, we’ve been clearing away old business from the agenda, explicitly to make way for new areas of work which mirror some of your ideas. At the meeting which the meetup was attached to, the TAG decided at the urging of the new members to become a functional API review board. The goal of that project is to encourage good layering practice across the platform and to help WGs specify better, more coherent, more idiomatic APIs. That’s a long-game thing to do, and you can see how far we’ve got to go in terms of making even the simplest idioms universal.

Repairing these sorts of issues is what the TAG, organizationally, is suited to do. Admittedly, it has not traditionally shown much urgency or facility with them. We’re changing that, but it takes some time. Hopefully you’ll see much more to cheer in the near future.

As for the overall constituency and product, I feel strongly that one of the things we’ve accomplished in our efforts to reform the TAG is that we’re re-focusing the work to emphasize the ACTUAL web: stuff that’s addressable with URLs and has a meaningful integration point with HTML. Too much time has been wasted worrying about problems we don’t have or, for good reasons, are unlikely to have. Again, I don’t speak for the TAG, but I promise to continue to fight for the pressing problems of webdevs.

The TAG can use this year to set an agenda, show positive progress, and deliver real changes in specs. Already we’re making progress with Promises, constructability, layering (how do the bits relate?), and extensibility. We also have a task to explain what’s important and why. That’s what has led to efforts like the Extensible Web Manifesto. You’ll note other TAG members among the signatories.

Along those lines, the TAG has also agreed to begin work on documents that will help spec authors understand how to approach the design process with the constraints of idiomaticness and layering in mind. That will take time, but it’s being informed by our direct, hands-on work with spec authors and WGs today.

So the lines are drawn: the TAG is refocusing, taking up the architectural issues that cause real people real harm in the web we actually have, and those who think we ought to be minding some other store aren’t very much going to like it. I’m OK with that, and I hope to have your support in making it happen.

Regards

Why JavaScript?

One strain of objection I often hear about the project of making the web more extensible is that it implies travelling further down the JavaScript rabbit hole. The arguments often include:

  • No other successful platform is so limited to a single language (in semantics if not syntax).
  • Better languages exist and, surely, could be hooked up to the HTML DOM instead.
  • JavaScript can’t really describe everything the web platform does, so it’s not the right tool for this archeological excavation.

These, incidentally, are mirrors to the fears that many have about the web becoming “too reliant” on JavaScript. But that’s a topic for another post.

Let’s examine these in turn.

The question of what languages a platform admits as first-class isn’t about the languages — not really, anyway. It’s about the conventions of the lowest observable level of abstraction. We have many languages today that cooperate at runtime on “classical” platforms (Windows/Linux/OSX) and the JVM because they collaborate on low-level machine operations. In the C-ish OSes, that’s about moving words around memory and using particular calling conventions for structuring inputs and outputs to kernel API thunks. Above that it’s all convention; see COM. Similarly, JVM languages interop at the level of JVM bytecode.

The operational semantics of these platforms are incredibly low level. The flagship languages and most of the runtime behavior of programs are built up from these very low-level contracts. Where interop happens at an API level, it’s usually about a large-ish standard library which obeys most of the same calling conventions (even if its implementation is radically different).

The web has it the other way around. It achieved broad compatibility by starting the bidding at extremely high-level semantics which, initially, had very little in the way of a contract beyond bugwards compatibility with whatever Netscape or MSFT shipped last. The coarse, interpret-it-as-you-go contract of HTML is one of the things that has made it such a hardy survivor. JavaScript was added later, and while it has lower-level operational semantics than HTML or CSS, that history of bolting JS on later has led to the current project of encouraging extensibility and layering; e.g., through Web Components. It’s also why those who cargo-cult their experiences of other platforms onto the web find themselves adrift. There just isn’t a shared lower level on which to interoperate.
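To make the layering idea concrete, here’s a minimal sketch (hypothetical, using the Custom Elements API and a made-up tag name) of describing a details-style disclosure behavior with ordinary JS and DOM calls rather than leaving it as browser magic:

    // Hypothetical sketch: a <details>-like element whose open/closed
    // behavior is expressed entirely in terms of ordinary DOM APIs.
    // The "x-details" tag name is invented for illustration.
    class XDetails extends HTMLElement {
      connectedCallback() {
        this._open = this.hasAttribute('open');
        this.addEventListener('click', () => {
          this._open = !this._open;
          this._render();
        });
        this._render();
      }

      _render() {
        // Hide everything after the first child (the "summary") when closed.
        Array.from(this.children).slice(1).forEach((child) => {
          child.hidden = !this._open;
        });
      }
    }
    customElements.define('x-details', XDetails);

Nothing in that sketch is privileged; an author-defined element and a built-in one can, in principle, be explained with the same vocabulary.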

That there aren’t other languages interfacing with the web successfully today is, in part, the natural outcome of a lack of shared lower-level idioms on which those languages could build up runtimes. It’s no accident that CoffeeScript, TypeScript, and even Dart find themselves running mostly on top of JS VMs. There’s no lower level in the platform to contemplate.

Which brings us to the second argument: there are other, better languages…surely we could just all agree on some bytecode format for the web that would allow everyone to get along…right?

This is possible, but implausible.

Implausibility is the only reason I pour time and effort into trying to improve JS and not something else. The Nash Equilibrium of the web gives rise to predictable plays: assuming that incentives for adopting low-level descriptions of JS (as any such bytecode would have to describe JS as well as everything else) are not evenly distributed, movement by any group that is not all of the competitors stymies compatibility, which after all is the whole goal. Any language that wishes to interoperate with JavaScript and the existing DOM is best off describing its runtime in terms of JavaScript, because the threat of a compatible bytecode never being adopted everywhere is credible. Compatibility strategies that straddle the fence can work, but it’s not a short (or clear) game to play. And introducing an abstraction that’s not fundamentally lower-level than JS (and/or does not fully subsume its semantics) is simply doomed. It would lack the power to even credibly hold out hope for a compatible future.

So, yes, there are better languages. Yes, you could put them in a browser. But unless you possess the power to put them in every browser, they don’t matter unless their operational semantics are 1:1 with JavaScript.

You can see how I ended up on TC39. It’s not that I think JS is great (it has well-documented flashes of genius, but so does any competitor worth mentioning) or even the perfect language for the web. But it is the *one language that every vendor is committed to shipping compatibly*. Evolving JS has the leverage to add/change the semantics of the platform in a way that no other strategy credibly can, IMO.

This leaves us with the last objection: JS doesn’t fully describe everything in the web platform, so why not recant and switch horses before it’s too late to turn back?

This misreads platforms vs. runtimes. All successful platforms have privileged APIs and behaviors. Successful, generative platforms merely reduce the surface area of this magic and ensure that privileged APIs “blend in” well — no funky calling conventions, no alien semantics, etc. Truly great platforms leave developers thinking they’re the only ship in the entire ocean and that it is a uniform depth the whole way across. It’s hard to think of a description more at odds with the web platform. Having acknowledged the necessity and ubiquity of privileged APIs, the framing is now right to ask: what can be done about it?

I’ve made it my work for the past 3+ years — along with a growing troupe of fellow thinkers — to answer this charge by reducing the scope and necessity of magic in everyday web development. To describe how something high-level in the platform works in terms of JS isn’t to deny some other language a fair shot or to stretch too far with JS; it’s simply to fill in the obvious gaps by asking the question “how are these bits connected?”
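As a small, hypothetical illustration of that gap-filling exercise, consider re-describing a higher-level convenience in terms of a lower-level primitive the platform already exposes. The getJSON helper below is invented for this post, not a spec’d API; the point is only that the connection between layers can be written down in JS:

    // Hypothetical sketch: a promise-returning convenience described in
    // terms of the lower-level XMLHttpRequest primitive. "getJSON" is a
    // made-up helper name, not part of any spec.
    function getJSON(url) {
      return new Promise((resolve, reject) => {
        const xhr = new XMLHttpRequest();
        xhr.open('GET', url);
        xhr.onload = () => {
          if (xhr.status >= 200 && xhr.status < 300) {
            resolve(JSON.parse(xhr.responseText));
          } else {
            reject(new Error('HTTP ' + xhr.status));
          }
        };
        xhr.onerror = () => reject(new Error('network error'));
        xhr.send();
      });
    }

    // Usage:
    // getJSON('/talks.json').then((talks) => console.log(talks.length));

When the answer to “how are these bits connected?” can be written down this way, the higher layer stops being magic and starts being explainable.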

Those connections and that archeological dig are what are most likely to turn up the sort of extensible, layered, compatible web platform that shares core semantics across languages. You can imagine other ways of doing it, but I don’t think you can get there from here. And the possible is all that matters.

That Old-Skool Smell, Part 2

The last post covered a few of the ways that the W3C isn’t effective at facilitating the discussions that lead to new standards work and, more generally, how trying to participate feels as though you are being transported back to a slower, more mediated era.

Which brings up a couple of things I’ve noticed across the W3C and which can likely be fixed more quickly. But some background first: due to W3C rules, it’s hard to schedule meetings (usually conference calls) quickly. You often need two weeks’ notice for a meeting to happen under a W3C-condoned WG, but canceling meetings is, as we all know, much easier. As a result, many groups set up weekly or bi-weekly meetings but, in practice, meet much less frequently. This lightens the burden for those participating heavily in one or two topics, but leaves occasional participants and those trying to engage from non-majority time zones at a serious disadvantage, because notice of meeting cancellation is near-universally handled via mailing list messages.

Yes, you read that right, the W3C uses mailing lists to manage meeting notices. In 2013. And there is no uniformity across groups.

Thanks to Peter Linss, the TAG is doing better: there’s an ical feed for all of our upcoming meetings that anyone can subscribe to. Yes, notices are still sent to the list, but you no longer need to dig through email to attempt to find out if the regularly-scheduled meeting is going to happen. Wonder of wonders, I can just look at my calendar…at least when it comes to the TAG.

That this is new says, to my mind, everything you need to know about how the current structure of the W3C’s spending on technical infrastructure and staff has gone unchallenged for far, far too long. The TAG is likewise starting to make a move from CVS to Git…and once again it finds itself at the vanguard of organizational practice. That there has been no organization-wide attempt to get WGs to move to more productive tools is, to me, an indicator of how many in positions of authority (if not power) in the WGs and on the Staff think things are going. That this state of affairs isn’t prima facie evidence of the need for urgent change and modernization says volumes. As usual, it’s not about the tools, but about the way the tools help the organization meet (or fail to meet) its goals. Right now, “better” looks like what nearly every member organization’s software teams are already doing. Modernizing in this environment will be a relief, not a burden.

It’s also sort of shocking to find that there are no dashboards. Anywhere. For anything — at least not ones that I can find.

No progress or status dashboard to give the organization a sense for what’s currently happening, no dashboard to show charter and publication milestones across groups, no visible indicators about which groups are highly active and which are fading away.

If the W3C has an optics problem — and I submit that it does — it’s not doing itself any favors by burying the evidence of its overall trajectory in arcane mailing lists.

There is, at base, a question raised by this and many other aspects of W3C practice: how can the organization be seen to be a good steward of member time, attention, and resources when it does not seem to pay much mind to the state of the workshop? I’d be delighted to see W3C staff liaisons for WGs treat making their products visible, easy to engage with, and efficient to contribute to as their primary objective. As it is, I don’t sense that’s their role. And that’s just not great customer service. I hope I’m wrong, or I hope that changes.