Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

JavaScript UXO Removal, Updated

JavaScript is a lovable language. Real closures, first class functions, incredibly dynamic behavior...it's a joy when you know it well.

Less experienced JS programmers often feel as though they're waltzing in a minefield, though. At many steps along the path to JS enlightenment everything feels like it's breaking down around you. The lack of block lexical scope sends you on pointless errands, the various OO patterns give you fits as you try to do anything but what's in the examples, and before you know it even the trusty "dot" operator starts looking suspect. What do you mean that this doesn't point to the object I got the function from?

Repairs for some of the others are on the way in ES6 so I want to focus on the badly botched situation regarding "promiscuous this", in particular how ES5 has done us few favors and why we're slated to continue the cavalcade of failure should parts of the language sprout auto-binding.

Here's the problem in 5 lines:

var obj = {
  _counter: 0,
  inc: function() { return ++this._counter; },
};
node.addEventListener("click", obj.inc);

See the issue? obj.inc results in a reference to the inc method without any handle or reference to its original context (obj). This is asymmetric with the behavior we see when we directly call methods since in that case the dot operator populates the ThisBinding scope. We can see it clearly when we assign to intermediate variables:

var _counter = 0; // global "_counter", we'll see why later
var inc = obj.inc;
obj.inc(); // 1
obj.inc(); // 2
inc(); // 1

Reams have been written on the topic, and ES5's belated and weak answer is to directly transcribe what JS libraries have been doing by providing a bind() method that returns a new function object that carries the correct ThisBinding. Notably, you can't un-bind a bound function object, nor can you treat a bound function as equal to its unbound ancestor. This, then, is just an API formalism around the pattern of using closures to carry the ThisBinding object around:

var bind = function(obj, name) {
  return function() {
     return obj[name].apply(obj, arguments);
  };
};
// Event handling now looks like:
//   node.addEventListener("click", bind(obj, "inc"));

var inc = bind(obj, "inc");
obj.inc(); // 1
obj.inc(); // 2
inc(); // 3

inc === obj.inc; // false

ES5's syntax is little better, but it's built in and can potentially perform much better:

var inc = obj.inc.bind(obj);
// In a handler:
node.addEventListener("click", obj.inc.bind(obj));

Syntax aside, we didn't actually solve the big problems since unbound functions can still exist, meaning we still have to explain to developers that they need to think of the dot operator as doing different things based on what characters happen to come after the thing on the right-hand side of the dot. Worse, when you're handed a function, it can either be strongly bound (i.e., it breaks the .call(otherThis, ...) convention) or unbound -- potentially executing with the "wrong" ThisBinding. And there's no way to tell which is which.
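
To see just how indistinguishable they are, here's a quick illustration (the bound/unbound/other names are mine, continuing with obj from above):

var bound = obj.inc.bind(obj);
var unbound = obj.inc;
// Nothing marks one as pre-bound; both are plain function objects:
typeof bound == typeof unbound; // true
// ...but they treat an explicit receiver completely differently:
var other = { _counter: 100 };
bound.call(other);   // bumps obj._counter -- `other` is silently ignored
unbound.call(other); // 101 -- .call() behaves as advertised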

So what would be better?

It occurs to me that what we need isn't automatic binding for some methods, syntax for easier binding, or even automatic binding for all methods. No, what we really want is weak binding: the ability to retrieve a function object through the dot operator and have it do the right thing until you say otherwise.

We can think of weak binding as adding an annotation about the source object to a reference. Each de-reference via [[Get]] creates a new weak binding which is then used when a function is called. This has the side effect of describing current [[Get]] behavior when calling methods (since the de-reference would carry the binding and execution can be described separately). As a bonus, this gives us the re-bindability that JS seems to imply should be possible thanks to the .call(otherThis) contract:

var o = {
  log: function(){
    console.log(this.msg);
  },
  msg: "hello, world!",
};

var o2 = { msg: "howdy, pardner!", };

o.log(); // "hello, world!"
o2.log = o.log;
o2.log(); // "howdy, pardner!" -- retrieving log through o2 replaced the weak binding
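
You can't express true weak binding in today's language (that's rather the point), but as a rough sketch of the "each [[Get]] mints a binding" idea, an ES5 getter gets part of the way there. The helper name is mine, and note the caveat in the comments: bind() is strong, so this can't model the .call(otherThis) or reassignment re-binding described above.

function defineWeaklyBound(obj, name, func) {
  Object.defineProperty(obj, name, {
    get: function() {
      // Every [[Get]] hands back a function carrying the current receiver.
      return func.bind(this);
    },
    configurable: true,
  });
}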

But won't this break the entire interwebs!?!?

Maybe not. Hear me out.

We've already seen our pathological case in earlier examples. Here's the node listener use-case again, this time showing us exactly what context is being used for unbound methods:

document.body.addEventListener("click", function(evt) {
  console.log(this == document.body); // true in Chrome and FF today
}, true);

We can think of event dispatch as calling the anonymous function with an explicit ThisBinding, using something like listener.call(document.body, evt); as the call signature for each registered handler in the capture phase. Now, it's pretty clear that this is whack. DOM dispatch changing the ThisBinding of passed listeners is an incredibly strange side effect, and it means that even if we add weak binding, this context doesn't change. At this point, though, we can at least talk about the DOM API bug in the context of sane, consistent language behavior. The fact that event listeners won't preserve weak binding and will continue to require something like the following is an issue that can be wrestled down in one working group:

node.addEventListener("click",
       (function(evt) { ... }).bind(otherThis),
       true);

The only case I can think of where weak binding would change program semantics is when unbound method calls do intentional work on this in the global scope. We saw this contrived example before, and as you can see, it sure looks like a bug, no?

var _counter = 0; // a.k.a.: "this._counter", a.k.a.: "window._counter"
var obj = {
  _counter: 0,
  inc: function() { return ++this._counter; },
};
var inc = obj.inc;
obj.inc(); // 1
obj.inc(); // 2
console.log(obj._counter, this._counter); // 2, 0
inc(); // 1
inc(); // 2
console.log(obj._counter, this._counter); // 2, 2

If this turns out to be a problem in real code, we can just hide weak bindings behind some use directive.

Weak binding now gives us a middle ground: functions that are passed to non-pathological callback systems "do the right thing", most functions that would otherwise need to have been bound explicitly can Just Work (and can be re-bound to boot), and the wonky [[Get]] vs. [[Call]] behavior of the dot operator is resolved in a tidy way. One more bit of unexploded ordnance removed.

So the question now is: why won't this work? TC39 members, what's to keep us from doing this in ES6?

Update: Mark Miller flags what looks to be a critical flaw:

var obj = {
  callbacks: [],
  register: function(func) {
    this.callbacks.push(func);
  },
  fire: function() {
    for (var i = 0; i < this.callbacks.length; i++) {
      this.callbacks[i]();
    }
  },
};
obj.register(foo.bar); // for some foo with a bar method
obj.fire(); // Does the wrong thing!

The problem here is the call into each of the callback functions, which still executes with the wrong object as its receiver. Legacy code would keep doing what it always did, but that's just as broken as it was, and we'd still need new syntax to make things safe. Ugh.
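
The wrinkle is that element access is a [[Get]] too, and you can see the receiver it hands out even in today's JS:

var cbs = [ function() { console.log(this === cbs); } ];
cbs[0](); // logs "true" -- the array itself becomes the ThisBinding
// So under weak binding, this.callbacks[i]() re-binds the stored function
// to the callbacks array, clobbering the binding it carried from foo.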

Twitter By The Back Of A Napkin

In which you talk me into finally getting a Twitter account by explaining to me why I don't understand Twitter.

I'm a Twitter luddite for perhaps the most pedantic of excuses: for years I've scratched my head at why what seemed like a solved problem has eluded Twitter in its search for scale with stability. A new presentation by Twitter engineer Raffi Krikorian deepens my confusion. First the numbers:

Avg. inbound tweets/second:  800
Max. inbound tweets/second:  3,283
Tweet size:                  200 bytes
Registered users:            150M
Max. fanout:                 6.1M followers

Social networks like Twitter are just that -- networks -- and to understand Twitter as a network, we want to know how much traffic the Twitter "backbone" is routing. Knowing that Twitter takes in 800 messages per second doesn't tell us that directly, but an estimate is possible. From a talk last year by another Twitter engineer, we know that Twitter users have fewer than 200 followers on average. That means that despite the eye-popping 6.1M follower (in networking terms, "fanout") count for Lady GaGa, we should expect most tweets to generate significantly less load. Dealing just in averages, we should expect baseline load to be roughly 100K delivery attempts per second. Peak traffic is likely less than 1.5M delivery attempts per second (4K senders w/ double the average connectedness, plus some padding for high-traffic outliers).
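
A quick script makes the arithmetic explicit. Treat it as a sketch: the ~125 average fanout is my assumption, picked to reproduce the "roughly 100K" figure.

var avgTweetsPerSec  = 800;   // inbound, average
var peakTweetsPerSec = 4000;  // inbound max (3,283), rounded up
var avgFanout        = 125;   // "fewer than 200 followers on average"

var baseline = avgTweetsPerSec * avgFanout;        // 100,000 deliveries/s
var peak     = peakTweetsPerSec * (avgFanout * 2); // double connectedness
console.log(baseline, peak); // 100000 1000000 -- pad to get "< 1.5M/s"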

Knowing that peak loads are 4x average loads is useful, and we can provision based on that. We also know that Twitter doesn't guarantee message order and has no SLA for delivery, which means we can deal with the Lady GaGa case by smearing delivery for users with huge fanout, ordering by something smart (most active users get messages first?). Heck, Twitter doesn't even guarantee delivery, so we could even go best-effort when the system is congested, taking total load into account for the smear size of large senders or recovering out of band later by having listeners query a DB. So far our requirements are looking pretty sweet. Twitter's constraints significantly ease the engineering challenge for the core routing and delivery function (the thing that should never be down).

What about tweet size? How much will an individual tweet tax a network? Can we handle tweets as packets? Tweet text is clamped to 200 bytes (as per Raffi's slides), but tweets now support extra metadata. The Twitter API Wiki notes that this metadata is also limited, clamped to 512 bytes. Assuming we need a GUID-sized counter for a unique tweet ID, that puts our payload at 200 + 512 + 16 = 728 bytes. That's less than half the default ethernet MTU of 1500 bytes. IP allows packets up to 64K in size, and with jumbo ethernet frames we could avoid fragmentation at the link level and still accommodate 9K packets, but there's no need to worry about that now.

Twitter's subscriber base also fits neatly in the IPv4 address range of ~4 billion unique addresses. Even if we were to give every subscriber an address for every one of their subscribed delivery endpoints (SMS, web, etc.), we'd still fit nicely in IPv4 space. Raffi's slides show that they want to serve all of earth which means eventually switching to IPv6, but that's so far away from the trend line that we can ignore it for now. That means we can handle addressing (source and destination) and data in the size of a single IP packet and still have room to grow.

So now we're down to the question that's been in the back of my mind for years: can we buy Twitter's core routing and delivery function off the shelf? And if so, how much would it cost, assuming continued network growth? Assuming 4x average peak, a 2K/s inbound message baseline (enough to get them through 2011?), and an average fanout of 300 (we're being super generous here, after all), we're looking at 2.5 million packets to route per second. If we treat each delivery endpoint as an IP address, again multiply deliveries by endpoints, and assume 4 delivery endpoints per user, we're looking at a need to provision for 10M deliveries per second.
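
The same napkin in code, using the guesses above (nothing here is measured):

var inbound   = 2000;            // msgs/s baseline -- enough for 2011?
var peak      = inbound * 4;     // 4x average peak
var fanout    = 300;             // generous average
var routed    = peak * fanout;   // 2,400,000/s -- call it 2.5M
var endpoints = 4;               // SMS, web, etc., per user
var delivered = 2500000 * endpoints; // 10M deliveries/s
var traffic   = delivered * 1500;    // a full 1,500-byte MTU each
console.log(traffic / 1e9 + "GB/s"); // "15GB/s"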

Is that a lot? Maybe, but I have reason to think not.

10M 1.5KB packets is ~15GB/second of traffic. Core routers now move terabits of traffic per second (a terabit is 125GB), but most of that traffic doesn't correspond to unique routes. Instead, we need to figure out if hardware can handle either the 2.5M or 10M new "connections" per second that the Twitter workload implies. Cisco's mid-range 7600 series appears to be able to handle 15M packets per second of raw forwarding. Remember, this is an "internal" network with no advanced L3 or L4 services -- just moving packets from one subnet to another as fast as possible -- so quoting numbers with all the "real world" stuff turned off is OK.

I'm still not sure that I fully grok the limits of the gear I see for sale since I'm not a network engineer, and most "connections per second" numbers I see appear to be related to VPN and firewall/DPI use. It looks like the required architecture would need multiple tiers of routing/switching to do things efficiently and not blow out routing tables, but overall it still seems doable to me. This workload is admittedly weird in its composition relative to stateful TCP traffic, and I have no insight into what that might do to off-the-shelf hardware -- it might just be the sticky wicket. Knowing that there's some ambiguity here, I hope someone with more router experience can comment on reducing the Twitter workload to off-the-shelf hardware.

Perhaps the large number of unique and short-lived routes would require extra tiers that might reduce the viability of a hardware solution (if only economically)? ISTM that even if hardware can only keep 2-4M routes in memory at once and can only do a fraction of that in new connections per second, this could still be made to work with semi-intelligent "edge" coalescing and/or MPLS tagging. And given the time it takes to fetch a word from main memory (including the cache miss) on modern hardware, it seems feasible that tuned hardware should be able to do at least 1M route lookups per second, which puts the current baseline well within hardware's reach and the 2011 growth goals within sight.

So I'm left back where I started, wondering what's so hard? Yes, Twitter does a lot besides delivering messages, but all of those things (that I understand and/or know about) have the wonderful behavior that they're either dealing with the (relatively low) inbound rate of 4K messages/s (max) or that they're embarrassingly parallel.

So I ask you, lazyweb, what have I missed?
