Infrequently Noted

Alex Russell on browsers, standards, and the process of progress.

Comments for Twitter By The Back Of A Napkin


Hey Jorge,

Uptime? I dunno, what's the average uptime for a redundant pair of Ciscos based on MTBFs? Let's say that failover sucks and upgrades take time and cause problems... let's set our sights low and call it four 9's for the core function. That's roughly an hour of downtime per year. Other systems inside of Twitter (search, web UI, various delivery backends) will probably have dependent downtime larger than that but within the same constraints, since they're more-or-less parallelizable and some can remain up even if the core function goes down (unless, again, there's something about Twitter that I don't grok). ISTM they're a long way off.
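For the curious, the back-of-the-napkin math, with the four 9's figure as the assumption rather than a measurement:

```python
# Rough availability math for the paragraph above: four nines for the
# core function, expressed as downtime per year. The 99.99% figure is
# an assumption for the sake of argument, not a measured number.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime in minutes/year for a given availability."""
    return SECONDS_PER_YEAR * (1.0 - availability) / 60.0

print(downtime_minutes_per_year(0.9999))  # ~52.6 minutes -- call it an hour
```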

As for 911 replacement, one of the most compelling uses for Twitter I've ever heard came from my friend Stasha (who's heavily involved w/ local disaster recovery preparedness efforts here in SF): a friend of hers found that, in an emergency, Twitter was the best way to tell family and friends they were OK. That story gets to the core of why Twitter is valuable. The relationship asymmetry makes it even more valuable in some situations, and so does the story about APIs and endpoints.

Regards

by alex at
(Re-posted from my comment on Louis Gray's Google Buzz)

OK, but based on these numbers, what do you expect Twitter's uptime or stability to be? And how far are they from that?

And, more importantly: what relevance does that have in deciding whether or not to use the service? A quick Google search turns up a TechCrunch article putting their 2007 downtime at 6 days; recent years seem to have been much better.

I probably wouldn't want to use it as a 911 replacement, but a couple of days of downtime a year is no reason to stay away from a service (a free one, at that) if you can get something useful out of it.

Not yet, but Pete's got my (fake) back: http://twitter.com/FakeAlexRussell
by alex at
"In which you talk me into finally getting a Twitter account ..." So, you got a Twitter account?
by mde at
Hey Matt,

I agree this problem isn't interesting, but I find it curious how knowledge gets lost. Reminds me of: http://web.mit.edu/krugman/www/dishpan.html

Eventual consistency seems good enough for Twitter. You don't want packet-level TTL for durability; you want ACKs so that senders know to retry on the send side (modulo backoff). That'll double the work for large senders, but Twitter doesn't guarantee delivery, let alone timely delivery. Given that only routes look to be expensive (not actual traffic), that's probably fine regardless. We can have longer conversations without hitting our bottleneck for some time. From there, it's good ol' HA: duplicate what you must, over-provision by less where you can, and for God's sake don't ask a DB to answer questions whose answers you can keep in memory (who are my subscribers? who do I subscribe to?). Heck, SSD is cheap and will do 10K+ random read IOPS.
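To make the ACK/retry point concrete, a minimal sketch; the function names and timings are illustrative, not anything Twitter actually runs:

```python
import random
import time

# Hypothetical sender-side retry loop: deliver a message, wait for an ACK,
# and back off exponentially (with jitter) when the receiver doesn't answer.
# send_once() stands in for whatever RPC the routing layer would expose.

def deliver(send_once, message, max_attempts=5, base_delay=0.1):
    for attempt in range(max_attempts):
        if send_once(message):                       # True == receiver ACKed
            return True
        sleep = base_delay * (2 ** attempt) * (1 + random.random())
        time.sleep(sleep)                            # backoff before retrying
    return False                                     # give up; no delivery guarantee
```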

Front-ends could connect to user-storage servers (the "external" IP for the daemon connected to the message routing system), not "app servers" in the traditional sense (search and API are special since they rarely write and will want to shard by things other than user). Each user server can manage its own state (tweets, following (inexact), followers (exact), etc.), answer questions via an API, manage its own inbound and outbound queues, and coordinate with one or more fail-over replicas. Daemon startup would obviously be read-heavy from a central server, but in normal operation can be phased for code rollouts, assuming some care is taken with forward compatibility of the RPC message format (protobufs, Thrift, etc.). Inbound message processing will need to look aside to multiple services before eventual delivery, but those are independent. What matters here is that the inbound system keeps state sanely, can flush to some form of replication quickly (network to another DC + memory store may be better than disk, but I don't get the sense that they're burdened by write rates at the edge), and can walk through delivery incrementally, eventually informing a central DB of the message and successful delivery.
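A rough sketch of the state one of those user servers might hold; the class and field names are mine, purely for illustration:

```python
from collections import deque

# Illustrative per-user state as described above: the user's own tweets,
# an inexact "following" set, an exact "followers" set, and inbound /
# outbound queues the daemon works through incrementally.

class UserServer:
    def __init__(self, user_id):
        self.user_id = user_id
        self.tweets = []            # this user's own messages
        self.following = set()      # may lag slightly (inexact)
        self.followers = set()      # must be exact for delivery
        self.inbound = deque()      # messages awaiting local delivery
        self.outbound = deque()     # messages awaiting fan-out

    def post(self, tweet):
        """Accept a new tweet and queue it for fan-out to followers."""
        self.tweets.append(tweet)
        self.outbound.append(tweet)

    def receive(self, tweet):
        """Accept a tweet routed here for this user's timeline."""
        self.inbound.append(tweet)

    def drain_outbound(self, route):
        """Walk through delivery incrementally via the routing layer."""
        while self.outbound:
            tweet = self.outbound.popleft()
            for follower_id in self.followers:
                route(follower_id, tweet)   # route() is the message fabric
```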

As for how to survive everything, that's not in Twitter's SLA (obviously), and it may not be in their economic interests. Would depend on OPEX, most likely.

by alex at
I'm under the impression that message routing hasn't been their major problem for some time. That seemed to be the case as of Chirp. I admit I didn't sit through their most recent presentation, so perhaps I'm behind.

Assume for a moment that I'm behind, that tiered message queues can't do for Twitter what they've done for the stock exchanges for many years, and that there is a compelling reason to model application-layer message delivery as routed packets. How do you address durability? TTL on a packet isn't going to buy you much. Your routers aren't designed to queue packets for long, much less persist them. What do network partitions look like, e.g. where do the messages go when your routing table is flapping?

It's absolutely true that you can model Twitter messages as packets. You could also model them as messages over SMTP. Or messages on a message queue à la Kestrel ... etc. It seems like you're attacking an uninteresting problem, or at least one of the less interesting problems in their space.

For your next napkin post I'd be curious to see how you would address durability in the face of various failure scenarios. Imagine every component in your system will fail [because they all will]. You can be eventually consistent, but you can't accept even modest data loss. Even more interesting would be how you address the read and write stories for delivering to a collection of timelines.

Which bit is the endpoint? The sending client? Then instead of sending one message to Twitter, I send a lot more messages. If the endpoint is basically a host within Twitter's internal system which accepts my message, explodes it, and then sends it to a lot of other endpoints, then the work of multicast is being done by the Twitter system.

The better analogy here would be email messages and mailing lists with VERP. Send in one message, send out many individual messages, one to each recipient address.
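For anyone who hasn't run into VERP: each outbound copy gets its own envelope sender that encodes the recipient, so bounces can be attributed. A toy sketch, with the list address and subscribers invented for illustration:

```python
# Toy illustration of the VERP idea: one inbound list message becomes one
# outbound copy per subscriber, each with a unique envelope sender that
# encodes the recipient so bounces can be tied back to them.
# The addresses below are invented for the example.

def verp_sender(list_local, list_domain, recipient):
    return f"{list_local}+{recipient.replace('@', '=')}@{list_domain}"

subscribers = ["amy@example.org", "bob@example.net"]
for rcpt in subscribers:
    envelope_from = verp_sender("announce", "lists.example.com", rcpt)
    # send(envelope_from, rcpt, message)  -- one message in, N messages out
    print(envelope_from, "->", rcpt)
```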

That is a slightly harder problem to solve.

While I agree that good network design does imply intelligent edge nodes and stupid central routing, Twitter doesn't have that benefit.

They are more of a telco-style model with multicast on top.

by Devdas Bhagat at
In the design I described, endpoints pre-explode all packets, meaning they don't have to be multicast. So long as you can route them, you can have the endpoints do the packet generation. Multicast makes the routers do that explosion, but if they can do it to subnets, that might be good enough. I was thinking of something similar w/ MPLS tagging and/or broadcast to an individual subnet to keep the total # of routing operations down (just match the tag, send to the next hop where it'll pop the tag, and then maybe have receivers filter on local subnet broadcast, which will never be more than the peak inbound rate, which is low). Endpoints might only need to generate one packet per destination subnet in that case, which should lower total core traffic by an order of magnitude.
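Roughly, the pre-explosion step looks like this; the addressing scheme (plain integers with the low 8 bits as the host part) is invented for illustration:

```python
from collections import defaultdict

# Illustrative fan-out by destination subnet: instead of one packet per
# follower, the endpoint emits one packet per subnet and lets the last
# hop broadcast/filter locally. Addresses here are plain ints; the high
# bits stand in for the subnet, the low 8 bits for the host.

def packets_for(tweet, follower_addrs):
    by_subnet = defaultdict(list)
    for addr in follower_addrs:
        by_subnet[addr >> 8].append(addr)        # group by "subnet"
    # one packet per destination subnet, tagged with its prefix
    return [(subnet, tweet) for subnet in by_subnet]
```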

If you were really in an AS-based configuration, you'd be routing on prefix, so the core traffic wouldn't need the full 150M addresses (which is less than 10 /8's), just the subnets you've decided to group addresses by. That'd increase the hardware requirements to 2 tiers plus endpoints, but that's still relatively cheap for only 150M addresses.

by alex at
The concurrent session stuff would be interesting if the numbers were for multicast streams.

Routers do a best (longest-prefix) match, with about 337,181 active routes in the Forwarding Information Base. See http://bgp.potaroo.net/as1221/bgp-active.html for the source of that number.

The routing information is fairly stable, with each packet coming in, getting matched to one route and being pushed out. One packet enters, one packet leaves. No new packets are generated.

With Twitter, the lookup is slightly more complex: one packet enters and multiple packets leave.

Generating new packets is significantly more expensive in terms of CPU.

With Twitter, you are doing exact-match lookups on ~150M addresses in software, which is expensive (as opposed to 337K routes, which is already pushing the limits of hardware right now).
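To make the contrast concrete, a toy sketch; the table and numbers are purely illustrative:

```python
# Illustrative contrast: a router does one longest-prefix match per packet
# (one in, one out), while Twitter-style delivery is an exact-match lookup
# on the sender followed by fan-out -- one in, len(followers) out.

followers_of = {             # exact-match table: user id -> follower ids
    42: [7, 9, 11],
    # ... ~150M entries in the real thing, held in software
}

def fan_out(sender_id, tweet):
    return [(dst, tweet) for dst in followers_of.get(sender_id, [])]

print(fan_out(42, "hello"))  # one message in, three out
```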

by Devdas Bhagat at