Absorbing

In the course of life, there are some moments when you are just so damned thankful to be alive that you almost feel guilty for being in your own shoes. Foo Camp was one of those times.

I'm still exhausted from the whole thing, and my brain is full. Entirely full. It's going to take some time to digest everything I learned, but a couple of things stood out so strongly that I'm terrified of forgetting them. First, Avi Bryant's session on "pipes for the web" explored how we build the small, chainable pieces for the current and next generation of things we're all hacking on. The idea of feeds as a generic transport between processing systems (i.e., the Unix pipe for the web) was amazing. Back-to-back with Tom Coates' talk on "Dirty Semantics", it left me feeling that we're finally organizing an answer to everything that has bugged me about the semantic web vision of the future. By acknowledging that the web is dirty, and that that's OK, Tom presented a vision of the kinds of apps I work on without the undercurrent of academic condescension about how you should be doing things. It buckets things into "better, because the market will say so" and "worse, because the market will ignore it", and those are the kinds of quality metrics I can get behind.
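To make the pipes idea concrete for myself, here's a tiny sketch (Python, and every name in it is mine, not Avi's) of what "feeds as pipes" might look like: each stage consumes a feed and emits a feed, so stages chain just like Unix commands.

```python
# A toy sketch of "feeds as pipes": a feed is just a list of
# entry dicts, a stage is a function from feed to feed, and
# stages compose like `cat feed | stage1 | stage2 | ...`.
from typing import Callable, Dict, List

Feed = List[Dict[str, str]]
Stage = Callable[[Feed], Feed]

def filter_by_tag(tag: str) -> Stage:
    # Build a stage that keeps only entries carrying the tag.
    def stage(feed: Feed) -> Feed:
        return [e for e in feed if tag in e.get("tags", "")]
    return stage

def newest_first(feed: Feed) -> Feed:
    # A stage that sorts entries by date, newest at the top.
    return sorted(feed, key=lambda e: e.get("date", ""), reverse=True)

def pipe(feed: Feed, *stages: Stage) -> Feed:
    # The moral equivalent of the shell pipeline above.
    for stage in stages:
        feed = stage(feed)
    return feed

entries: Feed = [
    {"title": "Foo Camp notes", "tags": "conferences", "date": "2005-08-22"},
    {"title": "Dojo update", "tags": "dojo,javascript", "date": "2005-08-20"},
]
print(pipe(entries, filter_by_tag("conferences"), newest_first))
```

The nice property is that each stage knows nothing about its neighbors; the feed is the only contract between them, just like text streams in a shell.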

I also got to meet Ed Loper, the guy who did Epydoc, and we got into a discussion about how computational linguistics and machine translation people could fix the problem of artificial test sets, which push algorithms toward solutions that might not actually be desirable in the real world. What if, instead of a test suite where a non-human judges the various algorithms, there were a Babelfish-style front-end that let researchers submit a web-services endpoint into a queue of potential translators? The system would shunt some percentage of the overall traffic to each registered system and collect "good" or "bad" rankings (the UI is tricky here) for the translations. By using the scale of the system to test quality, and perhaps a leaderboard so research teams can compete, translation teams could give sponsors results trustworthy enough to fund ongoing work and, eventually, produce data to support adoption of the resulting systems, whether through Open Source or commercialization.
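If I were to sketch the plumbing (Python again; every name here is hypothetical, and the real thing would obviously be a web service rather than in-process calls), it might look something like this: registered systems each get a slice of live traffic, users vote "good" or "bad", and the votes feed a leaderboard.

```python
# A rough sketch of the evaluation idea: shunt each request to a
# registered translation system, record user rankings, and rank
# the systems by their share of "good" votes. All names made up.
import random
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

translators: Dict[str, Callable[[str], str]] = {}
votes: Dict[str, Dict[str, int]] = defaultdict(lambda: {"good": 0, "bad": 0})

def register(name: str, translate: Callable[[str], str]) -> None:
    # Research teams submit their system as a callable endpoint.
    translators[name] = translate

def translate(text: str) -> Tuple[str, str]:
    # Shunt this request to a randomly chosen registered system;
    # a weighted choice would tune each system's share of traffic.
    name = random.choice(list(translators))
    return name, translators[name](text)

def vote(name: str, good: bool) -> None:
    # Record the user's "good"/"bad" ranking for a translation.
    votes[name]["good" if good else "bad"] += 1

def leaderboard() -> List[Tuple[str, float]]:
    # Rank systems by the fraction of their votes that were "good".
    def score(v: Dict[str, int]) -> float:
        total = v["good"] + v["bad"]
        return v["good"] / total if total else 0.0
    return sorted(((name, score(v)) for name, v in votes.items()),
                  key=lambda item: item[1], reverse=True)

# Two stand-in "systems" competing for rankings.
register("system-a", lambda s: s.upper())
register("system-b", lambda s: s[::-1])
name, result = translate("hello world")
vote(name, good=True)
print(leaderboard())
```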

Thanks to Foo, I've got a hundred other things rattling around my head right now, and the worst bit is that there were so many people I wanted to meet and things I wanted to see but couldn't. I've never experienced that depth and breadth of experience in one place before. Yesterday morning I woke up at 9:30, after having gone to sleep at about 4:30, kicking myself for not having been up at 7 because I could have been talking to people instead of sleeping.

It was that awesome.