shaming “Web 2.0” into utility

So this morning I was looking at the website for the “Web 2.0” conference that ORA puts on. The speaker list looks great, the registration is booked full, and I’m sure it’ll be blogged to death. Another smashing success.

But then I noticed this quote in eye-catching green text at the top of the page:

> Web 1.0 was making the Internet for people, Web 2.0 is making the Internet better for computers — Jeff Bezos

The rotation of supposedly incisive quotes from people who are most notable for being, erm, notable seems like an apt metaphor for what “Web2.0” really is: unspecified. That they rotate is the most fitting thing about them.

Now, I don’t know anything about Mr. Bezos’ programming or HCI acumen, but it would seem to this developer that if Web2.0 is indeed about making things better for computers, then we’re all (collectively) f’d. If that’s the goal, I want off the train. Perhaps the quote is out of context, but it nevertheless outlines one of my biggest fears about fads like tagging and the overloading of RSS for any- and every-thing: that we are now celebrating the failure of systems to make users’ lives better as the indomitable march of progress.

Let’s take tagging, for example. Tagging is the pet rock of such “Web2.0” companies as Technorati and Flickr. With Flickr, it’s almost excusable, since the problem of creating relationships between images requires both more space and some sort of synthesis into a format that is more easily handled (i.e., text). But then why provide stricter organizational tools as well? Is it unstructured or structured? And if structured, why can’t the metadata be mined to remove the burden from users instead of fobbing the work off onto them?
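To make that concrete: even a trivial amount of automatic mining gets you candidate keywords without asking the user for anything. A toy sketch, nothing like what a real system would ship (the stopword list, the caption, and the frequency scoring are all invented for illustration):

```python
import re
from collections import Counter

# Invented stopword list, just for illustration.
STOPWORDS = {"the", "a", "an", "of", "at", "in", "on", "and",
             "with", "my", "over", "were", "was", "is", "to"}

def suggest_tags(text, limit=3):
    """Rank the non-stopword terms of a title/description by frequency."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(limit)]

caption = "Sunset at Ocean Beach. The sunset colors over the beach were amazing."
print(suggest_tags(caption))  # "sunset" and "beach" rank first
```

A real implementation would need stemming, spam resistance, and far better scoring, but the point stands: the system can propose the tags, and the user at most confirms them.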

And what’s Technorati’s excuse? That other things support tagging and therefore they use it? Sorry, but that’s just sloth. I already typed in my search term. Isn’t it a search engine’s job to synthesize context out of raw data? If so, why are we jumping up and down about something that lets us, the users, do the job of a search engine for it? Color me unimpressed.

And it’s just one example of how real improvements (REST web services APIs, machine learning) are being conflated with ideas of marginal end-user utility (microformats and tagging), as if both were somehow ambassadors for “Web2.0”.

We should have learned something from Web1.0. We should have a higher bar for allowing memes into the party this go ’round.

7 Comments

  1. Jim
    Posted September 29, 2005 at 2:56 pm | Permalink

    Perhaps his statement was a little imprecise. I can guess at what he actually meant though. He meant Web 2.0 was about making the web *understandable* by computers.

    A bunch of people posting on their weblogs under different categories is unremarkable. A computer can’t do anything useful with that.

    A bunch of people posting on their weblogs using a common tagging system is Web 2.0 stuff. Dumb scripts can get an idea of what’s related, track trends, and so on, with some measure of success, and without needing a degree in artificial intelligence and a team of five to develop.

    Yes, it’s all about letting computers understand data, but what you are missing is that this isn’t a goal in itself; it’s merely a necessary component in having our computers do more for us.

    > We should have learned something from Web1.0.

    If there’s anything we can learn from Web 1.0, it’s that simple, easy-to-implement, extendable technologies are more likely to catch on than overreaching, complex architectures. Which category does tagging fall into? Which category does automated machine categorisation fall into?

    Which is more likely to work in the long run – tools to encode the information we have better, distributing the work over the set of all authors who want good search results, or a single search company like Google inventing some method of understanding natural language and context much better than current state of the art?

  2. Posted September 29, 2005 at 3:14 pm | Permalink

    Hi Jim,

    I completely disagree with the argument that computers can’t do anything useful with people posting to their blogs in different categories.

    Of course they can, and they should, and that’s the whole point. Tagging takes the view that “it’s hard!”, which, of course, it is, but that doesn’t mean we shouldn’t expect more of the systems we use. And frankly, to make tagging useful in the long term, the consuming systems will have to re-invent most of the same infrastructure for automatic content introspection and idea mapping that traditional search requires. Most of that work boils down to “solving” spam.

    You’ll note that Google Desktop Search and Spotlight aren’t asking you to somehow “tag” your data in order to make it more relevant to you.

    Tagging, and “folksonomies” in general, are cop-outs.

    Regards

  3. Jim
    Posted September 29, 2005 at 3:45 pm | Permalink

    > I completely disagree with the argument that computers can’t do anything useful with people posting to their blogs in different categories.

    Well perhaps I was overstating things there. More accurately, it’s harder, raises the bar significantly for implementers, and may not provide as high a quality of results as the alternative.

    In the context of what made Web 1.0 successful, I’d say raising the bar for implementers when a less sophisticated approach is available is a losing strategy.

    > You’ll note that Google Desktop Search and Spotlight aren’t asking you to somehow “tag” your data in order to make it more relevant to you.

    That’s because they’re merely text search over a small set of documents. That set doesn’t approach the kinds of numbers found on the Internet, and it isn’t subject to large amounts of intentional deception.

    Furthermore, doesn’t Spotlight already take into account manually tagged data, in the form of the iTunes metadata? Doesn’t it do the same thing for iPhoto labels?

  4. Posted October 6, 2005 at 3:30 am | Permalink

    You have to remember that Jeff Bezos probably hasn’t been in the trenches for quite some time. From my perspective Web 2.0/AJAX is the stuff you’ve been doing since I met you back in 2001, and I’m still waiting for your book. :-)

    Jeff’s quote will be pertinent once the real-time Internet starts buzzing like a bee, and we’re literally whipping XML around like there’s no tomorrow, with programs groping around for data they understand.

  5. Miles
    Posted December 6, 2005 at 11:16 am | Permalink

    I think Paul summed it up pretty well, and it is a definition that I like. But then, Web 2.0 is really anything you want it to be.

    http://www.paulgraham.com/web20.html

  6. Posted December 29, 2005 at 3:38 am | Permalink

    My apologies for the delayed response; I just got pointed here by a friend.
    The point of the rel=”tag” microformat is to help authors be more specific about their content than we can derive using computational analysis. A year and well over 50 million tagged blog posts on, it seems to be working.
    The key point of tagging is to encourage writers to add keywords to their posts without over-thinking it.

  7. Posted February 3, 2006 at 11:50 am | Permalink

    Great postings! I absorbed both Alex’s and Jim’s perspectives and feel like I understand them. They both make interesting and positive points. Both 2.0 advancements highlighted have value for people, and one doesn’t necessarily trump the other. However, what Alex seems to be missing (or downplaying) is that tagging and folksonomies are collective index “creations” in the form of tagging words, photos, etc. In other words, WE create the index as part of the system. Google search and Spotlight are indexing for you, their way; their algorithm/method wholly determines our results. Big difference there. A great infusion of the Web 2.0 movement is users employing 2.0 tools to develop the content – in this case, the index – by which results can be obtained and shared. Spotlight and Google Desktop can certainly have these capabilities built on.
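For readers who haven’t seen the mechanics being argued over in this thread: the rel=”tag” microformat mentioned in comment 6 is nothing more than an ordinary link carrying rel="tag", and the “dumb scripts” Jim describes in comment 1 can pull the tags out with a stock HTML parser. A minimal sketch (the post markup below is invented for illustration):

```python
from html.parser import HTMLParser

class TagLinkParser(HTMLParser):
    """Collect the visible text of rel="tag" links in a blog post."""
    def __init__(self):
        super().__init__()
        self.tags = []
        self._in_tag_link = False

    def handle_starttag(self, tag, attrs):
        # rel may hold multiple space-separated values, e.g. rel="tag nofollow".
        a = dict(attrs)
        if tag == "a" and "tag" in a.get("rel", "").split():
            self._in_tag_link = True

    def handle_data(self, data):
        if self._in_tag_link:
            self.tags.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_tag_link = False

# Invented example markup in the rel="tag" style.
post = ('<p>Posted under '
        '<a href="http://technorati.com/tag/web20" rel="tag">web20</a> and '
        '<a href="http://technorati.com/tag/tagging" rel="tag">tagging</a>.</p>')

parser = TagLinkParser()
parser.feed(post)
print(parser.tags)  # expect: ['web20', 'tagging']
```

Whether you read that as “trivially simple, hence likely to catch on” (Jim’s position) or “trivially simple because the hard work was fobbed off onto authors” (Alex’s) is exactly the disagreement above.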