Comments for shaming "Web 2.0" into utility
I completely disagree with the argument that computers can't do anything useful with people posting to their blogs in different categories.
Of course they can, and they should, and that's the whole point. Tagging takes the position that "it's hard!" — which, of course, it is — but that doesn't mean we shouldn't expect more of the systems we use. And frankly, to make tagging useful in the long term, the consumer systems will have to re-invent most of the same infrastructure for automatic content introspection and idea mapping that traditional search already requires. Most of that work boils down to "solving" spam.
You'll note that Google Desktop Search and Spotlight aren't asking you to somehow "tag" your data in order to make it more relevant to you.
Tagging, and "folksonomies" in general, are cop-outs.
Regards
Well, perhaps I was overstating things there. More accurately, it's harder, it raises the bar significantly for implementers, and it may not produce results of as high a quality as the alternative.
In the context of what made Web 1.0 successful, I'd say raising the bar for implementers when a less sophisticated approach is available is a losing strategy.
> You'll note that Google Desktop Search and Spotlight aren't asking you to somehow "tag" your data in order to make it more relevant to you.
That's because desktop search is merely text search over a small set of documents. It doesn't have to scale to the kinds of numbers found on the Internet, and it isn't subject to large amounts of intentional deception.
Furthermore, doesn't Spotlight already take into account manually tagged data, in the form of the iTunes metadata? Doesn't it do the same thing for iPhoto labels?
Jeff's quote will be pertinent once the real-time Internet starts buzzing like a bee, and we're literally whipping XML around like there's no tomorrow, with programs groping around for data they understand.
http://www.paulgraham.com/web20.html
A bunch of people posting on their weblogs under different categories is unremarkable. A computer can't do anything useful with that.
A bunch of people posting on their weblogs using a common tagging system is Web 2.0 stuff. Dumb scripts can get an idea of what's related, track trends, and so on, with some measure of success, and without needing a degree in artificial intelligence and a team of five to develop.
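To make the "dumb scripts" point concrete, here's a minimal sketch of the kind of thing a common tagging system enables. The feed of posts is hypothetical; the point is that trend tracking and relatedness fall out of simple counting, no AI required:

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: posts gathered from many weblogs, each a
# (url, set-of-tags) pair, made possible by a shared tagging convention.
posts = [
    ("http://a.example/post1", {"web2.0", "tagging"}),
    ("http://b.example/post2", {"tagging", "folksonomy"}),
    ("http://c.example/post3", {"web2.0", "tagging", "folksonomy"}),
]

tag_counts = Counter()   # how often each tag appears: a crude trend tracker
pair_counts = Counter()  # which tags co-occur: a crude relatedness measure

for _url, tags in posts:
    tag_counts.update(tags)
    pair_counts.update(combinations(sorted(tags), 2))

print(tag_counts.most_common(1))           # -> [('tagging', 3)]
print(pair_counts[("tagging", "web2.0")])  # -> 2
```

A dozen lines of counting, and the script already "knows" which topics are hot and which tags travel together — the kind of result that, without shared tags, would demand natural-language analysis of the post bodies.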
Yes, it's all about letting computers understand data, but what you are missing is that this isn't a goal in itself; it's merely a necessary component in having our computers do more for us.
> We should have learned something from Web1.0.
If there's anything we can learn from Web 1.0, it's that simple, easy-to-implement, extendable technologies are more likely to catch on than overreaching complex architectures. Which category does tagging fall into? Which category does automated machine categorisation fall into?
Which is more likely to work in the long run: tools that let us encode our information better, distributing the work across all the authors who want good search results, or a single search company like Google inventing some way of understanding natural language and context far better than the current state of the art?