So the folks in TPTE595 did their group presentations about wikis today. Each group put together a wiki about wikis over the weekend, and covered a common set of general ideas about them – a brief history, what they’re good for, what’s good and bad about them, some examples of wiki sites, etc.
My group decided to divvy up our efforts by first having each person select a page/idea to cover and create. I did advantages and disadvantages. I found an abstract for an article that sounded like it covered a couple of really interesting points, and then had to go sort out the APA format for citing papers presented at conferences. After ironing out that little detail, I realized that the site I was on offered only the abstract, and that I still needed to track down the full-text article. Found it, started reading, got bowled over by a really cool idea.
I went on about this in class for a while, but I think it bears repeating. In one sense, you can look at a wiki as a modern-day web encyclopedia that anyone can help create. We talked a fair bit today about the collaborative aspect of creating a wiki. In another sense, though, a wiki is just an interface between people and information, and just one of many interfaces at that. A conventional web site contains information; a database aggregates information and allows searching via a web-based front end; a search engine pores over indexes of the web to find the information you’ve requested in a search string.
In all those cases, though, the interfaces are designed with human consumption in mind. They’re all made to be read by people. The idea that got my head spinning originated in this quote, detailing one of the weaknesses of a wiki compared to other means of aggregating and presenting information: “There is no way to automatically gather information scattered across multiple articles, like ‘Give me a table of all movies from the 1960s with Italian directors’” (Völkel, 2006). The point I made in class was that you could read a wiki page about bananas, and one about apples, and grapes, and tangerines and whatnot, but you have to make the inference yourself that those are all fruits; the wiki doesn’t “know” anything about any of those subjects.
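To make that concrete: if the facts in those articles were stored as structured data rather than prose, the movie question becomes something a machine can answer directly. Here’s a rough sketch using Python’s rdflib library; the namespace, names, and facts are all made up for illustration:

    from rdflib import Graph, Literal, Namespace, RDF

    # Hypothetical facts, standing in for structured data a semantic
    # wiki might store alongside its articles.
    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.LaDolceVita, RDF.type, EX.Movie))
    g.add((EX.LaDolceVita, EX.year, Literal(1960)))
    g.add((EX.LaDolceVita, EX.director, EX.FedericoFellini))
    g.add((EX.FedericoFellini, EX.nationality, Literal("Italian")))

    # "Give me a table of all movies from the 1960s with Italian directors."
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?movie WHERE {
            ?movie a ex:Movie ;
                   ex:year ?year ;
                   ex:director ?director .
            ?director ex:nationality "Italian" .
            FILTER (?year >= 1960 && ?year <= 1969)
        }
    """)
    for row in results:
        print(row.movie)  # http://example.org/LaDolceVita

The query language there is SPARQL, the standard way of asking questions of this kind of data; the point is just that the machine, not the reader, does the gathering.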
A search engine gives you a different way to pull those various concepts together. You can put together a Boolean search string that says show me “apples AND grapes AND (pears OR peaches) NOT kiwis” and come up with a set of results containing much more specifically what you’re looking for, without the things you’re trying to exclude. But still, the value of the pages is only as good as the odds that your result set gave you pages that were really “about” that subject, and not just containing an oblique reference somewhere. When I worked in the reference department at the library, that sort of thing happened all the time in students’ searches: the results contained the word they’d asked for, but a bunch of the articles weren’t useful because they were actually about something completely different.
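Under the hood, that kind of Boolean query is really just set operations over an index of which pages contain which words. A toy sketch in Python, with made-up page numbers:

    # Toy inverted index: which pages mention which word (made-up data)
    index = {
        "apples":  {1, 2, 5},
        "grapes":  {2, 5, 7},
        "pears":   {2, 3},
        "peaches": {5, 9},
        "kiwis":   {5},
    }

    # apples AND grapes AND (pears OR peaches) NOT kiwis
    hits = ((index["apples"] & index["grapes"])
            & (index["pears"] | index["peaches"])) - index["kiwis"]
    print(hits)  # {2}

Notice that the engine still only knows which pages contain those strings, not which pages are actually about the fruit, which is exactly the problem with those library searches.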
It turns out, though, that people have started thinking about creating content that would allow the wiki to “know” what a page is about. It gets into a concept called the semantic web, and if that happens, some really interesting and powerful things suddenly become possible. Tim Berners-Lee, inventor of the World Wide Web, has this to say about it:
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”
In order for that sort of analysis to happen, though, the web itself has to change. Rather than being coded to be read strictly by humans, pages must be developed in a way that allows them to be read by machines. I tossed up a Wikipedia page during class about cephalopods (and never got around to saying why I chose it… whoops). I chose it because the page details that a cephalopod is a kind of mollusc, and a mollusc is a member of the animal kingdom. Going the other direction, a cuttlefish is one particular type of cephalopod, and so on. A semantic web works because you specify, in a way a computer can interpret, that “cephalopod” isn’t just a word on a page; it’s a thing, and you define the nature of that thing. Likewise, $1.99 isn’t five characters; it’s a numeric value, and not only that, it’s a price.
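Here’s roughly what that looks like as data, again sketched with Python’s rdflib. The names are hypothetical, but the subclass and datatype machinery is standard RDF/RDFS:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS, XSD

    EX = Namespace("http://example.org/")
    g = Graph()

    # "Cephalopod" isn't just a word on a page; it's a class in a hierarchy.
    g.add((EX.Cephalopod, RDFS.subClassOf, EX.Mollusc))
    g.add((EX.Mollusc, RDFS.subClassOf, EX.Animal))
    g.add((EX.Cuttlefish, RDFS.subClassOf, EX.Cephalopod))

    # "$1.99" isn't five characters; it's a typed numeric value, a price.
    g.add((EX.SomeBook, EX.price, Literal("1.99", datatype=XSD.decimal)))

    # A machine can now walk the hierarchy itself (cuttlefish, cephalopod,
    # mollusc, animal), no human inference required.
    for ancestor in g.transitive_objects(EX.Cuttlefish, RDFS.subClassOf):
        print(ancestor)

That walk up from cuttlefish to animal is the same inference I had to make myself with the fruit pages, except here the computer does it.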
This has little to do with wikis, though, I know. I went off on this tangent because the course is talking about Web 2.0 applications, and a semantic layer on top of what we’re currently working with has been tossed about as what Web 3.0 will look like. Another Tim Berners-Lee quote:
“People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource.”
Anyway, the wiki really is a cool tool. It’s easily authored and great for collaboration, but from the reader’s standpoint it’s limited: you still have to lay eyes on it yourself to get to the parts you need. On the semantic web, or on Web 3.0 if you subscribe to that idea, people aren’t the only ones who can read the page anymore; the machine can too. Stay tuned for about fifty years from now, when there are tools for the semantic web that are as easy to use as this wiki software was!