Syndication’s something that really hasn’t started to take off with me just yet. I’ll dabble around in it periodically, but then fall back on just going to the sites themselves. Not sure why… maybe it has something to do with me being a good little consumer and staying “on brand”: my target activity isn’t “I’m going to go read the news,” it’s “I’m going to MSNBC now” – complete with a layout and a color scheme and all sorts of visual content that doesn’t translate across an RSS feed.

Google Reader really is an impressive aggregator, though. And while I’m sort of set in my ways about how I go get some content, for other kinds I don’t really have an established pattern of use. Take blogs, for example. For the most part, I think blogs are pretty much solely about the ideas contained in the posts. With a few rare exceptions, the goal isn’t to drive traffic to the site and make money on advertising and subscriptions. If you want to switch from Helvetica to Tahoma it’s not a big deal, because there’s really not a ton of investment in site identity. If you’re reading the ideas in Google Reader instead of on the site itself, it’s not nearly as striking a visual difference as it is when you’re reading a news article.

I think that’ll probably change to a degree over time. One thing I think an RSS feed probably lags behind in is information graphics. I’m not sure you can convey rich multimedia over a feed the way you can text and simple images. When that sort of thing is more commonplace, I think RSS stands to make an even bigger impact.

Anyway, a blog is a prime candidate for use in a feed reader. It would definitely be a boon for a school, too. As an example, I’ve created a folder in my Google Reader for TPTE595. Rather than go and read 16 individual blogs on 16 different sites, I can look at one page and keep up with everyone’s posts. That facilitates discussion, because rather than spending all that time bouncing from URL to URL, I can just sit down, read everything in one spot, and move right on to commenting on things that I think are compelling and that I want to talk more about.
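
Just to show how little magic is involved in that kind of aggregation, here’s a rough sketch in Python using the feedparser library (a third-party package; the feed URLs below are placeholders rather than my classmates’ actual blogs). It pulls a handful of feeds, merges the entries, and sorts them newest-first, which is essentially what the reader is doing for me on that one page:

```python
# Minimal feed aggregation sketch: several blogs, one combined reading list.
# Requires the third-party feedparser package (pip install feedparser).
import time
import feedparser

FEEDS = [
    "http://example.com/blog-one/feed",   # placeholder URLs
    "http://example.com/blog-two/feed",
    "http://example.com/blog-three/feed",
]

items = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for entry in parsed.entries:
        items.append({
            "blog": parsed.feed.get("title", url),
            "title": entry.get("title", "(untitled)"),
            "link": entry.get("link", ""),
            "published": entry.get("published_parsed"),  # a time.struct_time, or None
        })

# Newest posts first; entries with no date sink to the bottom.
items.sort(key=lambda item: item["published"] or time.gmtime(0), reverse=True)

for item in items:
    print(f'{item["blog"]}: {item["title"]} -> {item["link"]}')
```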

You could use RSS in different ways for different levels of learners, too. Like I said a minute ago, for some learners it can be used to facilitate discourse. For others, it could be used to reinforce the importance of viewing things from multiple sources and different perspectives. Go read the coverage of a single event on CNN, BBC, Le Monde, and Al Jazeera – you’ll probably see very different presentations built on the same set of facts. Which elements did each article stress? What was omitted?

Anyway, I think there are probably all kinds of different things that RSS can be leveraged for in a learning context, based on its strengths.

I forgot in the midst of all that pontificating I did earlier – I’m supposed to put up a screenshot of our wiki! Here it is in all its glory:

(four years later edit: there was a screenshot here once, but then it died. in pace requiescat, screenshot)

Edit: Aaaaand fourteen hours later, I remember… a URL! We’re supposed to put up a link, too, so here’s Jessica, Katherine, Kelly and Jolyon’s wiki on wikis: wikiwikiinfo

So the folks in TPTE595 did their group presentations about wikis today. Each group put together a wiki about wikis over the weekend, and covered a common set of general ideas about them – a brief history, what they’re good for, what’s good and bad about them, some examples of wiki sites, etc.

My group decided to divvy up our efforts by first having each person select a page/idea to cover and create. I did advantages and disadvantages. I found an abstract for an article that sounded like it covered a couple of really interesting points, and then had to go sort out the APA format for citing papers presented at conferences. After ironing out that little detail, I realized that the site I was on offered only the abstract, and that I still needed to go find the full-text article. Found that, started reading, got bowled over by a really cool idea.

I went on about this in class for a while, but I think it bears repeating. In one sense, you can look at a wiki as a modern-day web encyclopedia that anyone can help create. We talked a fair bit today about the collaborative aspect of creating a wiki. In another sense, though, a wiki is just an interface between people and information – and just one of many interfaces, at that. A conventional web site contains information; a database aggregates information and allows searching via a web-based front end; a search engine pores over indexes of the web to find the information you’ve requested in a search string.

In all those cases, though, the interfaces are designed with human consumption in mind. They’re all made to be read by people. The idea that got my head spinning originated in this quote, detailing one of the weaknesses of a wiki compared to other means of aggregating and presenting information: “There is no way to automatically gather information scattered across multiple articles, like ‘Give me a table of all movies from the 1960s with Italian directors’” (Völkel, 2006). The point I made in class was that you could read a wiki article about bananas, and one about apples, and grapes, and tangerines and whatnot, but you have to make the inference yourself that they’re all fruits – the wiki doesn’t “know” anything about any of those subjects.

A search engine gives you a different way to pull those various concepts together. You can put together a Boolean search string that says show me apples AND grapes AND pears OR peaches NOT kiwis, and come up with a result set that contains much more of what you’re looking for and fewer of the things you’re trying to exclude. Still, the results are only as good as the odds that they’re pages really “about” that subject, rather than pages containing an oblique reference somewhere. When I worked in the reference department at the library, that sort of thing happened all the time in students’ searches – the results contained the word they’d asked for, but a bunch of the articles weren’t useful because they were actually about something completely different.
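
To make that limitation concrete, here’s a toy sketch in Python (not how any real search engine works; the page titles and text are invented) of a Boolean keyword filter. It happily matches anything that merely contains the words, whether or not the page is actually about the subject:

```python
# Toy Boolean keyword filter over some made-up "pages".
pages = {
    "Apple pie recipe": "peel the apples and toss them with sugar and butter",
    "Quarterly earnings report": "phone sales grew but apples to apples comparisons lagged",
    "Kiwi care guide": "kiwis need rich soil regular watering and careful pruning",
}

def matches(text, all_of=(), any_of=(), none_of=()):
    """True if text contains every all_of word, at least one any_of word
    (when any were given), and none of the none_of words."""
    words = set(text.lower().split())
    return (all(w in words for w in all_of)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in none_of))

# "apples NOT kiwis" pulls in the earnings report too -- it contains the word,
# but it isn't about fruit at all.
for title, text in pages.items():
    if matches(text, all_of=["apples"], none_of=["kiwis"]):
        print(title)
```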

It turns out, though, that people have started thinking about creating content that would let the wiki “know” what a page is about. It gets into a concept called the semantic web, and if that takes hold, some really interesting and powerful things suddenly become possible. Tim Berners-Lee, inventor of the World Wide Web, has this to say about it:

“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”

In order for that sort of analysis to happen, though, the web itself has to change. Rather than being coded to be read strictly by humans, pages must be developed in a way that allows them to be read by machines. I tossed up a Wikipedia page during class about cephalopods (and never got around to saying why I chose it… whoops). I chose it because the page details that a cephalopod is a kind of mollusc, and a mollusc is a member of the animal kingdom. Going the other direction, a cuttlefish is one particular type of cephalopod, and so on. A semantic web works because you specify, in a way a computer can interpret, that “cephalopod” isn’t just a word on a page – it’s a thing, and you define the nature of that thing. $1.99 isn’t five characters; it’s a numeric value, and not only that, it’s a price.
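
Here’s a minimal sketch of what those kinds of machine-readable statements can look like, in Python with the rdflib library (the example.org namespace and the property names are made up for illustration; this isn’t how Wikipedia itself is marked up). The point is just that the facts live as typed statements a program can query, rather than as prose a person has to read:

```python
# Facts stored as subject-predicate-object triples instead of prose.
# Requires the third-party rdflib package (pip install rdflib).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")  # made-up namespace for this sketch
g = Graph()

# "Cephalopod" isn't just a word on a page; it's a thing with stated relationships.
g.add((EX.Cuttlefish, RDF.type, EX.Cephalopod))
g.add((EX.Cephalopod, EX.kindOf, EX.Mollusc))
g.add((EX.Mollusc, EX.kindOf, EX.Animal))

# "$1.99" isn't five characters; it's a typed decimal value being used as a price.
g.add((EX.BananaBunch, EX.price, Literal("1.99", datatype=XSD.decimal)))

# Because the facts are structured, a machine can answer a question directly.
query = """
PREFIX ex: <http://example.org/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?thing WHERE { ?thing rdf:type ex:Cephalopod . }
"""
for row in g.query(query):
    print(row[0])  # -> http://example.org/Cuttlefish
```

That’s the same idea the Völkel quote was getting at: if movies, dates, and directors were stored as statements like these, that “table of all movies from the 1960s with Italian directors” would just be another query.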

This has little to do with wikis, I know. I went off on this tangent because the course is talking about Web 2.0 applications, and a semantic layer on top of what we’re currently working with has been tossed around as what Web 3.0 will look like. Another Tim Berners-Lee quote:

“People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource.”

Anyway, the wiki really is a cool tool. It’s easy to author and great for collaboration, but from the reader’s standpoint it’s limited in that you have to lay eyes on it yourself to get to the parts you need. On the semantic web – or on Web 3.0, if you subscribe to that idea – people aren’t the only ones who can read that page any more. The machine can too. Stay tuned for about fifty years from now, when there are tools for the semantic web that are as easy to use as this wiki software was!

I mentioned Clay Shirky in a previous post here – I’ve always found him to be a fascinating read. The first thing of his I ever stumbled across was a keynote speech titled “A Group Is Its Own Worst Enemy”. In it, he spoke about the nature of groups and about social software made to facilitate group interaction. I’ve been referring back to it for several years now.

Anyway, Clay Shirky’s blog is a treat. He posts relatively infrequently, but always about really intriguing stuff. This post is a great example, at least for me – it discusses the challenges facing traditional newspapers and goes beyond the immediate issues to discuss fundamental changes in the ways people seek information.

So before we go into Web 1.0 or 2.0, what’s this web thing? I take the web to refer to a particular type of content and traffic on the Internet, but maybe I’m a bit of an old-timer. Initially, you could probably safely say that the web meant web pages: documents written in Hypertext Markup Language (HTML) and viewed in a web browser – NCSA Mosaic, Netscape Navigator, Microsoft Internet Explorer, and so on. Over time, though, HTML evolved and more and more elaborate content became possible. Web browsers, and the “rendering engines” that really drive them, became capable of interpreting more and more types of content – other markup languages like XML, plus audio and images – and their functionality was extended by plugins that could present multimedia content, like Flash or Shockwave.

So, loosely, you can maybe summarize Web 1.0 as our first foray into figuring out what’s possible within this “web” medium. While conceptually this whole Internet thing started out as a means for a bunch of scholarly types to share information with each other, the “first” iteration of the web can really be said to have been dominated by content creators (as opposed to the collaborators we’ll start seeing in a minute with Web 2.0).

While there were some notable exceptions, like GeoCities or Angelfire, Web 1.0 wasn’t really such an accessible thing for everyone. Creating content and getting it onto the web meant being familiar with at least a markup language like HTML, having a grasp of the file transfer protocol (FTP) so you could get your files where they needed to be, and in many cases having access to authoring software so you could create the content in the first place.
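
For anyone who never had the pleasure, here’s a rough sketch of that old publishing workflow in Python, using the standard ftplib module (the host, login, and directory below are placeholders, not anything real). Note how much ceremony sat between writing a page and anyone being able to see it:

```python
# A hand-written Web 1.0 page, pushed to a hosting server over FTP.
# Host, credentials, and directory are made up for illustration.
from ftplib import FTP

PAGE = """<html>
<head><title>My Home Page</title></head>
<body><h1>Welcome to my corner of the web!</h1></body>
</html>
"""

# Write the page locally first...
with open("index.html", "w") as f:
    f.write(PAGE)

# ...then connect to the host and upload it by hand.
ftp = FTP("ftp.example.com")
ftp.login("username", "password")
ftp.cwd("public_html")  # the directory the web server publishes from
with open("index.html", "rb") as f:
    ftp.storbinary("STOR index.html", f)
ftp.quit()
```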

In a sense, you could say that it was antithetical to the original intent of the web, in that we sort of wound up patterning it after our other broadcast media, like television or radio. It became a means for content creators to reach information consumers, rather than connecting a bunch of collaborators on an even footing. Search engines allowed people to more readily find what was “out there”. Portals allowed people to customize their web browsing and tailor their homepage to feature content relevant to them. Services like AOL, CompuServe, and Prodigy made a specialty of simplifying and aggregating content, as well as packaging their software with the Internet access required to get to it. But in all these cases, the content was overwhelmingly created by professionals (granted, some were a lot more professional than others… there are wonderful examples all over the place of really terrible web design).

Enter Web 2.0. A really bright guy named Clay Shirky had an interesting thing to say about the development of the Internet. I’m completely butchering this, I’m sure, but his essential idea was that the really interesting stuff starts to happen once the medium itself has become boring and commonplace. We’ve got this web thing down now – the Internet itself is about as ubiquitous as it can be. Just as importantly, authoring tools have become staggeringly good. Anyone can be a content creator: no special training required, no special software required. You can literally “put yourself out there” in minutes without knowing much of anything about anything.

This blog itself runs on WordPress. The software was installed with the click of a button on a server I’ve never seen, and I haven’t spent even a minute concerning myself with how it works. It isn’t even a particularly special thing – you could accomplish the same with Blogger, or Xanga, or Movable Type, or I’m sure plenty of other software made for blogging.

There are a ton of social networking sites, too – Facebook, MySpace, LinkedIn, or Plaxo, to name a few. All of these allow people to connect with other people, to share information, and to post and share various forms of media – photos, videos, you name it.

There’s also more flexibility than ever in how information gets shared. Instead of being at the mercy of some portal site to aggregate all the things you think are interesting, we now have syndication (RSS) that lets people share their content directly.
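
For the curious, a feed is surprisingly plain under the hood. Here’s a tiny Python sketch that builds a one-item RSS 2.0 feed with the standard xml.etree module (the blog name, URLs, and date are invented): a channel full of items, each carrying a title, a link, and a publication date, which is all a reader needs to pick the content up.

```python
# Build a minimal one-item RSS 2.0 feed; names, URLs, and date are placeholders.
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Blog"
ET.SubElement(channel, "link").text = "http://example.com/"
ET.SubElement(channel, "description").text = "Posts syndicated for any reader to pick up."

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Thoughts on Web 2.0"
ET.SubElement(item, "link").text = "http://example.com/posts/web-2-0"
ET.SubElement(item, "pubDate").text = "Mon, 21 Apr 2008 12:00:00 GMT"

print(ET.tostring(rss, encoding="unicode"))
```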

The principal thing I think it’s important to get, in comparing Web 1.0 and Web 2.0, is that Web 1.0 let one reach many. A handful of big commercial websites got millions of hits, and that was how the model worked. One site – many viewers. Web 2.0 lets many connect to one instead of one to many, or even many connect to many. Through RSS I can get recent posts from as many people as I like; through social networking sites I can create and maintain groups of people with a common interest… the ways to interconnect and share are boundless.