Education

Well, the finish line is in sight! I’m working on a portfolio, of sorts – a site that details my curriculum in the IT program. It’s all under the new (aptly named) IT Curriculum heading at the top of the page, there. My program consisted of 11 courses, and there’s a little blurb for each of them describing what I was up to. Anyway, it’s all due day-after-tomorrow, and in typical fashion I’ve essentially left myself burning the midnight oil to wrap it up.

Some better time management skills might have been in order, in retrospect. My wife’s in Seattle this week at a software conference, and I’m pooped from keeping up with my daughter 🙂 Anyway, it’ll all come together. Have a peep at the new section of the site, if you like!

It’s been a while since the class used Diigo, the social bookmarking tool, but I thought I’d comment on it (again). Aside from the semantic, topical links I discussed before, I thought it might also be worthwhile to talk about another valuable use for this sort of software – its practicality within communities of practice.

I can search on a subject keyword on a site like Google and get tens of millions of hits. Google has a ranking system (PageRank) built in, so that the first page of hits I receive are, in Google’s estimation, the ones I’m likeliest to be interested in seeing. Roughly speaking, the ranking is based on how many other pages link to a given page – and on how important those linking pages are themselves. More things pointing to a resource make it more authoritative; it’s a more “important” page.
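To make the link-based idea concrete, here’s a toy sketch in Python in the spirit of PageRank. The link graph is invented and the real system is vastly more elaborate, but the core notion – pages pass importance along their outgoing links – is the same:

```python
# A toy sketch of link-based ranking, in the spirit of PageRank.
# The link graph is invented for illustration.
links = {
    "home.html":    ["news.html", "about.html"],
    "news.html":    ["home.html"],
    "about.html":   ["home.html", "news.html"],
    "obscure.html": ["home.html"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small "teleport" share, then passes the
        # rest of its importance along its outgoing links.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```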

The catch is, that may or may not mean that the first few pages I see are actually valuable to me. For example, if I search for a keyword in an academic database, the term may appear in an abstract only as a passing, supplementary reference, and the source itself may have nothing to do with my desired topic. And so with Google – the number of things pointing to a page may not be the most reliable metric for deciding its importance.

Enter social bookmarking. A site like Diigo allows me to create a community of practice, of sorts. I can find other people within my field and look at the pages they’re saying are useful to them. I can bookmark a site myself and see that dozens or hundreds of other people have also bookmarked it, and even left annotations with specific notes about the site. Rather than starting from zero in a search for valuable Instructional Technology information resources, I can look at the bookmarks of other people within the field and have a great head start. It’s a step beyond what Google is capable of, because the sites have all been vetted by an actual person who understands what the page is talking about, rather than being limited to parsing text that’s searchable with a keyword query. Social bookmarking is something I haven’t yet started doing a lot of, but when I think about it, that’s really my own failing, and not anything to do with the tool itself.
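The mechanics of that head start are almost embarrassingly simple: tally how many people in the field have saved each resource and start reading from the top. A minimal Python sketch, with invented bookmark data:

```python
from collections import Counter

# Invented bookmark data: person in the field -> resources they've saved.
bookmarks = {
    "alice": {"edtechtalk.org", "moodle.org", "merlot.org"},
    "bob":   {"moodle.org", "merlot.org"},
    "carol": {"moodle.org", "edtechtalk.org"},
}

# Resources vetted by the most practitioners float to the top of the list.
popularity = Counter(url for saved in bookmarks.values() for url in saved)
for url, count in popularity.most_common():
    print(f"{count} people bookmarked {url}")
```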

Good evening folks!

I uploaded a podcast to iTunes U earlier, with modest success. Tooling around in Audacity and producing an mp3 file wasn’t any problem, and uploading the file itself wasn’t any problem. Unfortunately, following the directions was a little problematic in this instance: the ‘course content’ directory the mp3 was supposed to be uploaded to wasn’t available to me as an option. Instead, I had to opt for either the ‘Drop Box’ or ‘Shared’ directories. A screenshot of what I saw:

(screenshot: the upload options I was offered – only ‘Drop Box’ and ‘Shared’)

My thinking at the moment is that the ‘course content’ directory is available only to the instructor. Hopefully I’ll be able to test that tomorrow in class by asking Dr. O’Bannon to log in and see what her upload options are, then logging her out, logging one of the students in on the same machine, and seeing whether the options differ.

In other news, linking to podcasts when they’re on iTunes U is a little interesting. In iTunes U, you can opt to have course materials either viewable by the public, or private and requiring authentication to access. Our course materials require authentication. If you want to put up a link to a podcast, you can right-click it in the ‘course contents’ tab and select ‘copy iTunes store URL’. Then, in your blog post, select your text and create a link using the address for your podcast on iTunes.

Here’s what’s going to happen: if you’re not already logged into “My UTK on iTunes U,” the link isn’t going to fully work. You’ll be diverted to the login page on the UT website, and once that’s verified, you’ll be brought into the iTunes program and put on the UTK homepage – not at the location of the podcast itself. From there, you have to browse manually to the correct course and podcast.

However, if you *are* already logged in to “My UTK on iTunes U”, the link address will work – it’ll bring you straight there.

So, if you want to put a link to your iTunes U podcast on your blog or somewhere, just follow the steps above. Also, make a note in your blog post that readers need to log in first in order for the link to the podcast to work. The address to log into “My UTK on iTunes U” is:

https://itunesu.utk.edu/cgi-bin/login?where=UTK

Oh, and for example, here’s my podcast about uploading podcasts to iTunes U.

Happy podcasting!

So yesterday in class, we tooled around with Google Presentation. It’s a neat little piece of software, especially for being web-based! It’s not on the order of PowerPoint or Keynote or anything, but for a free piece of software that’s accessible to anyone, anywhere, and doesn’t even need to be installed, it’s not bad at all.

Julie and I were assigned to do a scavenger hunt for a bunch of nature-based things. We went to the UT Botanical Gardens and got pretty much everything in one fell swoop. Here’s a screenshot of the images we used for our presentations:

(screenshot: the scavenger hunt images)

This was a lot of fun. I’m a closet shutterbug to start with, and I’ll probably wind up taking this little presentation file and seeing what all I can do with it. I saw that it can be exported in a number of different formats, and thought I might try to import it into iMovie or something, lay down an audio track, and make it into a different sort of presentation – like one of the voiced-over galleries you’ll see on the New York Times or NPR websites, something like that.

Yesterday in class we experimented a little with photo sharing. We created accounts on Flickr, and poked around finding some images to upload.

We started out by talking a little bit about copyright, and the value of images released into the public domain. I wound up using 10 images of fish that I found on stockvault.net. After finding some content, we used the lab’s Photoshop Elements to optimize the images for the web. We’d started out with high-res imagery, which is really about the last thing you want to put up on Flickr: the file sizes are just huge. So in Photoshop Elements we reduced the dimensions of the images (down to a few inches high at screen resolution, rather than the original thousands of pixels) as well as the resolution (down to a web-oriented 72 or 96 pixels per inch, rather than 300 ppi, which is really geared for print).

Anyway, I figured out how to make Photoshop Elements batch process the photos, applying the same settings to each image rather than my having to resample and resize each one individually. It saved the results into a folder, and then I batch uploaded the pics to Flickr – which, by the way, worked in Internet Explorer but not in Firefox, at least when I tried it. Once they were there, I still needed to rename them, because the batch process had created filenames that were just meaningless numbers – so I went and named everything appropriately: goldfish, seashell, etc.
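Incidentally, that kind of batch job can also be scripted. Here’s a rough sketch in Python with the Pillow imaging library standing in for Photoshop Elements – the folder names and rename table are made up, and note that for web display it’s really the pixel dimensions that matter (the ppi setting mostly concerns printing):

```python
# A hypothetical batch resize-and-rename job using Pillow (pip install Pillow).
from pathlib import Path
from PIL import Image

SRC = Path("originals")   # made-up folder of high-res photos
DST = Path("web_ready")
DST.mkdir(exist_ok=True)

# Meaningful names to replace the batch job's numeric filenames.
names = {"IMG_0001.jpg": "goldfish.jpg", "IMG_0002.jpg": "seashell.jpg"}

for path in SRC.glob("*.jpg"):
    img = Image.open(path)
    img.thumbnail((800, 800))  # shrink so the longest side is at most 800 px
    img.save(DST / names.get(path.name, path.name), quality=85)
```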

Oh, before I forget, I should probably put up a link to my little play gallery on Flickr.

(screenshot: my Flickr collection)

I’d really only dabbled a little bit in photo sharing previously; I have a handful of pictures up on Facebook. My wife and I have a ton of pictures, though. Right now they’re all on a file server at home, but it would probably be a good idea to look into archiving them all off-site somewhere in case the hard drives fail (had that happen once, bye bye $1000 for data recovery… we use mirrored RAID on the home file server now, lesson learned).

The question then would be: what’s the best way? Flickr, Picasa, Facebook, Myspace, something else? For that matter, it might make the most sense just to put the stuff up on our own domains and use some open source web gallery software. The reason being, we have far more than 100 megabytes of images (which, if I remember right, is the maximum Flickr will let you upload). If the others are much like Flickr, they’re going to be a little sparing in how much free server space they offer. They’re not in the business of being our free storage, I suppose.

That 100MB limit, by the way, is yet another very good reason to doctor your photos with Photoshop or some other software (there are several good open source options, like GIMP, short for the GNU Image Manipulation Program) before you upload. Space is generally at a premium when someone’s giving you some for nothing! Google possibly excepted… if they’re giving me 8 gigs of email space, I’ll be curious to see how much space I get to play with on Picasa.

“Oh, we launched this and we tried it, and then the users came along and did all these weird things.”
– Clay Shirky

Social software seems to be a recurring theme for me here lately. I don’t suppose that should come as a surprise, considering I’m taking a course focusing on Web 2.0 apps – but the interesting thing is that the course is only one of three places, just off the top of my head, where I’ve run into content on the topic in the last day or so, without even looking for it specifically.

For class we read Steven Johnson’s article for Time Magazine, “How Twitter Will Change the Way We Live”. The article talks about how Jack Dorsey, Biz Stone and Evan Williams made this new tool with really simple functionality, and then the users came along with their own ideas about what that tool could be leveraged for:

“But the key development with Twitter is how we’ve jury-rigged the system to do things that its creators never dreamed of. In short, the most fascinating thing about Twitter is not what it’s doing to us. It’s what we’re doing to it.”
– Steven Johnson

This general idea rings a big bell for me, because it mirrors a pattern that’s been observed repeatedly, when people are given a new tool – they come up with their own ideas about how they’re going to use it, and the developers can never fully anticipate the ways in which the tool is going to be used. Going back to Clay Shirky, who I will probably refer to a bunch on this blog given enough time… Shirky had this to say on the topic, in his excellent keynote speech “A Group is its Own Worst Enemy”, years before Twitter even existed:

“Groups are a run-time effect. You cannot specify in advance what the group will do, and so you can’t substantiate in software everything you expect to have happen. Now, there’s a large body of literature saying “We built this software, a group came and used it, and they began to exhibit behaviors that surprised us enormously, so we’ve gone and documented these behaviors.” Over and over and over again this pattern comes up.”

Johnson posits that Twitter’s enduring value doesn’t come from the fact that you can quickly and easily reach your personal social network to tell them about some minutiae in your life. Rather, its real strength lies in the way we’ve started using it as the medium for a sort of insta-dialogue about What’s Happening Right Now. He describes its use at a private conference about education: before too long, participants from outside the conference started getting engaged, and the topic took on a life of its own that extended for weeks beyond the conference itself.

Right Now is really a fascinating topic when it comes to the Web, and it’s the subject of no small amount of scrutiny – quite plainly because that’s where all the money is. Advertisers want to be positioned for the next big thing, and understanding the way the medium works puts you way ahead of the game.

An article I read on Ars Technica earlier today gets deeply into the subject, and it really blew me away – see “Tag Networks on Social Sites May Predict Next Internet Fad”. It talks about the findings of a group of European researchers who have been analyzing how tagging reveals semantic, topical links that aren’t really observable at the individual level, but that start to emerge in a big way once you look at really large numbers of people.

“(Their) findings suggest that there are associations among concepts and users at work on the Internet that have yet to be taken advantage of by websites. It sounds quite a bit like algorithms that big-box stores often use to place merchandise—for example, if a store aggregates receipts and finds that people who buy fancy kitchen utensils often buy bananas as well, the store will place a few bananas by the kitchen utensils. The sight of bananas next to kitchen utensils is nonsensical to the average consumer, but you shouldn’t doubt for a second that there is an underlying logic to the seemingly random juxtaposition.”

In short, it turns out that when you look at enough people tagging content with something like Delicious, or BibSonomy, or – in our class – Diigo, patterns emerge. It becomes evident that people who tag content in one particular subject area are likely to do so in other specific areas as well.
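As a toy illustration of how those patterns surface, you can count how often pairs of tags are applied by the same user – pairs that keep turning up together hint at exactly the kind of semantic links the researchers describe. The tagging data below is invented:

```python
from collections import Counter
from itertools import combinations

# Invented tagging data: user -> set of tags they've applied.
user_tags = {
    "user1": {"edtech", "web2.0", "wikis"},
    "user2": {"edtech", "web2.0", "podcasting"},
    "user3": {"web2.0", "wikis", "semantics"},
}

# Count how often each pair of tags co-occurs for the same user.
cooccurrence = Counter()
for tags in user_tags.values():
    for pair in combinations(sorted(tags), 2):
        cooccurrence[pair] += 1

# Pairs that keep showing up together hint at latent semantic links.
for (a, b), n in cooccurrence.most_common(3):
    print(f"{a} + {b}: {n} users")
```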

“In addition to previously unseen connections among tags, the application of Heaps’ law points to the possibility of a sort of precognition on the part of a social bookmarking site. By recognizing the kind of semantic connections described here, as well as the inevitability of shifting interests, it would be possible for a website to anticipate the swell of popularity of a particular topic before it happens and position themselves accordingly. If the bookmarking site were manipulative, it would be possible to put down or redirect the impending topic revolutions.”
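For reference, the Heaps’ law the article invokes is an empirical rule from text analysis: the number of distinct terms (here, tags) grows sublinearly with the total number of term occurrences, roughly V(n) = K·n^β with β < 1. A quick numerical illustration, with made-up constants:

```python
# Heaps' law: distinct vocabulary V grows sublinearly with corpus size n,
# V(n) = K * n**beta with beta < 1. K and beta below are arbitrary.
K, beta = 2, 0.6

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} total taggings -> roughly {K * n**beta:,.0f} distinct tags")
```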

So there you go. Our seemingly innocuous tagging for a class assignment may be contributing in a sense to defining what the Next Big Thing is. Pretty neat, huh?

Syndication’s something that really hasn’t taken off with me just yet. I’ll dabble in it periodically, but then fall back on just going to the sites themselves. I’m not sure why… maybe it has something to do with me being a good little consumer and staying “on brand”: my target activity isn’t “I’m going to go read the news,” it’s “I’m going to MSNBC now” – complete with a layout and a color scheme and all sorts of visual content that doesn’t translate across an RSS feed.

Google Reader really is an impressive aggregator, though. And where I’m set in my ways about how I go to see some content, there’s other content I don’t have an established pattern for. Take blogs, for example. For the most part, I think blogs are really pretty much solely about the ideas contained in the posts. With a rare few exceptions, the goal isn’t to drive traffic to the site and make money on advertising and subscriptions. If a blog switches from Helvetica to Tahoma, it’s not a big deal, because there’s not a whole ton of investment in site identity. If you’re reading the ideas in Google Reader instead of on the site itself, it’s not quite as striking a visual difference as it is with a news article.

I think that’ll probably change to a degree over time. One thing I think an RSS feed lags behind in is information graphics – I’m not sure you can convey rich multimedia over a feed the way you can text and simple images. When that sort of thing is more commonplace, I think RSS stands to make an even bigger impact.

Anyway, a blog is a prime candidate for use in a feed reader. It would definitely be a boon for a school, too. As an example, I’ve created a folder in my Google Reader for TPTE595. Rather than go and read 16 individual blogs on 16 different sites, I can look at one page and keep up with everyone’s posts. That facilitates discussion, because rather than spending all that time bouncing from URL to URL, I can just sit down, read everything in one spot, and move right on to commenting on things that I think are compelling and that I want to talk more about.
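Under the hood, an aggregator is doing something like the sketch below – this one uses Python with the third-party feedparser package, and the two feed URLs are placeholders standing in for everyone’s blogs:

```python
import time
import feedparser  # third-party: pip install feedparser

# Placeholder URLs standing in for the class blog feeds.
feeds = [
    "http://example.edu/blogs/student1/feed",
    "http://example.edu/blogs/student2/feed",
]

# Pull every entry from every feed into one list.
entries = []
for url in feeds:
    entries.extend(feedparser.parse(url).entries)

# Newest first; fall back to the epoch when a feed omits dates.
entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
             reverse=True)

for e in entries:
    print(e.get("title", "untitled"), "-", e.get("link", ""))
```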

You could use RSS in different ways for different levels of learners, too. Like I said a minute ago, for some, feeds can facilitate discourse. For others, they could reinforce the importance of viewing things from multiple sources and perspectives. Take one event and read the coverage on CNN, the BBC, Le Monde, and Al Jazeera – you’ll probably see very different presentations based on the same sets of facts. Which elements did each article stress? What was omitted?

Anyway, I think there are probably all kinds of different things that RSS can be leveraged for in a learning context, based on its strengths.

I forgot in the midst of all that pontificating I did earlier – I’m supposed to put up a screenshot of our wiki! Here it is in all its glory:

(four years later edit: there was a screenshot here once, but then it died. in pace requiescat, screenshot)

Edit: Aaaaand fourteen hours later, I remember… a URL! We’re supposed to put up a link URL, too, so here’s Jessica, Katherine, Kelly and Jolyon’s wiki on wikis: wikiwikiinfo

So the folks in TPTE595 did their group presentations about wikis today. Each group put together a wiki about wikis over the weekend, and covered a common set of general ideas about them – a brief history, what they’re good for, what’s good and bad about them, some examples of wiki sites, etc.

My group decided to divvy up our efforts by first having each person select a page/idea to cover and create. I did advantages and disadvantages. I found an abstract for an article that sounded like it covered a couple really interesting points, and then had to go sort out what the APA format is for citing papers presented at conferences. After ironing out that little detail, I realized that the site I was on offered only the abstract, and that I still needed to go find the full text article. Found that, started reading, got bowled over by a really cool idea.

I went on about this in class for a while, but I think it bears repeating. In one sense, you can look at a wiki as being like a modern-day web encyclopedia that anyone can help you create. We talked about the collaborative aspect of creating a wiki a fair bit today. In another sense, though, a wiki is just an interface between people and information, and just one of many interfaces, at that. A conventional web site contains information, a database aggregates information and allows searching via a web-based front end, a search engine pores over indexes of the web to find the information you’ve requested in a search string.

In all those cases though, the interfaces are designed with human consumption in mind. They’re all made to be read by people. The idea that got my head spinning originated in this quote, detailing one of the weaknesses of a wiki when compared to other means of aggregating and presenting information: “There is no way to automatically gather information scattered across multiple articles, like ‘Give me a table of all movies from the 1960s with Italian directors’” (Völkel, 2006). The point I made in class was that you could read a wiki article about bananas, and one about apples, and grapes, and tangerines and whatnot, but you have to make the inference yourself that what they all have in common is being fruits – the wiki doesn’t “know” anything about any of those subjects.

A search engine gives you a different way to approach getting those various concepts together. You can put together a Boolean search string – show me apples AND grapes AND pears OR peaches NOT kiwis – and come up with a result set that contains much more specifically what you’re looking for, without the things you’re trying to exclude. But still, the value of the results is only as good as the odds that the pages are really “about” that subject, and don’t just contain an oblique reference somewhere. When I worked in the reference department at the library, that sort of thing happened all the time in students’ searches – the results contained the word they’d said to go look for, but a bunch of the articles weren’t useful because they were actually about something completely different.
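As a sketch of what’s going on under the hood, a Boolean search engine keeps an inverted index – a map from each word to the set of documents containing it – and AND/OR/NOT become set intersection, union, and difference. A minimal toy version in Python, with invented documents:

```python
# Invented mini-corpus.
docs = {
    1: "apples and grapes make a nice snack",
    2: "pears peaches and apples at the market",
    3: "kiwis grapes and apples in a fruit salad",
}

# Inverted index: word -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def hits(word):
    return index.get(word, set())

# "apples AND grapes NOT kiwis" as set operations:
result = (hits("apples") & hits("grapes")) - hits("kiwis")
print(result)  # {1} - doc 3 has apples and grapes, but kiwis too
```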

It turns out, though, that people have started thinking about creating content that would allow the wiki to “know” what the page is about. It gets into a concept called the semantic web, and if that happens, then some really interesting and powerful things suddenly become possible. Tim Berners-Lee, inventor of the world wide web, has this to say about it:

“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”

In order for that sort of analysis to happen, though, the web itself has to change. Rather than being coded to be read strictly by humans, pages must be developed in a way that allows them to be read by machines. I tossed up a Wikipedia page during class about cephalopods (and never got around to saying why I chose it… whoops). I chose it because the page details that a cephalopod is a kind of mollusc, and a mollusc is a member of the animal kingdom; going the other direction, a cuttlefish is one particular type of cephalopod, and so on. A semantic web functions because you specify, in a way a computer can interpret, that cephalopod isn’t just a word on a page – it’s a thing, and you define the nature of the thing. $1.99 isn’t five characters; it’s a numeric value, and not only that, it’s a price.
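To make the “thing, not a word” idea a little more tangible, here’s a toy Python version of a semantic store. Real semantic web systems use RDF triples and query languages like SPARQL; the invented miniature below just captures the gist, with a tiny taxonomy mirroring the cephalopod example:

```python
# A toy semantic store: facts as machine-readable triples instead of prose.
# The mini-taxonomy is invented data mirroring the cephalopod example.
triples = [
    ("cuttlefish", "is_a", "cephalopod"),
    ("octopus",    "is_a", "cephalopod"),
    ("cephalopod", "is_a", "mollusc"),
    ("mollusc",    "is_a", "animal"),
]

def is_a(thing, category):
    """Follow is_a links transitively, so the machine can infer
    that a cuttlefish is an animal without being told directly."""
    for subject, _, obj in triples:
        if subject == thing and (obj == category or is_a(obj, category)):
            return True
    return False

print(is_a("cuttlefish", "animal"))   # True - inferred, not stated
print(is_a("mollusc", "cephalopod"))  # False - the hierarchy runs one way
```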

This has little to do with wikis, though, I know. I went off on this tangent because the course is talking about Web 2.0 applications, and a semantic layer on top of what we’re currently working on has been tossed about as what Web 3.0 will look like. Another Tim Berners-Lee quote:

“People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource.”

Anyway, the wiki is really a cool tool. It’s easily authored and great for collaboration, but from the reader’s standpoint it’s limited in that you have to lay eyes on it yourself to get to the parts you need. On the semantic web – or on Web 3.0, if you subscribe to that idea – people aren’t the only ones who can read the page any more. The machine can too. Stay tuned for about fifty years from now, when there are tools for the semantic web that are as easy to use as this wiki software was!