Accuracy and the Blogosphere

As an academic librarian, I see one of the biggest practical challenges of our burgeoning information age as teaching our patrons (students, certainly, but also faculty and staff) how to distinguish the good (valid, authoritative, reasonable) stuff from the bad on the Internet. As I have discussed before (“Is the RSS World Flat?”), it can be difficult for novice (and even experienced) researchers to figure out the provenance of what they find through Google or their aggregator.
I recently stumbled on a parallel instance of this problem, this time in the political sphere. A recent article by David Bauder entitled “Blogs Make Spreading Untruths Easier” (the version I found was published at Indystar.com, but the article was undoubtedly syndicated widely via that old-school syndicator, the Associated Press) notes how quickly the blogosphere can disseminate information — truths and untruths alike. The example Mr. Bauder starts with is an error, published in a magazine, about the nature of a school that U.S. presidential hopeful Barack Obama attended. This error flew through the traditional print and broadcast media as well as the blogosphere. (There is no particular blame on bloggers here — none should be inferred.)
Rumors always fly, and errors — honest and otherwise — have always survived and spread. However, in a digital world that uses popularity as a proxy for importance (if not for validity), an honest mistake or a well-crafted fiction can appear as valid as the truth. The fact that an entry is broadly cited is a proxy for authority, but it is also a proxy for its catchiness. Look at the now-defunct “Miserable Failure” Google Bomb — which Google has defused by changing its ranking algorithm — to see how popularity-driven search results can be gamed.
As librarians, we have a special responsibility to provide access to information — without regard to its source. At the same time, we have a responsibility to teach our users how to judge and value the significance of the information they receive from us.
The nature of RSS makes an already difficult task that much harder. Teaching people to think critically about the resources they go out and find by consciously looking for information is one thing. I’d like to think we’re collectively making progress in that arena. I wonder how we’re doing with teaching people how to sip from the fire hose of “push” content.
Here’s an example of what I mean. To help me stay on top of ways libraries are using RSS, I set up a Technorati keyword search for blog posts that contain the words “library” and “RSS”. Yes, that’s pretty broad, and the Boolean searcher in me knows that it’s not particularly well constructed. When I scan through the scores of posts I get each day, I find lots of good posts from the biblioblogosphere about libraries and RSS — but also lots of irrelevant stuff that happens to talk about a code library that generates RSS, or includes navigation links to the university library and an RSS feed, and just plain spam.
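For what it's worth, a sharper filter can be layered on top of a broad keyword feed after the fact. Here is a minimal sketch; the proximity heuristic and the entry fields (`title`, `summary`) are my own invention, not anything Technorati offers. The idea: keep a post only when "library" and "RSS" occur near each other, which tends to drop posts about code libraries that merely emit RSS.

```python
import re

def is_relevant(entry, max_gap=6):
    """Keep an entry only if 'library'/'libraries' and 'rss' appear
    within max_gap words of each other in its title plus summary.
    (Hypothetical entry dict; a real feed item would need mapping.)"""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    words = re.findall(r"[a-z0-9']+", text)
    lib_positions = [i for i, w in enumerate(words) if w in ("library", "libraries")]
    rss_positions = [i for i, w in enumerate(words) if w == "rss"]
    return any(abs(a - b) <= max_gap
               for a in lib_positions for b in rss_positions)
```

A crude heuristic, to be sure, but even this would cut much of the noise described above.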
Now, I like to think of myself as fairly savvy and discriminating. I know that some of what I’m seeing is what I deserve to see, given a bad search technique. But I’m equally certain that this sort of search happens all the time and that other users may not be as discriminating (or even as discriminating as I think I am).
Libraries cannot be in the business of approving each and every blog post for accuracy or validity; we collectively do not have the resources to do this for every blog publisher. At the same time, I think this highlights a librarian role that is not being filled. I see this as a “syndicated content” problem, not as a “weblog” problem. Perhaps there is a syndicated solution out there among the RSS4Lib readership?

Geotagging, RSS, and Photography

An article in today’s New York Times, “Pictures, With Map and Pushpin Included,” (registration required), talks about the increasing use of “geotagging” (see my June 17, 2005, post, “Geotagging”) in home photography. What is interesting is that Sony now has a small GPS unit designed to integrate with your camera’s EXIF data — so once you’ve taken your pictures and gone home, you can download both the pictures and your GPS data into your computer and merge them.
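The merge step is conceptually simple: match each photo's EXIF capture time against the GPS unit's tracklog and take the nearest fix. Here is a minimal sketch of that matching, assuming the tracklog has already been parsed into time-sorted (timestamp, latitude, longitude) tuples; real tools would also have to parse the camera's EXIF and the GPS log format.

```python
import bisect
from datetime import datetime, timedelta

def nearest_fix(track, photo_time, tolerance=timedelta(minutes=5)):
    """track: time-sorted list of (datetime, lat, lon).
    Returns (lat, lon) of the closest fix, or None if the
    camera and GPS clocks disagree by more than the tolerance."""
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, photo_time)
    # The nearest fix is either just before or just after the photo time.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
    best = min(candidates, key=lambda j: abs(times[j] - photo_time))
    if abs(times[best] - photo_time) > tolerance:
        return None
    return track[best][1], track[best][2]
```

The tolerance matters in practice: camera clocks drift, so a real implementation would let you apply a fixed offset before matching.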
Flickr, of course, lets you manually add GPS data, and there are rumors — with some evidence to back them up — that Apple’s iPhoto has some currently-inactive code to integrate GPS data in iPhoto with Google Maps.
So what I’d like someone to do is this. Build a search tool that lets you look for pictures of the same place where you took a memorable or significant picture. Then, sign up for an RSS feed for that location — one that will deliver other people’s photos to you as they are taken. Curious to see a particular park where you used to play? Want to see the view inside a ballpark on different days? In a sense, this would be a webcam with highly irregular postings. This would also be a way to link you — to build a community of a different kind — to other people who just might have more in common with you than having been in the same place at a different time.
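The core of such a location feed is a distance test: keep geotagged photos that fall within some radius of the remembered spot. A minimal sketch using the standard haversine formula (the function name and kilometre radius are illustrative choices, not any existing service's API):

```python
import math

def within_radius(photo_lat, photo_lon, center_lat, center_lon, radius_km):
    """True if the photo's coordinates fall within radius_km of the
    center point, by great-circle (haversine) distance."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(center_lat), math.radians(photo_lat)
    dp = math.radians(photo_lat - center_lat)
    dl = math.radians(photo_lon - center_lon)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a)) <= radius_km
```

Run that test over incoming geotagged photos and you have the "highly irregular webcam" feed described above.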

Will 2007 Be the Year of RSS?

Is RSS on the cusp of moving from a neat tool for the geeks among us to a central part of Internet life? Richard MacManus of Read/Write Web answers in the affirmative in a post titled “2007 Will Be A Big Year For RSS.” MacManus posits that enough major players have now made RSS a part of their tools — Microsoft’s IE 7 and Outlook 2007, Yahoo!’s webmail, MySpace, Safari, Firefox, and many others — that RSS will have a “break out” year.
This makes sense. Once users integrate a tool into their daily life — or the applications they use do that for them — that tool becomes akin to a utility in the physical world. It doesn’t matter whether or not the users know what they’re using. Most of us reading this blog, most of the time, take running water, electricity, and landline telephone service for granted. They are simply there, no questions asked. RSS seems to me to be moving from being a handy tool to being infrastructure, the glue that holds many disparate information services together.
As the tools our patrons use to interact with the online world adopt RSS, it becomes ever more important that the services libraries offer are at least capable of distributing information via RSS. There’s not a database or information service out there that couldn’t have a “what’s new” service (by RSS, by email, by passenger pigeon) — what’s new in terms of data in the database, and what’s new in terms of what the patron can do with the database as a tool. Once our user communities have tools that allow them to access RSS without a second thought, they will only notice it when it’s not there.
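And a "what's new" feed really is a small amount of work. Here is a minimal sketch that turns a list of new records into an RSS 2.0 document; the record fields (`title`, `url`) are placeholders, not any real ILS schema, and a production feed would add dates, descriptions, and GUIDs.

```python
from xml.sax.saxutils import escape

def whats_new_feed(channel_title, channel_link, records):
    """Build a bare-bones RSS 2.0 feed from dicts with 'title' and 'url'."""
    items = "".join(
        "<item><title>{}</title><link>{}</link></item>".format(
            escape(r["title"]), escape(r["url"]))
        for r in records)
    return ('<?xml version="1.0" encoding="utf-8"?>'
            '<rss version="2.0"><channel>'
            '<title>{}</title><link>{}</link>{}'
            '</channel></rss>').format(
                escape(channel_title), escape(channel_link), items)
```

The point is less the code than the barrier to entry: any service that can list its new records can syndicate them.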

[Thanks to BlogBridge for pointing me to Read/Write Web, a blog I had missed until now.]

Serendipity at Risk?

A colleague forwarded me a link to an essay (“The Endangered Joy of Serendipity”) published in the St. Petersburg Times on March 26. In this essay, William McKeen, a journalism professor at the University of Florida, discusses what he describes as the loss of context that has come with Google, RSS aggregators, and much of the Internet. McKeen requires his freshman journalism class to subscribe to the paper version of the New York Times because readers of the online version will only find what they’re looking for.

So what’s the problem with finding what you’re looking for? McKeen writes,

Nuance gives life its richness and value and context. If I tell the students to read the business news and they try to plug into it online, they wouldn’t enjoy the discovery of turning the page and being surprised. They didn’t know they would be interested in the corporate culture of Southwest Airlines, for example. They just happened across that article. As a result, they learned something – through serendipity.

I agree with McKeen that serendipity is a wonderful thing. Heck, if you look at my career path to date (from Soviet Studies to archivist to techy librarian in just 12 years), you’ll understand what I mean. But as our search tools get better — and our RSS feeds get more specific — what are we missing in life? McKeen writes,

Technology undercuts serendipity. It makes it possible to direct our energies all in the name of saving time. Ironically, though, it seems that we are losing time – the meaningful time we once used to indulge ourselves in the related pleasures of search and discovery. We’re efficient, but empty.

This brings to the surface something I’ve noticed only subconsciously — I rarely stumble on really cool web sites anymore. Back in the day (the first years of the web, 1993-1997), I would often find myself doing a web search on AltaVista and getting all sorts of hits that were something much better than utterly wrong: they were interesting. Now that I use Google (or today’s AltaVista, for that matter), I don’t find myself stumbling down the “wrong” path nearly so often. And when I do, it’s not nearly wrong enough to be good.

The same is true with the feeds I’ve chosen to put into my aggregator. While there’s still some opportunity for serendipity in the not-so-random choices of my favorite bloggers, it’s limited serendipity. By subscribing to feeds, I’m picking my headlines in advance, and somehow feel I’m missing good stuff. Even my keyword search feeds in Technorati and Bloglines are narrow (I haven’t struck the balance between specific enough to be manageable and broad enough to be interesting). Again, I’m not necessarily looking for good stuff that’s germane, but good stuff that makes me stop and think.

Which brings me, circuitously, to the role of libraries and librarians. As we build information systems to enable “Library 2.0,” we must remain wary of overtuning the system. I certainly don’t want to find exactly what I’m looking for all the time. There are occasions — frequent ones — when I’m browsing and want to learn something orthogonal to my actual question. I just don’t realize it until I’ve found the catty-corner path and gone down it. I suppose this is not that much different from the shift from card catalog to OPAC, but it’s still a shift.

If we do not help people find what they didn’t know they were looking for, we risk exactly what McKeen warns of:

The modern world is conspiring against serendipity. But we cannot blame technology. I’ve met this enemy, and it is us. We forget: We invented this stuff. We must lead technology, not allow technology to lead us. The world is a better and more cost-effective place because of technology, but we’ve lost the imperfections inherent in humanity – the things that make life a messy and majestic catastrophe. We must allow ourselves to be surprised. We must relearn how to be human, to start again as we did as children – learning through awkward and bungling discovery.

Publishers Missing a Niche?

I’ve stumbled on blogs describing new table of contents feeds direct from publishers. Aside from wondering what’s taking them so long, I’ve started wondering why publishers don’t better aggregate their own data instead of leaving that to other parties.
Why wouldn’t a publisher with a few dozen titles provide subject-based feeds across all their own journals? Or, for publishers with many titles, offer author-specific or institution-specific feeds? (Aggregators sometimes offer the former; I don’t think I’ve seen the latter anywhere.) While a prolific author may only have a couple articles a year, if you’re interested in the same research area as scholar Waldo McGillicudy, you probably know his name and would want an easy way to be notified — pre-publication, even — when something new is coming out.
It would also be interesting to see institution-specific feeds: everything that comes from faculty at a particular research university or, for large institutions, a particular department.
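Once a publisher has its articles in one pool, carving out these feeds is mostly a matter of filtering. A minimal sketch follows; the entry fields (`authors`, `affiliations`, `title`) are illustrative and not any real publisher's metadata schema.

```python
def filtered_feed(entries, author=None, institution=None):
    """Select articles for an author-specific or institution-specific
    feed from one combined list of article dicts."""
    out = []
    for e in entries:
        if author and author.lower() not in (
                a.lower() for a in e.get("authors", [])):
            continue
        if institution and institution.lower() not in (
                i.lower() for i in e.get("affiliations", [])):
            continue
        out.append(e)
    return out
```

In practice the hard part is name disambiguation (is this Waldo McGillicudy *the* Waldo McGillicudy?), but the publisher is better placed to solve that than any downstream aggregator.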

[Thought triggered by Rowland Institute Library Blog]

Library Thing

There’s been lots of traffic in blogland about Library Thing, a service that lets you build your own personal catalog of books you’ve read, link to them on Amazon, pull subject and library cataloging information from the Library of Congress, and tag them with your own ad hoc subject terms.
Steve Cohen of Library Stuff, among others, beat me to the punch by suggesting RSS feeds would be a great add-on feature for Library Thing. And he’s right — it would open up collective book clubs, reading lists of your friends, and so on. And it’s the sort of thing that libraries in general should be adding to their catalogs and patron services. Why not allow those patrons who wish to publicize their reading list to do so? Let them create book lists and tell their friends and family where their book feed is.

Why RSS, Anyway?

The New York Times had an article on Saturday — “College Libraries Set Aside Books in a Digital Age” — that got me thinking. The article describes how the University of Texas at Austin is converting its undergraduate library into an “electronic information commons.” (The books — about 70,000 of them — are being moved to other campus libraries, where they will still be accessible to all.) As described in the article,

Their new version is to include “software suites” – modules with computers where students can work collaboratively at all hours – an expanded center for writing instruction, and a center for computer training, technical assistance and repair.

This reflects the changing ways that people, especially today’s teens and twentysomethings, approach scholarship.
Now, if physical libraries are being redesigned to provide space and facilities for digital learning and scholarship, then the library itself should take advantage of the same technologies our patrons use. Give people what they want before they know they want it — or, at a minimum, provide them with a suite of tools to make their quest for answers easier. Send them notices when books similar to items they’ve previously checked out become available. Let them save a catalog query as an RSS feed so they’ll know when new materials are available. Provide one-stop metasearch capabilities across all the databases the library offers. We are, after all, in the service industry — we provide people with the information they need to do whatever it is they do.