RSS Awareness Day

Thursday, 1 May 2008, is RSS Awareness Day. There’s a grassroots effort to increase the awareness and use of RSS (and syndication tools in general). On the RSS Awareness Day site, it is claimed that “Feedburner recently reported that they track around 60 million RSS subscribers.”
Of course, there are a lot more Internet users today than there were in 2005 (one estimate puts the total at 1.3 billion at the end of December 2007). I would go so far as to triple Feedburner’s estimate to 180 million RSS subscribers, to account for all the users that Feedburner does not know about. And there have to be millions of them: people who “use RSS” without being actively aware of it, such as through “live bookmarks” in Firefox, Safari, and IE, or from web sites that themselves are amalgamations of feeds from other publications. People do not need to know what RSS is to use it.
Still… even if we triple the number of users Feedburner thinks there are to 180 million, that is only 13.8% of the 1.3 billion users out there. That’s not a particularly overwhelming market penetration figure for something as gosh-darned handy as RSS.
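For what it’s worth, here is the back-of-the-envelope arithmetic behind that 13.8% figure, just as a sanity check (the subscriber and user counts are the estimates quoted above, not new data):

```python
# Back-of-the-envelope check of the market penetration figure quoted above.
feedburner_subscribers = 60_000_000                  # Feedburner's reported subscriber count
estimated_subscribers = feedburner_subscribers * 3   # tripled to cover users Feedburner can't see
internet_users = 1_300_000_000                       # one estimate of Internet users, end of 2007

penetration = estimated_subscribers / internet_users
print(f"{penetration:.1%}")                          # prints 13.8%
```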
So — talk about RSS on May 1, especially if you can do so without preaching to the converted. You and I probably do not need to be sold on the benefits. Our patrons do. Our parents probably don’t. Take advantage of the first RSS Awareness Day to spread the word.

RSS and Legal Liability

A French court has found that the publisher of a web site is liable for invasion of privacy because it republished rumors, via RSS feeds, that were themselves libelous. See French Websites liable for story in RSS reader (Out-Law.com). The publishers of the third-party sites had to pay fines of between 500 and 1,000 euros. Out-Law.com, a British legal news site, notes that, “while there has not been a test case in the UK on link liability,” there is a legal precedent that could be relevant in English common law: “A Court of Appeal ruling … found that a man who stood by a roadside placard drawing the attention of passers by to it was liable for its defamatory content, even though he did not create or erect the placard.”
This French case may not have any relevance in the U.S., where the legal concepts of freedom of speech and privacy are differently construed. I find it interesting that one publisher could be guilty of libel by reproducing, without any conscious effort, an RSS feed from another source. One of the strengths of RSS is also one of its drawbacks — you subscribe to the feed, come what may.
Do any RSS4Lib readers have opinions on this? Fire away in the comments.

Facebook Now Offers Chat

The number of ways it’s possible to communicate with friends (in either the traditional “I’ve-known-you-since-childhood” sense or the current “who-the-heck-is-this-person” sense of the word) has just expanded by one. Facebook now offers chat with any of your Facebook friends who happen to be visiting Facebook at the same time. Check out the new tool in the lower right corner of, apparently, every Facebook window (so it’s always at hand when you’re conducting important business on Facebook):

Facebook chat application

It helpfully shows you how many friends have a browser page open to the Facebook site and (when you click the “Online Friends” link) how many are idle and how many are actively Facebooking something. It also shows others your current Facebook status message when they look to see if you’re online.
The number of ways to stay in touch with people is exploding! I could Twitter you about the recent change to my Facebook status talking about the profile update I made in LinkedIn saying I posted a new entry to my blog, but I won’t. I’ll rely on good old RSS. The mind reels.

Libraries in Facebook

Facebook has a new tool, Lexicon, that “looks at the usage of words and phrases on profile, group and event Walls.” Similar to Google’s Zeitgeist, Facebook’s Lexicon shows how much people are talking about something.
So I tried a Lexicon search for “library”:

Facebook Lexicon for 'Library'

Predictably, Facebook’s users talk more about the library as the semester goes on — the rise through Fall 2007 is clear, peaking on December 10, plummeting over the winter holiday, and then slowly building through the winter until now; I suspect the peak is still a few weeks away.
“Research” and “study” show similar trends — you can see all three terms on one graph.

Next Generation Discovery Tools

There was a fair amount of discussion at the recent Computers in Libraries conference about “next generation discovery tools” — the technologies that, many of us hope, will supplant the now-aging OPAC concept and provide better, more interactive, and more extensive access to our libraries’ holdings.
Marshall Breeding has posted a guide to who is using these new interfaces at http://www.librarytechnology.org/discovery.pl, part of his extensive Library Technology Guides site. (I blogged about Marshall’s two presentations at CIL here and here.)
These next-generation tools, whether commercial offerings from the usual-suspect ILS vendors or open source projects from various places, hold great promise for improving our users’ ability to browse and find items that our libraries have already acquired.

[Via Guideposts.]

CIL2008: Information not Location

My colleague Mike Creech and I presented on "Findability: Information not Location" (3.3 MB, PPT) this afternoon. The talk abstract:

Learn how to foster user-friendly digital information flows by eliminating silos, highlighting context, and improving findability to create a unified web presence. Hear how the University of Michigan Libraries (MLibrary) are reinventing the libraries’ web sites to emphasize information over the path users previously took to access it. By elevating information over its location, users are not forced to know which library is the “right” starting place. The talk includes tips for your library’s web redesign and user-centered design processes.

Our talk was blogged by Librarian In Black.
I had a great time at Computers in Libraries — there were more interesting talks than I could attend, let alone blog. I have some catching up to do through the CIL2008 tag cloud, clearly.

CIL2008: Open Source Applications

Open Source Applications

Glen Horton is with the SouthWest Ohio and Neighboring Libraries consortium.
Libraries and Open Source both:
– believe information should be freely accessible to everyone
– give stuff away
– benefit from the generosity of others
– are about communities
– make the world a better place
Libraries create open source applications (LibraryFind, Evergreen, Koha, VuFind, Zotero, LibX, etc.)
Miami University of Ohio has a Solr/Drupal OPAC in beta (beta.lib.muohio.edu). Not even a product — just a test environment.
How can you do this without a developer? You can contribute to the community in other ways. Teach how to use the open source tools your library has installed — even if not developed there. Hold classes for your patrons on how to use the tools that are available. Help build a user community around the open source tools that you think are of value.
You can document open source software — improve the documentation for other libraries. When you figure it out, help others down the same path. Documentation is often hit or miss; developers are not necessarily good documentation writers, or don’t have time to write it. You can help debug open source tools. Report bugs! Influence the development path for the software. Bigger projects often have active support forums — lots of people reporting and fixing bugs. Smaller projects may not have that infrastructure.
Even if you don’t create or use open source software, you can promote it by linking to it from your web site, distributing it on CDs or thumb drives, etc.
“Open Source or Die.” Libraries benefit from open source — make sure that you are giving back to equal the benefit. Teach it, use it, document it, evangelize it.
Slides are at http://www.glengage.com/.

Open Source Desktop Applications

Julian Clark is at Georgetown University Law Library.
Why open source? It’s free! As in kittens. Which means acquisition costs nothing, but you’ve got a lifetime of maintenance and upkeep. But even more so… you have control and customization. You can change it to make it look and act the way you want. Security — active communities keep applications safe and updated against whatever the latest attack might be.
Why now? FUD about Open Source is declining. (FUD = Fear, Uncertainty, and Doubt). As open source becomes more mainstream, gut reaction against it is on the decline.
When is the best time to adopt? When you’re ready; there’s no easy way to gauge this. It depends on your IT support, library management, colleagues… But it can fit into your major upgrade cycle. If you’re planning a major upgrade anyway, why not consider a switch rather than an upgrade? These upgrades often have long lead times; why not take advantage of that planning process to migrate? It could also be triggered by reduced capital funding — where you have staff, but not money, to spend on your systems.
Can you do this? Do you have the right hardware to run the tool? (This applies both to back-end and web-based systems and to the operating system for public-use computers — a replacement for Windows, for example.) Does your organization’s IT group support open source — how much can you do, and with whom do you have to collaborate?
Support options: purchased third-party support is often available, with varying degrees of quality and availability depending on the software being supported. You can often hire for a project, for the long term, etc. Flexibility. Of course, there’s always in-house — someone on your staff who knows (or can learn) the software and who knows and understands your organization.
Q (for Glen): What are the risks of providing open source software to patrons who then want support from you for it?
A: Well, you can provide it explicitly as-is.

CIL2008: The Open Source Landscape

This is the presentation I had hoped to hear in yesterday’s keynote.

Marshall maintains a list of who has what catalogs on his Library Technology Guides site.
Federated search systems: LibraryFind; dbWiz (Simon Fraser); Masterkey (developed by Index Data). See masterkey.indexdata.com for a demo.
OCLC offers some open source software — but not cutting edge stuff. Fedora is a major digital repository engine. VTLS Vital is based on Fedora. Fedora Commons is a support service around it. Keystone — also by Index Data.

Open Source Discovery Products (i.e., Next Generation Catalogs)

– VuFind. Built on Apache Solr/Lucene.
– eXtensible Catalog (Mellon-funded). Not a product now, but will be one day. XC is currently seeking institutional participation. This will “probably become a player” in the coming years.
– Others, such as Fac-Back-OPAC and Scriblio (formerly WPopac).

Open Source in the ILS Arena

Shifting from open source being risky to open source being mainstream. Medium-sized public libraries are going with open source solutions for the catalog; it no longer requires massive technological effort or as much risk as it did.
In 2002, the open source ILS was a distant possibility — 3 of the 4 tools Marshall reviewed then (Avanti, Pytheas, OpenBook, and Koha) are now defunct. At that point, the open source ILS wasn’t a trend.
In 2007, the world was starting to change. Slowly. A few hundred libraries had adopted an open source ILS; 40,000 had purchased a commercial product. By March 2008, early adopters had become catalysts for others. There’s a small installed base, which makes others see the possibilities as being real. It seems now that we have a bona fide trend.
The ILS industry is “in turmoil”. Companies are merging; libraries are faced with fewer choices from commercial vendors; this gives open source options more credence in the ILS arena from the standpoint of competition.
The decision to go open source is still primarily a business decision — as a library, you need to demonstrate that the open source ILS best supports the mission of the library.

Current Product Options

Koha was the first open source ILS. Based on Perl, Apache, MySQL, and the Zebra search engine (from Index Data). It has 300+ libraries using it, including Santa Cruz Public Library, with 10 sites and 2 million volumes. Has relevance-ranked search, book jackets, facets, all that jazz.
Evergreen. Developed by the Georgia public library consortium (PINES). Two-year development cycle (6/2004 – 9/2006). A single environment shared by all libraries. One library card. Switched from Sirsi Unicorn. Succeeded in part because of the standardization of policies across libraries (lending policies, etc.). Used in Georgia, British Columbia, and Kent County (Maryland), and under consideration by a group of academic libraries in Canada. So far, only public libraries have adopted it.
OPALS (Open Source Automated Library System). Developed by Media Flex. Offered as both installed ($250) and hosted ($170) services. Used by a consortium of K-12 schools in New York.
NewGenLib. An ILS designed for the developing world. 122 installations (India, Syria, Sudan, Cambodia). Originally closed, converted to open source in early 2008. More information is on Library Technology Guides.
Learning Access ILS. Designed for underserved rural public and tribal libraries — a turnkey solution. But may be defunct, according to Marshall. Built on an early version of Koha, but customized.

Open Source Business Front

Lots of companies offer commercial support for open source ILS software: Index Data, LibLime (Koha), Equinox (PINES), Care Affiliates, and Media Flex.
Duke is working on an open source ILS for higher education (looking for funding from Mellon; Marshall is involved).

Open Source Issues

The rise in interest is led by disillusionment with traditional vendors. But total cost of ownership is probably about the same between open source and traditional tools. Libraries hope that they are less vulnerable to mergers and acquisitions. There’s no lump-sum license payment (though you still need hardware, support — internal or external — and development costs). It is not always clear who is funding the next generation of the current system.
Risk factors: dependency on community organizations and commercial companies. Decisions are often based on philosophical reasons, but they shouldn’t be — you need to consider the merits of the system itself. Make sure features and functionality are what you need.
Open Source vendors/providers need to develop and present their total cost of ownership — with documentation.
“Urgent need for a new generation of library automation designed for current and future-looking library missions and workflows.” That is, systems built for our digital and print collections. Open source tools do OK for systems of yesterday; will they meet the needs of the new library?
Q: How close are we to a system that does not utilize MARC records?
A: Not very. We need systems that do MARC, and Dublin Core, and ONIX, and RDF, etc., etc. The value in existing MARC records is too large to ditch. (Of course, it needs to be MARC XML.)
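Since the question of moving beyond (or alongside) MARC keeps coming up, here is a minimal sketch of what working with MARC data programmatically can look like. It was not part of the talk; it assumes the third-party pymarc library and a hypothetical records.mrc file of MARC21 records.

```python
# A minimal sketch (not from the talk): reading binary MARC records and serializing
# them as MARCXML with the third-party pymarc library (pip install pymarc).
from pymarc import MARCReader, record_to_xml

with open("records.mrc", "rb") as marc_file:      # hypothetical file of MARC21 records
    for record in MARCReader(marc_file):
        # record_to_xml() serializes one record as a MARCXML <record> element
        print(record_to_xml(record))
```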

CIL2008: Drupal and Libraries

Presented by Ellyssa Kroski.
She uses a course page she set up for her library school course as an example. Students each had a blog; they could tag their blogs and posts, mark favorite things within the community, share things via email, upload videos and photos, create and take user polls, keep buddy lists, and sign a guest book (i.e., a Facebook-style Wall). The site also had a class chat room, a tag cloud for the site’s tagged content, and a “what’s new” view of recently added and updated content.
Drupal runs on Apache, MySQL, and PHP. It has 3 components. 1) The core CMS that lets you organize and publish content to the web. This core functionality is well maintained, with a release schedule and bug fixing. 2) Contributed modules — things added by the user community. A bit of the “wild west” with these; not much oversight or control. Some are very well done; others not. 3) Themes. The skin on the site. Created with a combination of HTML, PHP, and CSS.
A very active, engaged user community, including many libraries. The most recognized, probably, is Ann Arbor District Library, which wrote a custom module to place the OPAC into the Drupal framework. L-Net has a staff intranet that manages 65,000 virtual reference transcripts. Franklin Park Public Library uses Drupal; the site was done by one person, not an IT guy. St. Lawrence University Library — a staff intranet as a communication tool for student workers on evenings and weekends; also using Drupal to plan a redesign. The public web site, launching in fall 2008, will combine all of the library web sites and includes a course resources module that will allow faculty to build course resource lists; students will be able to vote on them, upload images, etc. IUPUI Library — pulls databases from Metalib, via the X-Server, and organizes them into appropriate subject guides by category. Librarians now have subject guides that are updated more frequently than before, because updating is easier.
Simon Fraser University Library uses Drupal for its workshops page. Users can register, be wait-listed, etc., and staff can manage registration lists. It uses the Drupal events module. Florida State University Libraries’ content is currently managed through pages, but they are moving into more of a true CMS implementation. Red Deer Public Library. And many other examples.
Slides and links are available at
http://oedb.org/blogs/ilibrarian/2008/drupal-and-libraries-at-cil2008/

CIL2008: The New Generation of Library Interfaces

Presented by Marshall Breeding, Director for Innovative Technologies and Research, Vanderbilt University
Marshall Breeding maintains the Library Technology Guides site. Today’s topic is next-generation catalogs.
Patrons are steering away from the library. Scarily low percentages of users think to start their research at the library. Libraries live in an ever-more-crowded landscape — there are so many places information seekers could go. Our catalogs and sites do not meet the expectations of our patrons. Commercial sites are engaging and intuitive. “Nobody had to take a bibliographic instruction class to use a book on Amazon.com.”
A demand for compelling library information interfaces. Need a “less underwhelming experience” at a minimum.

Scope

Current public interfaces have a wealth of defects: poor search, poor presentation, confusing interfaces, etc. Users need to go here, or there, or elsewhere, to find the kind of information they’re looking for. We make them make choices. The entire audience agreed (by show of hands) that the current state of OPACs is dismal.
We need to decouple front end from the back end. Back end systems are purpose-built and useful (to us). Front end systems should be useful for users.
Features Breeding expects to see in the next generation:
Redefinition of “library catalog” — needs a new name. Library interface? Isn’t just an item inventory. Must deliver information better. Needs more powerful search. Needs, importantly, a more elegant presentation. Keep up with the dot com world.
It must be more comprehensive — all books, articles, DVDs, etc. Print and digital materials must be treated equally in the interface. Users must not be forced to start in a particular place to find the material they want. They want information, not format. More consolidated user interface environment is on the horizon.
Search — not federated, but something more like OAI: searching metadata harvested from databases, not just the first results returned by each database. Coordinated search based on harvested/collected metadata reduces problems of scale. There are still great problems of cooperation, and also questions of licensing.
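To make the harvesting model concrete, here is a rough sketch of the kind of OAI-PMH request such a system issues to collect metadata before indexing it locally. This is my illustration, not something shown in the talk; the repository URL is a placeholder.

```python
# A rough sketch of harvesting Dublin Core metadata over OAI-PMH, the protocol most
# metadata-harvesting search systems rely on. The repository URL is a placeholder.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

OAI_ENDPOINT = "https://repository.example.edu/oai"   # hypothetical OAI-PMH base URL
DC = "{http://purl.org/dc/elements/1.1/}"             # Dublin Core XML namespace

params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
with urlopen(f"{OAI_ENDPOINT}?{params}") as response:
    tree = ET.parse(response)

# Print the title of each harvested record. A real harvester would also follow
# resumptionTokens, handle deletions, and do incremental (from/until) harvests
# before indexing the records locally.
for title in tree.iter(f"{DC}title"):
    print(title.text)
```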
Web 2.0 influences. Whatever the next system is, it needs to have a social and collaborative approach. Tools and technologies that foster collaboration. That means integrating blogs, wikis, tagging, bookmarking, user ratings, user reviews, etc. Bring people into the catalog. At the same time, it is important not to create new Web 2.0 information silos. Don’t put the interactive features off to the side — integrate them. Make it all mutually searchable.
Supporting technologies: Web services, XML APIs, AJAX, Widgets. The usual suspects.
The new system needs a unified interface. One front end, one starting point. Link resolver, federated search, catalog, web — all in the same place, same interface. It combines print and electronic, local and remote, locally created content, and even — gasp — user-contributed content.

Features and Functions

Even if there is a single point of entry, there should be an advanced search that lets advanced users get to specific interfaces. Relevancy-ranked results. Facets are big and growing. Query enhancement (spell check, did you mean, etc.) — to get people to the right resources. Related results, breadcrumbs, single sign-on, etc.
Relevancy ranking — Endeca and Lucene are built for relevancy. Many catalogs default to listing results by date acquired. However it’s done, the “good stuff” should be listed first. Objective matching criteria need to be supplemented by popularity and relatedness factors.
Faceted browsing — users won’t use Boolean logic; they need a point-and-click interface to add and remove facets. Users will do an overly broad search; you can’t stop them. Let them, but give them tools that allow them to correct their “mistake” easily. Don’t force them to know what you have before they search.
We need spell check and automatic inclusion of authorized and related terms (so the search tool includes synonyms without the user having to know them). Don’t give them a link from “Did you mean…” to “no results found.” That’s rude. Improve the query and the results without making the user think about it.
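As an illustration of how facets and “did you mean” suggestions are typically wired up, here is a rough sketch of a query against a Solr index, the engine several of these next-generation tools are built on. This is my sketch, not anything shown in the talk: the host, core name, and field names are hypothetical, and spellcheck must also be configured on the Solr side.

```python
# A rough sketch of a faceted, spell-checked search against a Solr index.
# The URL, core name, and field names are hypothetical.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_SELECT = "http://localhost:8983/solr/catalog/select"   # hypothetical Solr core

params = urlencode({
    "q": "global warmming",                       # deliberately misspelled user query
    "defType": "edismax",                         # keyword search with relevancy ranking
    "facet": "true",
    "facet.field": ["format", "subject_topic"],   # point-and-click refinements
    "spellcheck": "true",                         # drives the "Did you mean...?" suggestion
    "wt": "json",
}, doseq=True)

with urlopen(f"{SOLR_SELECT}?{params}") as response:
    result = json.load(response)

print(result["response"]["numFound"])                          # hit count
print(result.get("facet_counts", {}).get("facet_fields", {}))  # facet values and counts
print(result.get("spellcheck", {}).get("suggestions", []))     # "did you mean" candidates
```

The point of the sketch is that the interface work (facet links, suggestion text) sits on top of a single query; the catalog front end just renders what comes back.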
Don’t get hung up on LCSH — think about FAST. Describe collections with appropriate metadata standards. Good search tools can index them all, anyway. Use discipline-specific ontologies — even if not invented by librarians! — as they are the language of the users.
More visually enriched displays. Make them look nice. Book jackets, ratings, rankings.
We need a personalized approach. Single sign-on. Users log in once, the system knows who they are, and that’s it. No repeated signing on. The ability to save, tag, comment on, and share content — all based on the user’s credentials. This lets them take the library into the broader campus environment.
Deep search. We’re entering a “post-metadata search era”. We’re not just searching a cataloger’s headings; we’re searching the full text of books, and across many books. Soon we will be able to search across video, sound, etc. We need “search inside this book” within the catalog.
Libraries aren’t selling things; we’re interested in an objective presentation of the breadth of resources available. Appropriate relevancy for us might include keyword rankings, library-specific weightings on those keywords, circulation frequency, OCLC holdings. Group results (i.e., FRBR). Focus results on collections, not sales.
What we do must integrate into our “enterprise” — university, government body, city government, etc. We need to put our tools out where the users are since (as we know) we’re losing the battle to make them come to us. Systems must be interoperable — get data out of the ILS and into next-generation systems, with hooks back into the ILS from the front end.
This won’t be cheap, in terms of money and effort both. But we can’t afford not to make this transition. We don’t have years to study and work to catch up with where we should have been years ago.
Is there an open source opportunity? Yes, but implemented systems are not taking the open source approach, for the most part.

I had hoped for a product review in this session, but the overview of features and desiderata was very helpful. There was a whirlwind tour at the end, but I would have liked a fuller survey of what’s out there.