Using WorldCat Grid Services in Library Applications — Access 2008

Roy Tennant
OCLC Research
Grid Services — a set of APIs to access WorldCat data. It’s not for human consumption, but for machine-to-machine communication. We’ll talk about a few services, with a few demos.
Many APIs — OCLC’s and others’ — are collected at TechEssence.
These APIs are available for free to all OCLC member institutions.

Identifier Services (xISBN, xISSN)

xISBN: If you have a book ISBN, OCLC will return a list of all the works related (in a FRBR sense) to that book. It’s a way to find a different edition or version of the same book that might be in your catalog. Particularly helpful if someone comes into your catalog via Amazon or another bookseller — such as via the LibX Toolbar.
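As a sketch, here is what an xISBN lookup looks like. The endpoint pattern and the JSON response shape below follow OCLC's documentation of the era, but treat both as assumptions (and the service has since been retired):

```python
# Sketch of an xISBN lookup. Endpoint pattern and JSON shape are
# assumptions based on OCLC's documentation of the era.
from urllib.parse import urlencode

def xisbn_url(isbn, method="getEditions", fmt="json"):
    """Build a request URL for the xISBN service."""
    base = "http://xisbn.worldcat.org/webservices/xid/isbn/"
    return base + isbn + "?" + urlencode({"method": method, "format": fmt})

def related_isbns(response):
    """Pull the related ISBNs out of an xISBN-style JSON response."""
    return [entry["isbn"][0] for entry in response.get("list", [])]

# A canned response in the assumed shape:
sample = {"stat": "ok",
          "list": [{"isbn": ["0596002815"]}, {"isbn": ["0596529260"]}]}
print(xisbn_url("0596002815"))
print(related_isbns(sample))
```

An application like LibX would take the returned ISBNs and check each against the local catalog.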
xISSN: If you give this tool an ISSN, it sends back a chart showing the history of that ISSN in all its splits, merges, renamings, and everything else that has happened. Example.

Registry Services (Institution Registry)

Any institution can enter information for itself in this registry. It includes all sorts of things — hours, administrative contacts, root URLs for link resolvers, OPAC, and so forth. The WorldCat Registry also has an API.

Experimental Services (Terminologies, Metadata Crosswalk)

Terminologies: Search for terms (broader, narrower, related) in various taxonomies: FAST, LCSH, MeSH, etc.

WorldCat Search API

This is the flagship OCLC API, released in August 2008 after an 8-month beta/test period. More than 80 institutions are signed up to use it. There are 110 million records, representing 1.3 billion holdings. This API supports OpenSearch and SRU. Responses come back in flavors of XML: RSS, Atom, MARC21 XML, and Dublin Core. JSON may be coming soon. It’s RESTful. Many indexes. Sorts by relevance, author, title, date, libraries that hold it. It can return standard citations (APA, Chicago, Turabian, MLA, Harvard).
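As a sketch of what an SRU request to the Search API looks like: the base path and the wskey parameter name here are assumptions (check OCLC's developer documentation), while the operation and version parameters follow the standard SRU 1.1 convention the API supports:

```python
# Sketch of an SRU 1.1 searchRetrieve request to the WorldCat Search API.
# The base path and the wskey parameter are assumptions; the CQL index
# (srw.ti) follows common SRU naming.
from urllib.parse import urlencode

def sru_query(base, cql, wskey, maximum=10):
    params = {"operation": "searchRetrieve",
              "version": "1.1",
              "query": cql,
              "maximumRecords": maximum,
              "wskey": wskey}
    return base + "?" + urlencode(params)

url = sru_query("http://worldcat.org/webservices/catalog/search/sru",
                'srw.ti = "open source"', "YOUR_KEY")
print(url)
```

The response would come back as one of the XML flavors mentioned above (MARC21 XML, Dublin Core, RSS, or Atom).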

Mashing Up and Remixing the Library Website — Access 2008

Karen Coombs
University of Houston
A couple definitions: Mashing up — taking data from different places and shmooshing it together. Remixing — rethinking the way we do the library web site.
University of Houston library web site, 3 or so years ago, had 1500 pages of content all managed centrally. Needed re-architecting for the same reasons we have all experienced: Staff have a wide range of HTML and computer savvy — running the whole gamut from technology illiterate to programmers. Library web sites incorporate information from lots of different sources. There’s lots of redundancy of data — the same information appears on many different pages. Library users don’t come (or wish to come) to library site — there’s a need to get library information where patrons are. And finally, library information is not well integrated into the curriculum.
The ACRL information literacy report reinforced that library instruction needs to take place as part of the curriculum, not as an add-on. So — in the classroom, in the Course Management System.
The traditional approach has been to implement database-driven sites, skinning pages to look like each other. What Karen wanted was an easy-to-use system with little Web Services (her department) intervention, and content that can be easily mixed, reused, and shared within and between systems.
Took inspiration from iGoogle, Drupal, netvibes, and WordPress. Liked content types, widgets (easy drag-and-drop customization).
In the system built there, content owners are responsible for content organization and metadata. Librarians own the content; supervisors have a review role. A librarian owns pages but also items (the building blocks of pages).
Use many tools: LibraryFind, WorldCat, Archon, Serials Solutions, flickr, WordPress multiuser.
The site is completely modular — librarians can add modules to pages, organize them how they want. Fonts and colors are controlled centrally, but layout is up to the content owner.
Content is remixable. This means that content can be used elsewhere inside the site, but also outside the site. The external uses aren’t quite ready — the API is still in development. But they do use microformats: if you view an event, the event is stored as an hCalendar microformat, and contacts are hCards that can be added directly to a visitor’s address book. Flash objects can also be embedded into a web page.
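For illustration, here is the hCard pattern for a contact, rendered as a string from Python. The name, organization, and phone number are made up; the class names (vcard, fn, org, tel) come from the hCard microformat spec:

```python
# Render a contact as hCard markup. Contact details are hypothetical;
# class names follow the hCard microformat spec.
def hcard(name, org, tel):
    return ('<div class="vcard">'
            f'<span class="fn">{name}</span>, '
            f'<span class="org">{org}</span>, '
            f'<span class="tel">{tel}</span>'
            '</div>')

card = hcard("Jane Librarian", "University of Houston Libraries",
             "713-555-0100")
print(card)
```

A microformat-aware browser extension can detect this markup and offer to add the contact to the visitor's address book.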

Virtual Subject Library

This is essentially a subject guide. It brings together federated search, new books, relevant databases, subject liaisons, text blocks, etc. Sample (may not be permanent — this link is on a test server).
Karen walked us through a quick demonstration of building a page. It’s efficient and elegant — assuming that all the resources have already been added. An advantage to this is that there are no broken links. Or if there are, they are found and corrected very quickly since there is only one database entry. Most staff built their libraries in the course of a week — but it took more time for them to conceptualize what they wanted to have on the page and where to put it.

Content Creation and Management

Tab editor works similarly. Through the admin interface, a librarian provides the names of the tabs, gives each tab a type, sets the order, and then (for each tab) makes it do the right thing. A search tab is configured to search something. A text tab is configured to display text. And so forth.
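The tab configuration described above can be sketched as a small data structure. Field names and tab types here are illustrative, not UH's actual schema:

```python
# Illustrative tab configuration: each tab has a name, a type, and an
# order, and the type controls what the tab does. Names and types are
# made up for this sketch.
def render_tab(tab):
    if tab["type"] == "search":
        return f"[search box targeting {tab['target']}]"
    if tab["type"] == "text":
        return tab["text"]
    return "[unknown tab type]"

tabs = sorted([
    {"name": "Articles", "type": "search", "target": "LibraryFind", "order": 2},
    {"name": "Overview", "type": "text", "text": "Start here.", "order": 1},
], key=lambda t: t["order"])

for tab in tabs:
    print(tab["name"], "->", render_tab(tab))
```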
An interesting add-on is that the link tool in the basic text editor takes the librarian to a search interface to all the links in the database. So an existing link can be added (and is therefore recorded once, used multiple times). New links can be added if the link is not already in the database.
Since the site is modular, it is easy to replace functionality. Google Calendar could replace the current events tool. One staff directory could supplant another.

Next Steps

It needs an API — badly, both to integrate content into external sites and to improve internal AJAX content, which will run much faster via an API than via direct database queries. While images and video can be uploaded easily, it’s not easy to get them into a text block, for example — so this part is only half done. Also coming: integration with WorldCat, putting bibliographies from WorldCat into a page.
The staff interface is very personalized; the user interface is not, yet. A main impediment is that university information systems do not offer easy ways to get data about students (role, major, etc.). Another goal: bring federated search results into the context of the web site.

Questions

Q: How does this work with accessibility?
A: The site, as seen by the public, is very accessible. The staff interface is not accessibility compliant yet — nor is it cross-browser compliant (Firefox only).
Q: Can you talk about use cases for the API under development, and demand?
A: The first use case is internal: current features built on direct database connections will be faster via an API. They also want to be able to move things between the Intranet and the public site — some content belongs in both places. There’s a desire to put library content into the University web site. Courseware is difficult because UH is a WebCT site and that’s not particularly API-friendly. An API would also make building applications for mobile devices easier.
Q: Could you talk about development effort?
A: It’s been in the works for about 3 years. It’s written in ColdFusion, because the old site was in ColdFusion and they didn’t want to freeze the site; they could replace parts as they went along. One developer FTE for one year built it; they now have two developers working on it.

Thunder Talks — Access 2008

Thunder Talks are brief (4 minutes 30 seconds!) talks on any subject the speaker wants to talk about. Without further ado:

BiblioCommons

Rolled out at Oakville (Ontario) Public Library — a live implementation of the tool (the research leading to BiblioCommons was described in one of yesterday’s posts). Lesson learned: don’t roll out a JavaScript-heavy site on IE 6 browsers. In a few weeks, they have received thousands of ratings, reviews, and lists.

OLE — Open Library Environment

OLE is a framework for libraries that support research, teaching, and higher education. Led by Duke University. By July 2009 there will be a completed design document. Focuses on design, not software development, but expects that a follow-on implementation phase will happen.

How To Adopt Open Source

Things they’ve done recently in the University of Prince Edward Island library catalog: since there was no reserves module in Evergreen, they used book bags to gather resources and RSS to populate subject guides. Added linkable subject terms. Switched thumbnail images from Amazon to the Google Books API. Also added an “Excerpt” tab in the catalog record, which pulls an excerpt of the book from Google Books.

What’s Happening in Saskatchewan

One system for all public libraries to share resources: a single library card and no ILL — just place a hold. The project was going well until the budget went away, but progress was still being made. Libraries created consortia, did an RFP, and are moving forward with a consortial integrated library system (CILS). This is the start of a three-year process.

RSS Feeds for New Books

Based on Doran’s New Books List for the Voyager catalog. A three-element table: use a single LC class, a multi-character LC class (A, B, and J), or ranges of call numbers (AB 123 to AB 155). The script outputs both RSS and HTML, which can be mixed and matched to pull together a new books list that matches specific patron needs.
See KSU’s New Books Feed. Custom feeds are possible — for KSU staff only — but require that the patron enter a request in words, which library staff translate into LCSH-speak. Custom feeds are underutilized: only 10 so far, mostly for librarians.
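The three kinds of table entries can be sketched as a simple matcher. This is an illustration, not Doran's actual script, and the range test here is a plain lexical comparison rather than a true LC call-number sort:

```python
# Simplified matcher for the three kinds of table entries: a one-letter
# LC class, a multi-letter class, or a call-number range. Not a true LC
# sort; range comparison is lexical.
def matches(call_number, rule):
    kind, value = rule
    if kind == "class":                      # e.g. "A" or "AB"
        cls = call_number.split()[0]
        return cls == value
    if kind == "range":                      # e.g. ("AB 123", "AB 155")
        lo, hi = value
        return lo <= call_number <= hi
    return False

print(matches("AB 140 .C3", ("class", "AB")))                   # True
print(matches("AB 140 .C3", ("range", ("AB 123", "AB 155"))))   # True
print(matches("J 12", ("class", "A")))                          # False
```

A feed for a specific patron need would simply union the new titles matched by several such rules.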

Koha in a Small Public Library

Hanover (Ontario) Public Library wanted to implement Koha together with a small number of nearby libraries (14). They couldn’t find a Koha support vendor to do the implementation the way they wanted, so they’re doing it themselves, building on virtual machines to allow flexibility — in implementation, and to add new libraries.

Fedora Drupal Module

University of PEI built a Fedora module for Drupal. It will be open-sourced “very soon.” They have several content models. It makes ingest very easy — solving the basic problem of many repositories: nobody contributes because it’s hard. One example is a RefWorks collection — citations are ingested and, if RoMEO allows it, the full text as well. Metadata are editable in Drupal but stored in Fedora.

Zotero Connection to Evergreen

If you have the Zotero Firefox plugin, you see Zotero cues in the Firefox URL bar. It then lets you select items from the results list into Zotero. Accomplished through a LINK item in the document head that points to the unAPI service that Zotero uses.
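The unAPI hooks Zotero looks for can be sketched like this; the endpoint URL and record identifier are placeholders, while the rel and class names follow the unAPI spec:

```python
# The two unAPI hooks: a LINK element in the document head naming the
# unAPI endpoint, plus a per-record identifier in the page body. URL and
# identifier are placeholders; rel/class names follow the unAPI spec.
def unapi_link(endpoint):
    return (f'<link rel="unapi-server" type="application/xml" '
            f'title="unAPI" href="{endpoint}" />')

def unapi_id(identifier):
    return f'<abbr class="unapi-id" title="{identifier}"></abbr>'

print(unapi_link("http://catalog.example.edu/unapi"))
print(unapi_id("record:12345"))
```

Zotero reads the LINK element, finds the identifiers in the results list, and asks the unAPI endpoint for each record in a format it understands.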

Dashboard for Library Information

Summary information for administrators. It shows trends — not individual items, but gross numbers over time that can help administrators understand what is going on. There’s a back end that allows users to easily create widgets pulling together numbers from various systems, though the data are entered by hand. A storehouse for library data with graphical presentation.

Drupal Module for ContentDM

A Drupal module built at Simon Fraser University to search ContentDM through Drupal for the Multicultural Canada site. It uses the ContentDM API and the Drupal API, or “one big ‘appy family.”

Drupal: Content Management and Community for your Library — Access 2008

Ilana Kingsley
Dave Mitchell
Harish Nayak
Debra Riley-Huff
Nick Ruest

University of Alaska, Fairbanks (Ilana Kingsley)

Movie collection: Movie covers (built with Drupal 4). Pulls in movie records from the catalog (it’s a Sirsi catalog, so they need to screen-scrape) and matches them with images and ratings from the Internet Movie Database and/or Rotten Tomatoes.
Library web site: Ilana got tired of making small changes to site and wanted to get staff more involved in content editing. Using Drupal’s modules, can customize what appears where and when.
Looked at lots of CMS tools (leaving out Plone, since Ilana didn’t know Python). Installation was easy, didn’t need to know lots of PHP. There’s a huge Drupal community — lots of support.
Had a two-year implementation process. Part of the problem was political: the campus IT department was not in favor of PHP/MySQL. Content analysis was a key element — making sure she understood the content types so that, ultimately, they could all be defined in the database and then assigned to individuals for maintenance and upkeep.
Keeps updating/adding modules — after testing on a development server.
Has a number of content types: advertisement, annual report, collection guides, exhibits and collections, news & events, article indexes and collections, etc. Roles form the basis for content types; roles started with departments.

University of Mississippi (Debra Riley-Huff)

Subject guides: Used the Content Construction Kit to create a content type for subject guides, with customized navigation and presentation. The Presidential Debate guide, set up for the first U.S. presidential debate (held on campus), got heavy use, and the Drupal install held up well under heavy traffic.
Themes are what make a Drupal site look the way you want it to. Best to start with the “Zen” theme, which is bare-bones and easier to customize than the out-of-the-box themes that come with Drupal. Matching an existing site is difficult. Relied heavily on the Content Construction Kit.
Government documents: A government documents repository site — government documents librarian can maintain the content through Drupal.

University of Rochester, River Campus (Harish Nayak)

Revamped library web site into Drupal. Also, Drupal is being used in the eXtensible Catalog (XC) project at Rochester, so there’s a large internal drive to make it happen there.
Their redesign process involves numerous interconnected activities, several of which center on the content. User research — the library’s staff anthropologist did an ethnographic survey of how students use the library (broadly, not just online). Technology — showing new technologies to library staff. Usability — the checkpoint to make sure that the technology is being applied in good ways. Design — where the programming requirements come from.
Customization of user content (through the MySite and/or Panels modules) gives a more personalized user experience. Rochester used MySite to allow users to rearrange their pages. It relies on JavaScript in the page, and because pages aren’t the same for all users, more interaction with the server is necessary, which can increase load.

London Public Library (Dave Mitchell)

Picked Drupal because of cost. But got very easy customization as a result.
Modified the comment tool so that comments could exist across sets of pages, not just on a single page — so that, for example, election information comments and questions could appear on all government-related pages as a single thread.

Nick Ruest (McMaster Library)

Library’s Digital Collections. Drupal isn’t an out-of-the-box digital collections tool, but Drupal’s CCK allows for the creation of a Dublin Core metadata set.
OAI-PMH & CCK: The site has been harvested by several OAI-compliant harvesters, putting digital content into broader access.

User-Generated Content and Social Discovery in the Academic Library Catalogue: Findings from User Research — Access 2008

Martha Whitehead, Queen’s University
Steve Toub, BiblioCommons

The problem is “discovery” — getting answers to questions that you don’t know how to ask. In other words, finding things you don’t know about. Not just updating the catalog. They were dissatisfied with the federated search tools.
Catalogs are solitary experiences, but learning and research are social activities. User-generated content is what this project is about. Narrowly, tags, ratings and reviews. In the broader sense, curating that information.
The research project with BiblioCommons was aimed to figure out how tagging works in the academic environment. Reading lists are an obvious, and old, form of user-generated content. Research paths in libraries — how to do subject research — are another (librarian-generated, but we’re users, too). Faculty members are the “ultimate research advisor.”
In an Ontario Council of University Libraries study of the ideal research process, users wanted to see recommendations from “authorities,” wanted to find classics in the field, and also wanted to find surprises — serendipity.
Draws a distinction between social discovery and social networking. The former is serious. What features should be built into an academic research site? Fear that information would be misleading, that faculty (who know subjects best) wouldn’t have time to contribute, that students (for any number of reasons) won’t contribute.
But students are inherently social and even when in the library want friends to know where they are. Study participants wanted to know what their trusted colleagues (professors, fellow researchers) think.

User Research in Academic Environment

BiblioCommons is a next-generation discovery tool, a social network, and an OPAC. In March 2008, Steve Toub recruited Queen’s University faculty, students, and librarians to talk about how they do their research.
Non-librarians do not limit (i.e., use facets) very much. Students don’t reformulate queries; they go back to the original search and re-do it. Users would avoid LCSH at all costs in the catalog (but would use it as a browsing tool). Students don’t “experience pain” when manually formatting citations — it’s just part of the process. Librarians think direct export to RefWorks is a must. Librarians want to help; users want to be independent.
The second round of research, in June, was about user-generated content (UGC). Went through a variety of tests, from paper prototypes to full mock-ups. The focus of this study was on a very narrow sense of UGC. There was not much understanding of why people should tag, and no clear understanding of motivations for tagging in the academic library catalog.
Started by asking students: if you want to buy a camera or see a movie, where do you go? Asked if students looked at comments by others. Most of this 18-22 age range said they sought out sites with UGC. They preferred comments from “people like them” over recognized critics/reviewers/professionals. Most had used Rate My Professors; they mostly looked at comments, not paying much attention to ratings.
One student said, “I don’t necessarily want the opinion of a professor — I’m looking for people who are as incompetent as I am.”
Two of 10 students knew what tagging was by name. But they had no idea what it was when they saw the MTagger tag cloud. Tagging Facebook pictures is utterly different from tagging text. Ideas for change: alter the labeling (use “themes,” “keywords,” or “what terms you use to help others find this”), and explain tagging in the cloud, not via a link.
In the catalog, they showed a mockup of a review system. For recently returned books, the user can say how useful it was and for which course (from a list), plus a brief survey of what the user used in the book (the whole thing, or just a chapter, and which one), etc.
Another version — provide sliders for “relevance to course”, “level of difficulty”, “personal interest”, etc.
The most important data element requests were things like “is it going to be on the test”? “How is this related to other texts”? “How is this related to the lecture”? Users requested clear signals about how important the item is to the class.
Most students wouldn’t fill in more than 1-2 data elements — so the opportunity to collect data is limited. Most wanted anonymity: an identity that’s not personally identifiable. Yet most students wanted to share their comments — that was the point.
Asked, what if — when you logged in to the library — you saw the syllabi for your courses? Very popular.
When are students most likely to contribute? Only if the syllabus is online. Probably not for the current week’s syllabus, but for the immediately previous week. A Netflix-style “you just returned this item, would you rate it?” sort of interaction. Putting the collection point for UGC at the right point in the workflow is tricky.
What kinds of rewards are of interest? From list of choices, top response was to help others get to resources faster. Idea of “paying it forward” — if I do it now, it will help others later, which will help me when I need it. Sense of “empty restaurant syndrome” — if no tags are there, why would I join in? “Buying” student participation seems pretty easy.
Barriers to contributing: nobody wants to support freeloaders (helping those who don’t contribute), but everyone wants content to be there when they need it. Fears of plagiarism outweigh willingness to share with others — even at the level of sharing a reading list for a paper through an online system.
Three strategies for ensuring quality:
1) Authentication — people log in to library and library knows who you are (even if it’s not your university ID).
2) Aggregation — pooling content from multiple systems provides more content and helps “smooth out” the details, with the ability to identify individual users while seeing the mass.
3) Marketplace of ideas — create a self-managed system (no editorial review) to make sure reviews themselves are vetted by the masses.

BiblioCommons Roadmap

Near term — provide an outstanding user experience — make interface simpler, cleaner, and more intuitive.
Mid term — organize catalog experience around courses and assignments — not LCSH or broad subject guides. You see a course- or assignment-specific view when you log in to the catalog.
Long term — breaking down barriers between silos. Federated search isn’t the answer. Everything is integrated.

BiblioCommons Status

User research led to current priorities. This year and next — an iterative beta release process.

MyLibrary: A Digital Library Framework and Toolbox — Access 2008

Eric Lease Morgan
University of Notre Dame

MyLibrary is about creating relationships. It’s a way to catalog resources — very broadly defined (people, databases, books, you name it). MyLibrary was invented about 10 years ago and had a lot of success/popularity then; the concept of “my library” was picked up by others, such as MyiLibrary. It was a turnkey application — download, install, and run. It was simple, and it worked, but it wasn’t as complex as it is now.
MyLibrary is made up of four kinds of objects:

  1. Resources
  2. Patrons
  3. Librarians
  4. Facets and terms

All of these objects are stored in “Dublin Core-esque” data structures. Patrons in the system have a name, major, etc. Librarians have a name, subject areas, contact info, etc. Resources have material types, subjects, and the academic level of the primary audience. All of these descriptive terms are “facets and terms”; facets are classes of terms. For example: format: book; subject: forestry; and so on. You can have as many facets as you like, and as many terms under each facet. It’s all 2 levels deep.
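A minimal sketch of that two-level structure (field names here are illustrative, not MyLibrary's actual Perl API):

```python
# Minimal model of the two-level facet/term scheme: facets are classes
# of terms, and each resource carries facet/term pairs. Field names are
# illustrative.
facets = {
    "format": ["book", "journal", "database"],
    "subject": ["forestry", "mathematics"],
}

resource = {
    "name": "Journal of Forestry",
    "terms": [("format", "journal"), ("subject", "forestry")],
}

def valid(resource, facets):
    """Check that every facet/term pair on a resource is in the scheme."""
    return all(term in facets.get(facet, [])
               for facet, term in resource["terms"])

print(valid(resource, facets))  # True
```

Browsing a page like "format: journal, subject: forestry" then amounts to selecting every resource carrying those two pairs.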

Examples

Examples from the Notre Dame site:

  • Research tools — lists of research tools
  • Subject
  • Reading list — combination of things classed “format: journals” organized by subjects. This was created via OAI from the Directory of Open Access Journals. Specific subjects or specific journals can be added to “my library.”
  • Facebook MyLibrary widget. It’s not “facebook” that’s important. The fact that the MyLibrary toolbox allows it to happen is important.
  • FAQs — each frequently asked question/answer pair is a resource. They’re cataloged. Then they are browsable and can be displayed on relevant subject or topic pages.

MyLibrary is not meant to do everything — it just manages “piles of stuff”. It does not support search, and it does not support OAI. But data can be pulled out of MyLibrary and fed to a search engine. For example, the Alex Catalogue of Electronic Texts offers browsable and searchable lists of 14,000 full-text public-domain books.
MyLibrary is not a particularly strong open source project — there isn’t a community around it, for which Eric takes the blame. It’s in Perl, but that’s a passé language now. Coming up is a web services interface on top of it, probably Atom; in any case, some sort of RESTful web service is coming.
Question: It’s been in operation about 5 years; how are students using it?
Answer: Students don’t know they’re using it. They don’t customize it — it’s just the way the web site works.
Question: What are privacy issues with patron data?
Answer: Librarians take privacy more seriously than patrons do. Patrons expect an easy-to-use interface that gets them what they want; libraries are behind the curve on this. MyLibrary makes some broad guesses about what patrons are likely to want. Any future personalization effort will be opt-in, and resources will be assigned to aggregates (freshmen, math majors, etc.), not individuals (John Smith).

We Love Open Source Software. No, You Can’t Have Our Code — Access 2008

Dale Askey
Kansas State University

Libraries are not particularly good at making their own code open source and sharing. This is especially true of the small, lightweight applications that we build to make ILS systems “work right” or to solve small problems. These are frequently small (a few hundred lines of code). Why not? Several reasons…
Perfectionism — the code’s not ready yet, there are bugs, it’s not commented well, it’s inefficient. Even though it gets the job done (which is really what counts), many developers are hesitant to share their code with others.
Dependency — we don’t want to be supporting you; we can barely support ourselves. Putting it in a repository, with documentation, is a good idea: it puts a bit of distance between the developer (library) and the user. Rutgers is planning to launch a library open source platform — but it hasn’t happened yet (announced in April 2008).
Quirkiness — what we do is so unique there’s no point in releasing it; our problems are ours alone. This is false: while the exact problem may be different, the general problem very often is shared. And if you don’t share the code, you end up with the full support and updating burden; there’s nobody else who can help you find and fix bugs or add new features.
Redundancy — Perfectly good software already exists that works for most people, so why should we offer our own? Good enough — the available solution — is often seen as better than doing it oneself.
Competitiveness — Our code is better than someone else’s, so we want pride of ownership and don’t want to share it. We build our own to be the best, not to share the technology. Institutional Repositories are a case in point — institutions develop/implement their own but all too rarely share their successes to save others time.
Misunderstanding — administrators do not understand the nature of OSS tools; they understand and know how to deal with vendors. Functionality can be built on a good foundation — the open source tool — and customized. This is the antithesis of what vendors offer: open source puts responsibility for getting it right in the institution’s hands, not in a vendor relationship.

What Can We Do?

Figure out a way to share software among libraries. There are methods for “big stuff” (Koha, Evergreen, etc.), but what about small stuff? There are several initiatives, but none is global. Google Code is one, but it doesn’t meet everyone’s needs and isn’t accessible to non-technical librarians. A library-specific repository might be useful.
Put a license on our code and let it go when asked to share it. Even for the small snippets.
Commit to the necessary human investment to build and maintain open source software for our own good.
Reward staff for contributing to open source communities. This should be viewed as a form of professional development/contribution.
Re-prioritize internally to make open source contributions happen.
Favorite soundbite from Dale’s talk: “Minesweeper is like digital heroin.”

Open++: Dispatches from the OSS Frontlines — Access 2008

Keynote: Karen Schneider
Community Librarian
Equinox Software (Evergreen)

Karen’s job is to travel around Georgia, talking with libraries and helping them with their Evergreen installations.

Evergreen

We’ve seen lots of open source software in libraries in recent years. Tons of experimentation has taken place, lots of it by and for libraries.
PINES had a need for a consortial catalog for 270+ libraries across a large state. Some vendors said they couldn’t do it; others offered far-too-expensive options. In 2004, a development team started building their own ILS, and in 2006 Evergreen was launched in 200 libraries. Version 1.4 is imminent (first couple weeks of October); they have kept a tight development cycle.
Key point: with Evergreen, librarians are once again writing their own ILS. This is analogous to what happened about 30 years ago (for example, with the Melvyl catalog, built at home in pre-vendor times). In recent decades, libraries strayed from the path of doing their own stuff and went down the vendor path. Now we’ve come full circle, and libraries are once again taking control of their own destiny.
The network effect has been huge (combined with the general state of the economy and the price of fuel): holds and interlibrary loans within the Evergreen system are growing exponentially.
There are now 275 libraries in PINES. Other consortia include those in British Columbia (Sitka) and Michigan. There’s also an academic installation — Indiana University — live now, and other consortia, including an academic one, are in development. But these are just the “known” sites — it’s open source, so other sites, K-12 schools among them, likely use it too.

Observations about open source and libraries

  1. Documentation is critical — must be a formal requirement. Documentation doesn’t come easy. Evergreen got a Mellon grant to write it — but that’s not the normal path.
  2. Trickle-up Engagement: Originally, it was thought that libraries would automatically know how to “do open source.” However, that turns out not to be the case. Libraries need some help getting started — getting re-engaged in the software development process.
  3. Gift economy: Community around Evergreen is small, skilled, and dedicated — a smaller community of developers than they initially expected. People contribute actively, though not as broadly.
  4. A surprising revelation: end-users are all alike – but library workflows are unique. Users are much more similar than libraries. Evergreen has a very flexible back end. This turns out to have been a very good idea. Flexibility in the workflows is critical.

Features of Openness

Open has several positive features. Communication becomes distributed — no longer a single vendor contact, but many people looking and many people fixing. Many eyes make a better product, with many hands to fix it. The network effect is significant on the library side: the more libraries participate, the better. Local issues and requests lead to global improvements. Customization is the user side of back-end flexibility. Openness fosters partnerships — there’s no need for secrecy, keeping vendors in the dark about local implementations and libraries in the dark about vendor plans.
Cost — it’s not necessarily cheaper to go open source, but it moves the costs around from licensing to updating/maintenance.

CIL2008: Information not Location

My colleague Mike Creech and I presented on "Findability: Information not Location" (3.3 MB, PPT) this afternoon. The talk abstract:

Learn how to foster user-friendly digital information flows by eliminating silos, highlighting context and improving findability to create a unified web presence. Hear how the University of Michigan Libraries (MLibrary) are reinventing the libraries’ web sites to emphasize information over the path users previously took to access it. By elevating information over its location, users are not forced to know which library is the “right” starting place. The talk includes tips for your library web redesign process and user-centric design process.

Our talk was blogged by Librarian In Black.
I had a great time at Computers in Libraries — there were more interesting talks than I could attend, let alone blog. I have some catching up to do through the CIL2008 tag cloud, clearly.

CIL2008: Open Source Applications

Open Source Applications

Glen Horton is with the SouthWest Ohio and Neighboring Libraries consortium.
Libraries and Open Source both:
– believe information should be freely accessible to everyone
– give stuff away
– benefit from the generosity of others
– are about communities
– make the world a better place
Libraries create open source applications (LibraryFind, Evergreen, Koha, VuFind, Zotero, LibX, etc.)
Miami University of Ohio has a SOLR/Drupal OPAC in beta (beta.lib.muohio.edu). Not even a product — just a test environment.
How can you do this without a developer? You can contribute to the community in other ways. Teach how to use the open source tools your library has installed — even if not developed there. Hold classes for your patrons on how to use the tools that are available. Help build a user community around the open source tools that you think are of value.
You can document open source software — improve the documentation for other libraries. When you figure something out, help others down the same path. Documentation is often hit or miss; developers are not necessarily good documentation writers, or don't have time to write it. You can also help debug open source tools. Report bugs! Influence the development path for the software. Bigger projects often have active support forums, with lots of people reporting and fixing bugs. Smaller projects may not have that infrastructure.
Even if you don’t create or use open source software, you can promote it by linking to it from your web site, distributing it on CDs or thumb drives, etc.
“Open Source or Die.” Libraries benefit from open source — make sure that you are giving back to equal the benefit. Teach it, use it, document it, evangelize it.
Slides are at http://www.glengage.com/.

Open Source Desktop Applications

Julian Clark is at Georgetown University Law Library.
Why open source? It’s free! As in kittens. Which means – acquisition is no cost, but you’ve got a lifetime of maintenance and upkeep. But even more so… you have control and customization. You can change it to make it look and act the way you want. Security — active communities keep applications safe and updated against whatever the latest attack might be.
Why now? FUD about Open Source is declining. (FUD = Fear, Uncertainty, and Doubt). As open source becomes more mainstream, gut reaction against it is on the decline.
When is the best time to adopt? When you're ready; there's no easy way to gauge this. It depends on your IT support, library management, colleagues… But it can fit into your major upgrade cycle: if you're planning a major upgrade anyway, why not consider a switch rather than an upgrade? These upgrades often have long lead times; why not take advantage of that planning process to migrate? Adoption could also be triggered by reduced capital funding, where you have staff, but not money, to spend on your systems.
Can you do this? Do you have the right hardware to run the tool? (This applies both to back-end and web-based systems and to the operating system for public use computers — a replacement for Windows, for example.) Does your organization's IT group support open source? How much can you do yourself, and with whom do you have to collaborate?
Support options: purchased third-party support is often available, with varying degrees of quality and availability depending on the software being supported. You can often hire for a project, for the long term, and so on — flexibility. And of course there's always in-house support: someone on your staff who knows (or can learn) the software and who knows and understands your organization.
Q: (to Glen) What are the risks of providing open source software to patrons who then want support from you for it?
A: Well, you can provide it explicitly as-is.