Martha Whitehead, Queen's University
Steve Toub, BiblioCommons
The problem is "discovery": getting answers to questions you don't know how to ask; in other words, finding things you don't know about. The goal was more than updating the catalog: Queen's was dissatisfied with its federated search tools.
Catalogs are solitary experiences, but learning and research are social activities. This project is about user-generated content: narrowly, tags, ratings, and reviews; more broadly, curating that information.
The research project with BiblioCommons aimed to figure out how tagging works in the academic environment. Reading lists are an obvious, and old, form of user-generated content. Research paths in libraries (how to do subject research) are another; librarian-generated, but we're users, too. The faculty member is the "ultimate research advisor."
In an Ontario Council of University Libraries study of the ideal research process, users wanted to see recommendations from "authorities," wanted to find the classics in a field, and also wanted to find surprises: serendipity.
Draws a distinction between social discovery and social networking; the former is the serious one. What features should be built into an academic research site? There were fears that information would be misleading, that faculty (who know subjects best) wouldn't have time to contribute, and that students (for any number of reasons) wouldn't contribute.
But students are inherently social and even when in the library want friends to know where they are. Study participants wanted to know what their trusted colleagues (professors, fellow researchers) think.
User Research in Academic Environment
BiblioCommons is a next-generation discovery tool, a social network, and an OPAC. In March 2008, Steve Toub recruited Queen's University faculty, students, and librarians to talk about how they do their research.
Non-librarians do not limit (i.e., use facets) very much. Students don't reformulate queries; they go back to the original search and re-do it. Users would avoid LCSH at all costs in the catalog (but would use it as a browsing tool). Students don't "experience pain" when manually formatting citations; it's just part of the process. Librarians think direct export to RefWorks is a must. Librarians want to help; users want to be independent.
A second round of research in June looked at user-generated content (UGC). It went through a variety of tests, from paper prototypes to full mock-ups. The focus of this study was a very narrow sense of UGC. Participants showed little understanding of why people should tag, and no clear motivation for tagging in the academic library catalog emerged.
Started by asking students: if you want to buy a camera or see a movie, where do you go? Asked whether students looked at comments by others. Most of this 18-22 age range said they sought out sites with UGC. They preferred comments from "people like them" over recognized critics, reviewers, and professionals. Most had used "Rate My Professors," where they mostly looked at comments, not paying much attention to ratings.
One student said, “I don’t necessarily want the opinion of a professor — I’m looking for people who are as incompetent as I am.”
Only two of ten students knew what tagging was by name, and students didn't recognize it when they saw the MTagger tag cloud. Tagging pictures in Facebook is utterly different from tagging text. Ideas for improvement: change the labeling (use "themes," "keywords," or "what terms would you use to help others find this?"), and explain tagging in the cloud itself, not via a link.
In the catalog, participants were shown a mockup of a review system. For recently returned books, the user can say how useful the book was and for which course (chosen from a list), and answer a brief survey about what they used in the book (the whole thing, or just a chapter, and which one).
Another version provided sliders for "relevance to course," "level of difficulty," "personal interest," and so on.
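
As a rough illustration of what such a form might capture, here is a hypothetical TypeScript shape for one submission; every name here is a guess for illustration, not BiblioCommons's actual schema:

    // Hypothetical shape of one "recently returned" review submission,
    // based on the mockups described above. All field names are
    // illustrative assumptions, not BiblioCommons's schema.
    interface ReturnedItemReview {
      itemId: string;                     // catalogue record for the returned book
      courseId?: string;                  // chosen from the user's course list
      usefulness?: number;                // "how useful was it?" rating
      portionUsed?: "whole" | "chapter";  // how much of the book was used
      chapter?: string;                   // which chapter, if only one was used
      // Slider variant, each on an assumed 0-100 scale:
      relevanceToCourse?: number;
      levelOfDifficulty?: number;
      personalInterest?: number;
    }

Nearly every field is optional by design, anticipating the finding below that most students would fill in only one or two data elements.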
The most important data element requests were things like "Is it going to be on the test?", "How is this related to other texts?", and "How is this related to the lecture?" Users requested clear signals about how important the item is to the class.
Most students wouldn't fill in more than one or two data elements, so the opportunity to collect data is limited. Most wanted anonymity: an identity that is not personally identifiable. Most students wanted to share their comments; that was the point.
Students were asked: what if, when you logged in to the library, you saw the syllabi for your courses? Very popular.
When are students most likely to contribute? Only if the syllabus is online; probably not for the current week's readings, but for the immediately previous week's. Also promising: a Netflix-style "you just returned this item, would you rate it?" sort of interaction. Putting the collection point for UGC at the right point in the workflow is the trick.
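
A minimal sketch of that workflow-placement idea, assuming a Netflix-style prompt fired shortly after a return event; the types, names, and one-week window are assumptions, not from the talk:

    // Hypothetical sketch: ask for UGC at the moment it is cheapest to
    // give, i.e., just after an item comes back, rather than hoping
    // users will seek out a review form on their own.
    interface ReturnEvent {
      userId: string;
      itemId: string;
      returnedAt: Date;
    }

    function shouldPromptForRating(evt: ReturnEvent, now: Date): boolean {
      // Prompt only while the item is fresh in the user's mind; the
      // one-week window is an assumed parameter, not a study finding.
      const oneWeekMs = 7 * 24 * 60 * 60 * 1000;
      return now.getTime() - evt.returnedAt.getTime() < oneWeekMs;
    }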
What kinds of rewards are of interest? From a list of choices, the top response was helping others get to resources faster: the idea of "paying it forward" (if I do it now, it will help others later, which will help me when I need it). There was also a sense of "empty restaurant syndrome": if no tags are there, why would I join in? "Buying" student participation seems pretty easy.
Barriers to contributing: nobody wants to support freeloaders (help those who don't contribute themselves), but everyone wants content to be there when they want it. Fears of plagiarism outweigh willingness to share with others, even at the level of sharing a reading list for a paper through an online system.
Three strategies for ensuring quality:
1) Authentication: people log in to the library, and the library knows who they are (even if it's not their university ID).
2) Aggregation: pooling content from multiple systems provides more content and helps "smooth out" the details, while retaining the ability to identify individual users within the mass.
3) Marketplace of ideas: create a self-managed system (no editorial review) so that reviews themselves are vetted by the masses (see the sketch below).
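
To make the third strategy concrete, here is a minimal sketch of community vetting, assuming a simple helpful/unhelpful vote on each review; the talk named the strategy, not an implementation:

    // Readers vote reviews up or down, and the catalogue surfaces the
    // best-vetted reviews first; no editorial review step is involved.
    // Purely illustrative types and logic.
    interface Review {
      id: string;
      text: string;
      helpfulVotes: number;
      unhelpfulVotes: number;
    }

    function rankReviews(reviews: Review[]): Review[] {
      // Sort by net helpfulness, highest first.
      return [...reviews].sort(
        (a, b) =>
          (b.helpfulVotes - b.unhelpfulVotes) -
          (a.helpfulVotes - a.unhelpfulVotes)
      );
    }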
BiblioCommons Roadmap
Near term: provide an outstanding user experience; make the interface simpler, cleaner, and more intuitive.
Mid term: organize the catalog experience around courses and assignments, not LCSH or broad subject guides; you see a course- or assignment-specific view when you log in to the catalog (a sketch follows below).
Long term: break down the barriers between silos. Federated search isn't the answer; everything is integrated.
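
One way the mid-term idea might look in practice, as a hypothetical sketch of assembling course-specific views at login; the lookups are passed in as parameters because nothing is known here about BiblioCommons's actual APIs:

    // Hypothetical: on login, resolve the user's courses and build a
    // course-scoped catalogue view instead of a subject-guide view.
    interface CourseView {
      courseId: string;
      courseName: string;
      readings: string[]; // catalogue record IDs on the course reading list
    }

    async function buildViewsForUser(
      userId: string,
      getCourses: (u: string) => Promise<{ id: string; name: string }[]>,
      getReadings: (courseId: string) => Promise<string[]>
    ): Promise<CourseView[]> {
      const courses = await getCourses(userId);
      return Promise.all(
        courses.map(async (c) => ({
          courseId: c.id,
          courseName: c.name,
          readings: await getReadings(c.id),
        }))
      );
    }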
BiblioCommons Status
User research led to the current priorities. This year and next: an iterative beta release process.