The 4th Floor and Library Transformation

This is the second keynote address at the LITA Forum in Louisville. The speaker is Nate Hill, assistant director of the Chattanooga Public Library. Follow him on Twitter at @natenatenate.

Nate Hill speaking at LITA Forum 2013

The 4th Floor project is more of a community organizing project than a technology project. When Nate started there a few years ago, the Chattanooga Public Library was seriously broken. Technology improvements are just one portion of the overall improvements being made. Chattanooga has gigabit networking throughout the city, so the city has a lot of potential and a lot of recognized need for change and reinvention.

Unlike many brutalist all-concrete buildings, the CPL has large amounts of open space on each floor — it was designed with an open plan, so they aren’t as constrained by solid concrete walls. This gives them some flexibility.

Nate is going to focus on one aspect of this reinvention. We’ll start with the “why”: moving from Read to Read/Write. Everyone in the LITA audience at the moment can create something and make it available to everyone. Before that was possible, we needed libraries to store relatively rare copies of things. The library was about access. Now, it’s about providing tools to create things. Connectivity is a key underpinning to these tools.

CPL uses their 4th floor space as a “beta space” — the library can experiment, and the public can experiment. The 14,000 square feet of the 4th floor had been used as an attic. They solved the problem collaboratively — invited people to meet in that space and started brainstorming what might be useful to do. This started about 18 months ago (around January 2012).

Had a public auction, got rid of all the stuff. Net profit: $1500.

So, now what? A vast amount of empty space, with no added staff resources to do new things. Answer? Strategic partnerships with other organizations. First was with the Chattanooga chapter of AIGA. AIGA got a home for their meetings, brought in presentations, and started the seeds of current programming.

The next major milestone was the first DPLA “appfest” — 100 people came to CPL from around the country. Realized that people didn’t necessarily want to work at desks in these informal arrangements, so started to create less rigid workspaces.

Next was a local collaboration space, co.lab. Got 450 people to attend a series of pitches — entrepreneurial ideas. Again, community was amazed to see what the library could do.

The library is losing ownership of the space; it’s becoming a community platform.

“We make all of this stuff up all the way.” CPL has an amazing tolerance for experimentation and trial-and-error.

They moved their IT staff to the 4th floor, creating a coworking space.

Using Chattanooga’s gigabit network, they have done performances where dancers in two locations perform with projected images, passing the image back and forth between two locations in the city.

Making Maker Libraries — LITA Forum Keynote

I’m attending LITA National Forum 2013 in Louisville, Kentucky. I’ll be posting some conference notes sporadically. The opening keynote session is a talk by Travis Good, contributing editor of Make Magazine, on “Making Maker Libraries.”

Travis Good

Once, “nerd” was not a particularly flattering thing to be called. Now, that has changed. Nerds are the smart people you go to in order to solve a problem. Nerds have arrived. Library IT groups have solved, in a nerdy way, many kinds of problems: online catalog, computer workstations, wired Internet access, wireless internet access, ebooks… It is not just making things work, though; it is making things work comfortably in a library context.

Through making wifi available, we redefined why people go to a library.

Changes in technological landscape are a threat — and an opportunity. We will talk about just one of these changes: the maker movement. It’s a broad movement with lots of definitions. Humans have been making things since we developed opposable thumbs and tools.

What was “making”? It was done by craftsmen, focused on trades, with years of training and practice, with rudimentary tools. Took lots of practice to do well because the tools were “dumb.” Now, tools are “smart”, and more people can make things. Moore’s law has affected tools. Technology brought smarts to making; computers can manage processes. Costs drop, power rises, steadily. Tools are smarter, more powerful, and more capable. The Internet has simultaneously opened up collaboration across distributed communities. Open source software came along. And now… open source is not just software. It is hardware, too.

New, smarter tools are already here. The CNC (Computer Numerically Controlled) mill is a subtractive tool — it mills material away until what is left is the product you want. Designs can be shared, tailored, and made. 3D printing is the opposite, in a sense — it extrudes material to build something up. An additive tool.

Laser cutters are two-dimensional — they cut a flat surface with a laser. They can cut wood, leather, acrylic, metal, and similar materials, and can create very intricate designs.

For all of these products, there are libraries of models that you can download, modify, and make yourself. Powerful tools and shared designs can make anyone a maker of things.

At the same time, we are getting cheap, flexible electronic microcontrollers, sensors, and actuators. Sensors make measurements of things; actuators create a response of some kind.

Simple embedded electronics made a turn signal for a bike rider — left arrow, right arrow LEDs on the back, and a switch in each sleeve for the biker to turn them on and off. Another example — a switch in a chair that turns the TV on when you sit on it; turns the TV off when you stand up. Third example — an Arduino on a Venetian blind that opens or closes the blinds when the room is too cool or too warm.
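As a rough illustration of how simple that last example’s decision logic is, here it is sketched as a plain Python function. The thresholds and the function itself are invented for illustration; the real project would run equivalent logic on the Arduino itself.

```python
def blind_action(temp_f, too_cold=65, too_warm=78):
    """Decide what the blinds should do for a given room temperature.

    The thresholds here are made up for illustration. Below too_cold,
    open the blinds to let sunlight warm the room; above too_warm,
    close them; otherwise leave them alone. On the Arduino, a decision
    like this would run in loop() against a temperature sensor reading.
    """
    if temp_f < too_cold:
        return "open"
    if temp_f > too_warm:
        return "close"
    return "hold"
```

The point is less the code than the scale: a maker project’s entire “brain” can be a handful of lines driving a sensor and an actuator.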

Barriers to creating things have been reduced. Long apprenticeships to become competent are no longer required. And it’s now easier to become good at lots of things. So more people can make, more making can take place, and more people can be collaborating.

The question that arises: where is this making happening? You need spaces in which people can learn, create, share, and collaborate. Threshold to entry is low, but you still need to cross it. This is a clarion call to libraries. Libraries are already the places that offer lifelong learning. And are looking for new ways to deliver on their traditional missions.

Libraries are experimenting with maker spaces in different ways. Experimenting with different tools and technologies, seeing what local patrons will want to use. Can vary from branch to branch.

Maker spaces are catching on in libraries. It is seen, broadly, as an opportunity to be valuable to the community (in public & academic libraries). There is lots of experimentation on what kinds of services and tools to offer — it is something of the Wild West.

There are some basic things that are needed to foster the growth and development of maker spaces:

  1. A source of best practices. Why does every library need to invent this service on their own?
  2. A database of maker helpers. People who would come to your library and talk about specific topics. Tap into maker spaces, meetup groups, etc. But there is no vetting — there are lots of interested people, but there needs to be a way to make sure the volunteers are good teachers, reliable, and so on.
  3. New sources of funding. There is lots of competition for scarce resources (e.g., IMLS). Corporations are interested in funding maker spaces — they see it as future employees and future innovations. Skills of successful makers are the skills of successful innovators and inventors.
  4. Kits that fit into a library. A maker space in a box, and maker supplies that are reusable and affordable. For example, Arduino prototyping kits that can be reset and tested for basic functionality by completely non-technical library staff.
  5. Finding good projects. This is already in the works: Make it @ Your Library, where 100,000 crowdsourced projects have been uploaded and categorized.

We can build tools for our library community at large.

The power of making grows when the various maker communities collaborate and communicate — libraries, incubators, schools, government. It’s a network.

Internet Archive Tries to 404 the 404

The Internet Archive announced today a new service — creating a permalink for a web page that leads to a copy of the page at the Internet Archive. So, for example, I just created a permanent snapshot of this blog’s homepage as of 25 October 2013 at 19:35:43, preserved forever and fully citable:

This blog probably doesn’t deserve that sort of immortality. But what about more significant things? Rather than citing a web page with a note “accessed on 25 October 2013,” let the Internet Archive grab a snapshot of it and link to that. It would be lovely if this service could be extended into licensed content so that citations to academic content (all too often behind a paywall keyed to one’s affiliation with the library’s parent institution) could be equally persistent.

Scholarly content, as a rule, is provided through a non-persistent URL, if we ignore DOIs and Handles. Those valuable tools, of course, are only good as long as the owner of the content maintains their persistence — the owner is responsible for updating destination links. That may not be the highest priority in a bankruptcy or other sudden and unexpected cessation of operations.
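In the meantime, the Wayback Machine’s URL patterns make the new service easy to use by hand. A small sketch of the two URL shapes involved (these match what I see in the announcement, but treat them as assumptions rather than a documented API):

```python
def save_url(page_url):
    """Requesting this URL asks the Internet Archive to capture the page now."""
    return "https://web.archive.org/save/" + page_url


def snapshot_url(page_url, timestamp):
    """Build the citable permalink for a capture.

    timestamp is YYYYMMDDhhmmss, e.g. "20131025193543" for the snapshot
    mentioned above.
    """
    return "https://web.archive.org/web/%s/%s" % (timestamp, page_url)
```

A citation manager could generate the save URL at the moment of citation and store the resulting snapshot permalink alongside the live link.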

This new service makes possible better back-references to the historical record.

Apple Takes on RSS Notifications

One of the features of Apple’s soon-to-be-released Mavericks operating system is Safari “push notifications.” Similar to what you might be familiar with on an iOS device, these are updates that you can subscribe to from participating websites that will send an alert to Safari when content is updated. Apple’s site says that notifications will be delivered even when you are not actively using your computer — meaning that the information you are being sent will always be available to you.

This sounds a wee bit like RSS, doesn’t it? Participating websites can send you updates as they happen, and Safari will track what you have seen. I am assuming that updates will be synchronized across your various devices so that if you read an article on one device, it will be marked as seen on your others (this will probably require an iCloud account).

This is a feature only available to people using Mavericks and Safari 7 (it is not clear if this will be available to earlier versions of the Mac OS or Safari). You also must have an account on Apple’s developer website to access the instructions for setting this up for your website.

It will be interesting to see if Apple manages to replace RSS in its ecosystem with this custom setup, at least for publishers or tech-savvy website managers who can adopt the technology.

A tip of the hat to MacRumors.

Perspective on Discovery

I’ve been reading with interest the items that have been written in the past few weeks about library discovery by Lorcan Dempsey, Dale Askey, Aaron Tay, and Carl Grant, among others. Library discovery, of course, is the capability to search effectively across a wide range of online materials available through a given library (whether owned, licensed, leased, open source, locally digitized, or what have you) through a single search box. There are vendor products and homegrown solutions, and hybrids of the two.

Is discovery dead already? Is it still the hot new thing, the Holy Grail of disintermediated patron interaction?

No, and no.

Askey makes great points about the serious challenge we libraries face in digitizing our materials for access (not to mention preservation). I’ll call this the “last shelf” challenge. Just as incredible high-speed internet is within the reach of just about every urban home, yet it’s the “last mile” that’s the kicker — getting fiber to the door of every abode is an expensive, slow process — getting the “last shelf” digitized is similarly expensive and slow. We’ve already done the easy stuff: non-unique, commodity items. Digitizing the “last shelf” should rightly be a significant goal for all libraries holding unique materials.

A discovery tool is only as good as its content for the intended use by the individual patron. Yes, libraries should be proud of, should enable access to, and should promote the living daylights out of the items that are uniquely theirs. These “lost” items can provide researchers at all levels with paths to innovation and discovery (in the traditional sense of the word).

Where I think the value of discovery could be, for academic libraries in particular, is in customizing the results of discovery for the user’s need. Why not offer a “personalized” slice of the discovery pie, perhaps as a facet, that filters results based on the user’s presumed context? A patron logged in to the system might get results focused on those appropriate to each of their enrolled classes (by level or department, for example). Or the facet could remove one’s own native discipline from the results and focus on an entirely different one. That could be a powerful tool to enhance research at the interdisciplinary boundaries of two subject areas.
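A context facet like that could start as nothing more than a metadata filter. A toy sketch in Python, with an invented "department" field standing in for whatever metadata the discovery index actually carries:

```python
def personalize(results, user_departments, invert=False):
    """Filter discovery results by the user's enrolled departments.

    results is a list of dicts with a hypothetical "department" field.
    With invert=True, keep everything outside the user's own
    disciplines instead: the interdisciplinary view described above.
    """
    def keep(record):
        return (record.get("department") in user_departments) != invert

    return [r for r in results if keep(r)]
```

A logged-in patron’s departments would come from the campus registration system; the facet is just this filter applied on the server before results are returned.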

The power of discovery, in my way of thinking, is not just in harnessing the local and the global — which is something in and of itself — but in providing tailored, focused access to that breadth. It’s not just the Mississippi as it dumps into the Gulf of Mexico; it’s just the right tributaries out of thousands that feed into the torrent.

So I don’t think discovery is doomed, or misguided. But I do believe that the path forward is in more focused, context-aware, services.

No Personal Spacecraft or Colonies on Mars

I’ve been thinking recently about how yesteryear’s promises of technologies to come have missed the mark. In the classic science fiction novels I devoured as a child in the late 1970s and early 1980s, humanity a generation or two thence (that is — us, now, today, in the early 21st century) would have some practical means of interstellar travel available to it, be living on space stations around Earth and in colonies on the nearby planets, have encountered and made peace with (or subdued) alien races around every turn.

The actual reality of 2013 is quite different. We have one space station in orbit, with a steady but small population of three. Colonies on other bodies are still a generation or two away (I admit to being skeptical of NASA’s timeline for lunar and Martian outposts). Commercial ballistic flights, to get a taste of near-Earth orbit, are a year or two away. Interstellar travel remains, tantalizingly, several miracles removed from today’s understanding of the universe and the laws that govern it. And alien species, friendly or otherwise, are still out there, waiting (I like to think).

Yet… As I think about the stories that grabbed hold of my childhood imagination and haven’t really let go, it is largely the main thrust of the stories that we’ve failed to achieve. The small things are all here. Instant communication with just about any other person on earth who could communicate with you? Check. Immediate and unfettered access to a global encyclopedia of much of the world’s knowledge? Check. Interact verbally with your computer and get a plausible, often useful, response in return? Check. Sit comfortably on one’s couch typing a short essay that is instantaneously saved to some quasi-permanent storage system, in what we now quaintly call “the cloud”? Create a “photocopy” of a physical object, not just a tax form? Check and check.

Pervasive knowledge by the powerful about the goings on of everyone else? Alarmingly, and increasingly so, check. (The future is not all that it was cracked up to be, as noted sci-fi author David Brin let us know in his non-fiction work, The Transparent Society.)

We are living in the margins of the science-fiction universe I dreamed of. Not the grand, gee-whiz Buck Rogers world posited in the 1930s-1950s that I read of in the 1970s. But we live and breathe the minor plot devices and deus ex machina resolutions to tricky problems faced by the hero. It turns out that the small stuff of the stories, the little throwaway details — those describe the reality we live in. We seem to have made tangible the backdrop to the fantasy, leaving the big picture to be invented. Where I was caught up in the overarching plot, I should have been paying attention to the background. The science fiction writers of the past got the big things wrong, but I’m pleasantly surprised at how much of the small stuff has already come to pass.

Feedly Offers an API


Feedly, the web service that inherited a large number of Google Reader users when Google pulled the plug on it, is now offering an API for developers who want to use the Feedly Cloud. You can use the API to access the more than 30 million feeds harvested and indexed by Feedly. The API allows an application to authenticate as a particular Feedly user, or to access everything.

Developers can sign up for the Feedly Cloud Developer Program and gain access to the developer sandbox. Signing up gives you a client id and client secret you can use to authenticate to Feedly. Completed applications can be pointed at the full Feedly data store.
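For the curious, a request to the API might be assembled something like this. It is a sketch based on my reading of the developer documentation; the endpoint path and the "OAuth <token>" authorization scheme are assumptions, so check the docs before relying on them.

```python
import urllib.parse

FEEDLY_BASE = "https://cloud.feedly.com/v3"  # the sandbox may use a different host


def search_feeds_request(query, access_token):
    """Build the (url, headers) pair for a feed search.

    The caller performs the actual HTTP GET. The /search/feeds path and
    the Authorization header format are assumptions drawn from Feedly's
    developer documentation.
    """
    url = "%s/search/feeds?%s" % (
        FEEDLY_BASE,
        urllib.parse.urlencode({"query": query}),
    )
    headers = {"Authorization": "OAuth " + access_token}
    return url, headers
```

The client id and secret from the developer program are exchanged for the access token via the usual OAuth dance; the token is what accompanies each API call.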


Google Reader Latest Victim of Google’s Spring Cleaning

Oh nos! Google announced on March 13, 2013, that Google Reader would be shut down on July 1, 2013. When Bloglines shut down (and then was resurrected in a slightly different form in 2010), Google Reader was the last truly functional web-based feed reader left. I use it daily, as I’m sure others do. Not enough of us, it seems, for Google.

It seems I need to find another decent client for my RSS feeds. I may be in the minority, but I find a feed reader the best way to keep up.

Introducing Qrius, RSS for the other 90%

A new RSS service, Qrius (pronounced, I assume, “curious”), is aiming to bring RSS to the vast majority of Internet users who don’t read it. While the Qrius site is devoid of details, an article in AppleInsider today describes it like this:

The goal is to make subscribing to RSS feeds a painless process for a first-time user. With Qrius, users will simply click the icon featured on any of their favorite news sites, then sign in to the service using an existing Facebook, Twitter or Google+ login.

In its first iteration, Qrius will automatically send subscribed content to Taptu — a news reading platform also owned by Mediafed that offers content aggregation.

The idea is to add yet another “chicklet” icon to your web page (next to your Tweet this, Facebook this, etc., badges) that would send your RSS feed to the Taptu application. Qrius apparently plans future integration with Google Reader, but isn’t aiming for that user set yet — after all, people who use Google Reader are already the same folks who understand what RSS is for in the first place.

Google Getting out of the Advertising-in-Feeds Business

In another sign that RSS is continuing to lose its consumer focus, Google announced on Friday that it is eliminating the “AdSense for Feeds” business (see More Spring Cleaning on the Google blog). AdSense for Feeds allowed blog publishers to put ads directly into their RSS Feeds, item-by-item. As long as you channeled your RSS feed through FeedBurner, you could have Google apply advertisements to your feeds as they were displayed in the end-user’s browser.

While RSS feeds clearly have much utility, Google’s action is another clear signal that consumers are not reading RSS feeds directly in aggregators or their browsers the way they once did. Google is moving fairly quickly to eliminate AdSense for Feeds. According to the announcement, they will “start to retire it” on October 2, and close it on December 3. This does not affect FeedBurner URLs directly, just the ability to have Google place advertisements in them. Presumably, Google will continue to place advertisements in Google Reader when it displays feeds, but you won’t get a cut of the action.

If you’re an AdSense for Feeds user, you can read more about what this means at