No Personal Spacecraft or Colonies on Mars

I’ve been thinking recently about how yesteryear’s promises of technologies to come have missed the mark. In the classic science fiction novels I devoured as a child in the late 1970s and early 1980s, humanity a generation or two thence (that is — us, now, today, in the early 21st century) would have some practical means of interstellar travel available to it, would be living on space stations around Earth and in colonies on the nearby planets, and would have encountered and made peace with (or subdued) alien races at every turn.

The actual reality of 2013 is quite different. We have one space station in orbit, with a steady but small population of three. Colonies on other bodies are still a generation or two away (I admit to being skeptical of NASA’s timeline for lunar and Martian outposts). Commercial ballistic flights, offering a taste of spaceflight, are a year or two away. Interstellar travel remains, tantalizingly, several miracles removed from today’s understanding of the universe and the laws that govern it. And alien species, friendly or otherwise, are still out there, waiting (I like to think).

Yet… As I think about the stories that grabbed hold of my childhood imagination and haven’t really let go, it is largely the main thrust of the stories that we’ve failed to achieve. The small things are all here. Instant communication with just about any other person on earth who could communicate with you? Check. Immediate and unfettered access to a global encyclopedia of much of the world’s knowledge? Check. Interact verbally with your computer and get a plausible, often useful, response in return? Check. Sit comfortably on your couch typing a short essay that is instantaneously saved to some quasi-permanent storage system, in what we now quaintly call “the cloud”? Create a “photocopy” of a physical object, not just a tax form? Check and check.

Pervasive knowledge by the powerful about the goings on of everyone else? Alarmingly, and increasingly so, check. (The future is not all that it was cracked up to be, as noted sci-fi author David Brin let us know in his non-fiction work, The Transparent Society.)

We are living in the margins of the science-fiction universe I dreamed of. Not the grand, gee-whiz Buck Rogers world posited in the 1930s-1950s that I read of in the 1970s. But we live and breathe the minor plot devices and deus ex machina resolutions to tricky problems faced by the hero. It turns out that the small stuff of the stories, the little throwaway details — those describe the reality we live in. We seem to have made tangible the backdrop to the fantasy, leaving the big picture to be invented. Where I was caught up in the overarching plot, I should have been paying attention to the background. The science fiction writers of the past got the big things wrong, but I’m pleasantly surprised at how much of the small stuff has already come to pass.

Presentation at Internet Librarian

I joined my colleagues, Karen Reiman-Sendi and Mike Creech, in presenting about my library’s web site redesign at Internet Librarian 2009 this past week. The presentation, titled “Designing for Content-Rich Sites,” was streamed live; an archived copy is available on UStream:

If you want to follow along, the slides are on SlideShare.

More than a Quarter-Century On the Net

I’ve been thinking about how much the Internet and computers have been part of my life, and for how long. My first exposure to computers, and to online life, was way back in the spring of 1978 when my family moved to New Hampshire and I started attending the public elementary school there. Dartmouth College, in the town where I grew up, had created a time-sharing computer network in the mid-1960s and put terminals in Hanover’s secondary schools. By the time I moved there a decade later, Dartmouth had also put terminals in the elementary schools. So there I was, in 5th grade, dialing up on a 300-baud acoustic modem with a DECwriter terminal, playing computer games.
In middle school, the next year, I was learning BASIC and spending all my free time in the library’s computer lab — where they had a CRT orange-screen terminal in addition to a DECwriter. I spent far too much time trying to play a college student on Dartmouth’s chat system… Typing “joi xyz” would let me speak to the world, or at least, the world of people online. My conversations, if they were recorded for posterity, would certainly not be worth the cost of the storage media to save them, and I’m sure they didn’t fool anybody as to my age.
A few years later, in high school, I was still geeking around. By then, I had purchased my first “real” computer, an Atari 800. I wrote simple games (and memorably, a Star Wars trivia test) in BASIC filled with references to GOTO this line number… True spaghetti code. At school, the math lab had several Ohio Scientific PCs where I taught myself the bare basics of assembly language and wrote some simple games and other applications. I wish I still had the 8″ floppy disks used to save applications. And I continued my forays into the online world. Thanks to Google’s indexing of Usenet, I discovered that I can trace my online presence back to twenty-five years ago today… To a very geeky “warez” post on the net.micro.atari Usenet group asking if anyone had Atari 800 software to trade. I even got some takers — from England as well as the United States, an early demonstration of the power of the Internet.
And then college — where technology was much less a significant part of my life than it has been at just about any time in the past 30 years. Grinnell had computer labs and was online, but most of my friends were far less computer-focused than I. I mostly wrote papers and occasionally chatted with another paper-writer in another computer lab on campus, but pretty well left the nascent Internet alone until I got to graduate school in the early 1990s.
And that’s where my interests and technology came into sync. During my first semester of library school, in 1993, the first graphical web browser, NCSA Mosaic, was released. I jumped on the bandwagon, and haven’t fallen off yet. Back when there actually was a reasonably accurate “What’s New” service for the Internet — listing new servers and new sites, day by day, as they came online — I was playing around on the library school’s web server, posting web pages, and being amazed when things like centering text and tables were added to HTML. Fast forward another decade, and the web is, well, my job — who would have thought it?

Tentative Settlement in the Google Book Search Lawsuit

The path is now open for millions of books digitized through the Google Book Search project to be available to the public. This includes in-copyright books as well as out-of-print and out-of-copyright books. The tentative agreement announced today would settle the class action lawsuit filed against Google in 2005 by the Authors Guild, the Association of American Publishers and a handful of individual authors and publishers.
The big news, from the perspective of libraries, is that libraries will be able to provide their patrons with access to vastly increased numbers of digital volumes. (See the Google Book Search Copyright Settlement for the full text and details.) Two points are worth noting in particular:

  1. Free online viewing of books at U.S. public and university libraries — “public libraries, community colleges, and universities across the U.S. will be able to provide free full-text reading to books housed in great libraries of the world like Stanford, California, Wisconsin-Madison and Michigan [the Google Book partners]. Public libraries will be eligible to receive one free Public Access Service license for a computer located on-site at each of their library buildings in the United States.
    Non-profit, higher education institutions will be eligible to receive free Public Access Service licenses for on-site computers, the exact number of which will depend on the number of students enrolled.”
  2. Institutional subscriptions to millions of additional books — “[L]ibraries will also be able to purchase an institutional subscription to millions of books covered by the settlement agreement. Once purchased, this subscription will allow a library to offer its patrons access to the incredible collections of Google’s library partners from any computer authorized by the library.” In other words, libraries can add Google Books to their proxy servers (for a fee) and thereby allow full-text access to millions of books.

While the settlement has not been approved by the courts, the news that Google and the publishers have agreed on terms is terrifically exciting.
Disclaimer: My employer is one of the Google Book partner institutions.

Search Flickr for Color Schemes

The Multicolr Search Lab site lets you search through 3 million Flickr images for those that match a particular color. You can pick one or more colors from a swatch on that web page and it will display Flickr image thumbnails that contain the color (or colors) you pick. Assuming the photographer allows use of the images, you could use them to jazz up your web site with color-coordinated graphics. Of course, you still need to find one that suits your content.
If you’re not satisfied with the 144 colors offered, you can easily customize the tool to add the exact colors on your web page. For example, RSS4Lib uses three main colors: orange (#f1671f), dark blue-gray (#a3b8cc), and light blue-gray (#e6e2f2). By adding these to the site’s URL, as in this sample, I can get a customized set of images that match RSS4Lib’s color scheme.

RSS4Lib Color Swatch

This was generated from the following URL:
http://labs.ideeinc.com/multicolr/#colors=f1671f,a3b8cc,e6e2f2;
If you want to use your own colors, simply replace the six-character color codes in the URL (in my example, f1671f, a3b8cc, and e6e2f2) with the codes for the colors you want. Add more by separating them with commas (no spaces!), and end the list of colors with a semicolon.
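If you build these URLs often, a short script can assemble and sanity-check the color list for you. This is just an illustrative sketch: the `multicolr_url` function name is my own, and it simply reproduces the URL pattern shown above.

```python
import re

def multicolr_url(colors):
    """Build a Multicolr Search Lab URL from a list of six-character hex color codes.

    The color list is comma-separated (no spaces) and terminated with a
    semicolon, matching the URL pattern shown above.
    """
    hex_code = re.compile(r"^[0-9a-fA-F]{6}$")
    for c in colors:
        if not hex_code.match(c):
            raise ValueError("not a six-character hex color code: %r" % c)
    return ("http://labs.ideeinc.com/multicolr/#colors="
            + ",".join(c.lower() for c in colors) + ";")

# RSS4Lib's three main colors, as in the example above:
print(multicolr_url(["f1671f", "a3b8cc", "e6e2f2"]))
# → http://labs.ideeinc.com/multicolr/#colors=f1671f,a3b8cc,e6e2f2;
```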

Geotagging Photos When You Take Them

I wrote about geotagging photographs way back in the fall of 2006. This means attaching longitude and latitude data to a photograph so it can be mapped. Most shutterbugs who geotag their photos do so after the fact, using various mechanisms, in Flickr or with iPhoto plug-ins, for example.
Now, it seems, easy geotagging of photos will be added to Apple’s iPhone — at least, according to one report at AppleInsider. If the iPhone does have GPS (or even if it relies on the current mechanism of approximating the iPhone’s location by triangulating on cell phone towers), every picture taken with an iPhone could have location data attached.
Depending on the resolution of the location data, it would be possible to build collections of photos of a given location, taken by multiple users over time.
Of course, the privacy implications are interesting, too — will a photograph taken of someone without the subject’s knowledge, published to Flickr with a geotag, be considered evidence of that person’s whereabouts? This expands the risks to privacy already created by CCTV systems, such as those installed in many cities (Singapore, London, etc.). If everyone has a camera with geotagging capability built in, and publishes their photos to the Internet — how easy will it be to scan them to learn if a person suspected of being at that location at a particular time might be in the background?

Snazzy Icons for iPhone/iPod Touch Web Clips

Apple’s iPhone and iPod Touch allow users to save “web clips” — favorite web pages — directly to the device’s home screen — so one tap of the finger on the icon takes you directly to that site. By default, the iPhone or iPod Touch uses a nearly-impossible-to-read screen shot to represent the web clip; few web sites end up being visually identifiable on the home screen.
There’s a great opportunity for branding here. Apple has made it very easy to create custom icons for your web site. There are 2 steps:

  1. Make a graphic that is 57 x 57 pixels and save it in PNG format.
  2. Name this file apple-touch-icon.png and save it to the main (“root”) level of your web server.

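The two steps above can be sanity-checked without opening Photoshop. The short stdlib-only sketch below reads a PNG file’s header to confirm it is really a PNG of the expected 57 x 57 pixel size; the function names are my own illustration, while the file name and dimensions come from the steps above.

```python
import struct

def png_size(path):
    """Return (width, height) of a PNG by reading its IHDR chunk.

    A PNG starts with an 8-byte signature, followed by the IHDR chunk:
    4-byte length, the ASCII type "IHDR", then width and height as
    big-endian 32-bit integers (bytes 16-23 of the file).
    """
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file: %r" % path)
    width, height = struct.unpack(">II", header[16:24])
    return width, height

def check_touch_icon(path, expected=(57, 57)):
    """Verify the icon matches the 57 x 57 pixel size described above."""
    return png_size(path) == expected
```

Run `check_touch_icon("apple-touch-icon.png")` against the file before uploading it to your server’s root, and you’ll catch a wrongly-sized or mis-saved graphic before your visitors do.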
(Credit to The Primary Vivid Weblog for documenting the process in plain English.)
When you add a web clip to your iPhone or iPod Touch’s home screen, it automatically (and unavoidably) adds a glow effect and rounded corners to the graphic you provide. To compensate for this, use this web clip Photoshop template (from iPhoneMinds) that shows just where the usable space in that 57 x 57 pixel square is and how it will look with the glow effect.
How easy is it? It took me (a truly novice Photoshop user) about 5 minutes to make an icon for RSS4Lib — if you’re using an iPhone or iPod Touch, save this page to your home screen to see it, or just view http://www.rss4lib.com/apple-touch-icon.png.
If your library has iPhone or iPod Touch users, why not extend your brand to their mobile desktop?

High Tolerance for Ambiguity

The 2.0 world — in libraries in particular, or the web in general — is helping to address the information management problem of ambiguity. In his inaugural column in the May issue of KMWorld, “Now, everything is fragmented,” David Snowden notes: “The more you structure material, the more you summarize (either as an editor or using technology), the more you make material specific to a context or time, the less utility that material has as things change…”
Much of what the knowledge management world, and, for that matter, librarians more broadly, seek to accomplish is to get the right bit of information to the person who is looking for it at the right time. However, as we build systems to accomplish that task, we often run counter to both the defining characteristic of our age and what he describes as one of the defining characteristics of our species. Snowden writes:
First, we live in a world subject to constant change, and it’s better to blend fragments at the time of need than attempt to anticipate all needs. We are moving from attempting to anticipate the future to creating an attitude and capability of anticipatory awareness. Second, we are homo sapiens at least in part because we were first homo narrans: the storytelling ape. Dealing with anecdotal material from multiple sources and creating our own stories in turn has been a critical part of our evolutionary development.
Information systems are typically built to remove ambiguity. They are tailored to the specific need at hand. Snowden notes that there is a risk to building systems that remove ambiguity by “chunking” information into discrete elements. This risk is shown through research (in national security, in particular) that indicates raw intelligence is more useful over longer periods of time than the reports based on that raw data. 2.0 environments, in which users of information build on the raw materials, mixing and matching sources in novel ways, are more flexible, allowing for changing needs to reflect themselves over time.
A mentor and twice-supervisor of mine described someone’s ability to survive in an organization by saying that the individual either had or lacked a “high tolerance for ambiguity.” Having a high tolerance was a good thing: if you could keep your relative sanity as organizational priorities and day-to-day exigencies changed, you were in good shape. As librarians, we need to develop a high tolerance for ambiguity in the information systems we design and provide. By this, I don’t mean developing to wishy-washy specifications. I do mean that we need to build systems that enable our users to pursue information-seeking paths we don’t, or can’t, anticipate. Systems must be built to allow others to get to the raw data, manipulate it, and do what they will. As we meet today’s information needs, we must also allow for flexible interpretation and the serendipity of discovery.

Inaugural Issue of Code4Lib Journal

As a member of the editorial board for the just-launched Code4Lib Journal, I’m pleased to point the way to the inaugural issue. The Code4Lib Journal covers the intersection of libraries, technology, and the future. The idea for the journal came out of last year’s Code4Lib conference, but the journal’s content comes from across the spectrum of libraries.
The first issue of this open access journal contains:

If you’re interested in contributing to a future issue, please see the Call for Submissions. We’re accepting proposals for articles, book & software reviews, code snippets & algorithms, conference reports, opinion pieces, etc.