In Today’s Internet of Things, YOU Are the Thing

By Ken Varnum on January 20, 2015
Follow me on Twitter: @varnum

The Internet of Things was a hot item at the Consumer Electronics Show in Las Vegas earlier this month. A vast array of Internet-enabled devices was on display, from the Baby Glgl smart bottle holder (it tells your smartphone if your baby’s bottle isn’t at the optimal angle) and the Belty (it automatically loosens your belt if you overindulge at a meal) to Whirlpool’s “detergent assistant,” a feature on one of its high-end washers that can order detergent when your stock is running low.

Of course, a slew of more useful network-enabled devices was also on display, including gizmos for monitoring health, home-monitoring items, and more. But the theme here is that the Internet of Things seems to be at that stage in the adoption cycle where manufacturers and inventors are hell-bent on network-enabling ALL THE THINGS in the hope that, someday, the market will tell us what actually makes sense, according to the ancient adage: “Network it all! Let the market sort it out.”

While the market is busy sorting out just what makes sense in the Internet of Things via the proxy of what we consumers will actually buy, I keep thinking that the Internet of Things, as it exists today, is really the Internet of You. Much as with Facebook or Google, you, the consumer, are the “thing” being networked. (The CEO of Jawbone claims this proudly in a January 5 column in the Huffington Post; I am not quite as sanguine.)

Here’s what I mean. The ubiquitous smartphone that so many of us carry around gives off endless data about its owner, harvested by smart retailers and others. Your phone, the most networked thing in any of our lives, is a proxy for you. Here’s an example. Several years ago, I went to my local Kohl’s in search of some shoes. On entering the store, I saw an advertisement that I could text a phrase to a certain number to receive 15% off that day’s purchase. I did so, of course, not thinking through the reason that Kohl’s — a consummate retailer — would offer a surprise additional discount to someone who was already in the store.

Later, it dawned on me. My desire to save a few bucks on that shopping trip gave Kohl’s the ability to connect my location in the store, my cell phone number, my cell phone’s MAC address (which the wireless network in the store could pick up), and my purchases (when I used the coupon at checkout). If I paid by debit or credit card (which I did), Kohl’s had the opportunity to capture my name and, by extension, all sorts of additional information about me. Not only that, but thanks to the cell phone metadata, assuming they had installed inexpensive wireless devices around the store in each department to gather data, they would know pretty well where I was in the store and how much time I spent in each section. As it turns out, this is almost certainly what was happening. As early as summer 2013, The New York Times was reporting on this sort of technology.

This sort of user tracking is common across many retailers. If your cell phone is turned on in a store, you can be certain that information about where your phone goes and where it spends time is being tracked, even if it’s anonymous. If you pull up a coupon on your phone to be scanned at checkout, all your in-store behavior is suddenly directly connected to you, the individual — to be used across time and space.

This does not even scratch the surface of what could be done by legally empowered law enforcement or other, less legally grounded, agencies.

I do not suggest you leave your smartphone at home, or even put it in airplane mode when you walk into a store. But I do want to highlight that the Internet of Things, as described in the media, really covers two approaches. One is using your smartphone as a proxy for you — the Internet of You. The other is using the network and a computer to interact with and learn from your environment — the Internet of Things. Don’t confuse one for the other, and don’t be discouraged about the entire concept based on the former.

A Year in Reading (2014)

By Ken Varnum on December 22, 2014

My personal reading for 2014 has been mostly for entertainment. The list is shown in chronological order. My favorite five from the list below are noted with bold text.

  1. Arsenals of Folly: The Making of the Nuclear Arms Race, by Richard Rhodes
  2. The Ocean at the End of the Lane: A Novel, by Neil Gaiman
  3. The Martian, by Andy Weir
  4. Like a Mighty Army (Safehold), by David Weber
  5. Redshirts, by John Scalzi
  6. Bad Monkey, by Carl Hiaasen
  7. Old Man’s War, by John Scalzi
  8. 2312, by Kim Stanley Robinson
  9. Trojan Horse: A Novel, by Mark Russinovich
  10. Quarter Share, by Nathan Lowell
  11. Ready Player One: A Novel, by Ernest Cline
  12. A Delicate Truth: A Novel, by John le Carré
  13. Reamde: A Novel, by Neal Stephenson
  14. Good Faith, by Jane Smiley
  15. Existence, by David Brin
  16. Luna Park, by Kevin Baker
  17. Mr. Penumbra’s 24-Hour Bookstore: A Novel, by Robin Sloan
  18. Divergent (Divergent Trilogy, Book 1), by Veronica Roth
  19. Half Share, by Nathan Lowell
  20. LEGO: A Love Story, by Jonathan Bender
  21. World War Z: An Oral History of the Zombie War, by Max Brooks

What Could the “Internet of Library Things” Be?

By Ken Varnum on July 29, 2014

At the recent ALA Annual Conference, I attended the OCLC Symposium on the Internet of Things, hosted by Lisa Carlucci Thomas and presented by Daniel Obodovski, co-author of The Silent Intelligence: The Internet of Things. (I wrote up my notes in an earlier post.) At the end of the talk, Mr. Obodovski asked the audience what they thought libraries should do if and when the Internet of Things came into being. The responses were varied, but they were mostly “RFID on steroids” — better circulation of materials, availability of equipment, and the like. These are mostly evolutionary steps, but the last one or two are more revolutionary.

So, I’ve been trying to think of less evolutionary and more revolutionary ideas. I have not, frankly, been particularly successful. But here are some of the things I can see happening:

  • Help library visitors find a space suitable to their needs (quiet study areas, low noise, full-on conversation) by installing noise-level monitors in each study space and simple sensors in each chair. This way, someone looking for a deserted, quiet area can easily find the available table off in the back corner, while a small group looking to conduct a group study session can find a free table in an area where there is already light conversation.
  • Put motion sensors in study rooms so that a list of available and in-use study rooms can be shown to library visitors. Library visitors will know which correctly sized study area is free, and they can then let their study group know where to come. Bonus points for tying these sensors into the study room lights for energy savings — the lights go off when the room is empty.
  • Show library visitors newly purchased books they are likely to enjoy when they enter the library. Combine information about books on your new-book shelf with each visitor’s checkout history to send a list of books that are on the new-book shelf to their device as they enter the building.
  • Impromptu book discussion clubs. (This is bordering on the creepy, but I wanted to see if you are paying attention.) Identify other people in the library who have similar reading interests and offer to introduce them to each other.
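As a rough sketch of how the first two ideas might work, sensor readings could be reduced to a simple availability query. Everything here (the space names, the noise threshold, the data model) is hypothetical, not a description of any real library system:

```python
from dataclasses import dataclass

@dataclass
class StudySpace:
    name: str
    capacity: int
    occupied: bool    # from a motion or chair sensor
    noise_db: float   # from a noise-level monitor

def find_spaces(spaces, min_capacity=1, max_noise_db=40.0):
    """Return unoccupied spaces quiet enough (and big enough) for the requested use."""
    return [
        s.name
        for s in spaces
        if not s.occupied
        and s.capacity >= min_capacity
        and s.noise_db <= max_noise_db
    ]

spaces = [
    StudySpace("Quiet corner table", 2, occupied=False, noise_db=32.0),
    StudySpace("Group room A", 6, occupied=True, noise_db=55.0),
    StudySpace("Group room B", 6, occupied=False, noise_db=48.0),
]

print(find_spaces(spaces, max_noise_db=40.0))                   # quiet solo study
print(find_spaces(spaces, min_capacity=4, max_noise_db=60.0))   # group session
```

The same query, driven by live sensor data, could power a kiosk or mobile view of which spaces are free right now.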

What would you do with pervasive connectivity of everything within your library? Let me know in the comments.

Incidentally, Jason Griffey talks about a number of other things libraries could do with cheap sensors in his chapter, “The Case for Open Hardware in Libraries,” in the recently-published LITA Guide, Top Technologies Every Librarian Needs to Know.


Technology Priorities for the New Library Reality

By Ken Varnum on June 28, 2014

These are lightly edited notes from Sarah Houghton’s talk at ALA Annual 2014. Tweets from this presentation may be found at #alaac14.

Starts off with results of a survey: ‘Why are we talking about this now?’ Now that budgets are starting to recover from the Great Recession, libraries have the option to think about where to allocate restored funds. Do we spend on the things we did 10 years ago, or do we choose new priorities?

About half of libraries are losing money; half are gaining. Everyone feels that they don’t have enough and cannot keep up. No matter what kind of library responded, we all wanted the same things.

Libraries that thought they would get an increase planned to spend it on staffing (27%), digital materials (26%), information technology (22%), and facilities (17%) (137 respondents). Facilities were a smaller set, but what was wanted there was often building safety and maintenance, not technology.

How is technology support managed? About 42% of respondents had libraries that ran their own IT. 28% by a parent organization, 24% some combination thereof, and 6% outsourced.

How much spending control does library staff have over the IT budget? 50% had none or “a wee bit”.

Your web services librarian doesn’t have to be a librarian. Get someone qualified, and have a librarian advisory group to advise.

Fewer people made collection decisions based on usage statistics for digital materials than for physical materials. Seems odd because it is so much easier to gather statistics on the digital materials.

If libraries had $1k, 42% chose non-tech things to spend it on. One said “actually pay the visiting clown.” If libraries had $100K, non-tech was still 42%, but answers were much more diverse. Hardware, digital content, software & staff, and other stuff are the big desiderata in technical areas.

If libraries could get one extra staff position of any kind, 42% said a tech-oriented NON-librarian; a tech librarian and a non-tech librarian each got 23%.

What concerns do people have? Staff capacity is biggest: 47%. Training (23%), outdated mindsets (14%), and outdated technology (12%) follow.

Libraries see using hosted services as a good way to get around IT’s rules (33%). Simply breaking the rules is also popular: 39%.

As technology integrates more and more into our jobs and lives, everyone has an opinion on how we should focus our technology spending. Few know what the hell they’re talking about.

How do you develop a budget? Establish priorities first. Determine needs for each. Draft a budget, revise with broad feedback. Make mid-year adjustments.

The Internet of Things

By Ken Varnum on June 27, 2014

These are the notes I took during today’s OCLC Symposium on “The Internet of Things” at ALA Annual 2014. For tweets from the presentation, please see the Tweets at #oclciot.

The presentation was by Daniel Obodovski, co-author of The Silent Intelligence: The Internet of Things.

How do humans and machines communicate and connect? This is the Internet of Things [IoT]. But what is that? It’s all kinds of things today: smart thermostats, medical sensors and alert systems, smart electric meters… and more. Package and person tracking is enabled through scannable codes or RFID tags for low-value things, and GPS devices for high-value ones (people, pets, valuable items). What are the privacy concerns around this? How do we ensure that data are used as intended, and by whom intended?

The IoT allows us to connect to the broader analog world around us in a digital way, to integrate, interpolate, and benefit us all. Relates to a new digital nervous system connecting us with our environment?

How big will this be? There could be as many as 50 billion connected devices by 2020. We have a lot more “smart” technology in our homes already than we might think. Up to 7% of the U.S. population already has some sort of wearable technology (exercise trackers, medical monitors, etc.). By the end of this year, it is forecast that 10% of the U.S. population will have a wearable, internet-connected device on their person. And today, 45% of fleet vehicles in the U.S. have some form of monitoring — for vehicle maintenance, for driver compliance, for vehicle location, etc.

This is, all together, what we call “The Silent Intelligence.” And it is, ironically, very verbose.

We think of the future as rocket cars and jetpacks. But the reality is, it’s already here, slowly emerging, out of these interconnected devices. The most exciting area is healthcare — with immediate feedback for how treatment is working, or if there is an emergent situation before the individual even knows something is wrong.

What we have seen in social media — where the user is the source of data that the social media company then sells — is already emerging in the Internet of Things. Your car’s data is being sold to third parties. (I wonder, if it’s so easy to get the vehicle’s diagnostic reporting codes out of the vehicle, why it costs so much at a dealer to read the code and translate it into a fixable problem.)

The Internet of Things is very complex. Requires that many individual device manufacturers talk to each other and interplay. Need standards not just for communication, but for data itself. All of these data will be collected, analyzed, resold — after being anonymized. A new range of services will emerge around this data collection and processing. This opens up a new world of services, but also opens up a huge range of data privacy and security concerns.

We are currently missing a clear set of rules about privacy of data — who can have access, and what do they do with it? We are generally very bad about understanding the terms of service when we click through to use some online service.

This technological revolution has an uncertain impact on the nature of jobs. We have gone through one technological revolution, in which technology replaced many manufacturing jobs, leading those workers to move into service jobs. What happens if many services can be automated; what is the next kind of job that current service workers can move into?

What will the Internet of Things mean for libraries? What will interconnections enable? Combined with knowledge of more than where physical items are located (which rooms are being used, which aisles in the stacks, and so on), you can customize and improve services. Without data, you can’t improve your services in the optimal way.

We should think about how we can understand the patterns, and the data that generate them. Connecting patrons to their needs, more effectively and efficiently, is the goal. Let needs drive the technology.

How the Feed Changed the Web

By Ken Varnum on January 28, 2014

Mashable published an interesting post and infographic about how the “feed” changed the way we consume information. The author notes: “The feed now dominates online content consumption, from the news we read on our mobile devices to the social networks we check constantly throughout the day…” (emphasis mine).

Just another indication that RSS has become plumbing, or infrastructure. It’s no longer the goal in itself; it’s the mechanism.

Discovering Discovery at LITA Forum

By Ken Varnum on November 9, 2013

Notes from a talk by Annette Bailey of Virginia Tech at the LITA National Forum, “Discovering Discovery.”

Virginia Tech has been a Summon customer since 2010. They have leveraged Summon to change cataloging practices locally. Still using original Summon (1.0) interface.

Library users are shifting behaviors: increasing usage of online resources and physical spaces — but not physical resources. Discovery largely happens through Summon. How can VT know what its users are doing? COUNTER provides some information, but it’s delayed and hard to process. Summon provides aggregate data on search terms and click data. How can we know what users are doing in real time — and share it with other members of the community, showing visually what research is happening, live?

Discovery Visualization

That is the heart of Discovering Discovery — what users are clicking on in Summon, in real time. Can’t tell if they use the item, but can tell that they accessed it.

This tool helps everyone — librarians, the public, students — to understand what is being done in the library. User does a search. There’s some custom JavaScript in the Summon interface that sends a record of the click to the visualization server, which stores it in a database. A visualization tool then makes a display on demand. It grabs the Summon record ID, unique for each item. They then use the Summon API to grab the metadata for that query — because Summon IDs are not persistent over the long term. All of that is stored in an SQLite database.
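The pipeline described above (click handler sends the Summon record ID to a server, which caches the metadata locally because record IDs are not persistent) could be sketched roughly as follows. The schema and function names are my own invention for illustration, not VT’s actual code:

```python
import sqlite3
import time

# Hypothetical local cache mirroring the described pipeline: each click
# stores the Summon record ID plus metadata fetched once via the Summon API.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clicks (
        record_id  TEXT,
        title      TEXT,
        clicked_at REAL
    )
""")

def log_click(record_id, metadata):
    """Record one click event; metadata comes from a Summon API lookup."""
    conn.execute(
        "INSERT INTO clicks VALUES (?, ?, ?)",
        (record_id, metadata.get("title", ""), time.time()),
    )

# Simulated click event arriving from the in-page JavaScript handler.
log_click("FETCH-abc123", {"title": "An Example Article"})
count = conn.execute("SELECT COUNT(*) FROM clicks").fetchone()[0]
print(count)
```

A visualization layer can then query this table on demand without ever re-contacting Summon for records it has already seen.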

As a side note, they can tell how many unique items were clicked on over time — hard to do otherwise.

Current log analysis extracts and tabulates data at 1 minute, 5 minute, 1 day, 1 week intervals. Tabulates by discipline, content type, source of record, publication year. All comes from Summon, which means data are problematic. Does word frequencies for abstract, title, and abstract & title combined, and keywords & subject terms.
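The word-frequency tabulation over titles and abstracts might look something like this minimal sketch (the stopword list and tokenization are assumptions, not VT’s actual implementation):

```python
import re
from collections import Counter

# Hypothetical stopword list; a real system would use a fuller one.
STOPWORDS = frozenset({"the", "of", "a", "an", "and", "in", "on", "for"})

def word_frequencies(titles):
    """Tabulate word frequencies across a batch of record titles."""
    counts = Counter()
    for title in titles:
        counts.update(
            w for w in re.findall(r"[a-z']+", title.lower())
            if w not in STOPWORDS
        )
    return counts

titles = [
    "The Discovery of Discovery",
    "Discovery Services in Academic Libraries",
]
print(word_frequencies(titles)["discovery"])  # 3
```

Running the same tabulation at 1-minute, 5-minute, daily, and weekly windows is then just a matter of which batch of records is fed in.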

Use the d3.js library to do visualizations. It’s a powerful tool, but hard to work with. Follows jQuery in style. Also uses a variety of server-side technologies.

Summon 2.0 — not there yet. Unlike Summon 1.0, there is now an officially sanctioned way to include JavaScript (in 1.0 it’s a hack). Summon 2.0 also includes d3.js — they do not appear to be using it yet, but it’s there. Look out for visualizations at some point… But they need to reverse engineer Summon 2.0 to achieve the same effect as in Summon 1.0.

Using this with other discovery services. You need to be able to record clicks, in real time. You need an API to get the machine data. If you use a different discovery service and want to try adapting this code, VT would like to work with you.

The visualization is the hard part; getting the data was the relatively easy part. Code needs to be consolidated into a cloud solution so you can make your own version for your own use. (Like the LibX edition builder.)

The 4th Floor and Library Transformation

By Ken Varnum on November 9, 2013

This is the second keynote address at the LITA Forum in Louisville. The speaker is Nate Hill, assistant director of the Chattanooga Public Library. Follow him on Twitter at @natenatenate.

Nate Hill speaking at LITA Forum 2013

The 4th Floor project is more a community organizing project than a technology project. When Nate started there a few years ago, the Chattanooga Library was seriously broken. Technology improvements are just one portion of the overall improvements being made. Chattanooga has gigabit networking throughout the city. So the city has a lot of potential and lots of recognized need for change and reinvention.

Unlike many brutalist all-concrete buildings, the CPL has large amounts of open space on each floor — it was designed with an open plan, so they aren’t as constrained by solid concrete walls. This gives them some flexibility.

Nate is going to focus on one aspect of this reinvention. We’ll start with the “why:” moving from Read to Read/Write. Everyone in the LITA audience at the moment can create something and make it available to everyone. Before that was possible, we needed libraries to store relatively rare copies of things. Library was about access. Now, it’s about providing tools to create things. Connectivity is a key underpinning to these tools.

CPL uses their 4th floor space as a “beta space” — the library can experiment, and the public can experiment. 14,000 square feet of space was used as an attic. They solved the problem collaboratively — invited people to meet in that space. Started brainstorming what might be useful to do. This started about 18 months ago (around January 2012).

Had a public auction, got rid of all the stuff. Net profit: $1500.

So, now what? A vast amount of empty space, with no added staff resources to do new things. Answer? Strategic partnerships with other organizations. First was with the Chattanooga chapter of AIGA. AIGA got a home for their meetings, brought in presentations, and started the seeds of current programming.

The next major milestone was the first DPLA “appfest” — 100 people came to CPL from around the country. Realized that people didn’t necessarily want to work at desks in these informal arrangements, so started to create less rigid workspaces.

Next was a local collaboration space, co.lab. Got 450 people to attend a series of pitches — entrepreneurial ideas. Again, community was amazed to see what the library could do.

The library is losing ownership of the space; it’s becoming a community platform.

“We make all of this stuff up all the way.” CPL has an amazing tolerance for experimentation and trial-and-error.

They moved their IT staff to the 4th floor, creating a coworking space.

Using Chattanooga’s gigabit network, they have done performances where dancers in two locations perform with projected images, passing the image back and forth between two locations in the city.

Making Maker Libraries — LITA Forum Keynote

By Ken Varnum on November 8, 2013

I’m attending LITA National Forum 2013 in Louisville, Kentucky. I’ll be posting some conference notes sporadically. The opening keynote session is a talk by Travis Good, contributing editor of Make Magazine, on “Making Maker Libraries.”

Travis Good

Once, being called a “nerd” was not particularly flattering. Now, that has changed. Nerds are the smart guys you go to in order to solve a problem. Nerds have arrived. Library IT groups have solved, in a nerdy way, many kinds of problems: online catalogs, computer workstations, wired Internet access, wireless Internet access, ebooks… It is not just making things work, though; it is making things work comfortably in a library context.

Through making wifi available, we redefined why people go to a library.

Changes in technological landscape are a threat — and an opportunity. We will talk about just one of these changes: the maker movement. It’s a broad movement with lots of definitions. Humans have been making things since we developed opposable thumbs and tools.

What was “making”? It was done by craftsmen, focused on trades, with years of training and practice, with rudimentary tools. Took lots of practice to do well because the tools were “dumb.” Now, tools are “smart”, and more people can make things. Moore’s law has affected tools. Technology brought smarts to making; computers can manage processes. Costs drop, power rises, steadily. Tools are smarter, more powerful, and more capable. The Internet has simultaneously opened up collaboration across distributed communities. Open source software came along. And now… open source is not just software. It is hardware, too.

New, smarter tools are already here. The CNC (Computer Numerically Controlled) mill is a subtractive tool — it mills away material until what is left is the product you want. Designs can be shared, tailored, and made. 3D printing is the opposite, in a sense — it extrudes material to build something up. An additive tool.

Laser cutters are two-dimensional: they cut a flat surface with a laser. They can cut wood, leather, acrylic, metal, and similar materials, and can create very intricate designs.

For all of these products, there are libraries of models that you can download, modify, and make yourself. Powerful tools and shared designs can make anyone a maker of things.

At the same time, we are getting cheap, flexible electronic micro controllers, sensors, and actuators. Sensors make measurements of things; actuators create a response of some kind.

Simple embedded electronics made a turn signal for a bike rider — left arrow, right arrow LEDs on the back, and a switch in each sleeve for the biker to turn them on and off. Another example — a switch in a chair that turns the TV on when you sit on it; turns the TV off when you stand up. Third example — an Arduino on a Venetian blind that opens or closes the blinds when the room is too cool or too warm.

Barriers to creating things have been reduced. Long apprenticeships to become competent are no longer required. And it’s now easier to become good at lots of things. So more people can make, more making can take place, and more people can be collaborating.

The question that arises: where is this making happening? You need spaces in which people can learn, create, share, and collaborate. Threshold to entry is low, but you still need to cross it. This is a clarion call to libraries. Libraries are already the places that offer lifelong learning. And are looking for new ways to deliver on their traditional missions.

Libraries are experimenting with maker spaces in different ways. Experimenting with different tools and technologies, seeing what local patrons will want to use. Can vary from branch to branch.

Maker spaces are catching on in libraries. It is seen, broadly, as an opportunity to be valuable to the community (in public & academic libraries). There is lots of experimentation on what kinds of services and tools to offer — it is something of the Wild West.

There are some basic things that are needed to foster the growth and development of maker spaces:

  1. A source of best practices. Why does every library need to invent this service on their own?
  2. A database of maker helpers. People who would come to your library and talk about specific topics. Tap into maker spaces, meet up groups, etc. But there is no vetting — lots of interested people, but needs to be a way to make sure the volunteers are good teachers, reliable, etc.
  3. New sources of funding. There is lots of competition for scarce resources (e.g., IMLS). Corporations are interested in funding maker spaces — they see it as future employees and future innovations. Skills of successful makers are the skills of successful innovators and inventors.
  4. Kits that fit into a library. A maker space in a box, and maker supplies that are reusable and affordable. For example, Arduino prototyping kits that can be reset and tested for basic functionality by completely non-technical library staff.
  5. Finding good projects. This is already in the works. Make it @ Your Library: 100,000 crowdsourced projects have been uploaded and categorized.

We can build tools for our library community at large.

The power of making grows when the various maker communities collaborate and communicate — libraries, incubators, schools, government. It’s a network.

Internet Archive Tries to 404 the 404

By Ken Varnum on October 25, 2013

The Internet Archive announced today a new service — creating a permalink for a web page that leads to a copy of the page at the Internet Archive. So, for example, I just created a permanent snapshot of this blog’s homepage as of 25 October 2013 at 19:35:43, preserved forever and fully citable.

This blog probably doesn’t deserve that sort of immortality. But what about more significant things? Rather than citing a web page with a note “accessed on 25 October 2013,” let the Internet Archive grab a snapshot of it and link to that. It would be lovely if this service could be extended into licensed content so that citations to academic content (all too often behind a paywall, accessible only based on one’s affiliation with the library’s parent institution) could be equally persistent.
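For the curious, an archived snapshot’s address follows the Wayback Machine’s long-standing URL scheme (/web/<YYYYMMDDhhmmss>/<original URL>), which can be constructed directly once a snapshot exists. The example URL below is a placeholder, not this blog’s actual snapshot:

```python
from datetime import datetime

def wayback_url(original_url, when):
    """Build a Wayback Machine snapshot URL using the
    web.archive.org scheme: /web/<YYYYMMDDhhmmss>/<original URL>."""
    return f"https://web.archive.org/web/{when:%Y%m%d%H%M%S}/{original_url}"

snapshot = wayback_url("http://example.com/", datetime(2013, 10, 25, 19, 35, 43))
print(snapshot)
# https://web.archive.org/web/20131025193543/http://example.com/
```

Citing the snapshot URL instead of the live URL is what makes the reference persistent: the timestamp pins the citation to one specific archived copy.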

Scholarly content, as a rule, is provided through a non-persistent URL (DOIs and Handles aside). Those valuable tools, of course, are only good as long as the owner of the content maintains their persistence. The owner of the content is responsible for updating destination links, and that may not be the highest priority in a bankruptcy or other sudden and unexpected cessation of operations.

This new service makes possible better back-references to the historical record.
