Prediction. Parts 1 and 2. And more to come.

Alhambra, Granada

Image by james.gordon6108 via Flickr


Now that we’ve passed the middle of 2011, I feel confident enough to suggest some of the things that I think we will have to think about in the near future.  I did once acquire a crystal ball, but it didn’t work: I therefore offer no predictions, but rather some thoughts on what seems to be going on at the moment, focussing on the possible effects on the information management professions.  I will mention some of these each day for the next couple of days.  Please do not hesitate to comment, as well as to add issues and phenomena that are important in your field of endeavour.

1. Multifunctionality and convergence

We have seen, for more than a decade, increased multifunctionality of information and communication technologies (ICTs).  The phone is now a camera, voice recorder, workout monitor, letter writer, internet browser, aide-mémoire and map finder, as well as other things.  This is a continuation of the development of computers, which were soon used for a lot more than just arithmetic and calculation.  The social media and search engines are moving in the same way: Facebook does email, Bing integrates Facebook data, and Facebook can also be used to become a member of various blogs and websites of interest.  Google appears to be positioning itself to run the Googleverse, as it develops its own versions of popular software – such as email, wordprocessing and social networking – as well as its own analogues of Skype, blogging platforms and Flickr and, of course, the library: Google Books.  And then there’s Google Scribe, which anticipates what you are going to write; Google Body, which allows you to peel back, layer by layer, the human body; and Google Goggles, which enables you to search Google using pictures from your smartphone.

I posited previously (2002) that converging technologies have led to increasing convergence between the information professions: will this continue?  I believe that this would be desirable, but whether it is practicable and attainable is, of course, a different matter.  The arguments for increased convergence – or at least collaboration and multidisciplinary interaction – include a stronger public presence and perhaps more political clout (within organisations and communities); the sharing of solutions to problems which have perhaps been located within particular disciplines/professions, but which are experienced by all; and recognition of the similarities, rather than the differences, of the challenges that face the information professionals.  Some of the more complex issues that must be dealt with include the retention of professional profiles, as each discipline/profession has unique characteristics and different contributions to make; the plethora of professional associations, all of which require membership fees and produce newsletters and journals that must be read; and, lastly, the overwhelming number of subdivisions that can be identified in this enormous field.  Too much ‘multifunctionality’ can be diffuse – Jack of all trades, master of none.  But such demands are presently made on us: just consider the number of different tasks that must be executed in the role you currently occupy.

2. Social networking and user-generated content

The appearance and ongoing development of Web 2.0 appears to have no end.  In the analogue world, because of the relatively tedious ways in which documents were created and distributed, more control was possible, perhaps out of necessity.  Documents were not created or published unless it was necessary for whatever reason.  Publishing procedures were closely linked to bibliographic control systems: ISSNs, ISBNs, cataloguing-in-publication information, edition statements and so forth formed part of a vast mechanism.  But even before the 1980s, people complained about information overload.  Then the internet appeared, and information professionals groaned: how on earth were we going to manage this flood of documents?  It appeared that every Tina, Dorothy and Helen could publish whatever they liked.  We didn’t even know what was out there, never mind trying to keep up with classification and cataloguing.  And then Web 2.0 happened, with amazing social possibilities.  The hallmark of this version of the internet is user creation and interaction.  Barthes mentioned the ‘death of the author’, in the sense that each reader will recreate an author’s text, an idea explored also, in some detail, by Umberto Eco in his ‘The open work’.  The death of the Author, with a capital A, has another interpretation now: the Author does not have to be condoned, approved, validated, lionised or even recognisable to be able to publish as much as s/he wants to.

Part of the problem for the reader is being able to contextualise the author, in order to draw meaning and fully understand the ideas that are being conveyed.  The Author is no longer automatically an ‘authority’ (“I read it in a book so it must be true”): far more sophisticated skills are required in order to select, understand, analyse and critique the information with which we are now overwhelmed.  This is sometimes called ‘critical information literacy’, which is quite different from the ‘information literacy’ that librarians used to know and love.  In fact, it might almost be called ‘critical media literacy’ or, the term I currently prefer, ‘Critical Digital Literacies’.  All the technology in China – and the rest of the world – will not help us one jot if the general population does not develop these skills.  I believe that we, as guardians of memory and cultural heritage, are the very people to undertake this.

Increasing epublishing and ereading mean, at the very least, that we must become familiar with the tools required.  Does this mean the end of publishers?  How does it change the publishing cycle?  There have already been huge shifts in educational resources and scholarly communication patterns (more on this at another time); Open Access and Open Source are widely used and increasingly popular.  This will have, perhaps, the greatest impact on poor countries – but what will its nature and consequences be?

Consider the rise of civilian journalism.  I grew up in an environment in which it was natural to doubt every word on the radio or in the newspapers on current events; we needed to understand that we were being fed half news or even no news at all.  Sadly, in environments where ‘free speech’ is protected by law, too many accept that the news being reported, and the comments made on it, are both important and authentic.  The ways in which journalism (‘churnalism’ is a new aspect of this) and the media operate are accepted as part of the transparent background.  Civilian journalism empowers ordinary people to report directly on what is happening: this, enhanced by Twitter and Facebook, provides different interpretations and views.  It can be said, therefore, that in this regard, the internet is like Foucault’s Bibliothèque Fantastique: a place where we go to discover ideas and to have them challenged.  The new heroes are, if you like, at the bottom of the pyramid – in terms of sheer number, at least.

The other aspect of this is that printed newspapers are likely to shift to online only.  An advantage of this for individuals is that they can use push technologies – news aggregators such as RSS feeds – to deliver only the bits they want to know about.  And then there was Twitter – and now, for those with iPad tablets, Flipboard, which allows you, effectively, to create your own magazine.
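To make the aggregation idea concrete, here is a minimal sketch, in Python and using only the standard library, of how a reader might filter feed items down to ‘only the bits they want to know about’.  The sample feed and the keyword list are invented for illustration; a real aggregator would fetch live feeds over the network and handle dates, links and many more fields.

```python
# Minimal sketch of a news-aggregator filter (illustrative only).
# The feed XML and keywords below are hypothetical.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Library budgets cut again</title></item>
  <item><title>Cricket scores roundup</title></item>
  <item><title>New open access mandate announced</title></item>
</channel></rss>"""

def items_of_interest(feed_xml, keywords):
    """Return item titles containing any of the given keywords."""
    root = ET.fromstring(feed_xml)
    titles = [item.findtext("title") for item in root.iter("item")]
    return [t for t in titles
            if any(k.lower() in t.lower() for k in keywords)]

print(items_of_interest(SAMPLE_FEED, ["library", "open access"]))
# ['Library budgets cut again', 'New open access mandate announced']
```

The point is only that the selection happens on the reader’s side: the same stream of items yields a different ‘magazine’ for each subscriber’s keyword list.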

As information professionals, what are we going to do about this?  How will we manage and encourage access to all these ideas?   A Sisyphean task, seemingly.  How can our knowledge and skills be used?  How can we access and use user commentaries and annotations?  At the same time, we must ask, “Who is NOT using the internet?  Who is NOT publishing their ideas?”  This group may include anyone from serious scholars to the illiterate and disadvantaged: whose voices need to be heard?  Should we have any involvement with this – knowledge creation and distribution?

The rise of secret gardens, or, the Splinterweb.  Social networking is all well and good, but perhaps the hysteria is now over: do we all want everybody to know our every move, our every mundane and trivial thought?  And let’s not mention the time it takes to pursue this triviality.  It seems that people are becoming more selective, perhaps more discreet, and attempting to use their internet space and time more meaningfully.  This would suggest not only targeted audiences, but a judicious and discriminating approach to who can see what.  There is little doubt that, with the emphasis on intellectual property (note, for example, the astronomical number of patents that are being applied for and approved), most knowledge creators/publishers wish to protect and preserve theirs.  So, while a considerable portion of the internet will remain public and open, increasingly we are likely to see inaccessible areas.  Costs will be involved, flying in the face of the open access movement.


Digital Rights Management (DRM)

BBC DRM protest image

Image via Wikipedia

My Facebook friends will have noticed that my current profile picture is of an upraised fist, with the slogan “Readers against DRM”.  DRM = Digital Rights Management.  DRM is concerned with security and privacy: it is against piracy, and seeks to protect intellectual property owners to ensure that they are appropriately rewarded for their efforts in sharing their information – as well as, supposedly, to protect digital files from viruses.  Copyright is protected by using computer programs: even though you may have bought a digital file, your use of this file (i.e. the contents of the document, or the recorded information) is circumscribed by the ‘rights management’ that has been programmed into it.  This prevents copying, duplicating or forwarding information that the owner or distributor feels must be controlled to a greater or lesser extent.  In other words, as a purchaser of a digital file, you may have less use of this record than you may have had of a physical version.  This extends even to being able to read the document only on a particular e-reader.  DRM can control altering and viewing as well as copying or duplicating: in fact, anything that you might wish to do with a digital file beyond reading it once – or perhaps twice.
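The mechanism can be caricatured very simply.  The following toy sketch (not any real DRM scheme – the class and action names are invented) shows the essential shape: the file carries a programmed list of permitted actions, and everything not on the list is refused.

```python
# Toy illustration of rights management programmed into a file.
# Not a real DRM scheme; names and actions are invented.
class RightsManagedFile:
    def __init__(self, content, permitted_actions):
        self._content = content
        self._permitted = set(permitted_actions)

    def request(self, action):
        """Allow an action only if the rights holder has permitted it."""
        if action not in self._permitted:
            return "refused: " + action
        return "allowed: " + action

# A purchased ebook that may only be read, not copied or printed.
ebook = RightsManagedFile("...", permitted_actions={"read"})
print(ebook.request("read"))   # allowed: read
print(ebook.request("copy"))   # refused: copy
print(ebook.request("print"))  # refused: print
```

Everything hinges on that default of refusal: with a physical book, copying and lending were limited by law and practicality; here they are limited by whatever the distributor chose to program in.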

DRM therefore controls access.  It is, in a manner of speaking, the opposite of Open Access and in fact often goes beyond what current copyright legislation provides for.  In the effort to protect copyright and intellectual property, access to information becomes even more circumscribed and limited.  Games, music, ebooks or any other digital file may be subject to, and controlled by, DRM.  It is illegal to try to avoid or circumvent DRM, at least in the US: the Digital Millennium Copyright Act (DMCA), passed in 1998, makes illegal the development or use of any technology which can somehow circumvent DRM restrictions.

This has become a moral issue, as DRM is understood to restrict freedom of speech.  Organisations such as the Electronic Frontier Foundation (which has been around since the early 1990s) understand that DRM is a restriction of civil liberties, as well as going against the principles of free trade.  DRM has become the digital management of rights, rather than the management of digital rights – and there is a profound difference.

So: privacy, security and protection of financial interests – these are the main motivators, or raisons d’être, of DRM.  As we can see, however, these matters soon become political, in that what is privacy and security to some may be inhibiting and constrictive of another’s civic or human rights.  And this is where the problem comes in as far as libraries are concerned.  Libraries seek, as a fundamental principle, to allow as much access as possible to the ideas of others, no matter where or when they were created, so that present generations can consider them.  That principle is directly undermined if DRM means, as HarperCollins recently suggested, that once a library has purchased an ebook, it may only be ‘circulated’ or lent out 26 times before access is entirely blocked.  This was done through a change in their agreement with OverDrive, a major ebook distributor.
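The lending cap described above is trivially easy to program, which is part of what makes it so unsettling.  A hedged sketch (the class name is invented; 26 is the figure reported for the HarperCollins/OverDrive agreement):

```python
# Sketch of a capped-circulation ebook: after a fixed number of loans,
# access is entirely blocked.  Illustrative only; names are invented.
class CappedEbook:
    def __init__(self, title, max_loans=26):
        self.title = title
        self.loans_remaining = max_loans

    def lend(self):
        """Attempt one circulation; returns False once the cap is spent."""
        if self.loans_remaining <= 0:
            return False  # access entirely blocked
        self.loans_remaining -= 1
        return True

book = CappedEbook("Example Title")
successful = sum(book.lend() for _ in range(30))  # try 30 circulations
print(successful)  # 26 – the remaining four attempts are refused
```

A physical book wears out gradually and unpredictably; here the ‘wearing out’ is an exact, remotely dictated number.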

It’s really like the old idea: if you have a hammer, everything looks like a nail.



Open Access – do you really think it’s a good idea?

Yale University's Sterling Memorial Library, a...

Image via Wikipedia

I seem to have developed the habit of starting off with questions, but I think that only reflects – or perhaps even highlights – the areas about which I need to know more, or that I have not yet formed a view or understanding that is satisfactory or complete.

Open Access.  This is increasingly a phrase that is associated with ‘digital libraries’, as much as it is with ‘Google ebooks’ and ‘scholarly communication’.  One understanding is that the results of all publicly funded – i.e. tax-funded – research should be made available freely to all.  This is considered to be a more equitable model than relying on for-profit scholarly journals to publish such materials, where neither the author, nor the reviewers, nor the author’s employer, nor the research funder receive any portion of the monies made by selling subscriptions to such journals.  Is this truly fair?  Doesn’t this mean that wealthier research institutions or nations are supporting those less privileged?  Possibly.  But what’s wrong with that?  Let us never forget that ideas and information cannot change hands like physical goods: they are more like phenomena in which sharing or exchanging enriches both giver and receiver.  Ideas multiply as they spread, not only proliferating but stimulating new conversations and insights.

There are more serious problems, however.  Now that we are aware that much research is culturally mediated, this would suggest that what is chosen for study, and how entities and phenomena are studied and reported, and how these results are disseminated, may all be governed by some or other hegemonic cultural code.  We would be foolish to think that ‘scientific research’ is, or can ever be, free of such biases.  Thus it would follow that the cultural expressions of scientific knowledge which are created and produced by specific cultural communities would differ, and those which are most prolific would dominate.  Ironically, as has been well documented, these communities would most commonly be found in Minority World (‘developed’) countries, which publish predominantly in English.  The knowledges of the Majority World remain, to all intents and purposes, more or less invisible, particularly in the formal research arenas.  In order to succeed, scholars from the Majority World follow Minority World traditions and mores in order to receive appropriate recognition and respect.

Another problem has come to light with the possibility that ‘Open Access’ may be a snare and a delusion.  There have now been several court cases regarding copyright issues and Google’s proposed digitisation of the library collections of many major academic libraries.  As this constitutes new legal territory which changes as the technologies change, I daresay we have not seen the end of this saga.  But there are three problems that must be resolved in such a case.  Firstly, will a company or companies (any company, not specifically Google or its relations and descendants) ‘own’ access to all such intellectual properties (even when they are out of copyright) simply through the access mechanisms – the digitisation protocols employed when digitising these works?  Secondly, even where access does not depend on Google’s goodwill (or payment to Google), much existing access to Google Books is only possible if you are a member of the holding library’s community.  So, for example, if you are not a student or staff member of, say, Yale, you cannot digitally read in full all of the works held by the Yale University Library which Google may have already digitised.  Lastly, what will happen to such digitised collections over time?  Will Google continue to update and migrate the data as technologies change?  What if Google, as a company, ceases to exist?  I must say, at this stage it does appear rather unlikely – Google is apparently now entering the travel industry as well – but we know that empires come and empires go, and Google will probably not last nearly as long as the Roman Empire.

Another point that must be made is this: ‘Open Access’ is, to all intents and purposes, a term that can only be used in the digital environment, partly because it is so extraordinarily cheap and easy to transmit and store digital data.  In other words, if you do not have a computer, an internet connection, and a robust download allowance, you remain even more on the back foot.

Many of the decisions regarding Open Access seem to be being taken by people other than librarians (in particular), who have long wrestled with precisely the problems that Open Access once again raises.  Publishers, scholars, tertiary educational establishments, charities, technologists – all of these and more are interested in the phenomenon, but I would like to know to what extent libraries have been consulted (rather than the comments that we make to each other).  Robert Darnton recently suggested a ‘Digital Public Library‘ for the United States of America, and the discussion list on this topic has made it abundantly clear that all of these concepts are unclear and up for grabs: what exactly do we mean when we say ‘digital’ or ‘public’ or ‘library’ – or ‘document’ or ‘access’ or, indeed, anything else that we thought we had known?

See also: Digital Koans:

Implementing time travel for the Web

Dipping a toe in digital librarianship

Everybody’s libraries

Study queries open access benefits

Internets = Parody motivator.

Image via Wikipedia

Communities and collaboration – thriving as a 21st century information professional