Information overload, then and now

April 5, 2011

The amount of digital data available on the web reaches mind-boggling proportions – now more than a zettabyte, and presumably accumulating at an ever-increasing rate (one estimate put growth at 30% per year from 1999 to 2002).

But such figures – often presented as evidence of unprecedented and stress-inducing overload – don’t mean much. After all, it takes only one or two pages of Google hits to overwhelm the average reader. Does it really matter whether there are hundreds or thousands more pages after those?

Information overload is an ancient complaint
More important, information overload was experienced long before the appearance of today’s digital gadgets. Complaints about “too many books” echo across the centuries, from when books were papyrus rolls, parchment manuscripts or hand-printed. The complaint is also common in other cultural traditions, like the Chinese, built on textual accumulation around a canon of classics.

Writing was very likely the first culprit, making possible the accumulation of texts beyond what a single mind could master – even a mind trained to memorise Homer or biblical texts. Writing on durable surfaces (like parchment or paper), with a high level of redundancy (when multiple copies were produced, whether manuscript or printed), also made it possible to recover texts after they had fallen into oblivion, so that being in continuous active use was no longer essential to a text’s transmission, as is the case in an oral culture.

Reactions to overload have often been emotional, whether hostile or enthusiastic. Early negative responses include Ecclesiastes 12:12 (“Of making books there is no end”, probably from the fourth or third century BC) and Seneca’s “distringit librorum multitudo” (“the abundance of books is distraction”, first century AD). But we also find enthusiasm for accumulation – of papyri at the Library of Alexandria (founded in the early third century BC) or of the 20 000 ‘facts’ that Pliny the Elder accumulated in Historia naturalis (completed in AD 77). Though we no longer care especially about ancient precedent, we hear the same doom and praise today.

Overload has also triggered pragmatic responses, as generations have done their best to locate and use the information they seek, under inevitable constraints of time, energy and other resources. Typically we select from collective storage facilities, like libraries and the internet, and not only books and web pages but also specific parts of them (like arguments, quotations, or facts). If we wish to revisit results, we need to store them so that they are retrievable. Electronic media have prompted attempts (as in Microsoft’s MyLifeBits) to store the entirety of an individual’s experiences, but among scholars a more conventional method is to take notes and store selections or summaries.

Search and retrieval tools also have a long history
Tools for searching for and retrieving that information have a long history, too, although it is obscured by the fact that working texts are often not considered worth preserving. Notes could be taken on temporary surfaces like reusable wax tablets – even when they were written on more permanent materials (like the 160 papyrus rolls bequeathed by Pliny the Elder to his nephew), they were typically not copied for other people. Although annotations survive in the margins of medieval manuscripts, significant collections of working papers and free-standing reading notes come down to us only from the start of the Renaissance, when paper became widely available. Humanists taught the art of note-taking under topical headings called ‘commonplaces’, generating reams of excerpts selected for rhetorical, historical or moral interest.

In doing so, they looked back to ancient calls for note-taking, but they were also deeply indebted to medieval practices of text management. The oldest was the florilegium: collected bits (or ‘flowers’) from authoritative texts, sorted by topic (often the same headings that the humanists picked up; for example, vices and virtues).

Starting in the 13th century, the scholastics devised powerful new tools, notably the alphabetical index (including biblical concordances and subject indexes) and the structuring and layout of a text into numbered sections and subsections, so that it could be consulted rather than read through.

Printing gave tools to a broader, reading public and magnified perceptions of overload
Printing, which spread rapidly after its invention (circa 1450), triggered only a few genuine innovations in the presentation of texts: title page, pagination and the use of white space instead of colour to highlight section breaks. Above all, printing created the conditions under which a broad reading public was able to use tools that had previously been limited to a small, specialised elite. Early printed books boasted of their indexes. Printed reference books – most of them in genres that had existed in the Middle Ages – became steady sellers, with frequent, and frequently enlarged, new editions. Printing also hugely magnified the perception of overload. Books were produced and accumulated in unprecedented numbers and, given their drop in cost, many more readers than before had access to more books than they could read.

Especially after the mid-16th century, contemporaries frequently complained of the over-abundance of books. Of course, complaints made in print were typically levelled at an excess of ‘bad books’ – offering a new, good book as the solution, whose goodness might stem from greater inclusiveness or greater selectiveness, depending on the kind of book.

Several solutions since the 16th century
The genres promoted as solutions to overload in the 16th century included bibliographies (some universal, others selective) and many kinds of compilations that gathered the best bits from all those books one couldn’t manage to read. The latter essentially offered readymade reading notes of the kind humanist pedagogues recommended taking oneself. The periodical and the book review were also advertised as remedies to over-abundance when they appeared in the late 17th century. And during the 18th century, contemporaries commented on the explosion of dictionaries and encyclopaedias. Those vernacular works focused on recent writing and borrowed features from Latin reference genres, even as they abandoned the Latin works’ focus on classical culture.

The new volumes, including Encyclopaedia Britannica, remained the staple of reference rooms until recently. But bear this in mind: early reference works were not only valued but also decried as shortcuts that threatened to lure students and scholars away from learning. Thus the anxieties surrounding Wikipedia and other shortcuts available today have historical antecedents.

What do we risk losing in the electronic age?
Electronic media have made overload seem universal, spreading well beyond scholarly fields already familiar with the phenomenon and into almost every activity – including shopping and entertainment – and visible to anyone doing a basic internet search. We have all acquired various coping skills, often without giving our methods much thought. My study of information management in the era of humanist note-taking and early printed reference books has left me wondering about what we risk losing in academic scholarship as we move more of our work to electronic media:
Storing. It has been a long time since scholars have been concerned – as the humanists were in the Renaissance – about being unable to recover long-forgotten texts. Historians routinely look at old manuscripts, preserved but then forgotten in a personal or institutional library or archive. Although the handwriting may require some deciphering, ink on paper remains legible for centuries. But despite the rise of standards to make digital coding durable, the inevitable obsolescence of hardware and software creates major barriers to reading electronic texts after they have fallen into disuse. True, the internet and electronic files offer great redundancy, which raises the chances that information will be preserved. (Indeed, book history suggests that redundancy is more effective than the durability of the medium in ensuring preservation – for example, ancient stone inscriptions often survive only in the printed transcriptions made of them.) But computers preserve only what has been upgraded to match their ever-changing specifications. Documents without anyone interested in using them and upgrading them to new platforms may become inaccessible.

What are the odds of being able to read one’s own or someone else’s digital notes in 20 years, let alone a few hundred?

Sorting. Early printed indexes were notoriously difficult to use, even in their day. One needed to search under different synonyms for material on a topic, and users complained that they could never find what they were looking for. By the late 19th century, the professionalisation of library cataloguing and indexing had brought some relief in the form of a controlled vocabulary – a set of agreed-upon subject headings. Although subject and index headings perforce change over time – as do the categories by which we remember and manage our own notes – they remain a uniquely powerful tool.

Today search engines can track the keywords chosen by individual users and writers, but we still need library cataloguers and indexers who can identify relevant category terms that do not appear explicitly in the text, and who can group related topics under consistent subject headings.

Selecting and summarising. Keyword searches and data mining offer tempting alternatives to earlier methods used by readers and authors to select the ‘best bits’ to store and refer to later. Microsoft Word and websites like (for ‘too long, didn’t read it’) even offer automated summarising (which seems to operate by selecting a few complete sentences from a text, without very convincing selection criteria). As we have seen, it is precisely amid great over-abundance that selection and summarising become all the more valuable.

Shortcuts like indexes or reference books have long been available – as well as people on whom the wealthy and powerful have relied to take notes. Now we have Wikipedia, and we can rely on computers for many tasks.

Will judicium continue to define intellectual work?
But making and using shortcuts skilfully and responsibly requires judgement, too. I hope that such judicium – a central value in education since the Renaissance – will continue to define intellectual work and to spur demand for high-quality information, contextual understanding and methods for building on previous reading and experience, so that we are not reduced to searching for everything anew. It’s important to remember that information overload is not unique to our time, lest we fall into doom-saying. At the same time, we need to proceed carefully in the transition to electronic media, lest we lose crucial methods of working that rely on and foster thoughtful decision-making. Like generations before us, we need all the tools for gathering and assessing information that we can muster – some inherited from the past, others new to the present. Many of our technologies will no doubt rapidly seem obsolete – but, we can hope, not human attention and judgement, which should continue to be the central components of thoughtful information management.

Ann Blair is a Professor of History at Harvard University and author of Too Much to Know: Managing Scholarly Information Before the Modern Age, recently published by Yale University Press.


Category: Autumn 2011, e-Education

