Attributed to Aaron Swartz, though the editor of his collected writings notes that the attribution is contested.

Information is power. But like all power, there are those who want to keep it for themselves. The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up by a handful of private corporations. Want to read the papers featuring the most famous results of the sciences? You’ll need to send enormous amounts to publishers like Reed Elsevier.

There are those struggling to change this. The Open Access Movement has fought valiantly to ensure that scientists do not sign their copyrights away but instead ensure their work is published on the Internet, under terms that allow anyone to access it. But even under the best scenarios, their work will only apply to things published in the future. Everything up until now will have been lost.

That is too high a price to pay. Forcing academics to pay money to read the work of their colleagues? Scanning entire libraries but only allowing the folks at Google to read them? Providing scientific articles to those at elite universities in the First World, but not to children in the Global South? It’s outrageous and unacceptable.

“I agree,” many say, “but what can we do? The companies hold the copyrights, they make enormous amounts of money by charging for access, and it’s perfectly legal — there’s nothing we can do to stop them.” But there is something we can do, something that’s already being done: we can fight back.

Those with access to these resources — students, librarians, scientists — you have been given a privilege. You get to feed at this banquet of knowledge while the rest of the world is locked out. But you need not — indeed, morally, you cannot — keep this privilege for yourselves. You have a duty to share it with the world. And you have: trading passwords with colleagues, filling download requests for friends.

Meanwhile, those who have been locked out are not standing idly by. You have been sneaking through holes and climbing over fences, liberating the information locked up by the publishers and sharing them with your friends.

But all of this action goes on in the dark, hidden underground. It’s called stealing or piracy, as if sharing a wealth of knowledge were the moral equivalent of plundering a ship and murdering its crew. But sharing isn’t immoral — it’s a moral imperative. Only those blinded by greed would refuse to let a friend make a copy.

Large corporations, of course, are blinded by greed. The laws under which they operate require it — their shareholders would revolt at anything less. And the politicians they have bought off back them, passing laws giving them the exclusive power to decide who can make copies.

There is no justice in following unjust laws. It’s time to come into the light and, in the grand tradition of civil disobedience, declare our opposition to this private theft of public culture.

We need to take information, wherever it is stored, make our copies and share them with the world. We need to take stuff that’s out of copyright and add it to the archive. We need to buy secret databases and put them on the Web. We need to download scientific journals and upload them to file sharing networks. We need to fight for Guerilla Open Access.

With enough of us, around the world, we’ll not just send a strong message opposing the privatization of knowledge — we’ll make it a thing of the past. Will you join us?

An interesting summary of Thomas Jefferson’s views on intellectual property, written by a 17-year-old Aaron Swartz, printed in The Boy Who Could Change The World, loc 370-380:

No one seriously disputes that property is a good idea, but it’s bizarre to suggest that ideas should be property. Nature clearly wants ideas to be free! While you can keep an idea to yourself, as soon as you share it anyone can have it. And once they do, it’s difficult for them to get rid of it, even if they wanted to. Like air, ideas are incapable of being locked up and hoarded. And no matter how many people share it, the idea is not diminished. When I hear your idea, I gain knowledge without diminishing anything of yours. In the same way, if you use your candle to light mine, I get light without darkening you. Like fire, ideas can encompass the globe without lessening their density. Thus, inventions cannot be property. Sure, we can give inventors an exclusive right to profit, perhaps to encourage them to invent new useful things, but this is our choice. If we decide not to, nobody can object. Accordingly, England was the only country with such a law until the United States copied her. In other countries, monopolies may be granted occasionally by special act, but there is no general system. And this doesn’t seem to have hurt them any—those countries seem just as inventive as ours.

I’ve written in the past about my dislike for Evernote and my near-continuous search for an alternative to it. I won’t rehearse my issues with it here, but the one that really matters is that I simply can’t stand the interface. I find it hard to pin down precisely what my problem with it is, but I feel immensely antipathetic towards using it. It just doesn’t cohere with how I think or with the kinds of information I want to use it to record. The notebooks soon become arbitrary structures, filled with information organised in a suboptimal way, and I’ve never known how to rectify that state of affairs. To be fair, this was every bit as true when I used to carry organisational clutter around in Moleskine notebooks instead: ‘notebooks’ provide too much organisation at the macro level and too little organisation at the micro level. Perhaps for these reasons, I’ve long since come to the conclusion that there’s something about Evernote and something about myself which just isn’t going to be compatible, no matter how many times I hear people I respect sing its praises. I’ve tried Centrallo, whose structure does work for me, but I realised that despite its more suitable ontology (I like lists much more than notebooks!), and despite its excellent interface and syncing, it was set up to store much more information than I was ever likely to need it for.

I recently started using the Day One journal instead. It’s a carefully designed app, available for iOS and OS X, described as a “simple and elegant journal”. It’s remarkably feature-rich in spite of this simplicity, including reminders, photos, location, automatic backup, iCloud syncing, publishing to social media and PDF export, amongst much else. I suspect there’s a risk the developers compromise its ‘elegance’ if they continue to add functionality, but thus far at least they have not. The thing that made me fall in love with this app was the experience of writing – in a manner only matched by the Medium blogging platform, it makes writing a pleasure, with a lovely distraction-free white screen waiting to be filled, complete avoidance of the lag that often characterises typing in iOS apps, Markdown support and oddly satisfying Tweetbot-like tapping noises as you type. The entries are filed chronologically, which I realised I associate with blogging these days much more readily than with an actual journal, though they can be favourited and tagged, as well as searched in a variety of ways.

The material I wanted to use Evernote for is probably much more specific than what most people use it for. I want a place to store my plans – I’ve been using OmniFocus for a few years now and I’m so entrenched in this way of reflexively organising my life that I would probably cease to function without it. However OmniFocus is task-orientated – the whole system is designed around the enactment of short, medium and long-term projects as sequences of discrete actions which should only be visible to you at the correct moment. It’s a system designed to overcome procrastination and inertia by offering you a continuing stream of relevant actions you can take towards overarching projects of whatever sort, avoiding overwhelm by shielding the many actions which aren’t relevant (at this particular moment, in this particular context) from your awareness. It’s hard to use – it literally took me a year to get to grips with the software – but when it does work it’s difficult to describe how powerful it is. Hence, I suspect, the creepy tone which often enters discussions about it. The problem with OmniFocus is that it’s not set up to store reference material (in the GTD sense) adequately* – the information which both informs your planning and is required by it, stuff you need to consult in the process of doing things but also to work from in deciding what to do. This is what I’m now using the Day One journal for, and it really seems to work – I write ad hoc notes in the diary as things occur to me, stuff that I used to put in my OmniFocus inbox but that isn’t actually action-orientated and so shouldn’t be in there, which I then review in the same way as I do with OmniFocus. Those thoughts, ideas and realisations that are important get tagged and incorporated into a structure which keeps track of the broader perspectives (20,000 to 50,000 feet in GTD terminology), perspectives which I’ve found tend to be collapsed into a temporal horizon of a few months at most in OmniFocus.

I really like this way of working and it’s the first time I’ve found an app like this which I suspect I’ll stick with. However I think my experience illustrates a broader point about information capture and organisational apps like Evernote: what do you actually want to use them for? What is it you’re trying to capture? How are you trying to organise it? It’s only when we address these questions that we can begin to get a handle on which apps will actually help us do things more effectively, in a way that avoids distraction and procrastination. So in that spirit, here are the various apps I use and the purposes I use them for:

  1. I use my Gmail account as a catch-all place to store URLs that I might later want to retrieve. I can access it from anywhere I have an internet connection, and everything goes into two folders, ‘blogging/twitter’ and ‘reading’ (for academic papers), which then become inboxes of sorts for blogging (particularly for Sociological Imagination) and for research (the papers are unstructured, but the reason I’ve saved them is that they’re relevant to a project).
  2. I use Pocket to capture online material (up to and including LRB-length long reads) which I want to read but don’t care about saving the citation details for. If I don’t think I’ll pay attention to something when I come across it, or if it would distract me to do so, then I save it to Pocket. This leaves it accessible on my iPhone and/or iPad at a time which is more conducive to reading it attentively.
  3. I use Bundlr to organise online stuff for other people. If I think it’s useful to others to collect a package of links and share on Twitter then this is an easy and effective way to do it.
  4. I use Papership to collect PDFs, bibliographic details and notes I’ve made on journal articles and books etc.
  5. I use my blog as a commonplace book – extracts, videos or images that I’ve found interesting in some way and want to ensure I can retrieve at a later date (unlike things in Pocket, which I just want to make sure I read properly).
  6. I use my blog as a research journal – collecting short thoughts, mini essays, notes on reading, responses to papers etc., which I group thematically and come back to as a resource when I’m doing ‘serious’ academic writing.
  7. I use Day One to keep track of what I’m doing and why in a general overarching sense.

I suspect Evernote works very well for 1–6. I’m not convinced it works well for 7. Part of the reason I’m writing this post is to disentangle my own use of apps from the broader practical needs they serve, because I’m currently writing a chapter of my social media book on curation tools and managing information. So if anyone has got this far, I’d love to hear whether activities 1 to 7 map onto your own use of apps and your experience of reflexively approaching your work.

*You can add attachments to projects, but this atomises overarching plans. There’s no space for ‘big picture’ stuff in OmniFocus.

I realised when looking back over old notes that someone asked me to write this for them and then never published it. So here’s a quick post about curation I wrote a couple of years ago: 

For all that digital technology offers the academy, it also presents new problems. The instant availability of information from all over the world poses the inevitable challenge of how to collect, sort, evaluate and share that information. These are tasks which those working in universities, across the full range of roles, have always performed. However the sheer abundance which characterizes our modern knowledge environment too often results in information overload for those whose professional and personal interests give them no choice but to engage with this torrent. It is for this reason that curation tools, often seen as ‘the next big thing’ in social media, offer the potential for such enormously gainful use by university staff.

Curation is the broader concept behind Pinterest, by far the best known of these tools. The service operates as a virtual ‘pinboard’, allowing the user to explore the internet, collect images they find through a convenient browser button (in a similar way to creating new browser bookmarks) and make these titled pinboards available online. However Pinterest is just one tool amongst many and, with its central focus on images, it is in many ways less versatile than some of the others. Here are three of my favorites:

Storify allows users to search multiple social networks and knit together items they find into sequential stories. I’ve personally found this useful for preserving Twitter debates that I’ve found particularly intellectually stimulating. However this represents only part of what the tool is capable of when you combine a sufficiently diverse range of elements, whereas my own uses have merely been reconstructions of conversations, on a single medium, that I was actively involved in. The most impressive uses I have seen have tended to revolve around covering events, either live or retrospectively.

Bundlr is my personal favorite and I can’t recommend it enough. As with the others, you use a browser button to ‘bundle’ content. When you’re on a web page which you want to curate, press the button and either choose an existing bundle or make a new one. What’s most impressive about Bundlr is how it combines the ability to handle many types of content (e.g. YouTube videos, images, tweets, presentations, web pages) with effortlessly making the finished product look aesthetically appealing. It’s also incredibly easy to pick up and use. Within a few hours of signing up to Bundlr I had multiple bundles which had collectively received hundreds of hits.

Scoop.it allows you to publish ‘magazines’ based on content you ‘scoop’ through a browser bookmark. Whereas some of the other tools focus more on collating items, Scoop.it offers more room for curation in the strict sense of the term: it gives you more opportunity than the others to control which aspects of your ‘scooped’ items are highlighted and what commentary you offer about them. It also has an interesting, though in my experience not quite perfected, feature which automatically suggests things to ‘scoop’.

If the concept of curation interests you then I would advise experimenting with a few tools to see which one is right for you. While there are undoubtedly objective differences between them, there is also a large element of subjective fit: each rests on some underlying embodied metaphor (e.g. pinning items to your pinboard, putting items in a bundle, scooping up items for your scrapbook newspaper) and what works for one person might not work for another. Furthermore, it is worth bearing in mind that there is no ‘right’ or ‘wrong’ way to use these tools. Here are some of the things I have used them for: making resource packs for social media training, inventorying journal articles I use in my research, producing a portfolio of projects I have been involved in, pulling materials together to help prepare for projects I have yet to start, and collecting materials about my favourite authors. But there are many other ways in which they can be used. Curation tools will enhance any task that involves collecting, sorting, evaluating and sharing digital material.

I was intrigued by this paper analysing the evolution of human culture in terms of ‘tipping points’ in our capacity to store and transmit information. It has a different focus but it’s nonetheless entirely consistent with Margaret Archer’s work on culture and could be used to historicise her account. Her approach to culture focuses on the interface between what she terms the cultural system (the logical relations between propositions) and socio-cultural interaction (the causal relations between people). Her central claim is that relations between ideas condition interactions between people and that this socio-cultural interaction leads, in turn, to the elaboration of the cultural system. This account rests on Popper’s notion of World 3:

The objectivity or “autonomy” of such World 3 objects as theories and arguments is especially evident in Popper’s view from the fact that they have logical relations – and in particular, unforeseen implications and unnoticed inconsistencies – that may not be noticed until well after we first consider them, but which were evidently there all the time waiting to be discovered. Naturally, he takes mathematics to illustrate the point vividly, but it is in his view no less evident from empirical scientific theories. The clearest mark of the reality of all three worlds is in Popper’s judgment the fact that World 3 has a causal influence on World 1, and does so only via World 2. For example, the scientific theories which entailed the possibility of nuclear weapons have had an obvious impact on the material world – they have resulted in various nuclear tests, in the destruction of Hiroshima and Nagasaki, and so forth – but only because scientists carried out the mental activity of working out the implications of the theories and applying them.

Popper’s World 3 is often compared to Plato’s realm of the Forms, and Popper himself acknowledges that there are similarities. But he also emphasizes the significant differences between his view and Plato’s, not the least of which is that he takes World 3, despite its objectivity or autonomy, to be something “man-made,” its objects in the strict sense being what the human mind “abstracts” from their World 1 embodiment. Though Popper does not take note of the fact or develop the theme in much detail, this is clearly reminiscent of an Aristotelian or “moderate realist” approach to the traditional problem of universals, as distinct from the “extreme realism” of Platonism. (See here and here for a useful short account of the traditional Aristotelian-Thomistic-Scholastic approach to the issue and its significance.)

On my reading this is fundamentally about information storage. So, for instance, a book retains the capacity to be understood regardless of whether anyone happens to read it. The information is latently present in the book such that, while it requires a reader to be understood, this reader is making use of a power of the book itself. Human beings objectify thought in our cultural products. What I enjoyed so much about this paper was how much more broadly it made me consider the historical context within which these practices of objectifying thought evolved. It also suggested how Archer’s account could be elaborated by considering changes in the structure of information storage, diffusion and retrieval, changes which surely constrain and enable possible patterns of both socio-cultural interaction and cultural system elaboration.

About 45,000 years ago much of western Europe witnessed a proliferation of cave art, personal adornment, and rituals—what archaeologists refer to as the “Upper Paleolithic Revolution” (Bar-Yosef, 2002; Mellars, 2005). The roots of this proliferation appear to lie in migrations of anatomically and cognitively modern humans out of sub-Saharan Africa (Ambrose, 1998; Lahr and Foley, 1998; Ray et al., 2005), where there is intermittent evidence of symbols and personal adornment (e.g., shell beads and engraved chunks of ochre and ostrich egg shells) that in some cases dates several tens of thousands of years earlier (McBrearty and Brooks, 2000; Henshilwood et al., 2002, 2011; Mackay and Welz, 2008). Regardless, there is no denying the explosion in creative expression that occurred in western Europe around 45,000 B.C., coupled with the appearance of such features as long-distance exchange, grinding implements (Wright, 1992), and storage facilities, especially in northern latitudes where underground freezing kept food edible (Soffer, 1989). What caused the explosion? The answer may not be what was in humans’ heads but in how the heads, and how many heads, were interconnected. (Bentley and O’Brien 2012: 23)

A larger local population meant that more people were around to invent new ideas, build on earlier ideas (the cumulative aspect of culture), and, crucially, pass on those ideas before they were lost. This population aggregation provided the critical “ratchet” (Tomasello, 1999) to push culture to a tipping point. (Bentley and O’Brien 2012: 24)

Writing, which began in Mesopotamia roughly during the fourth millennium B.C. (Fischer, 2004), was used mainly as a form of bookkeeping. Later, more-expressive writing then became a specialist endeavor, meaning it was limited to a few, from the priests of Mesopotamia to the scribes entrusted with the Code of Hammurabi (Van de Mieroop, 2004). As writing became a means of creative expression—from the Greek tragedies, to the tales of Chaucer and the plays and sonnets of Shakespeare, and finally to the novels of the eighteenth and nineteenth centuries—writing became a means for the few to communicate with many, either through performance or later through readings by literary societies (Bradway-Hesse, 1998). In doing so, this “library” gradually became the hub of culture itself rather than just a means of expressing it. The intricately connected web of broadcast media that we see today is just the extreme of this trend.

Because of writing, the vast amount of specialized knowledge in the world is perhaps a billion times the technological variation contained in a prehistoric hunter–gatherer community (Beinhocker, 2006). This specialization has intensified incredibly, such that only a tiny fraction of people in the developed world knows how to produce food, and modern urbanites can often specialize in quite arcane knowledge that would have virtually no value for survival on one’s own or in the small groups of our ancient ancestors. Culture has, to this point, provided the means for even semi-isolated individuals to maintain a subsistence while simultaneously engaging in other pursuits.

Today, many of those pursuits revolve around the Internet, which in a way represents a return to the past, before mass media, by making local craft traditions and self-expression possible again through uploading personal videos, blogs, pictures, and social-network homepages. Cultural change has accelerated not because of the larger network but because Web 2.0 motivates many more people to create new ideas. The interconnectedness of online endeavors—through blogs and social-network sites as well as search tools such as Google and Bing—has not homogenized culture, as some feared. Rather, it has fractionated it, as like minds find each other to create cultural “niches” that branch off from one another. Interconnectedness, paradoxically, allows groups to differentiate by copying each other, which homogenizes the group but further distinguishes it from all other groups. Instead of family and political ties organizing people in geographic space, it is now ideas and common interests that organize people online. This change in tempo necessarily brings about a change in mode of evolution because modern technology is no longer a set of knowledge that people can teach to the younger generation in the way craftspeople apprenticed their children or teachers taught pupils for millennia. (Bentley and O’Brien 2012: 26-27)

Humans have now created so much knowledge and skill outside their own bodies, in computers and information networks, that that knowledge may soon feed back on its own into this process. Anderson and Abrahams (2009) argue that progress in science requires true outliers—those creative geniuses that fit Thomas Edison’s inspirational one percent. Part of the debate is the mystery over the origin and timing of genius. Gladwell (2008) suggested that some geniuses, such as Cézanne, needed thousands of hours of practice before achieving the height of their skill, whereas others, such as Picasso and Einstein, made (arguably) their most profound achievements at a young age, with very little practice. Similarly, Galenson (2005) divided artists into two groups based on an assessment of when they made their greatest contributions—conceptualists, who are innovative at an early age, and experimentalists, whose innovations come much later, after considerable experimentation and refinement. Regardless, every so often an innovation comes along that is rare enough to begin a new paradigm in the true Kuhnian sense of the term (Kuhn, 1962). As soon as a good new idea is demonstrated, many clamber to copy it and modify it slightly. This alteration may underlie a continual budding-off process as new, more-specialized niches are created and then developed (O’Brien and Shennan, 2010). (Bentley and O’Brien 2012: 31)

First, the pace of cultural evolution has accelerated over time, much of it tied to the ways in which humans have been able to harness and manage information. Second, although it is easy to be persuaded that there was some inevitability or direction to this harnessing and management, it is true only in a retrodictive sense. In other words, we see nothing in the archaeological record that would have allowed us at any point in the past to predict what the future might entail in terms of information management. Third, of the three tipping points we discuss, perhaps the most significant was the ability to store, and tap, information outside one’s own brain, whether that information resides in the heads of others or on a cave wall. Fourth, the emergence of this ability was tied directly to local population size, meaning that more people increased the odds both of new ideas ratcheting up old ones and of passing those ideas on before they were lost. Fifth, tempo and mode are both important components of the myriad ways in which humans have stored and manipulated information, but it is clear that changes in tempo can bring about changes in mode of evolution. (Bentley and O’Brien 2012: 35)