From To Save Everything, Click Here by Evgeny Morozov. For a talk about dystopias I’m doing next month, I’m trying to consider the implications of this technology at the level of social ontology. What does it mean to see sinister possibilities inherent in ‘innovations’ like this? Is there anything we can say in the abstract about how likely these possibilities are to be realised? It strikes me that this is necessary, at least if we are to avoid an empiricist attitude of ‘wait and see’ on the one hand or the systematic suppression of technological change on the other.
Or consider a prototype teapot built by British designer-cum-activist Chris Adams. The teapot comes with a small orb that can either glow green (making tea is okay) or red (perhaps you should wait). What determines the coloring? Well, the orb, with the help of some easily available open-source hardware and software, is connected to a site called Can I Turn It On? (http://www.caniturniton.com), which, every minute or so, queries Britain's national grid for aggregate power-usage statistics. If the frequency figure returned by the site is higher than the baseline of 50 hertz, the orb glows green; if lower, red. The goal here is to provide additional information for responsible teapot use. But it's easy to imagine how such logic can be extended much, much further, BinCam-style. Why not, for example, reward people with virtual, Facebook-compatible points for not using the teapot at times of high electricity usage? Or why not punish those who disregard the teapot's warnings about high usage by publicizing their irresponsibility among their Facebook friends? Social engineers have never had so many options at their disposal.
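The orb's decision rule as Morozov describes it is almost trivially simple, which is part of what makes the extension scenarios so plausible. A minimal sketch in Python, assuming a hypothetical `fetch_grid_frequency` stand-in (the actual interface of Can I Turn It On? isn't specified in the passage):

```python
# Sketch of the teapot orb's logic as described in the quoted passage.
# fetch_grid_frequency is a hypothetical placeholder: in the real device
# the orb queries http://www.caniturniton.com every minute or so.

BASELINE_HZ = 50.0  # Britain's national grid baseline frequency

def orb_colour(frequency_hz: float) -> str:
    """Green: making tea is okay. Red: perhaps you should wait.

    The passage only specifies 'higher than 50 Hz' as green and
    'lower' as red; exactly 50 Hz is treated as red here.
    """
    return "green" if frequency_hz > BASELINE_HZ else "red"

def fetch_grid_frequency() -> float:
    # Hypothetical reading, in hertz; a real build would poll the grid.
    return 50.02

if __name__ == "__main__":
    print(orb_colour(fetch_grid_frequency()))
```

The entire 'social engineering' apparatus Morozov worries about sits on top of a one-line comparison like this; everything else (points, shaming, Facebook integration) is policy layered over the same sensor reading.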