An important idea offered by Mike Caulfield. The embrace of frictionless sharing and the relentless pursuit of engagement have created the problems which are now being naturalised by the emerging ‘did Facebook lead to Trump’ discourse:

We have prayed at the altar of virality a long time, and I’m not sure it’s working out for us as a society. If reliance on virality is creating the incentives to create a culture of disinformation, then consider dialing down virality.

We know how to do this. Slow people down. Incentivize them to read. Increase friction, instead of relentlessly removing it.

Facebook is a viral sharing platform, and has spent hundreds of millions getting you to share virally. And here we are.

What if Facebook saw itself as a deep reading platform? What if it spent hundreds of millions of dollars getting you to read articles carefully rather than sharing them thoughtlessly?

What if Facebook saw itself as a deep research platform? What if it spent its hundreds of millions of dollars of R & D building tools to help you research what you read?

A fascinating article on the LSE’s Media Policy Blog about the global ambitions of contemporary technology giants and the corporate structures which facilitate them:

The folks who run these companies understand this. For if there is one thing that characterises the leaders of Google and Facebook it is their determination to take the long, strategic view. This is partly a matter of temperament, but it is powerfully boosted by the way their companies are structured: the founders hold the ‘golden shares’ which ensures their continued control, regardless of the opinions of Wall Street analysts or ordinary shareholders. So if you own Google or Facebook stock and you don’t like what Larry Page or Mark Zuckerberg are up to, then your only option is to dispose of your shares.

Being strategic thinkers, these corporate bosses are positioning their organisations to make the leap from the relatively small ICT industry into the much bigger worlds of healthcare, energy and transportation. That’s why Google, for example, has significant investments in each of these sectors. Underpinning these commitments is an understanding that their unique mastery of cloud computing, big data analytics, sensor technology, machine learning and artificial intelligence will enable them to disrupt established industries and ways of working in these sectors and thereby greatly widen their industrial bases. So in that sense mastery of the ‘digital’ is just a means to much bigger ends. This is where the puck is headed.

A slogan more frequently encountered on pro-police demos has been repeatedly daubed inside the Facebook headquarters, creating embarrassment for a corporation whose staff are overwhelmingly white and male:

Facebook CEO Mark Zuckerberg has reprimanded employees following several incidents in which the slogan “black lives matter” was crossed out and replaced with “all lives matter” on the walls of the company’s Menlo Park headquarters.

“‘Black lives matter’ doesn’t mean other lives don’t – it’s simply asking that the black community also achieves the justice they deserve,” Zuckerberg wrote in an internal Facebook post obtained by Gizmodo.

Will such attitudes inevitably thrive under the conditions of meritocratic elitism which characterise much of the technology world?

This interesting article (HT Nick Couldry) explores the challenge faced by Facebook in imposing standards on a user base distributed around the globe:

As Facebook has tentacled out from Palo Alto, Calif., gaining control of an ever-larger slice of the global commons, the network has found itself in a tenuous and culturally awkward position: how to determine a single standard of what is and is not acceptable — and apply it uniformly, from Maui to Morocco.

For Facebook and other platforms like it, incidents such as the bullfighting kerfuffle betray a larger, existential difficulty: How can you possibly impose a single moral framework on a vast and varying patchwork of global communities?

If you ask Facebook this question, the social-media behemoth will deny doing any such thing. Facebook says its community standards are inert, universal, agnostic to place and time. The site doesn’t advance any worldview, it claims, besides the non-controversial opinion that people should “connect” online.

Facebook’s ‘global community standards’ are the mechanism through which the digital activity of over one and a half billion users is policed. But these regulations have an uncertain grounding in the normative judgements of the user base: the aggregate of users is far too heterogeneous (to say the least) to sustain any layer of moral intuition which could reliably buttress the legitimacy of the global community standards. This problem is amplified by two factors:

Facebook has modified its standards several times in response to pressure from advocacy groups — although the site has deliberately obscured those edits, and the process by which Facebook determines its guidelines remains stubbornly obtuse. On top of that, at least some of the low-level contract workers who enforce Facebook’s rules are embedded in the region — or at least the time zone — whose content they moderate. The social network staffs its moderation team in 24 languages, 24 hours a day.

Having moderators embedded in a region might help on occasion. But this assumes the normativity of the region is any less fragmented than that of the global user base and, as the Centre for Social Ontology’s recent book explores, we cannot assume this to be true. What’s more likely is that this vast army of poorly paid moderators will exercise little to no autonomy over their tasks, with the Facebook standards nonetheless being inflected through their variable judgement: they won’t try to deviate from the global standards, but they inevitably will, in unpredictable ways, as any individual evaluator necessarily does when applying a rule to particular cases.

So companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them—a vast, invisible pool of human labor. Hemanshu Nigam, the former chief security officer of MySpace who now runs online safety consultancy SSP Blue, estimates that the number of content moderators scrubbing the world’s social media sites, mobile apps, and cloud storage services runs to “well over 100,000”—that is, about twice the total head count of Google and nearly 14 times that of Facebook.

This work is increasingly done in the Philippines. A former US colony, the Philippines has maintained close cultural ties to the United States, which content moderation companies say helps Filipinos determine what Americans find offensive. And moderators in the Philippines can be hired for a fraction of American wages. Ryan Cardeno, a former contractor for Microsoft in the Philippines, told me that he made $500 per month by the end of his three-and-a-half-year tenure with outsourcing firm Sykes. Last year, Cardeno was offered $312 per month by another firm to moderate content for Facebook, paltry even by industry standards.

Is there any accountability here? It’s certainly possible to influence the global community standards but, as the article notes, this influence is profoundly opaque. Meanwhile, there are good reasons to think that challenge and adjudication simply couldn’t work at this scale. How would it operate? Given that content moderators might comprise as much as half the workforce of social media sites, it’s worth thinking about how labour-intensive a potential appeals process would be. Why go to that trouble when you can err on the side of simply taking something down on the grounds that someone thinks it’s offensive? Without finding some way to solve the normativity problem described earlier, how to underwrite legitimacy within an aggregate characterised by low social integration, there’s also no obvious ethical counterbalance to this organisational tendency.

An absolutely fascinating account of developments in the newsfeed algorithm at Facebook since its introduction:

Adam Mosseri, Facebook’s 32-year-old director of product for news feed, is Alison’s less technical counterpart—a “fuzzie” rather than a “techie,” in Silicon Valley parlance. He traffics in problems and generalities, where Alison deals in solutions and specifics. He’s the news feed’s resident philosopher.

The push to humanize the news feed’s inputs and outputs began under Mosseri’s predecessor, Will Cathcart. (I wrote about several of those innovations here.) Cathcart started by gathering more subtle forms of behavioral data: not just whether someone clicked, but how long he spent reading a story once he clicked on it; not just whether he liked it, but whether he liked it before or after reading. For instance: Liking a post before you’ve read it, Facebook learned, corresponds much more weakly to your actual sentiment than liking it afterward.

After taking the reins in late 2013, Mosseri’s big initiative was to set up what Facebook calls its “feed quality panel.” It began in summer 2014 as a group of several hundred people in Knoxville whom the company paid to come in to an office every day and provide continual, detailed feedback on what they saw in their news feeds. (Their location was, Facebook says, a “historical accident” that grew out of a pilot project in which the company partnered with an unnamed third-party subcontractor.) Mosseri and his team didn’t just study their behavior. They also asked them questions to try to get at why they liked or didn’t like a given post, how much they liked it, and what they would have preferred to see instead. “They actually write a little paragraph about every story in their news feed,” notes Greg Marra, product manager for the news feed ranking team. (This is the group that’s becoming Facebook’s equivalent of Nielsen families.)

“The question was, ‘What might we be missing?’ ” Mosseri says. “‘Do we have any blind spots?’” For instance, he adds, “We know there are some things you see in your feed that you loved and you were excited about, but you didn’t actually interact with.” Without a way to measure that, the algorithm would devalue such posts in favor of others that lend themselves more naturally to likes and clicks. But what signal could Facebook use to capture that information?

From The Boy Kings, by Katherine Losse, pg 201. Losse was asked to write blog posts about Mark Zuckerberg’s philosophy, something which he outlined to her in general terms:

“It means that the best thing to do now, if you want to change the world, is to start a company. It’s the best model for getting things done and bringing your vision to the world.” He said this with what sounded like an interesting dismissal of the other models of changing the world. I could imagine, like he may have, that countries were archaic, small, confined to one area or charter. On the other hand, companies—in the age of globalization—can be everywhere, total, unregulated by any particular government constitution or an electorate. Companies can go where no single country has gone before. “I think we are moving to a world in which we all become cells in a single organism, where we can communicate automatically and can all work together seamlessly,” he said, by way of explaining the end goal of Facebook’s “big theory.”

My sense is that this view of ‘companies over countries’ is a relatively common one amongst the digital elite. It’s also key to understanding philanthrocapitalism: the political complexity of the world melts away in the face of a single-minded concern to discover the most efficient way of bringing your vision to the world.

But what are the political implications of this? As Losse goes on to write, “It sounded like he was arguing for a kind of nouveau totalitarianism, in which the world would become a technical, privately owned network run by young “technical” people who believe wholeheartedly in technology’s and their own inherent goodness, and in which every technical advancement is heralded as a step forward for humanity.” This is roughly speaking what I’m trying to explore with the techno-fascism idea: how will what is currently a vague musing on the part of digital elites develop as part of a broader set of social transformations currently underway? How will their own vested interests, against a background of growing social upheaval and threats to their accumulated wealth, shape the development of what is at present little more than a mystical faith in solutionism and the singularity?

The extent to which this idea is discursive can be overstated. As Losse explains, she told Zuckerberg that she struggled to articulate his philosophy for him in a coherent way:

The question was what did any of these values actually mean, and why should we want them? This was something only Mark could explain. I told him that I was having trouble coming up with satisfactory essays on the topics he’d assigned, and asked him to schedule time to explain his ideas in more detail, but he was too busy or wasn’t inclined to explain further—it was hard to tell. I came to the conclusion that perhaps he thought I could invent these arguments of whole cloth, or that we already were cells in a single organism and I should be attuned enough to intuit what he meant, but I couldn’t, and so the essays were never written or posted.

From The Boy Kings, by Katherine Losse, pg 200:

Most employees I talked with seemed not to be particularly bothered by the company’s decision to forcibly adjust people’s expectations of privacy, preferring instead to focus on the light and almost childlike-sounding goals of sharing and connecting people. “She just doesn’t get it,” a user support manager told me about one employee who was soon to be terminated. “She doesn’t believe in the mission. She thinks that Facebook is for people without any real problems and isn’t actually changing the world. Can you believe that? This afternoon I’m going to have to let her go.”

I wondered who the heretic employee was. I guessed that she must have been like all of the user support team members: well-educated in the humanities at an Ivy League school, and probably unaware when hired that she had walked into a new kind of technical cult. At any rate, her awareness of issues beyond Facebook was a problem. The company wasn’t paying anyone to be aware of the world beyond the screen. The only questions you were supposed to ask or ideas you were supposed to have at work, as a good citizen of the Facebook nation, were about new ways to technologize daily life, new ways to route our lives through the web.

Do other prominent social media companies have a comparable sense of mission? Do they demand and/or inspire similar loyalty amongst their staff?

One final snippet from The Boy Kings, by Katherine Losse, that I can’t resist posting. It seems that Mark Zuckerberg has a secret back room in his private Facebook office, allowing him to retreat into opacity while sustaining the glass-fronted and open-plan layout of the corporate offices:

Mark’s office sat adjacent to our pod, with its secret back room (for especially important meetings, because the front room of his office had a glass window onto the hallway that made meetings transparent) hidden behind a wallpapered door and a single table illuminated by a Mad Men–style modern lamp, receiving a constant stream of celebrities and tech luminaries and wealthy Russians in silk suits. (Pg 196)

This is the same Zuckerberg who bought four homes adjacent to his in order to ensure his own privacy. His own power dramatically illustrates the politics of transparency and opacity in digital capitalism. We can see this even more dramatically in the private retreats of the digital elites: if transparency gets tiring, why not just head off to your super yacht or Hawaii estate for a while? As Zuckerberg describes it, quoted on pg 198: “We are pushing the world in the direction of making it a more open and transparent place, this is where the world is going and at Facebook we need to lead in that direction.” The key terms here are pushing and lead. The pushers and the leaders are able to take a break when they’d like, without worrying about someone else perpetually trying to push and lead them.

I think this could be analysed in a similar way to how Bauman explored mobility in his work on globalisation: those at the bottom of the hierarchy are transparent because they lack the resources to escape the filter bubble, while those at the top of the hierarchy are usually transparent as a function of their own commercial success. But the former condition is forced, leaving the people in question susceptible to manipulation, while the latter is chosen and can be voluntarily withdrawn from in private life.

From The Boy Kings, by Katherine Losse, pg 194:

The floor around Sheryl’s desk was piled with the endless gifts that she received from business contacts in lofty positions at Fortune 500 companies. People sent her Louboutin heels and Frette candles the diameter of dinner plates, which she unpacked while on speakerphone with some CEO or another. Sometimes, she passed them over the desk to me offhandedly, just trying to get rid of them, but usually they just sat in piles under the desk until someone cleared them away, to be replaced by new, just as superfluous, luxury gifts. Mark’s desk was similarly surrounded by boxes and gifts, but they were more boyish: a sports jersey signed by a soccer star, some video game that hadn’t been released yet. I didn’t have any presents (other than Sheryl’s cast-offs), but I had a front-row view of the business lives of the extremely rich and powerful, whom I now knew spend much of their days managing the world’s desire to be close to them.

From The Boy Kings, by Katherine Losse, pg 191-193:

The catch for Facebook was that the more successful we became (and we were still, despite all the competition, dominant), the more likely employees were to be distracted by money and the new pastimes it enabled: fine dining, bar hopping, five-star vacations, expensive cars. In this sense, winning the game completely was a bit of a curse, because as our user numbers climbed quickly to 250 million in July 2009 and 350 million in December 2009, early employees had less incentive to work constantly, and more leeway to play games and party earlier in the night instead of waiting until the dead hours of two in the morning to socialize like we used to. New engineers were being hired all the time to take up the slack of bug fixing and code development from employees who had been there longer. The Facebook product itself made staying on task difficult: With the steady stream of pictures flowing down our pages, how could we be expected to focus on anything but planning our next photo opportunities and status updates? Looking cool, rich, and well-liked was actually our job, and that job took a lot of work.

I’d like to know if this goes hand-in-hand with a ratcheting up of corporate perks, at least for the engineers and executives, in an attempt to keep the hedonism in-house.

From The Boy Kings, by Katherine Losse, pg 146:

My career upgrade from dungeon department to quasi-technical role meant, along with a better salary and more respect from the technical echelon of the company, that I was now on engineering time. This meant that while I could come to work later, as late as lunchtime, I was expected to stay up until all hours answering emails and devoting myself even more monastically to our new enterprise. However, even as the respect and pay were higher, which was a huge relief, genuflecting to external application developers, even if I didn’t agree with what they were doing, felt a lot like the eternal reverence we nontechnical employees were all expected to exhibit for Mark and the engineering department.

From page 152:

Becoming a fully fledged member of the engineering team that winter felt, as I long dreamed of doing, like going from being slave to being conqueror. Suddenly, I could arrive at work on my own time, as long as I was working late into the night, because it was assumed that I, like all the engineers, was upholding and advancing a whole new world, even if sometimes we were just sitting around in the office eating snacks and playing games. In engineering, getting to work late was cool, even necessary. It meant, in the ideology of the lone and maverick hacker, that you weren’t beholden to authority, and that you might have been up late coding something brilliant and life-changing and disruptive (even if you were just trolling Facebook or watching porn). Being in engineering wasn’t an escape from the game so much as the ultimate playground.

From page 155 to 156:

I spent days with the professional translators while they read through pages of translations and made corrections as needed. They were working by the hour, clocking out at six o’clock, and thought it strange that I seemed perennially online the entire week, answering chats, reading Facebook, talking with them, answering questions, and responding to emails at all hours. When they left the office at the end of the day, they were done until the next morning. That, in turn, seemed strange to me. I couldn’t remember when the last time was that I wasn’t within spitting distance of my computer and smart phone. As much as I had once made fun of the Facebook boys for staring at their phones more often than they looked up, I had become one of them.

A fascinating snippet from The Boy Kings, by Katherine Losse, describing the approach of a new operations director joining Facebook in 2007. From pg 144:

The next week, Chamath asked me and my management colleagues in customer support to do an evaluation exercise in which we ranked everyone on the Customer Support Team from highest to lowest. Sitting up late that night in the office, I assigned a score to each person on the team. Some were easy to score: They were either spectacularly hard workers or rather lazy, preferring to play company-sponsored Beirut games to the alternately hard and tedious work of solving user problems, but for most it was a queasy and difficult process of comparing apples to oranges, which, in this case, might be one person’s quickness at answering emails versus another’s thoroughness and accuracy.

When the results were in, Chamath came back to deliver a speech. “Look around you,” he told us. “In a few weeks, some of the people in this room won’t be here. They will be moved to other departments, because they’ve worked hard and have made themselves valuable to the company. Other people in this room won’t be here, because they haven’t worked hard enough. I’m telling you this because you need to understand that this is how it works: You are always being ranked, and it’s your job to perform. How you do here is up to you, but no one’s going to let you get away with not pulling your weight.”

From The Boy Kings, by Katherine Losse, pg 134:

That Sunday, after I’d slept off our long night, I logged in to Facebook to see an endless stream of videos that the boys had filmed at the club. In them, the boys were not chatting up or kissing girls they had met, as I had expected. Instead, they were performing an elaborate ritual only they would have the strange, cold vanity to invent, in which they would methodically chat up and reject girls that the bouncers had brought to their table. “Leave! You’re not pretty enough!” one of them seemed to say over the din of the club as he shooed the girls away in succession like so many servants. Even though I had been living in this boys’ world for almost two years, I was still a bit shocked. Their products ultimately reflected their real-life behavior. Instead of making a technology of understanding, we seemed sometimes to be making a technology of the opposite: pure, dehumanizing objectification. We were optimizing ways to judge and use and dispose of people, without having to consider their feelings, or that they had feelings at all.

The intriguing suggestion made by Losse is that these tech bros represent an epochal transformation in American alpha masculinity. She doesn’t really follow it up but I’m completely persuaded that tech bros, as well as bro culture in general, represent something of profound sociological significance.

From The Boy Kings, by Katherine Losse, pg 25:

For example, on Mark’s birthday, in May 2006, I received an email from his administrative assistant telling me that it would be my job that day, along with all the other women in the office, to wear a T-shirt with Mark’s picture on it. Wait, what? I thought, he’s not my god or my president; I just work here. The men in the office were told that they would be wearing Adidas sandals that day, also in homage to Mark. The gender coding was clear: women were to declare allegiance to Mark, and men were to become Mark, or to at least dress like him. I decided that this was more than I could stomach and stayed home to play sick that day. I was the only one. The other women in the office, including Mark’s girlfriend, who did not work at Facebook, but had come to the office to celebrate his birthday, happily posed for pictures wearing identical shirts printed with Mark’s picture, like teenage girls at an *NSYNC concert or, more disturbingly, like so many polygamous wives in a cult.

Gawker apparently posted photos of this at the time but I’m struggling to find them.

From The Boy Kings, by Katherine Losse, pg 13:

I liked to listen to Mark’s discussion of the product philosophy and goals at these meetings, which were to me the most fascinating part of the job: what were we trying to do, with this fledgling Internet identity registration system? “I just want to create information flow,” he said in his still nearly adolescent voice, lips pursed forward as if jumping to the next word, and everyone would nod, all cogitating in their own way about what this meant. Mark’s idea of information flow, though vague, was also too vague to be disagreed with, and even if we came up with counter-instances to a model of pure information efficiency (for example, I wondered, do I want my Social Security number to flow freely?), we knew that we weren’t supposed to disagree. Mark was our leader, for better or worse. When the meetings ended he would say either “domination” or “revolution,” with a joking flourish of a fist, and everyone would laugh, nervously, but with a warm and almost chilling excitement. It was like we were being given a charter, by a boy younger than most of us, to take over the world and get paid to do it.

Thanks to Peter Holley for sharing this with me. The Finnish Foreign Ministry has launched a “don’t come” Facebook campaign in Iraq and Turkey:

The thrust of the Ministry’s Facebook campaign is to persuade young men coming from conflict-ridden areas that it’s not worth the risk and expense to come to Finland, said Finns Party MP Sampo Terho.

“This realistic message about the possibility of receiving asylum status in Finland is in the best interests of Finland as well as those who are planning the journey. If it’s practically a sure bet that you will be repatriated, why then would you waste up to 10,000 euros on the trip?” Terho queried.

According to the Finns Party parliamentary group the campaign has been rolled out in Arabic and is directed at young men planning to travel to Finland to seek refuge. The Foreign Ministry said Friday morning that the Facebook update had received close to 80,000 views.

Terho, who is the head of the Finns Party parliamentary group, said that the aim of the campaign is to try to curb the so-far “uncontrolled” influx of people.

It makes the British Home Office’s ‘Go Home’ van seem remarkably low-tech in comparison.

Given what seems likely to be a hardening climate of opinion across Europe, it strikes me that some disturbing examples of digital authoritarianism might be enacted, in a register of exceptionalism, normalising their potential wider application in the future. As Peter observed to me, it’s the use of the capacity for modelling built into the Facebook platform that’s really interesting here: the efficacy of the intervention rests upon a claimed capacity to identify and engage with “young men planning to travel to Finland to seek refuge”. How might this same ambition manifest itself domestically?

This post by Zeynep Tufekci on her Medium site is the best thing I’ve read yet about the recent Facebook controversy.

I’m struck by how this kind of power can be seen as no big deal. Large corporations exist to sell us things, and to impose their interests, and I don’t understand why we as the research/academic community should just think that’s totally fine, or resign to it as “the world we live in”. That is the key strength of independent academia: we can speak up in spite of corporate or government interests.

To me, this resignation to online corporate power is a troubling attitude because these large corporations (and governments and political campaigns) now have new tools and stealth methods to quietly model our personality, our vulnerabilities, identify our networks, and effectively nudge and shape our ideas, desires and dreams. These tools are new, this power is new and evolving. It’s exactly the time to speak up!

That is one of the biggest shifts in power between people and big institutions, perhaps the biggest one yet of 21st century. This shift, in my view, is just as important as the fact that we, the people, can now speak to one another directly and horizontally.

This strikes me as an important fault line, in so far as a superficial difference (i.e. whether or not this bothers you) tracks much broader divergences in political orientation which are likely to become more pronounced as these trends develop over time. However the risk is that this one contentious study becomes a distraction because, as Tufekci points out, this is something Facebook does on a daily basis. What could be lost here is a sense of the political apparatus coming into being and its broader implications:

I identify this model of control as a Gramscian model of social control: one in which we are effectively micro-nudged into “desired behavior” as a means of societal control. Seduction, rather than fear and coercion are the currency, and as such, they are a lot more effective. (Yes, short of deep totalitarianism, legitimacy, consent and acquiescence are stronger models of control than fear and torture—there are things you cannot do well in a society defined by fear, and running a nicely-oiled capitalist market economy is one of them).

The secret truth of power of broadcast is that while very effective in restricting limits of acceptable public speech, it was never that good at motivating people individually. Political and ad campaigns suffered from having to live with “broad profiles” which never really fit anyone. What’s a soccer mom but a general category that hides great variation?

With the new mix of big data and powerful, oligopolistic platforms (like Facebook), all that is solved, to some degree.

Today, more and more, not only can corporations target you directly, they can model you directly and stealthily. They can figure out answers to questions they have never posed to you, and answers that you do not have any idea they have. Modeling means having answers without making it known you are asking, or having the target know that you know. This is a great information asymmetry, and combined with the behavioral applied science used increasingly by industry, political campaigns and corporations, and the ability to easily conduct random experiments (the A/B test of the said Facebook paper), it is clear that the powerful have increasingly more ways to engineer the public, and this is true for Facebook, this is true for presidential campaigns, this is true for other large actors: big corporations and governments. (And, in fact, a study published in Nature, dissected at my peer-reviewed paper linked below shows that Facebook can alter voting patterns, and another study shows Facebook likes can be used to pretty accurately model your personality according to established psychology measures).

I really hate Facebook. I only joined when I moved from London to Coventry in 2006, largely because everyone I met at Warwick used it. It’s something that’s useful to have when you’ve moved to a new place and you don’t know anyone. It makes it easy to archive new social connections, if that doesn’t sound like a horribly instrumental notion to apply to making new friends. But I soon started to hate it. I’d love to say this hatred is high-minded, reflecting an informed contempt for the corporation, its data mining and its project of digital enclosure. I’d love to say this hatred is low-minded, reflecting a deep contempt for Mark Zuckerberg as a human being. But it’s more visceral than that. Facebook just irritates me deeply.

Partly this might be because of the behavioural tendencies it inculcates. I’ve been prone to procrastination for as long as I can remember. In recent years, I’ve learnt to channel this procrastination in a productive and enjoyable way: a form of productive procrastination or structured procrastination in which a desire to avoid one task can be a creative stimulus towards another. Facebook is the one thing which short-circuits this approach. If I’m stuck with work, I’ll sometimes find myself mindlessly browsing Facebook before stopping and registering the sheer number of things I would rather do, things I would actively like to do, and yet there I am, exploring Zuckerberg’s private kingdom in a semi-conscious haze of habituation.

I deleted my Facebook account a couple of years after joining. It was quite a liberating thing to do. It damaged my social life, but I was in a situation at the time where this didn’t really bother me, and I soon got used to being absent from Facebook. A couple of years later, I reluctantly rejoined. I can’t remember why. But it didn’t really work for me a second time. Again, I found myself wasting time on Facebook in a way I found peculiarly irritating, over and above the actual amount of time I ended up wasting. Part of the problem was the lack of a clear public/private distinction. I vaguely understand the value of having a closed network of friends, but only 20%–30% of my then Facebook ‘friends’ were people I wanted on such a network. Contrary to much theorising about Facebook, I could see the point of it if the network were sufficiently locked down that I didn’t do self-presentation. By which I mean something different to ‘self-presentation’ as it’s commonly used, I guess. Perhaps I mean self-presentation grounded in self-editing: thinking and sharing things but then suppressing them because they are incongruous with my intended self-presentation, as opposed to the sense in which ‘self-presentation’ is a mundane feature of all social interaction*. My point is that I could see the appeal of Facebook (sort of) if it were unambiguously ‘private’, the massive data privacy issues notwithstanding, but it just isn’t, nor could it be unless I deleted 80% of my then Facebook friends.

So eventually I deleted my Facebook account again. In the meantime I set up a Facebook account for Sociological Imagination. I can’t remember why I chose a Facebook account rather than a Facebook page; perhaps because I found the interface for the latter somewhat confusing. It turned out to be quite an effective choice, given the restrictions on sharing later applied to pages, whereby posts are only ‘seen’ by a small percentage of a page’s fans unless you pay to promote them. Initially, I just plugged in the RSS feed to the Sociological Imagination Facebook account. Then I plugged in the RSS feed to my blog. When I worked at LSE I plugged in the RSS feed to LSE British Politics and Policy. Some of my ‘real life’ friends started using the messaging service to keep in touch with me. I occasionally added friends on that account when I’d not seen them for ages and wanted to get back in touch but was too lazy/busy to just phone them**. I began to sometimes post status updates or use the Facebook share button on websites.

It’s become a bit of a joke with certain friends that I’m ‘not on Facebook’. I obviously am. But I don’t want to be. I love Twitter (some people may have noticed this) but I really, truly despise Facebook, both for the personal reasons described above and for social and political reasons too innumerable to list (but here are 10 to get you started). So I’m wondering if I should delete the Sociological Imagination Facebook account. I think I’ll always gravitate back towards using Facebook while I have this account. In a narrowly instrumental sense, deleting this account will knock down the Sociological Imagination traffic, but not by any great amount. In a more meaningful sense, it seems a shame to delete it because, for reasons I don’t understand, I’m pretty sure the Facebook following is much more internationally diverse than the website’s Twitter following. It’s also a wonderful place to find web memes, much more so than Twitter, again for reasons I don’t really understand. But I really don’t want to be on Facebook and I resent the feeling of having been lured back in. Though perhaps this is bad faith and I should take responsibility for my own actions. Or maybe I should just delete the account. Ok, let’s do that then.

*If I’m actually going to write this paper about Goffman, I should urgently address this issue, otherwise I’m probably going to piss off any symbolic interactionist who reads it.

**I used to spend hours on the phone with my close friends prior to social media. Excepting blogging, which I’ve been doing since my first few months of university, I didn’t use social media until late 2006. The amount of time I spend on the phone with my friends immediately began to decline and has been declining ever since. There are other factors at work here, but this is surely one of them.

[Screenshot of the Facebook account deactivation screen, 12 April 2014]

This is what happens when you try to deactivate your account. Manipulative, no?