Updates from May, 2019

  • Mark 9:12 am on May 31, 2019 Permalink | Reply

    Thinking on the Move: The Possibilities and Problems of Walking Sociologically 

    I’m so excited for this event I’m doing with Emma Jackson and Les Back, kindly funded by The Sociological Review and Goldsmiths’ CURC. Full information and registration available here.

    The Sociological Review is delighted to bring you a 2-day walking conference organised by Emma Jackson, Mark Carrigan and Les Back. The event has been funded by The Sociological Review Foundation and supported by Centre for Urban and Community Research, Goldsmiths, London 

    What are the risks and the opportunities of thinking on our feet? This two-day conference explores what it means to walk sociologically. The event will provide an opportunity to examine the potentials of using walking within sociology, including: walking as method, walking as theorizing, walking as a way of knowing the city, walking as activism. Rather than talking about this in a conference room, we will do this on the move, exploring the practice of walking and its significance for the production and communication of sociological knowledge. The event draws on the success of the sociological walks and movement session at the Undisciplining conference by interrogating and providing space for critical reflection on sociological walking practices. All walks will take place in the environs of Southeast London near the conference base of Goldsmiths.
    The two days will consist of a series of guided sociological walks exploring topics including: the histories of anti-racist struggle and sound system cultures, an exploration of the relationship between material infrastructures and the urban form, inclusive walking practices and destabilising the figure of the able-bodied male walker, as well as a number of short talks and time for discussions. Delegates will also be given the opportunity to produce digital mapping of the different walks.

  • Mark 3:47 pm on May 27, 2019 Permalink | Reply

    The ecological limits of datafication 

    I went to a mind-blowing talk by Kira Allmann this morning about the ecological costs of digital activity. This is something I was aware of, but entirely in the abstract: recognising that digitalisation manifests itself climatically without any specificity about what this relationship entails. There are many things this talk made me think about which I’m noting here for future retrieval:

    1. The climatic cost of computation means there are ecological limits to datafication which social scientific accounts of it are entirely failing to recognise. For instance, can we have a viable theory of surveillance capitalism which doesn’t have the climatic impact of ever expanding datafication at its heart?
    2. What will the politics flowing from these limits look like? For instance, Kira flagged the significance of the Chinese Big Tech complex and the difficulty of reining it in in terms of carbon costs. We are already seeing the geopolitical significance of this, suggesting how states might get behind their tech champions under the sign of national security in a way that overpowers any countervailing civic movement for reform and control on environmental grounds.
    3. How do we increase awareness of the climatic impact of digital technology in a way susceptible to politicisation? Kira drew a fascinating comparison to past environmental campaigns and some of their tactics may certainly be applicable here. But there’s an underlying problem of public understanding of digital infrastructure, something which might be declining as this infrastructure is platformised because it is becoming vastly more opaque. She convincingly argued that there are many steps one must go through before being able to have a dialogue about the politics of this.
    4. The cultural barriers to recognising this are deeply entrenched, after decades of thought and talk about digitality in terms of distance, disintermediation and immateriality. Despite being immersed in these debates, I only really ‘got’ the environmental materiality of the digital during Kira’s talk: the climate cost is not a manifestation or consequence of digital activity, as I would have previously put it, it’s actually constitutive of it.
    5. To what extent are there competing narratives at work here? The great disruptive project being led by Facebook, Amazon, Uber et al. and the expected rise of the robots (and mass automation) surely constitutes a metanarrative which has little place for climate considerations. Can we fuse the two into a climatic narrative of digital disruption and what we must do to reverse it?
  • Mark 8:09 am on May 27, 2019 Permalink | Reply

    Are you interested in the future of digital sociology? 

    Huw Davies and I are organising this networking event for the BSA Digital Sociology group, taking place in London on July 17th. If you’re interested in meeting other digital sociologists, talking about common issues, finding collaborators for projects and defining the future of the subdiscipline then please consider coming along. There are full sign-up details here.

    The Digital Sociology Study Group is seeking to invigorate digital sociology through a networking event which will facilitate communication and collaboration. We invite all self-identifying digital sociologists to give a 10-minute talk introducing yourself, your work and/or your ideas for collaborative projects. These will be followed by organised group discussions with the intention of identifying concrete collaborations and developing an agenda for how the study group can support them.

  • Mark 8:28 am on May 25, 2019 Permalink | Reply
    Tags: alt liberals, intellectual responsibility

    To what extent are intellectuals responsible for how their ideas are taken up? 

    To what extent are intellectuals responsible for how their ideas are taken up? There’s a great example of this on pg 126 of Winners Take All. Steven Pinker’s entirely legitimate study of declining human violence is taken up as a theodicy for contemporary capitalism, explaining away the appearance of injustice as a failure to take a sufficiently long-term view. A former TED editor describes this as ‘Pinkering’: “using the long-run direction of human history to minimize, to delegitimize the concerns of those without power”.

    Pinker’s actual point was narrow, focused, and valid: Interpersonal violence as a mode of human problem-solving was in a long free fall. But for many who heard the talk, it offered a socially acceptable way to tell people seething over the inequities of the age to drop their complaining. “It has become an ideology of: The world today may be complex and complicated and confusing in many ways, but the reality is that if you take the long-term perspective you will realize how good we have it,” Giussani said. The ideology, he said, told people, “You’re being unrealistic, and you’re not looking at things in the right way. And if you think that you have problems, then, you know, your problems don’t really matter compared to the past’s, and your problems are really not problems, because things are getting better.”

    A more urgent example can be seen in militant centrist thinkers (alt-libs) whose principled defences of free speech, as well as denunciations of the ‘regressive left’, often attract openly racist and far-right support. It would be wrong to suggest ideas are intrinsically tarred by how they are taken up. But it seems equally problematic to deny any link.

    How do we think through this relationship at the level of political and moral theory? My hunch is the contemporary politics of circulation, to use Dave Beer’s phrase, renders this more important than ever. The speed with which ideas circulate within networks which themselves rapidly change, means that alliances form more rapidly and precariously. An adequate theory of intellectual responsibility must surely take account of this.

  • Mark 10:54 am on May 20, 2019 Permalink | Reply  

    A social media sabbatical 

    After five years of scheduling 50+ social media posts per day, I’m stepping down as The Sociological Review’s Digital Engagement Fellow next week. This seems like the perfect time for me to take a break from social media. I’ve locked my Twitter and Instagram accounts, deleted my Facebook account and the only place you will find me online is here on my blog. It’s so key to my writing and thinking that blogging isn’t really social to me, even if I try my best to engage. But apologies in advance if I’m slow to respond to comments, as I’m desperate for a break from mediated conversation at this stage.

    • Jeff 7:57 am on May 21, 2019 Permalink

      Excellent and commendable… all the best.

  • Mark 10:50 am on May 20, 2019 Permalink | Reply

    Interviews with Nature Index about social media 

    I did an interview with Bec Crew from Nature Index recently and it featured in a series of articles:

  • Mark 10:35 am on May 20, 2019 Permalink | Reply
    Tags: cultural ontology, Rein Raud

    The strange fate of ‘culture’ 

    Meaning in Action is a thought-provoking book by Rein Raud, motivated by the strange fate which has befallen ‘culture’. At precisely the time when “the concept of culture, sloppily defined or not at all, is occupying an increasingly central place in social and political debate” the study of the concept has become ever narrower, such that “it has almost become improper to theorise about culture in broader terms” (1). In its place he argues for an approach to culture that “is able to account for all phenomena related to the production, dissemination, transmission and interpretation of meaning” (6). Culture in this sense is the “total of our efforts to make sense of our world” from the smallest to the largest scale, from the individual through to whole societies; it must be studied from parallel perspectives of textuality and practice, paralleling the distinction Margaret Archer makes between the cultural system and socio-cultural interaction. It is a compelling framing and I’m looking forward to reading the rest of this book, which I happened to pick up at random.

    • Sourav Roy 10:53 am on May 20, 2019 Permalink

      I have been increasingly thinking of something akin to ‘Semantic Capital’, especially related to Contemporary Visual Arts, where the endless interpretability of an object of art determines its status across the global art markets’ different venues, systems and cultures. This book might be helpful. Bookmarked the Kobo preview. Will read.

  • Mark 6:24 pm on May 19, 2019 Permalink | Reply
    Tags: the robots are coming

    Climate change and digitalisation 

    I’ve been thinking a lot in the last couple of weeks about climate change and digitalisation. For instance, the climatic significance of digital technology is increasingly recognised, as well as the resource constraints this implies for some of the wilder claims made about the coming frontiers of digitalisation. This also represents an ideological tension, as one emerging grand narrative, or rather cluster thereof, concerns the implications of digitalisation (‘the robots are coming for our jobs!’, the possibility of fully automated luxury communism, industrial revolution 4.0, Jeff and Elon taking us into space etc) whereas the other concerns the implications of climate change. But one aspect I hadn’t thought about was this argument about the implications for economic productivity. From The Uninhabitable Earth by David Wallace-Wells pg 120:

    For the past few decades, economists have wondered why the computer revolution and the internet have not brought meaningful productivity gains to the industrialized world. Spreadsheets, database management software, email—these innovations alone would seem to promise huge gains in efficiency for any business or economy adopting them. But those gains simply haven’t materialized; in fact, the economic period in which those innovations were introduced, along with literally thousands of similar computer-driven efficiencies, has been characterized, especially in the developed West, by wage and productivity stagnation and dampened economic growth. One speculative possibility: computers have made us more efficient and productive, but at the same time climate change has had the opposite effect, diminishing or wiping out entirely the impact of technology. How could this be? One theory is the negative cognitive effects of direct heat and air pollution, both of which are accumulating more research support by the day. And whether or not that theory explains the great stagnation of the last several decades, we do know that, globally, warmer temperatures do dampen worker productivity.

  • Mark 6:16 pm on May 19, 2019 Permalink | Reply
    Tags: clouds, data centres

    Emerging computational mega structures 

    My notes on Delic, K. A., & Walker, M. A. (2008). Emergence of the academic computing clouds. Ubiquity, 2008(August), 1.

    I was intrigued by this short paper from 2008, prefiguring a number of themes which are central to contemporary debates about digital infrastructure. It reflected on the “emergence of the cloud as the generic infrastructural fabric enabling a huge number of services”, as well as what this might mean for research practice. They talk about the emergence of cloud computing in terms of the transition from data centres to grids: 

    Grids are very large-scale virtualized, distributed computing systems. They cover multiple administrative domains and enable virtual organizations. The key characteristic of grids is their ability to upscale and downscale rapidly and gracefully. So, they provide utility type of computing which enables another type of business model and spawns start-up businesses.

    Grids are collections of data centres, operating through “aggregation, virtualization and scheduling” to provide a platform for executing immensely computationally intensive applications. They draw an analogy between computing grids in this sense and energy, communication and transportation grids. Grids transform large swathes of hardware and software assets into computational services which can be drawn upon by many users. The cloud is what we get from the assembly of distinct grids, though I was slightly confused by this definition as it seems to reproduce their distinction between grids and data centres.

    They make the important observation that “the typical workloads for academic/scientific applications are very different from commercial workloads”, with the former revolving around the ‘mega-scale simulations’ which are becoming possible due to the affordances of the cloud. They correctly predict this will lead to new forms of scientific inquiry, driven by the “vast amounts of data coming not only from the web but also from a rising number of instruments and sensors”.

  • Mark 4:19 pm on May 18, 2019 Permalink | Reply

    The singular innovation which explains capitalism’s growth 

    I thought this was an incredibly evocative description, from pg 116 of The Uninhabitable Earth by David Wallace-Wells. Contrary to the hagiographic orthodoxy we find in accounting for the history of capitalism, the reality is that one single innovation explains the turbo-charged growth which the world saw over a comparatively short period of time. It is a resource which is swiftly running out, as a consequence of that very growth, without any hope that it could subsequently be replenished:

    a singular innovation, one engineered not by entrepreneurial human hands but in fact millions of years before the first ones ever dug at the earth—engineered by time and geologic weight, which many millennia ago pressed the fossils of Earth’s earlier carbon-based life forms (plants, small animals) into petroleum, like lemon under a press. Oil is the patrimony of the planet’s prehuman past: what stored energy the earth can produce when undisturbed for millennia. As soon as humans discovered that storehouse, they set about plundering it—so fast that, at various points over the last half century, oil forecasters have panicked about running out.

  • Mark 8:11 am on May 18, 2019 Permalink | Reply
    Tags: Chris Newfield, unbundling

    Unbundling the university 

    My notes on Newfield, C. (2019). Unbundling the knowledge economy. Globalisation, Societies and Education, 1-9.

    Far from treating universities as distinct institutions at a remove from society, this special issue explores their many interconnections with social and political life. Once we recognise the mutating character of the university, transforming and growing in a way which reflects wider social life, it becomes difficult to see what it means to defend ‘the university’. As Newfield writes, drawing on Jana Bacevic’s contribution to the issue:

    In a sense there is no ‘university’ to defend from marketisation and splitting up. Teaching could be handled by online courses supported with course assistants who do not need expensive doctoral training. ‘Coding camps’ are the policy world’s exemple du jour. Research could be moved into the companies that benefit from it, supplemented by hospitals and national laboratories, who could also handle advanced research training. Civic education could be handled peer-to-peer by social media, and cultural education by our wall-to-wall visual media. Perhaps our real motto should be ‘shed no tears for the university’. (1-2)

    What does the tertiary landscape look like without it? As he writes on pg 2, “if we remove the university from the tertiary landscape, we are left with commercial publishers, ed-tech providers, think tanks, marketing firms, quasi-governmental agencies, ministries, supra-national advisory bodies, and technical service companies”. They have built up around supporting higher education and there is little reason to think they could provide it were universities to gradually dissolve.

    Newfield suggests five basic features of university education: autonomy (in opposition to control by state or industry), self-formation, integrated learning, basic research (funded without promise of application) and public service. Each has private and public effects, as well as pecuniary and non-pecuniary dimensions. Unfortunately, discourses of higher education tend to focus on the private benefits and pecuniary dimensions to the exclusion of all else. However, it has been estimated that private pecuniary benefits are only about a third of the overall value, though the irony that this relies on placing a monetary value on non-pecuniary aspects is striking. Newfield describes this as ‘dark matter’ which is excluded by the aforementioned discursive narrowness (4). Pickering’s concept of ontological veiling could be usefully applied here.

    Could the unbundled university still produce this dark matter? Or does the datafication at the heart of the project mean that a focus on the calculable private and pecuniary aspects would be the sole concern? Could post-university systems increase value in a way that would outweigh these losses? He argues that opening out this debate involves recasting claimed drivers (automation, digitisation etc) as uncertainties with contingent outcomes. Rather than being what Filip Vostal calls ‘mega forces’ which lead ineluctably to their outcome, they are factors which operate amongst others within open systems.

    He ends by framing unbundling in terms of austerity. As he writes on pg 7, “every analysis of educational quality has been predetermined by constraints of cost”. The logic of unbundling involves extracting an element in order to maximise incomes and minimise costs, away from the messy interconnections which hinder this process. If I understand him correctly, he’s saying it’s a form of hyper-rationalisation which cuts chunks out of an organisation in order to work on them in isolation, rather than merely reconstructing an organisation piece by piece. He argues on pg 8 that ‘tech isolationism’ is a core problem which points to the sustained relevance of a humanities education:

    the failure to integrate technological with sociocultural skills. In reality, the world’s enormous problems require massive numbers of non-routine problem solvers. They must have higher order cognitive skills, be able to cross disciplinary and linguistic barriers, and be capable of continuous improvisation. They need what used to be called humanistic creativity, for the simple reason that every global problem had fused technical issues to extremely difficult socio-cultural challenges. You can’t have renewable energy conversion unless you can reduce environmental racism, for example: cultural knowledge workers will need to be seen as essential equal partners with photovoltaic device engineers. STEM education unbundled from the social sciences, arts, and humanities is the problem, not the solution.

  • Mark 10:32 am on May 11, 2019 Permalink | Reply

    The fantasies of wealthy city dwellers 

    From The Unhabitable Earth by David Wallace-Wells pg 89:

    Sitting in a living room in a modern apartment in an advanced metropolis somewhere in the developed world, this threat may seem hard to credit—so many cities looking nowadays like fantasies of endless and on-demand abundance for the world’s wealthy. But of all urban entitlements, the casual expectation of never-ending drinking water is perhaps the most deeply delusional. It takes quite a lot to bring that water to your sink, your shower, and your toilet.

  • Mark 7:41 pm on May 10, 2019 Permalink | Reply
    Tags: narratives, rise of the robots

    Should climate change be a master narrative? 

    Should climate change be a master narrative? It certainly has competition from neo-conservative narratives of the Chinese century, techno-dystopian narratives of the ‘rise of the robots’ or populist narratives of the great revival. But I find David Wallace-Wells very plausible here in The Uninhabitable Earth on pg 53:

    In this way, climate change appears to be not merely one challenge among many facing a planet already struggling with civil strife and war and horrifying inequality and far too many other insoluble hardships to iterate, but the all-encompassing stage on which all those challenges will be met—a whole sphere, in other words, which literally contains within it all of the world’s future problems and all of its possible solutions.

  • Mark 6:37 pm on May 10, 2019 Permalink | Reply
    Tags: class dojo

    Datafication and discipline in education

    My notes on Manolev, J., Sullivan, A., & Slee, R. (2019). The datafication of discipline: ClassDojo, surveillance and a performative classroom culture. Learning, Media and Technology, 44(1), 36-51.
    To understand how digital technology is reshaping education, it’s necessary to analyse how datafication (“the conversion of social action into quantifiable data in a manner that enables the tracking of people in real-time”, 36) changes educational processes. This includes the role of power within them: “the ways in which power is implicated in decisions such as what constitutes and is selected as data, who controls it, who can alter it, how it is interpreted, and what purpose it will serve” (36). Class Dojo is the foremost platform driving this process within education, with implications for every facet of activity within a school. It provides a social platform which allows interaction and activity to take place in structured ways between all actors within the school.
    They analyse this with a Foucauldian approach of a sort rarely seen within educational technology research, concerned with the school as “a data-rich site of surveillance” (38) since its inception. It has always involved ordering and classification in order to bring about certain states of affairs in students, with contemporary platforms being a novel means through which to accomplish these longer term ambitions. New technologies are designed to “increase the effectiveness, efficiency and productivity of surveillance and data-related capabilities in schools” (38). This context helps explain the rapid growth of ClassDojo from 80 users in its first week into a worldwide sensation with a translation function for 35 languages, reflecting its success in markets such as Vietnam, Turkey, China and India. More than three million teachers and 35 million children are claimed to be using the platform in 180 countries worldwide (39). They make the interesting observation that this growth appears to be “impervious to the typical bureaucratic gatekeeping processes of national education systems” (39). However, despite its apparent novelty, “closer inspection reveals a technological layering over older ideas and practices” (47).
    Its functionality rests on tracking student behaviour and allowing teachers to immediately respond to that behaviour. This comes with reporting of records of behaviour at the individual or class level, all of which can be accessed in real time from any location. It incorporates behavioural reinforcement, with students rewarded with positive, neutral and negative ‘dojo points’ in order to encourage and discourage behaviours. Each student has an avatar and teachers are able to plan the behaviours which they wish to try and cultivate in students. The two feedback categories of “positive” and “needs work” clearly represent positive and negative reinforcement. These operate as numerical values which impact upon the student’s dojo points, accompanied by a positive noise or an abrasive one. Reports can be shared with parents over the platform, with registered parents able to receive weekly e-mail notifications inviting them to check their child’s report.
    In a context where discipline is a practical concern and politicised topic, the positive reinforcement of ClassDojo discipline has an obvious appeal. Despite concerns that external rewards undermine internal standards and self-control, they offer a quick and easy behavioural fix which has clearly proved alluring to teachers. It provides game-like elements for students (points system, avatars, leader board, badges etc), technological solutions to institutional problems for policy makers and a sense of control for teachers. This incentivises the continual expansion of surveillance in order to ensure more data. ClassDojo “requires teachers to monitor students constantly, catching students performing particular behaviours, generating, storing and analysing data through its software as this occurs” (43). The teacher becomes the conduit of datafication through assigning points to students for designated activities, feeding into the gamified elements of the system. It inclines the user towards “a standards-based approach to discipline, in which the standards take the form of numerical targets or benchmarks that have been affixed to a range of predetermined behaviours” (44). However this conversion into numerical rankings decontextualises behaviour and reduces its complexity. This renders action as a performance on which students can be continually judged and ranked, with the tracking of performance standing in for their underlying worth. The effect of this is to “force students to understand themselves through a process of calculation, constantly measuring themselves against narrow representations of ideal behaviours derived from dominant cultural understandings” (47).
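    The points mechanism the paper describes can be made concrete with a small sketch. This is purely illustrative: the class, behaviour labels and point values below are my own invented stand-ins, not ClassDojo’s actual data model or API. What it shows is how a stream of observed behaviours collapses into a single running score, which is precisely the decontextualisation the authors criticise.

```python
# Illustrative sketch of a ClassDojo-style points system (hypothetical names
# and values throughout; not the platform's real implementation).
from collections import defaultdict

class PointsTracker:
    def __init__(self, behaviours):
        # behaviours: mapping of behaviour label -> point value
        # (positive for "positive" feedback, negative for "needs work")
        self.behaviours = behaviours
        self.log = defaultdict(list)  # student -> list of (behaviour, points)

    def record(self, student, behaviour):
        """Log an observed behaviour and return the points assigned."""
        points = self.behaviours[behaviour]
        self.log[student].append((behaviour, points))
        return points

    def score(self, student):
        # The whole behavioural record collapses into one number: the
        # reduction of complexity the paper criticises.
        return sum(points for _, points in self.log[student])

    def weekly_report(self, student):
        """A parent-facing summary: total score and event count only."""
        return {"student": student,
                "total": self.score(student),
                "events": len(self.log[student])}

tracker = PointsTracker({"on task": 1, "helping others": 2, "disruption": -1})
tracker.record("alex", "on task")
tracker.record("alex", "disruption")
tracker.record("alex", "helping others")
print(tracker.weekly_report("alex"))  # {'student': 'alex', 'total': 2, 'events': 3}
```

    Note how the report discards everything about context: three quite different classroom moments survive only as a net score of 2, on which the student can then be ranked.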
  • Mark 1:12 pm on May 9, 2019 Permalink | Reply
    Tags: global personhood, oecd

    Global Personhood in Education

    My notes on Robertson, S.L. & Mocanu, A.M. (2019) The Possibilities of a Radical Diasporic Epistemology for the Development of Global Personhood in Education. International Studies in the Sociology of Education

    The notion of ‘global competence’ was added by the OECD to its Programme for International Student Assessment (PISA) in 2018. This was deemed necessary in order to equip children to participate in a “more interconnected world but also appreciate and benefit from cultural differences”. This was explicitly framed in terms of winners and losers from globalisation, with the attendant distribution of uncertainty and precarity. Robertson and Mocanu recognise that “it is important that schools and their societies actively promote the knowledge, skills, attitudes and values in young learners that enable them to live in a complex society” but remain critical of how this has been framed and of its inclusion in large scale testing regimes.

    The OECD’s large scale tests (reading, maths, science) are an important means by which the governance of national education systems seeks alignment with global economic competitiveness. Only 32 countries participated in PISA in 2000; this had risen to 80 by 2018. This take-up has led to the development of other global assessment tools, including the Teaching and Learning International Survey, the Survey of Adult Skills and PISA for Development. These collect data on national educational systems and then feed it back to “use this information to fuel a national conversation and as evidence to guide policies and practices”. There is a vast literature critiquing their operation, and the focus of this paper is on the assumptions made in the framing of the ‘globally competent’ student and the challenges entailed in measuring it. It has been introduced in direct response to rising inequality, with its impact on social cohesion and economic development. Their concern is with the political instability and potential populist backlash these in turn give rise to. As the authors put it, “A globally-competent student in the 2018 Framework Report is now one who seeks to dissolve tensions through building social bonds in the community, rather than potentially developing a deeper understanding of the consequences of unfettered global capital and predatory transnational firms”.

    The contradictions of contemporary global capitalism are resolved in the imagined figure of the globally competent student. It is a limited framing of global issues, involving measures which will be difficult to apply in many settings and reducing complex factors to a limited range of responses. The globally competent student might be culturally tolerant and sensitive to difference, but no curiosity about structural causes is mandated: the focus is on their capacity to navigate global labour markets in constructive and communal ways rather than on understanding their shared conditions. 40% of PISA member countries have declined to use it, due to the simplistic cultural assumptions underpinning it, in spite of having paid for the test tool. Many of the issues at stake elude simple framings and correct answers, running contrary to the mechanics of developing a test where definitional consensus is needed and knowledge must be assumed to cross national borders.

    In contrast they advocate a radical diasporic stance for understanding global competence. The originally essentialist sense of this term has come to be replaced by a use “to describe groups who have migrated involuntarily, preserving their interest – or connection – to their homeland, but also who have multiple groups worldwide”. This informs a way of understanding and engaging with cultural differences, moving beyond the OECD’s focus on thinking difference towards one of relating through difference. This involves more than empathy for someone’s circumstances, instead recognising how one could find oneself in the same circumstances. It is something which can’t be developed through “decontextualised top down global tests”.

    It must instead take place at the level of teacher and learner in the classroom. One way might be through un/settling, drawing on experiential resources to open up new ways of thinking and seeing an issue that might formerly have seemed familiar. Another way would be through im/mobility, drawing on experiential resources to consider the role of mobility in human life and how the dynamics of mobility shape common circumstances. The third could be be/longing, encouraging students to reflect on the character of being and belonging within a global landscape. These are accompanied by ideas about supporting exercises. They are offered in the spirit of agreement with the OECD’s intention that “the global in the form of knowledge, skills, attitudes and values could, and should, be brought into classrooms in the form of intercultural values, thinking and practices” while highlighting the question of the pedagogical means through which this can be made to happen.

    • landzek 10:06 pm on May 9, 2019 Permalink

      The agenda of a global system seems to me a kind of culture in itself. A sort of presumptuous position that supposes itself cultureless by being able to embrace its own particular culture alongside an openness to learning of other people's cultures.

      That seems to me to be … I'm not thinking of the term; a "jumbo shrimp" is an example of this term that I'm not thinking of right at this moment. Lol.

      I'm not sure if it's necessary to be a global citizen unless you are involved in a particular cultural group that sees itself in the context of the globe, which is to say as a group of people that are in communication. It seems to me that to say "global culture" denies that there are individual cultures that we have to be learning about or somehow be educated or conversant with. Like the term itself identifies a kind of culture that is separate or distinct from the actual on-the-ground cultures that it proposes to be accepting of or learning about. It seems to me the same problem with colonial anthropology and whether or not we can study cultures as a participant observer and not affect them.

      But I'm not sure that it's necessary to have this kind of perpetuation of what one could call a kind of superstition about what humanity is. And I don't mean that in the sense of a bad thing or that it's wrong or anything; rather, I mean it in the sense that I'm not sure it's necessary that I give a fuck about what people in Mongolia might believe about their world. I think that all this requires is that I have an open mind to accept that they might have a different way to go about things. Because this great liberal global idea of everyone getting along on a happy planet kind of seems to me agenda-loaded. Because I'm sure that many of these groups of people that we are identifying as cultures, whether in Siberia or in the middle of Los Angeles, could give less than a shit about whether we are understanding each other. I think it is a modern presumption to figure that people have not been getting along in exactly the same way today as they have throughout history: some people try to spread peace, other people could give a shit, other people kill and don't give a crap or have any sort of ethical problem with it, other people think that all human beings are equal.


      I mean I'm saying kind of a Jordan Peterson kind of theme: it could be that our global problem is that everyone is trying to enact these super large systems without first understanding themselves and how they relate to the world. It is as though somehow I can come to terms with myself in the world if I see myself through these large global systems.

      It may very well be that the problems we are trying to confront, as global problems or state problems, are due to this overdetermination of what the human subject is towards what it is capable of enacting. That is, at least with any sort of benefit.

      Our fucked up climate could be the discrepancy between the individual human being and the global system itself, inasmuch as people are involved with understanding this global system without first really understanding what's going on with themselves, which is to say as if these were the same.

      So much philosophical problem here though. Thx

  • Mark 8:52 am on May 9, 2019 Permalink | Reply
    Tags: , ,   

    The environmental impact of Bitcoin 

    From The Uninhabitable Earth by David Wallace-Wells pg 33:

    Five years ago, hardly anyone outside the darkest corners of the internet had even heard of Bitcoin; today mining it consumes more electricity than is generated by all the world’s solar panels combined, which means that in just a few years we’ve assembled, out of distrust of one another and the nations behind “fiat currencies,” a program to wipe out the gains of several long, hard generations of green energy innovation. It did not have to be that way. And a simple change to the algorithm could eliminate that Bitcoin footprint entirely.

  • Mark 6:03 pm on May 8, 2019 Permalink | Reply
    Tags: ,   

    What comes after 2100? 

    I was fascinated to learn in The Uninhabitable Earth by David Wallace-Wells that climate models end in 2100 as a matter of convention. I'd be interested to learn how this convention emerged and what effect it has had on climate science. It's easy to see the epistemological reasons for it, as the conditions being modelled become sufficiently complex that forecasting past a certain point becomes close to untenable. But why choose the turn of the century? It's such a resonant point at which to cut off. I find it unnerving because, for example, my niece and nephew (5 and 3 respectively) could be expected to have an excellent chance of living to see the next century. A generation is being born now who will live to see past this horizon, with their children living to confront what could possibly come to be a literally uninhabitable earth.

    • landzek 5:22 pm on May 9, 2019 Permalink

      Philosophically I think it is interesting how science can only see so far into whatever universe into which it wants to turn its gaze.

      I remember in the 90s reading a story about a coincidence between technological ability and the edge of the universe itself. It so happened that our technology had reached a certain level at which we were actually able to see the edge of the universe, but the universe is expanding so quickly that now our technology can no longer see the edge of the universe. Or something like that.

      So I think it's kind of funny, for all of our faith in technology and what it is able to give us, that for example the prediction models you're talking about end in 2100.

      I think it is more a comment on the actual state of the universe than it is about our ability to know of it.

  • Mark 11:40 am on May 8, 2019 Permalink | Reply
    Tags: data events, deleuze, rupture, testing

    The ontology of (digital) testing 

    My notes on Thompson, G., & Sellar, S. (2018). Datafication, testing events and the outside of thought. Learning, Media and Technology, 43(2), 139-151.

    In this paper Thompson and Sellar cast a Deleuzian lens upon the data-hungry character of contemporary educational institutions. As they put it on 139, "Education institutions, and the people who work and learn in them, are constantly generating and using more and more data", much of which is digital. This proliferation of digital data reflects and in turn encourages complex forms of software, driving the development of interoperability systems to ensure they can work together. This interoperability makes 'big data' techniques possible in a way they otherwise wouldn't be. It is an important reminder that data-analytic possibilities don't unfold naturally from new technology but rather require institutional work, with influential sponsorship, in order to make them happen. They caution that "the conceptual tools that we have for thinking through these new technologies in assessment and learning, particularly in relation to implications for politics, policy and practice, remain caught between dystopian visions of technology undermining human values, culture and subjectivity, or utopian fantasies about continual improvements to the efficiency and effectiveness of learning" (139-140).

    Deleuze and Guattari help them argue that revolutions in institutions such as education proceed through challenges to orthodox images of thought ("the set of implicit presuppositions about what it means to think"), challenges which "create new possibilities for thought that can bring about new activities, behaviours, organisations and connections" (140). However "thought that is formed in digital learning environments such as CATs ultimately conforms to a model: an information ontology" and cannot be revolutionary in this sense (141). The production of ever more data produces events, in so far as the data leads things to happen which otherwise wouldn't, but it is constitutively unable to produce a new image of thought. This runs contrary to the disruptive rhetoric of educational technology, which promises innovation and change.

    Data can be produced in educational settings through a variety of means: deliberate production in assessment and management procedures, volunteered through the use of digital technology or produced through covert monitoring. It assumes a form as information through models encoded within hardware and software architectures at work in its production, as well as the processes at both levels at work in its generation. Informational ontologies are crucial to the operation of computing processes and facilitating interoperability between processes.

    But where does the desire for data come from? Their Deleuzian approach sees desire as primary and productive of subjectivity. But this is where their philosophical approach seems weak to me, in comparison with the digital sociology approach they distinguish it from. It makes it difficult to think systematically about who desires data, how they desire it and how context frustrates or facilitates that desire. The turn towards the para-subjective makes it hard to pin down subjects in any concrete way. They reflect on how "Groups and individuals come to believe that they need something to solve their problems and this lack is felt as a lack of data" (143). But their concern for the "abstract essence of this desire" immediately leads them away from this empirical specificity, towards a 'bargain with modernity' in which data helps fortify people against the uncertainty endemic to modernity. Well, sure, but how does this specifically play out in educational systems? Or is this dynamic manifest uniformly across all spheres of society? The philosophical heterogeneity transmutes into an analytical monism and this is a problem for me. I'm not sure how it helps us make sense of education at all, as opposed to providing a vast panorama through which to sketch the questions we have about education in much grander terms.

    I find the analysis much more useful when it comes to standardised tests, which consist of sample tests (administered to a representative sample of pupils) and census tests (administered to all within a population). These are administered uniformly and scored in a predetermined way. They predate neoliberalism and the authors observe that many critics of these 'neoliberal instruments' fail to pay attention to the process by which they are constructed:

    • The domain is "the specific area of interest that is being measured, whether it is a body of knowledge, skills, abilities or attributes" (144). Sub-domains have a relationship with each other. What are tested are constructs, which are designed to track unobservable characteristics through evaluating observable behaviours.
    • This involves sampling from within the domains, drawing on a selection of potential questions within a domain in order to construct a test which can be done in a finite amount of time while still facilitating inference in the way described above. This is guided by a test specification plan.
    • Inference involves using test scores to assess achievement within a domain. Standardisation means this can be undertaken at the level of the individual, group, school or nation. Not all inferences are valid.
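    The three steps above can be sketched in miniature. This is a hypothetical illustration only: the item bank, the sub-domains and the sum-scoring rule are invented for the example, and operational test development uses psychometric models rather than simple proportion-correct scoring.

    ```python
    import random

    # Hypothetical item bank: each item belongs to a sub-domain of the
    # measured domain and has a single correct answer.
    ITEM_BANK = {
        "number": [("2+2?", "4"), ("3*3?", "9"), ("10-7?", "3")],
        "geometry": [("Sides of a triangle?", "3"), ("Degrees in a circle?", "360")],
    }

    # Test specification plan: how many items to sample from each sub-domain.
    SPEC = {"number": 2, "geometry": 1}

    def build_test(bank, spec, seed=0):
        """Sampling step: draw items from each sub-domain per the specification."""
        rng = random.Random(seed)
        return [item for sub, n in spec.items()
                for item in rng.sample(bank[sub], n)]

    def score(test, answers):
        """Inference step: the proportion correct is treated as an estimate
        of achievement within the domain as a whole."""
        correct = sum(1 for (question, key), answer in zip(test, answers)
                      if answer == key)
        return correct / len(test)

    test = build_test(ITEM_BANK, SPEC)
    print(score(test, [key for _, key in test]))  # all correct -> 1.0
    ```

    The point the sketch makes concrete is the authors' third step: the score is not the thing measured but a basis for inference about an unobservable construct, and (as they note) not all inferences drawn from it are valid.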

    The computerisation of these tests promises to capture transactional data, facilitating big data analytics. Selection can be built into the test itself, allowing branching or pathways depending on the student's response and aptitude. But what sort of event is this? They suggest three concepts to classify these:

    1. Break-events involve the movement from one pre-existing category to another. As they put it, “Labelling a student as above-average, a school as failing, or a system as excellent based on aggregated data, are all examples of break-events that express potentiality as information” (146). These categories matter and they bring some possibilities to the fore while suppressing others.
    2. Crack-events are forms of change that lack the perceptibility of moving between pre-constituted categories. They occur all the time (“such as feelings of bewilderment, anxiety or elation when sitting a test”) but often don’t manifest in a recognisable way.
    3. Rupture-events occur when the coordinates of thought are lost, as cracks aggregate into a potentially transformative moment of change. Revolutionary technologies could only be such if they produce rupture-events.

    Standardised testing can generate breaks and cracks but not ruptures, because tests are "created according to an image of thought that limits (a) what tests should be and (b) what the purposes of schooling are" (148). They cannot break with the past because they are premised on a sense of correct answers and simple errors, moving people between discrete categories on that basis.
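    The adaptive branching mentioned earlier can also be sketched minimally. A hypothetical illustration: the three-rung difficulty ladder and the step-up/step-down rule are invented for the example, whereas real computerised adaptive tests select items using item-response-theory models.

    ```python
    # Hypothetical sketch of adaptive branching: the difficulty of the next
    # item depends on whether the last response was correct.
    EASY, MEDIUM, HARD = "easy", "medium", "hard"
    LADDER = [EASY, MEDIUM, HARD]

    def next_difficulty(current, was_correct):
        """Step up the ladder after a correct answer, down after an error,
        staying within the ladder's bounds."""
        i = LADDER.index(current)
        if was_correct:
            return LADDER[min(i + 1, len(LADDER) - 1)]
        return LADDER[max(i - 1, 0)]
    ```

    Even in this toy form, the point made in the paper is visible: every branch moves the student between pre-constituted categories, so adaptivity of this kind can produce break-events but never a new image of thought.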

  • Mark 7:59 pm on May 7, 2019 Permalink | Reply
    Tags: , ,   

    The movement of people around an uninhabitable earth 

    The full significance of this cannot be overstated. If a million Syrians pushed Europe to the brink of fascism, what might ten or a hundred times that number do? The horrible irony is that the far right coming to power makes it less likely that steps will be taken to control the climatological processes driving mass migration. But estimates of 200 million or even a billion climate refugees suggest a world radically different from the one we know, raising the disturbing question of how fascism might seek to reinforce, through militarised means, 'borders' that would otherwise cease to function. From pg 6-7 of The Uninhabitable Earth by David Wallace-Wells:

    Beginning in 2011, about one million Syrian refugees were unleashed on Europe by a civil war inflamed by climate change and drought—and in a very real sense, much of the “populist moment” the entire West is passing through now is the result of panic produced by the shock of those migrants. The likely flooding of Bangladesh threatens to create ten times as many, or more, received by a world that will be even further destabilized by climate chaos—and, one suspects, less receptive the browner those in need. And then there will be the refugees from sub-Saharan Africa, Latin America, and the rest of South Asia—140 million by 2050, the World Bank estimates, meaning more than a hundred times Europe’s Syrian “crisis.” The U.N. projections are bleaker: 200 million climate refugees by 2050. Two hundred million was the entire world population at the peak of the Roman Empire, if you can imagine every single person alive and living anywhere on the planet at that time dispossessed of their home and turned outward to wander through hostile territories in search of a new one. The high end of what’s possible in the next thirty years, the United Nations says, is considerably worse: “a billion or more vulnerable poor people with little choice but to fight or flee.” A billion or more.
