My notes on Delic, K. A., & Walker, M. A. (2008). Emergence of the academic computing clouds. Ubiquity, 2008(August), 1.

I was intrigued by this short paper from 2008, which prefigures a number of themes central to contemporary debates about digital infrastructure. It reflected on the “emergence of the cloud as the generic infrastructural fabric enabling a huge number of services”, as well as what this might mean for research practice. The authors describe the emergence of cloud computing in terms of the transition from data centres to grids:

Grids are very large-scale virtualized, distributed computing systems. They cover multiple administrative domains and enable virtual organizations. The key characteristic of grids is their ability to upscale and downscale rapidly and gracefully. So, they provide utility type of computing which enables another type of business model and spawns start-up businesses.

Grids are collections of data centres, operating through “aggregation, virtualization and scheduling” to provide a platform for executing immensely computationally intensive applications. They draw an analogy between computing grids in this sense and energy, communication and transportation grids. Grids transform large swathes of hardware and software assets into computational services which can be drawn upon by many users. The cloud is what we get from the assembly of distinct grids, though I was slightly confused by this definition as it seems to reproduce their distinction between grids and data centres.

They make the important observation that “the typical workloads for academic/scientific applications are very different from commercial workloads”, with the former revolving around the ‘mega-scale simulations’ which are becoming possible due to the affordances of the cloud. They correctly predict this will lead to new forms of scientific inquiry, driven by the “vast amounts of data coming not only from the web but also from a rising number of instruments and sensors”.

My notes on Newfield, C. (2019). Unbundling the knowledge economy. Globalisation, Societies and Education, 1-9.

Far from treating universities as distinct institutions at a remove from society, this special issue explores their many interconnections with social and political life. Once we recognise the mutating character of the university, transforming and growing in a way which reflects wider social life, it becomes difficult to see what it means to defend ‘the university’. As Newfield writes, drawing on Jana Bacevic’s contribution to the issue:

In a sense there is no ‘university’ to defend from marketisation and splitting up. Teaching could be handled by online courses supported with course assistants who do not need expensive doctoral training. ‘Coding camps’ are the policy world’s exemple du jour. Research could be moved into the companies that benefit from it, supplemented by hospitals and national laboratories, who could also handle advanced research training. Civic education could be handled peer-to-peer by social media, and cultural education by our wall-to-wall visual media. Perhaps our real motto should be ‘shed no tears for the university’. (1-2)

What does the tertiary landscape look like without it? As he writes on pg 2, “if we remove the university from the tertiary landscape, we are left with commercial publishers, ed-tech providers, think tanks, marketing firms, quasi-governmental agencies, ministries, supra-national advisory bodies, and technical service companies”. These organisations have built up around supporting higher education and there is little reason to think they could provide it were universities to gradually dissolve.

Newfield suggests five basic features of university education: autonomy (in opposition to control by state or industry), self-formation, integrated learning, basic research (funded without promise of application) and public service. Each has private and public effects, as well as pecuniary and non-pecuniary dimensions. Unfortunately, discourses of higher education tend to focus on the private benefits and pecuniary dimensions to the exclusion of all else. However, it has been estimated that private pecuniary benefits account for only about a third of the overall value, though the irony that this estimate relies on placing a monetary value on non-pecuniary aspects is striking. Newfield describes this as ‘dark matter’ which is excluded by the aforementioned discursive narrowness (4). Pickering’s concept of ontological veiling could be usefully applied here.

Could the unbundled university still produce this dark matter? Or does the datafication at the heart of the project mean that a focus on the calculable private and pecuniary aspects would be the sole concern? Could post-university systems increase value in a way that would outweigh these losses? He argues that opening out this debate involves recasting claimed drivers (automation, digitisation etc) as uncertainties with contingent outcomes. Rather than being what Filip Vostal calls ‘mega forces’ which lead ineluctably to their outcome, they are factors which operate amongst others within open systems.

He ends by framing unbundling in terms of austerity. As he writes on pg 7, “every analysis of educational quality has been predetermined by constraints of cost”. The logic of unbundling involves extracting an element in order to maximise incomes and minimise costs, away from the messy interconnections which hinder this process. If I understand him correctly, he’s saying it’s a form of hyper-rationalisation which cuts chunks out of an organisation in order to work on them in isolation, rather than merely reconstructing an organisation piece by piece. He argues on pg 8 that ‘tech isolationism’ is a core problem which points to the sustained relevance of a humanities education:

the failure to integrate technological with sociocultural skills. In reality, the world’s enormous problems require massive numbers of non-routine problem solvers. They must have higher order cognitive skills, be able to cross disciplinary and linguistic barriers, and be capable of continuous improvisation. They need what used to be called humanistic creativity, for the simple reason that every global problem had fused technical issues to extremely difficult socio-cultural challenges. You can’t have renewable energy conversion unless you can reduce environmental racism, for example: cultural knowledge workers will need to be seen as essential equal partners with photovoltaic device engineers. STEM education unbundled from the social sciences, arts, and humanities is the problem, not the solution.

My notes on Manolev, J., Sullivan, A., & Slee, R. (2019). The datafication of discipline: ClassDojo, surveillance and a performative classroom culture. Learning, Media and Technology, 44(1), 36-51.

To understand how digital technology is reshaping education, it’s necessary to analyse how datafication (“the conversion of social action into quantifiable data in a manner that enables the tracking of people in real-time”, 36) changes educational processes. This includes the role of power within them: “the ways in which power is implicated in decisions such as what constitutes and is selected as data, who controls it, who can alter it, how it is interpreted, and what purpose it will serve” (36). ClassDojo is the foremost platform driving this process within education, with implications for every facet of activity within a school. It provides a social platform which allows interaction and activity to take place in structured ways between all actors within the school.

They analyse this with a Foucauldian approach of a sort rarely seen within educational technology research, concerned with the school as “a data-rich site of surveillance” (38) since its inception. Schooling has always involved ordering and classification in order to bring about certain states of affairs in students, with contemporary platforms being a novel means through which to accomplish these longer-term ambitions. New technologies are designed to “increase the effectiveness, efficiency and productivity of surveillance and data-related capabilities in schools” (38). This context helps explain the rapid growth of ClassDojo from 80 users in its first week into a worldwide sensation with a translation function for 35 languages, which reflects its success in markets such as Vietnam, Turkey, China and India. More than three million teachers and 35 million children are claimed to be using the platform in 180 countries worldwide (39). They make the interesting observation that this growth appears to be “impervious to the typical bureaucratic gatekeeping processes of national education systems” (39). However despite its apparent novelty, “closer inspection reveals a technological layering over older ideas and practices” (47).

Its functionality rests on tracking student behaviour and allowing teachers to respond to that behaviour immediately. This comes with reporting of records of behaviour at the individual or class level, all of which can be accessed in real time from any location. It incorporates behavioural reinforcement, with positive, neutral and negative ‘dojo points’ assigned to students in order to encourage and discourage behaviours. Each student has an avatar and teachers are able to plan the behaviours which they wish to cultivate in students. The two feedback categories of “positive” and “needs work” clearly represent positive and negative reinforcement. These operate as numerical values which impact upon the student’s dojo points, accompanied by a positive noise or an abrasive one. Reports can be shared with parents over the platform, with registered parents able to receive weekly e-mail notifications inviting them to check their child’s report.
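The points mechanism described above can be sketched as a minimal ledger. Everything here (the class name, method names and point values) is a hypothetical illustration of the logic, not ClassDojo’s actual implementation or API:

```python
from dataclasses import dataclass, field

@dataclass
class DojoLedger:
    """Hypothetical sketch: a running tally of 'dojo points' per student."""
    points: dict = field(default_factory=dict)

    def award(self, student: str, behaviour: str, value: int) -> int:
        # "positive" feedback carries a positive value, "needs work" a
        # negative one; both simply adjust the student's running tally.
        self.points[student] = self.points.get(student, 0) + value
        return self.points[student]

ledger = DojoLedger()
ledger.award("student_a", "helping others", +1)  # positive reinforcement
ledger.award("student_a", "off task", -1)        # "needs work"
```

The sketch makes visible how little the record retains: only the numeric value survives, which is precisely the decontextualisation of behaviour the authors go on to criticise.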

In a context where discipline is a practical concern and politicised topic, the positive reinforcement of ClassDojo discipline has an obvious appeal. Despite concerns that external rewards undermine internal standards and self-control, they offer a quick and easy behavioural fix which has clearly proved alluring to teachers. It provides game-like elements for students (points system, avatars, leaderboard, badges etc), technological solutions to institutional problems for policy makers and a sense of control for teachers. This incentivises the continual expansion of surveillance in order to ensure more data. ClassDojo “requires teachers to monitor students constantly, catching students performing particular behaviours, generating, storing and analysing data through its software as this occurs” (43). The teacher becomes the conduit of datafication through assigning points to students for designated activities, feeding into the gamified elements of the system. It inclines the user towards “a standards-based approach to discipline, in which the standards take the form of numerical targets or benchmarks that have been affixed to a range of predetermined behaviours” (44). However this conversion into numerical rankings decontextualises behaviour and reduces its complexity. This renders action as a performance on which students can be continually judged and ranked, with the tracking of performance standing in for their underlying worth. The effect of this is to “force students to understand themselves through a process of calculation, constantly measuring themselves against narrow representations of ideal behaviours derived from dominant cultural understandings” (47).

My notes on Robertson, S.L. & Mocanu, A.M. (2019) The Possibilities of a Radical Diasporic Epistemology for the Development of Global Personhood in Education. International Studies in the Sociology of Education

The notion of ‘global competence’ was added by the OECD to its Programme for International Student Assessment (PISA) in 2018. This was deemed necessary in order to equip children to participate in a “more interconnected world but also appreciate and benefit from cultural differences”. It was explicitly framed in terms of winners and losers from globalisation, with the attendant distribution of uncertainty and prosperity. Robertson and Mocanu recognise that “it is important that schools and their societies actively promote the knowledge, skills, attitudes and values in young learners that enable them to live in a complex society” but remain critical of how this has been framed and of its inclusion in large-scale testing regimes.

The OECD’s large-scale tests (reading, maths, science) are an important means by which the governance of national education systems seeks alignment with global economic competitiveness. Only 32 countries participated in PISA in 2000 and this had risen to 80 by 2018. This take-up has led to the development of other global assessment tools, including the Teaching and Learning International Survey, the Survey of Adult Skills and PISA for Development. These collect data on national educational systems and then feed it back to “use this information to fuel a national conversation and as evidence to guide policies and practices”. There is a vast literature critiquing their operation, and the focus in this paper is on the assumptions made in framing the ‘globally competent’ student and the challenges entailed in measuring global competence. It has been introduced in direct response to rising inequality, with its impact on social cohesion and economic development. Their concern is with the political instability and potential populist backlash these in turn give rise to. As the authors put it, “A globally-competent student in the 2018 Framework Report is now one who seeks to dissolve tensions through building social bonds in the community, rather than potentially developing a deeper understanding of the consequences of unfettered global capital and predatory transnational firms”.

The contradictions of contemporary global capitalism are resolved in the imagined figure of the globally competent student. It is a limited framing of global issues involving measures which will be difficult to apply in many settings, reducing complex factors into a limited range of responses. The globally competent student might be culturally tolerant and sensitive to difference but no curiosity about structural causes is mandated, with the focus being on their capacity to navigate global labour markets in constructive and communal ways rather than on understanding their shared conditions. 40% of PISA member countries have declined to use it, due to the simplistic cultural assumptions underpinning it, in spite of having paid for the test tool. Many of the issues at stake elude simple framings and correct answers, running contrary to the mechanics of developing a test, where definitional consensus is needed and knowledge must be assumed to cross national borders.

In contrast they advocate a radical diasporic stance for understanding global competence. The originally essentialist sense of this term has come to be replaced by a use “to describe groups who have migrated involuntarily, preserving their interest – or connection – to their homeland, but also who have multiple groups worldwide”. This informs a way of understanding and engaging with cultural differences, moving beyond the OECD’s focus on thinking difference towards one of relating through difference. This involves more than empathy for someone’s circumstances, instead recognising how one could find oneself in the same circumstances. It is something which can’t be developed through “decontextualised top down global tests”.

It must instead take place at the level of teacher and learner in the classroom. One way might be through un/settling, drawing on experiential resources to open up new ways of thinking and seeing an issue that might formerly have seemed familiar. Another way would be through im/mobility, drawing on experiential resources to consider the role of mobility in human life and how the dynamics of mobility shape common circumstances. The third could be be/longing, encouraging students to reflect on the character of being and belonging within a global landscape. These are accompanied by suggestions for supporting exercises. They are offered in the spirit of agreement with the OECD’s intention that “the global in the form of knowledge, skills, attitudes and values could, and should, be brought into classrooms in the form of intercultural values, thinking and practices” while highlighting the question of the pedagogical means through which this can be made to happen.

My notes on Thompson, G., & Sellar, S. (2018). Datafication, testing events and the outside of thought. Learning, Media and Technology, 43(2), 139-151.

In this paper Thompson and Sellar cast a Deleuzian lens upon the data-hungry character of contemporary educational institutions. As they put it on 139, “Education institutions, and the people who work and learn in them, are constantly generating and using more and more data”, much of which is digital. This proliferation of digital data reflects and in turn encourages complex forms of software, in turn driving the development of interoperability systems to ensure they can work together. This interoperability makes ‘big data’ techniques possible in a way they otherwise wouldn’t be. This is an important reminder that data analytic possibilities don’t unfold naturally from new technology but rather require institutional work with influential sponsorship in order to make them happen. They caution that “the conceptual tools that we have for thinking through these new technologies in assessment and learning, particularly in relation to implications for politics, policy and practice, remain caught between dystopian visions of technology undermining human values, culture and subjectivity, or utopian fantasies about continual improvements to the efficiency and effectiveness of learning” (139-140).

Deleuze and Guattari help them argue that revolutions in institutions such as education proceed through challenges to orthodox images of thought (“the set of implicit presuppositions about what it means to think”) which “create new possibilities for thought that can bring about new activities, behaviours, organisations and connections” (140). However “thought that is formed in digital learning environments such as CATs ultimately conforms to a model: an information ontology” and cannot be revolutionary in this sense (141). The production of ever more data produces events, in so far as the data leads things to happen which otherwise wouldn’t, but it is constitutively unable to produce a new image of thought. This runs contrary to the disruptive rhetoric of educational technology which promises innovation and change.

Data can be produced in educational settings through a variety of means: deliberately through assessment and management procedures, volunteered through the use of digital technology, or produced through covert monitoring. It assumes a form as information through the models encoded within the hardware and software architectures at work in its production, as well as the processes at both levels involved in its generation. Informational ontologies are crucial to the operation of computing processes and to facilitating interoperability between them.

But where does the desire for data come from? Their Deleuzian approach sees desire as primary, productive of subjectivity. But this is where their philosophical approach seems weak to me, in comparison with the digital sociology approach they distinguish it from. It makes it difficult to think systematically about who desires it, how they desire it and how the context frustrates or facilitates it. The turn towards the para-subjective makes it hard to pin down subjects in any concrete way. They reflect on how “Groups and individuals come to believe that they need something to solve their problems and this lack is felt as a lack of data” (143). But their concern for the “abstract essence of this desire” immediately leads them away from this empirical specificity towards a ‘bargain with modernity’ in which data helps fortify people against the uncertainty which is endemic in modernity. Well sure but how does this specifically play out in educational systems? Or is this dynamic manifest uniformly across all spheres of society? The philosophical heterogeneity transmutes into an analytical monism and this is a problem for me. I’m not sure how it helps us make sense of education at all, as opposed to providing a vast panorama through which to sketch the questions about education we have in much grander terms.

I find the analysis much more useful when it comes to standardised tests, consisting of sample tests (a representative sample of pupils) and census tests (all within a population). These are administered uniformly and scored in a predetermined way. They predate neoliberalism and the authors observe that many critics of these ‘neoliberal instruments’ fail to pay attention to the process by which they are constructed:

  • The domain is “the specific area of interest that is being measured, whether it is a body of knowledge, skills, abilities or attributes” (144). Sub domains have a relationship with each other. What are tested are constructs which are designed to track unobservable characteristics through evaluating observable behaviours.
  • This involves sampling from within the domains, drawing on a selection of potential questions within a domain in order to construct a test which can be completed in a finite amount of time while still facilitating inference in the way described above. This is guided by a test specification plan.
  • Inference involves using test scores to assess achievement within a domain. Standardisation means this can be undertaken at the level of the individual, group, school or nation. Not all inferences are valid.
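The construction process in these bullet points can be illustrated with a toy sketch; the item bank, specification plan and scoring rule below are all invented for the example and stand in for the far more elaborate psychometric machinery of real tests:

```python
import random

# Invented item bank: two sub-domains of a hypothetical maths domain.
item_bank = {
    "number":  [f"number_q{i}" for i in range(50)],
    "algebra": [f"algebra_q{i}" for i in range(50)],
}

# Invented test specification plan: how many items to draw per sub-domain.
spec_plan = {"number": 6, "algebra": 4}

def build_test(bank, plan, seed=0):
    """Sample items from each sub-domain according to the specification plan."""
    rng = random.Random(seed)
    return [q for sub, n in plan.items() for q in rng.sample(bank[sub], n)]

def infer_score(responses):
    """Inference: fraction correct stands in for unobservable achievement."""
    return sum(responses.values()) / len(responses)

test_form = build_test(item_bank, spec_plan)  # 10 items, finite test time
responses = {q: 1 for q in test_form[:7]} | {q: 0 for q in test_form[7:]}
score = infer_score(responses)
```

Standardisation then consists in administering the same form and scoring rule to everyone, which is what allows scores to be aggregated and compared at individual, group, school or national level.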

The computerisation of these tests promises to capture transactional data, facilitating big data analytics. Selection can be built into the test itself, allowing branching or pathways depending on the student’s response and aptitude. But what sort of event is this? They suggest three concepts to classify these:

  1. Break-events involve the movement from one pre-existing category to another. As they put it, “Labelling a student as above-average, a school as failing, or a system as excellent based on aggregated data, are all examples of break-events that express potentiality as information” (146). These categories matter and they bring some possibilities to the fore while suppressing others.
  2. Crack-events are forms of change that lack the perceptibility of moving between pre-constituted categories. They occur all the time (“such as feelings of bewilderment, anxiety or elation when sitting a test”) but often don’t manifest in a recognisable way.
  3. Rupture events occur when the coordinates of thought are lost, as cracks aggregate into a potentially transformative moment of change. Revolutionary technologies could only be such if they produce rupture events.

Standardised testing can generate breaks and cracks but not ruptures because they are “created according to an image of thought that limits (a) what tests should be and (b) what the purposes of schooling are” (148). They cannot break with the past because they are premised on a sense of correct answers and simple errors, moving people between discrete categories on that basis.

My notes on Orben, A., Dienlin, T., & Przybylski, A. K. (2019). Social media’s enduring effect on adolescent life satisfaction. Proceedings of the National Academy of Sciences.

Does social media make young people unhappy? This is the question which this paper by Amy Orben, Tobias Dienlin and Andrew K. Przybylski addresses using Understanding Society (the UK Household Longitudinal Study) data from 2009–2016. They caution that most responses to this question have been synchronic, comparing different people at the same point in time in order to draw conclusions about something that necessarily relates to the person over time. They offer their study against a background where “trivial trends are routinely overinterpreted by those under increasing pressure to rapidly craft evidence-based policies” (1).

The longitudinal data concerning 10- to 15-year-olds (n = 12,672) means that within-person questions can be asked of the data. They produced a range of models working with different subgroups, exploring the statistical relationship between self-reported hours spent using social media (“on a normal school day”) and different areas of reported life satisfaction (Friends, Appearance, School, Work, Life, Family, Mean). For male adolescents social media predicted “tenuous” (does this mean extremely small…?) decreases in satisfaction with life and mean satisfaction. For female adolescents it was “a predictor of slightly decreased life satisfaction across all domains, except satisfaction with appearance” (3). Most of the effects they found are trivial and were not statistically significant in over half of the models. They recognise the limitations of self-report data but in the absence of social media firms “sharing granular user engagement data and participating in large-scale team-based open science” (2) the best datasets we have are similarly reliant upon self-report.

My notes on Wood, D. M., & Monahan, T. (2019). Platform Surveillance. Surveillance & Society, 17(1/2), 1-6.

In this editorial, David Murakami Wood and Torin Monahan introduce a special issue of Surveillance & Society which considers platform capitalism from the perspective of surveillance studies. Their focus is on how “digital platforms fundamentally transform social practices and relations, recasting them as surveillant exchanges whose coordination must be technologically mediated and therefore made exploitable as data” (1). This highlights how surveillance is intrinsic to platforms, using the epistemic primacy they enjoy over activity taking place through them in order to “become dominant social structures in their own right, subordinating other institutions, conjuring or sedimenting social divisions and inequalities, and setting the terms upon which individuals, organizations, and governments interact” (1). They draw other institutions into their logic of surveillance, while encouraging the emergence of subjective orientations consistent with and supportive of that logic. They argue that the centrality of surveillance to platforms means that “surveillance studies is uniquely positioned to investigate and theorize these phenomena” (2).

Reflecting on the slipperiness of the term ‘platform’, which has transmuted from a precise computer science term to a pervasive concept in only a few years, they suggest it is “not just a particular kind of organizational form associated with the tech industry and social media, but an entirely new mode of governance, perhaps an authentic political economic descriptor of the structure of the information age” (2). In computing “it is the foundation upon which other computing processes are built, or the environment within which such processes run” (2), which can encompass hardware, the operating system, and higher-level environments within which other applications can run. This computing usage has spawned a broader tech usage in which services like social media are seen as platforms, in that they “form the basis for other activities, which can be experienced and described from the individual user perspective as something with minimal technical knowledge and input” (2). Then there is what Wood and Monahan describe as a maximalist approach in which ‘platform’ is used as a metaphor for infrastructural operations across multiple arenas, as in the case of someone like Bratton who sees the platform as a replacement for the corporation and nation state. They summarise the classical sense of infrastructure on pg 3:

Infrastructures establish contexts for practice. They enable, support, and afford certain practices while necessarily disabling, eroding, and resisting others. Whereas classically, one might envision infrastructures as the various forms of hardware that sustain life—the pipes, roads, electrical lines, and communication grids that form the backdrop of modern existence—they manifest analytically as relational properties apprehended through use (Star and Ruhleder 1996; Larkin 2013) or failure (Bowker and Star 1999; Graham and Thrift 2007). Infrastructures are also necessarily political in their differential allocation of resources and services and in their establishment of regimes of capital and violence (Cowen 2014; Parks and Starosielski 2015).

In this sense, we can say that “digital platforms are already becoming infrastructural in their properties and effects” (3). The big tech firms are increasingly involved in undersea cabling operations, as an example of infrastructure in the classical sense. But the scale of operation of some digital platforms rivals that of existing public or private infrastructure, breaking down any sense of a clear distinction between them. If I understand correctly, their point is to ask how something like Facebook could be regarded as non-infrastructural while something like the cellular network would be infrastructural. If it’s just a matter of how many people are on the grid and how pervasive it is, the biggest digital platforms are clearly becoming infrastructural. But this reflects a wider ambiguity, which I think cuts through this literature, concerning whether platforms are a type of infrastructure, the form which emerging infrastructure will take, or something analogous to but different from infrastructure.

But this still raises the question of what the systemic implications are of the proliferation of platforms. Is this the next stage of capitalism or possibly something beyond it? Is it something which deviates from the logic of capitalism or something which expresses its next stage? Surveillance capitalism is just one of “an increasing multitude of terms for a hydra-headed phenomenon that also includes the information economy, affective capitalism, the gig economy, the sharing economy, and many more” (4). They argue that the commodification of ‘behavioural exhaust’ described by Zuboff might be reaching its peak given the growing political and regulatory backlash against it. They suggest her “inevitable path to a Skinnerian society of rational control” in which datafication consumes life itself lacks an economic rationality and that “what appears to be emerging is a battle for the ‘payment space’ and specifically how micropayments might be made properly functional in both online and offline spaces” (4). The partnership between Microsoft and MasterCard suggests a new infrastructure of persistent identification without the ubiquitous datamining and advertising which constitutes the current business model of platforms. Recognising this gap between technical infrastructure and business model is important if we are to see the opportunities which become apparent during such a stage of transition. As they write, “The platform could be just another differently shaped vessel for capital accumulation, or it could be a way of finally breaking the alignment of the state and nation and reorienting more ineluctably with capital through pervasive surveillance and the persistent manipulation of data, in an entirely new form of governmentality”. But there are more hopeful possibilities which are opened up by the same transition which we must “see and seize” (5).

My notes on Rahwan, I. et al. (2019) Machine Behaviour. Nature, 568, 477–486

The proliferation of intelligent machines, ranging from machine learning systems through to their embodiment in robotics, raises the question of how their behaviour should be studied and understood. In this agenda setting paper, the team of authors suggest this now requires the deliberate formation of a new field, bringing together many parts which have already taken shape. From 477:

This field overlaps with, but is distinct from, computer science and robotics. It treats machine behaviour empirically. This is akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry—intrinsic properties—with the study of ecology and evolution—properties shaped by the environment. Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate

These AI agents, virtual and embodied, tend to be studied by their creators with a narrow focus on whether they are performing their intended tasks. However a broader approach is necessary as they “increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare” (477). This would respond to growing concerns over the lack of predictability surrounding their consequences, the loss of human oversight and the potential harms arising from once human-led tasks being increasingly taken over by AI agents. They cite three main motivations for this:

  1. The ubiquity of algorithms has reached an unprecedented point, operating across fields as diverse as news, credit, entertainment, travel, security and dating. They are increasingly likely to play a role in raising the young, caring for the old and coordinating collective behaviour.
  2. The complexity and opacity of algorithms is increasing rapidly, leading to black boxes in which the mechanism linking inputs and outputs is uncertain, even if the initial code was well understood, because the training data and training process have uncertain results. This is compounded by the proprietary character of the work underlying them. Their feedback loops with diverse environments add another layer of complexity to this process.
  3. Their capacity to have beneficial or detrimental effect on humanity necessitates scrutiny because “with the power to nudge human behaviours in positive or intended ways comes the risk that human behaviours may be nudged in costly or unintended ways” (478).

This is a necessarily interdisciplinary field which is currently in a nascent period of integration, operating in an ad hoc way. There is a skills gap on both sides among the groups with a stake in this, described on 478:

These scientists may be expert mathematicians and engineers; however, they are typically not trained behaviourists. They rarely receive formal instruction on experimental methodology, population-based statistics and sampling paradigms, or observational causal inference, let alone neuroscience, collective behaviour or social theory. Conversely, although behavioural scientists are more likely to possess training in these scientific methods, they are less likely to possess the expertise required to proficiently evaluate the underlying quality and appropriateness of AI techniques for a given problem domain or to mathematically describe the properties of particular algorithms.

But their work will necessarily move beyond optimisation of AI agents against benchmarks, necessitating “a broader set of indicators, much as social scientists explore a wide range of human behaviours in the realm of social, political or economic interactions” (479). It’s notable how there’s no room left here for interpretation of AI agents, human responses to them and the relation between the two, despite the potential value which a broader methodological repertoire could offer. The fact that they later cite a paper from 2018 to say that lab-based research has begun on human-machine interaction speaks volumes, even on the most charitable reading that they’re aware of HCI but see it as a different endeavour. From 479:

As such, scholars of machine behaviour spend considerable effort in defining measures of micro and macro outcomes to answer broad questions such as how these algorithms behave in different environments and whether human interactions with algorithms alter societal outcomes. Randomized experiments, observational inference and population-based descriptive statistics—methods that are often used in quantitative behavioural sciences—must be central to the study of machine behaviour. Incorporating scholars from outside of the disciplines that traditionally produce intelligent machines can provide knowledge of important methodological tools, scientific approaches, alternative conceptual frameworks and perspectives on the economic, social and political phenomena that machines will increasingly influence.

The theoretical approach they suggest is a taxonomic one, drawing on a parallel with ethology. As they put it on 480, “Machines have mechanisms that produce behaviour, undergo development that integrates environmental information into behaviour, produce functional consequences that cause specific machines to become more or less common in specific environments and embody evolutionary histories through which past environments and human decisions continue to influence machine behaviour”.

They see mechanisms for generating behaviour as emerging from the interplay between an algorithm and its environment, even if a particular algorithm is embedded in an interlocking structure to constitute a complex AI agent. Machinology (my term, not theirs) would ask how particular behaviours emerged from this interplay, whether from human engineering choices, training, interaction or feedback from the environment. These behaviours serve a function or otherwise for human stakeholders, leading to a selective force which might make some more common, e.g. if useful behaviours are reproduced in further AI agents. They reference institutions as sources of incentives which shape the behaviour of human stakeholders. Institutional incentives can cause socially pathological machine behaviours to spread, for instance systems which maximise user engagement on social media, leading to the proliferation of ‘fake news’ while ensuring the success of the platform itself.

These are two parallel levels of explanation, one explaining ontogeny (how the entity has the characteristics it does) through the identification of causal mechanisms leading to the development of behaviour, the other explaining phylogeny (why the entity has developed these characteristics rather than others) through the analysis of adaptive value. The transmission of these characteristics is varied and open, though mediated through institutions like intellectual property laws and regulatory regimes. As they put it, “machines may exhibit very different evolutionary trajectories, as they are not bound by the mechanisms of organic evolution” (481).

They suggest three levels of analysis: individual machines, collective machines and groups of machines embedded in an environment. The first is currently undertaken by computer scientists and engineers, often looking at an individual machine solely in terms of properties arising from its source code and design. These studies can be conducted in a within-machine or between-machine manner, looking at variation in how one machine behaves across contexts or at how the same behaviours vary between machines. The second looks at “the interactive and system-wide behaviours of collections of machine agents” (482), as can be seen in approaches such as multiagent systems and computational game theory. Machines using simple algorithms for local interactions can aggregate into complex behaviours at a collective level, studied as aggregation dynamics, but also into forms of social learning which might take place when humans and institutions provide a potent source of environmental feedback. Thirdly, machines shape human behaviour and vice versa, necessitating study of these hybrid and embedded realities. But these dynamics are likely to operate at the same time, in complex and hybridising ways.

They end with a consideration of some of the methodological, theoretical and logistical obstacles to machinology, the most urgent of which are the uncertainty over legal penalties for reverse engineering algorithms and violating terms of service, as well as the risks involved in anthropomorphising machines (with their potentially alien intelligences) in a way which distracts from the crucial human operations involved in providing the training data.

My notes on Chase, Z., & Laufenberg, D. (2011). Embracing the squishiness of digital literacy. Journal of Adolescent & Adult Literacy, 54(7), 535-537.

Even if widespread disagreement remains about what constitutes digital literacy, everyone seems to agree that it is important to the success of students. As Chase and Laufenberg point out, “if digital literacy is simply reading and writing in a digital environment, there is no need for the new terminology” (535). Instead they suggest it is a format, genre and tool standing within literacy more broadly, as opposed to being a concept which tracks something distinct from it. To read digitally means “students and teachers must learn to read beyond the printed page” and “across all those platforms which can be used to create”. It is what enables reading “across multiple forums, media, linguistic registers, and purposes” (536). It entails following connections, contextualising what you find and switching codes as necessary through the journey you undertake on this process of discovery.

They offer an interesting example of an exercise that can be used to teach digital literacy which involves students finding articles on the same event from a regional publication within the United States, one from a different region and another from outside the US. In essence, it’s about triangulating between the accounts in order to explore the gap between the reality of the event and the different ways it has been constructed in different fora. Has anyone compiled a book of digital literacy exercises used by teachers? If anyone knows of one I’d really love to look at it.

They draw an important distinction between a curriculum predicated on digital technology and one served by digital technology. This places inquiry based learning at the centre, with digital technology being used as a powerful set of tools to this end. It aims to produce “students that are skilled in consumption, evaluation, and creation of content” (537).

My notes on Boath, E., Vigurs, K., & Frangos, J. (2018). Twittering Away-Is twitter an appropriate adjunctive tool to enhance learning and engagement in Higher Education?. Innovative Practice in Higher Education, 3(2).

Twitter has often been framed as a potential tool for teaching and learning. It can be used for virtual peer support groups, developing interactive networks, sharing knowledge and building networks. It “allows learning conversations to take place both virtually and publicly, thus removing them from the isolation of classrooms and academic ivory towers” (104). It can promote asynchronous learning, generate online community and facilitate immediate formative feedback. There are a whole range of ways in which it can be used, but what do students make of these possibilities?

In this study, Elizabeth Boath, Katy Vigurs and Juliette Frangos investigate student experiences of Twitter through a study of a convenience sample of 44 social welfare law students. Its focus was on Twitter as “an adjunctive learning tool to provide learners with access to contemporary discussion relevant to their subject, which they were invited to identify, understand and disseminate to the wider group” (105). During a Welfare Benefits and Money Advice module of a BA in Social Welfare Law, Policy and Advice Practice, students were invited to engage via Twitter lists, a Twitter chat, and direct engagement with lecturing staff, each other and experts in the field within and beyond the academy. They were asked to identify information relevant to their course and share it with others using a dedicated hashtag. A 17-item questionnaire using closed and open questions was designed to explore their views of this activity and the impact it had on their learning experience. It was completed by 11 of the 44 students (25%). Three of them had previously been regular users of Twitter, five had not used Twitter before but were now regular users and three were infrequent users previously and remained so.

Their responses conveyed the usefulness of Twitter for enhancing knowledge, particularly on emerging events and breaking news, though this was coupled with concerns about the reliability of Twitter sources. Some suggested they found the platform overwhelming, with too much information and too little time to process it. This reinforces existing research which has found that Twitter’s use to support students may be limited. The authors suggest that “if supported by institutional digital scaffolding such as time management strategies and training, Twitter may be a useful adjunct to traditional physical learning spaces that facilitates the enhancement of knowledge and building of professional networks” (108).

The real question is what form that scaffolding would take and whether this would be worthwhile even if it was provided. The teaching might be effective but are students interested? Interestingly, only two of them agreed that Twitter had added to their enjoyment of the module. Could there be much more enthusiasm for Twitter on the part of educators than on the part of students? They note that the students in question “tend to be more mature students, to be employed, have children and also some undertake additional caring roles” and so may be atypical of the broader population (108).

My notes on Selwyn, N. (2016). Minding our language: why education and technology is full of bullshit… and what might be done about it.

This wonderfully titled editorial takes issue with the tendency for educational uses of digital technology to be “discussed in enthusiastic and often exaggerated terms”, leaving “idealistic and impassioned talk” proliferating in an “area awash with bold assertions and confident claims” (1). This has gone hand-in-hand with a rebranding from ‘computer-based instruction’ and ‘computer-assisted learning’ in the 1980s to ‘technology-enhanced learning’ and ‘connected learning’ in the 2000s, building in the assumption that learning is taking place and that technology is responsible for it. This language can be easy to diagnose in isolation, yet risks washing over us when we encounter it on a daily basis. Such terms “should not be treated simply as benign or neutral words, terms, phrases and statements” because they are “powerful means of advancing the interests and agendas of some social groups over the interests of others” (2).

They operate by fixing outcomes as certain, squeezing out the messy realities of on-the-ground implementation and the possibility that the technology could be applied for other purposes. They render unambiguous, treating it as a technology to enhance learning, what in reality is profoundly ambiguous and should be treated as such. As Selwyn puts it, “the possibility of technology not leading to learning and/or other educational gains is rarely a matter for consideration” (2-3). They render the process in the active terms of the learner and learning, with technology impacting upon or transforming pre-existing educational activities and processes. This confidence in bringing about a transformation is belied by “a cloying tone, involving the use of playful, homespun and self-consciously childlike language” (3). Unfortunately for its proponents, “the past 100 years show that education has been largely un-transformed and un-disrupted by successive waves of technological innovation” (3).

There is little evidence base for these grandiose claims. So why do these simplistic ways of talking about educational technology persist? Drawing on Frankfurt, Selwyn argues that bullshit involves a cynical disdain for the way things are, as opposed to lying, which implicitly entails the recognition of a truth. Bullshit about educational technology is “the result of people talking loudly, confidently and with sincerity regardless of accuracy, nuance and/or sensitivity to the realities of which they speak” (4). So much is missed out and the politics of educational technology is deeply shaped by these omissions. From pg 4:

For example, it is surely not satisfactory that the dominant framing of education and technology blithely marginalizes, ignores and/or denies the complex and compounded inequalities of the digital age. Similarly, it is surely not helpful to avoid proper discussion of the political economy of digital education, and the corporate reforms of public education through privately sponsored technological means. The limited language of education and technology therefore needs to be challenged by anyone concerned with matters of fairness, equality and genuine empowerment through digital education.

He suggests a recoding in response to these trends, “encouraging a counter-lexicon that reflects more accurately the conflicts, compromises and exclusions at play” (5). This would be a “language of education and technology that unpacks more aptly” (6). It would involve debate being “prised away from celebrity musings and privileged pronouncements, and towards the voices, opinions and direct experiences of the various real-life ‘publics’ of education and technology – for example, students, educators, parents, employers, administrators, designers and developers” (6).

My notes on Selwyn, N., & Stirling, E. (2016). Social media and education… now the dust has settled. Learning, media and technology, 41(1), 1-5.

This special issue of Learning, Media and Technology is a sequel to a 2009 issue which began to inquire into the emergence of ‘social software’ and what it meant for teaching. Seven years later, with social media platforms ubiquitous, the online/offline distinction having collapsed and a ‘social’ element being a standard feature of new technology, it asks how social media platforms are actually being used in educational settings, what the implications of this use are, how they interact with their institutional context and how they are transforming it in the process.

The main difference they see between 2009 and 2015 is “the extent to which social media have become part of mainstream digital practices and everyday life in general” (2). They make the interesting point that this means the term itself now lacks resonance outside of the academy, as platforms have faded into the background of everyday life:

“The pervasiveness of social media is illustrated neatly by the lack of resonance that the term now has with the general population. The characteristics and qualities that made social media such a distinct and exciting ‘thing’ in 2009 are now normalized to the point of not being an obvious topic of conversation, let alone meriting a specific label” (2)

Yet their uptake is far from uniform. Many people don’t have internet access, and many are subject to a ‘device divide’ in which they are only able to access platforms through phones and/or non-broadband connections. These divides are profoundly regionalised. They also note how significant it is that the study of social media has grown in the way that it has, with approaches as different as platform studies and computational social sciences illustrating how wide this field is, if indeed it constitutes a field at all.

Social media has been an increasingly prominent topic in education journals. However, as they put it, “many of the most interesting (and, we would argue, most important) questions about social media and education remain largely ignored by education researchers” who tend “to look primarily for good news, ‘best practice’ and examples of ‘what works’”. There is much hope still that social media will be “the ‘Killer App’ capable of initiating significant shifts in how people learn and engage with education”. However the social media research outside of education has shown us that its use by young people is complex, contradictory and contested. We need educational research that confronts this multifaceted character head on. There are exceptions to this but these studies “remain overshadowed by broad-brush accounts of social media use in the classroom” (4).

My notes on Burawoy, M. (2002). Public sociologies and the grass roots, speech to SWS Wrightsville Beach, February 7, 2002.

In this short text Burawoy takes issue with the mythology of decline which intellectuals are spreading about their own existence, as well as the associated belief that “a public sociology that dealt with the big issues of the day” has also begun to die out. However the supposed golden age of public sociology in the 1950s was in fact dominated by a small number of figures during the era of McCarthyism. Even if contemporary professional sociology has prioritised technique over substance, the same was true in this “era of sociology as messianic science”. There are many more public sociologists today, in this sense of talking to the big issues of the day, than could be found at the time. They might be more narrowly focused but they are nonetheless tackling crucial issues of broader public concern. It therefore seems untenable to see public sociology as in decline.

However, is it true that sociology no longer deals with the big issues of the day? Burawoy cites the public engagement activity of the ASA, in its establishment of Contexts magazine and its issuing of statements and authoring of amicus briefs. He suggests the belief in decline can reflect a narrow Ivy League focus, failing to recognise a shift in the centre of gravity away from private universities to public ones, encompassing a deeper professionalisation alongside an expansion of public sociology. This narrow focus represents “an elitist conception of public sociology whose currency is writing op-ed pieces for The New York Times, visiting the White House or writing best-selling books for an emergent middle class” (3). In contrast, Burawoy’s vision of public sociology engages a wider range of publics, “not just the readerships of national media which is an amorphous, invisible, passive, public made up of strangers but also the much thicker publics that must begin with our students (our best emissaries to the world beyond), extending to local communities (such as communities of faith which we address in our churches), or social movements we stimulate to achieve greater self-awareness (such as civil rights or labor)” (3). He cites the feminist movement as a prototype, constituting its public and bringing it to self-awareness and mobilisation.

He suggests prophets of decline are actually talking about a particular type of public sociologist: male, inner-directed, alienated from public and profession. In contrast, he sees the rise of other-directed public sociologists connected to both sociology and publics. This is “not the free floating intellectual hoping to reach audiences on distant shores” (3) but rather the organic public sociologists whose work might be invisible to the discipline. This is why it’s now necessary to battle to make this work visible, democratising public sociology in the process. This involves collectivising public sociology, in order to recognise our common projects latent within work which might be undertaken as individuals, in communities and largely unrecognised within the profession.

My notes on Davies, H. C., & Eynon, R. (2018). Is digital upskilling the next generation our ‘pipeline to prosperity’?. New Media & Society, 20(11), 3961-3979.

It’s so rare for a paper to have such a wonderfully informative title. Huw Davies and Rebecca Eynon interrogate the assumption that “teaching young people digital skills and literacies will help advanced market economies compete with their rivals and deliver prosperity” (3961-3962). Computer Science is now part of the National Curriculum for all children in England aged 5-14 after a campaign by a range of actors, underscoring the creative dimension of computing alongside its importance to the economy and status as a life skill. Through doing so, “the creative use of technology was assimilated into a ‘set of capacities’ or skills that professionals acquire in order to participate in the labour market” (3962). Digital skills are framed as the “primary antidote to economic decline” and coming disruptive shocks, with the pipeline becoming “the default metaphor in policy discourse to suggest the economy is a machine that feeds on a fixed, constant supply of digitally up-skilled youngsters” (3962). These skills are presented as a way to enhance social mobility, incorporating digital skills into a particularly narrow and nationalistic understanding of economic need. In the process, caution Davies and Eynon, we see a “highly problematic co-option of important intrinsic or civic benefits of digital engagement into economic discourse” (3963).

Their study was undertaken in two deprived areas of Wales, with fieldwork in two schools affected by similar inequalities, one in a former mining town and the other in a deprived area of Cardiff. Their questionnaire was administered to one year 9 and one year 10 class at each school and one year 12 class in the Cardiff school (the other school had no sixth form). 15% of respondents reported parents who had been to university and around 70% of these parents worked, typically in the manual or service sector. These questionnaires were then supplemented by workshops undertaken during ICT classes in a year 9 and a year 10 group at each school. These activities focused on gaming practice, marketing games and asking students to draw mind maps to represent their digital ecospheres. I thought this was a particularly interesting method and I’ve been thinking recently about how to use creative methods like this to explore people’s platform imaginaries. For the year 9s ICT was compulsory whereas it had been deliberately chosen by the year 10s. The third method was semi-structured interviews with 10 students from each year group at each school (n=50), with questions about digital practice, motivations, ambitions and skills. These were used to build a typology of the ways in which young people talk about their technology practice, drawing initially on the interview and workshop data, supplemented by additional data from the survey.

The cyber kid discourse seen at the start “tacitly assumes young people’s motivations and the class of conditions that influence these motivations are (or should be) universal” (3966). It goes hand-in-hand with a tendency to homogenise digital technology, seeing it in more or less uniform terms. This is belied by their finding that “Digital technology’s multifunctionality is mobilised by young people who have different personalities and socially shaped motivations, incentives and constraints guiding them” (3966). These are the categories they developed for the taxonomy:

  • Non-conformists: mostly young women, who experienced a sense of estrangement from the school’s prevailing culture yet were able to find interlocutors online. The majority were in the 15-year-old group and were “using social media as a resource to develop their identity” (3967). Their orientation to digital opportunities was entirely on their own terms, including the possibility of entrepreneurial activity through these means.
  • PC gamers: mostly young men, with a passion for gaming and the technical skills that went with it. This included hands-on experience building PCs, with CPUs adequate for the intense graphical demands of modern games. Their interest in PCs came from their experience of the limitations of consoles. Interestingly, many reported that it had initially been a way to spend time with their fathers but as they progressed it became a peer-to-peer activity, suggesting gaming as a form of male sociality. Furthermore, their fathers were more likely to be in technical or professional work than other children’s, suggesting a vector of class reproduction. The coding that had been presented to them at school was largely unappealing, yet they engaged in highly technical pursuits ranging from the aforementioned PC building through to YouTube channels, writing games in C++ and some minor consulting.
  • Academic conservatives: mostly female, with a shared commitment to formal education which they saw as more important than digital technology. They framed it as a distraction from these much more important ends. They used social media but were measured and controlled in their digital practice, not having online friends who they didn’t also know offline. Their aspirations lay in what partisans of the digital future would see as quintessentially 20th century jobs.
  • Pragmatists: their use of technology was restricted to specific purposes, tending to see it as a means to an end rather than an end in itself. For example social media would be used to arrange meet-ups or reminisce about those that had taken place in the past. They had often experienced digital exclusion (e.g. “limited money for games their friends played, lack of home access to the Internet, feeling behind in terms of digital skills, or having constrained access to the Internet for safeguarding purposes” 3972) and their pragmatism could be framed as a response to this.
  • Leisurists: the largest group, tending to see the internet primarily as a site of entertainment and cultural consumption. For them technology is a way of pursuing their interests, often happily leaving them within walled gardens and synchronising devices with parents and family. These activities “tended not to translate into pursuing a passion or developing skills that could be monetised in the digital economy” (3973).

The digital skills discourse suggests (a) a convergence between the needs of the economy and the needs of young people which can be met through digital skills, (b) a denial of alternative motivations for young people that may not feed into this and (c) an absence of structural constraints upon where digital skills can take them in the labour market. Their paper is a challenge to “the deterministic discourses that tell young people learning to code would be an act of economic self-interest that will, in turn, defibrillate the economy” (3976). The fact of having coding skills won’t lead to some magical capacity to transcend structural conditions, particularly for young women in an overwhelmingly male-dominated industry.

 

My notes on Marres, N. (2018). Why We Can’t Have Our Facts Back. Engaging Science, Technology, and Society, 4, 423-443.

“We want our facts back” is a semi-joking remark Noortje Marres overheard an academic make, which captures a wider response to what has been called ‘post-truth’. Many feel increasingly inclined to take a normative stance in support of ‘facts’ and feel nostalgic for “a time when experts still seemed to have unquestionable authority and felt secure in this authority, when government ministers didn’t say things like ‘people have had enough of experts,’ and the notion that evidence should play a central role in public debate and politics had widespread, even taken-for-granted, institutional support” (423-424). Appealing though it might be, Marres points out that this position ignores the fact that not only were partisans of evidence a minority in public life in the 90s and 00s, it was also widely recognised that evidence-based debate was not in itself a solution to political problems and could even be problematic by putting politics at risk through an over-reliance on experts. While recognising the growing indifference of public speech to factfulness and the lack of consequences attached to outright lies, Marres argues we need to look more deeply at the “changing architectures of the public sphere” (424). The many initiatives which seek to restore the place of factfulness within public life (disinformation awareness campaigns, knowledge literacy programmes, fact-checking services) risk reinstating an outdated strategy for securing facts in public debate which is based on authority. It entails a divide between knowing and unknowing subjects, those with facts and those without, which runs contrary to any aspiration for a knowledge democracy. Achieving this will require institutional, media and technological arrangements which are very different from those of the much claimed golden age of factfulness.

Social media has become a battleground for these debates, with fact checking initiatives using techniques ranging from ‘human moderation’ through to automated fact verification in order to apply journalistic procedures to online content. The platforms themselves have invested increasingly in moderation teams, as well as using automated tools to demarcate problematic from unproblematic material. This has led inter alia to ‘disputed content’ banners which can now be attached to certain pieces of content on Facebook, highlighting that a third party fact checking operation has cast doubt upon it. There have been questions raised about the working conditions of those undertaking this epistemic labour in click farms, but less scrutiny of the epistemology and methodologies underpinning them. They rely for their legitimacy on ideals of public knowledge and scientific citizenship but operate on a basis which is in tension with these, assuming that “quality is an attribute of information itself” (426). This runs contrary to what had become an increasingly dominant sense of information as *social*, defined by its circulation and connections. Now what is at stake is instead seen to be the properties of content itself: “What is said to be in need of attention and intervention is the “veracity” of online statements and the potential duplicity of online sources” (427). For instance Factmata seeks to “cross-reference any claim circulating online onto a database of so-called verified statements, in order to validate or invalidate it” (427). A claim about immigration would thus immediately be linked to public data about the issue, allowing users to ‘become their own fact checkers’. In this it embodies logical positivism, seeking to decompose statements into units which can be matched against experience or other verifiable statements.
Marres makes a particularly interesting point here about how logical positivism and computer science shared a common inspiration in Frege’s logic and similar work, going some way to explaining the tendency for positivism to be reinstated by the turn to AI in systems like Factmata.
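To see why this counts as logical positivism in software form, it helps to sketch the verification-by-lookup logic that Marres attributes to such systems. The sketch below is a toy: the statements, verdicts and normalisation rule are all invented for illustration, and this is in no way Factmata’s actual implementation.

```python
# Toy sketch of demarcationist fact-checking: a claim is decomposed
# to a canonical form and matched against a database of "verified
# statements". Quality is treated as an attribute of the statement
# itself, not of its circulation. All data here is invented.

def normalise(statement):
    """Reduce a statement to a crude canonical form for matching."""
    return " ".join(statement.lower().strip(" .!?").split())

# Hypothetical database: canonical statement -> verdict
VERIFIED = {
    normalise("Net migration to the UK was positive in 2016"): True,
    normalise("The UK sends £350 million a week to the EU"): False,
}

def check_claim(claim):
    """Validate or invalidate a claim by lookup; anything not in the
    database of verified statements is simply unverifiable."""
    verdict = VERIFIED.get(normalise(claim))
    if verdict is None:
        return "unverifiable"
    return "validated" if verdict else "invalidated"
```

What the sketch makes visible is the binary, atomistic character of the procedure: each statement is judged in isolation against a pre-given corpus, with no way of registering how the claim circulates or what it does in debate, which is exactly the limitation Marres goes on to develop.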

Fact checking systems implement a methodology and perform a service, but they also carry a distinction: “that between legitimate and illegitimate claims to knowledge” (428). These putatively technical procedures in fact draw normative boundaries, ones which it’s important we understand. She references Rorty’s account of demarcationism: defining validity or its absence as a binary attribute of atomistic statements, i.e. can they be traced back to observational statements or not? The normative dimension comes from the question of how to police this boundary between different types of statements. It also entails a sense of actors as being responsible for the epistemic quality of debate, by drawing attention to the character of their statements. In this world view, ‘good’ sources reliably produce valid statements, with ‘good’ users capable of discerning their presence. This is what Marres calls the politics of demarcation. It sees ‘fake news’ as something which emerges from outside the technology: “it is the type of information sources that the Internet makes available, on one hand, and the users’ lack of skills capable of discerning the difference between valid and invalid statements, on the other, that are said to be responsible for the prevalence of dodgy content in this media environment” (428). Fact vs fiction pages were part of internet culture in the 1990s and demarcationist technologies predate the rise of ‘fake news’. But whereas the blame was once attributed to deviant online subcultures such as anti-vaxxers or flat-earthers, it’s now increasingly marked in social terms such as education levels. This dichotomy of responsible and irresponsible users roughly maps onto a broader “opposition between educated progressives and, on balance, less educated supporters of populist and nationalist causes” which is at the heart of contemporary debates about ‘fake news’: it has the potential in practice to position nascent ‘populists’ as the epistemic crisis, who need to be beaten back and suppressed through technological means in order to ensure the health of the public sphere. They might even reinforce the distinction in a way that furthers the political project of the latter, as can be seen in the far-right backlash against social media firms ‘deplatforming’ leading figures.

Demarcationism can’t account for the role that digital media has played in undermining respect for knowledge in the first place, instead externalising it into the figure of deviant users and deviant content producers. The mechanism doing the undermining is simple: algorithms for content selection are designed to ensure maximum circulation in order to build the widest possible audience. The account of this on pg 431 was excellent:

“Online platforms, then, reward messages that spread instantly and widely with even more visibility, and, as tabloid newspapers invested in maximizing advertising revenue also found out in previous decades, sensational rather than factual content turns out to satisfy this criterion of maximal “share-ability” best. A commercial logic here gives rise to a circular content economy, one without referent: content that gets shared a lot is rewarded with more visibility, thereby increasing its share-ability.”
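The feedback loop in that passage can be illustrated with a toy simulation. The update rule and numbers below are my own invention, not anything from the paper, but they show the “circular content economy” in miniature: visibility rewards shares, shares grow with visibility, and nothing about truth ever enters the loop.

```python
# A minimal, deterministic sketch of the "circular content economy":
# content that gets shared is rewarded with more visibility, thereby
# increasing its share-ability. The parameters are invented.

def simulate(sensationalism, rounds=10):
    """Iterate the loop: shares grow with visibility * sensationalism,
    then the platform reallocates visibility in proportion to shares."""
    visibility = {item: 1.0 for item in sensationalism}  # start equal
    shares = {item: 0.0 for item in sensationalism}
    for _ in range(rounds):
        for item, pull in sensationalism.items():
            shares[item] += visibility[item] * pull  # exposure drives sharing
        total = sum(shares.values())
        for item in sensationalism:
            # reward: visibility is redistributed according to share counts
            visibility[item] = len(sensationalism) * shares[item] / total
    return visibility

result = simulate({"sensational": 1.5, "factual": 0.5})
# the sensational item accumulates ever more of the visibility
```

Under these assumptions the gap widens every round even though no referent, no fact of the matter, appears anywhere in the model, which is exactly the circularity the quoted passage describes.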

Fact checking services address the bias of sources while veiling the role of this content economy in conditioning the behaviour of those sources. They render opaque the role played by “technologies of source selection that regulate content circulation online” (431). The network structure of online communities is another source of limitation, as groups spreading ‘fake news’ barely overlap with groups interested in debunking it. How do we make sense of these differences between knowledge communities without invoking the facile distinction of literate and illiterate? Fact checking and demarcation do not help us understand the problem with knowledge we face in digitalised societies, instead actively keeping us from this understanding. This concern doesn’t mean we deny there is a “crisis of public evidence in today’s digital societies” but rather that we recognise it “goes well beyond a disregard for facts in digital media environments” (433). It’s crucial that we recognise how “the insertion of computational technologies into public infrastructures have resulted in deception and manipulation of the empirical record” (434) by undermining institutional architectures which ensured accountability across social life. The correspondence model of truth embedded in fact checking is inadequate to address the broader social challenges which these developments are posing for us. Its reliance on looking back, checking claims against a corpus of established facts, fails to grasp today’s “dynamic information environments, in which readings and behaviors are constantly adjusted as conditions change” (434). Marres argues for a dynamic conception of truth in debate to replace this retrospective one.

The behaviourism around which platforms have been designed uses a concept of users as “influenceable subjects, not knowledge agents”. It has facilitated a social science which does without interpretation, but this does not mean it is a knowledge free environment. It is, as Marres puts it, “a research-centric apparatus, in that their design directly reflects the epistemic needs of the data scientists whose analytic operations are key to their commercial model: to target information to groups of friends, to track shares and likes in the aggregate” (435). It is built around the influenceability of users, with an empirical register which is predicated upon this. This is the final problem which Marres raises with demarcationist fact checking: “the normative opposition between knowledge (good) and non-knowledge (bad) that it imposes makes it difficult to see that epistemic ideals––like behaviorism––themselves have played a role in encouraging a disregard for knowledge on the Internet” (437). Not least of all in the fundamental asymmetry at its heart. From 437:

“social media present an environment in two halves, where, on the one side, we find users with “influence-able” and “target-able” opinions, tastes, and preferences, while, on the other side, we have authoritative data analysts who “know” the population’s fine-grained and ever-changeable preferences and tastes. Scientists––the proponents of knowledge––haven’t been by-standers but active participants in the crafting of a media architecture designed to enable the influencing of users’ actions.”

Demarcationism reflects this bifurcation, with the knowing subjects seeking to redesign the information environment to correct the unknowing subjects. The “veritable army of social data scientists who monitor, measure, and seek to intervene in this behavioral theatre” do so on the basis of facts, but outside of the public sphere and in a way which precludes engagement between experts and citizens.

Fake news might be problematic in itself but it attaches itself to issues which matter to people, tracking controversies which define political life. Fact checking fails to address this connection for the reasons cited above, but Marres argues that ‘experimental facts’ might be better suited to this purpose. This doesn’t entail a rejection of stable facts, well-established claims which play an important role in public debate. If I understand correctly, these “statements whose veracity is unstable and whose epistemic status changes over time” (438), because they reference a changing reality, can be interrogated in real time in order to facilitate debate about their character and implications, as opposed to being demarcated in relation to an established body of fact. But I found the example of the £350 million for the NHS claim slightly confusing. There’s so much in this paper to think about, I’m going to come back to it a lot. I think the point is that ‘experimental facts’ in this sense are more common given the epistemic dynamism which characterises digitalised society. So in essence the argument is to find ways to stay with the difficulties these cause, rather than trying to shut them down in ways likely to be epistemically short-sighted and politically counter-productive. This is a move from a politics of demarcation to a politics of selection: “while demarcation concentrates on the retrospective establishment of correspondence of public statements with presumably stable, pre-given atomistic statements, a politics of selection progressively establishes a referent for claims through an iterative process of locating and evaluating statement-networks in formation” (441).

My notes on Barden, O. (2019). Building the mobile hub: mobile literacies and the construction of a complex academic text. Literacy, 53(1), 22-29.

In spite of the many things which smartphones can do, they have not been welcomed warmly within the classroom, with many claiming they are “distracting, promote superficial learning, erode students’ ability to concentrate and teachers’ control over the classroom and entrench socio-economic divisions” (22). This is significant because the most recent figures suggest 90% of 16-34 year olds in the UK own a smartphone, with half reporting they check it within 5 minutes of waking up. Smartphones are the primary mode of engagement with digital life, which makes the literacy (or otherwise) of their users crucially significant. But this literacy has not been adequately defined heretofore, and this is what Barden sets out to do.

What attempts there have been have tended to focus on mobility, for reasons that are probably obvious. But the meaning of mobile literacy has largely been taken for granted. Barden warns that the term ‘literacy’ is often used as a synonym for skill, as in computer literacy. Literacy in the broader sense is a capacity to manipulate symbols in order to communicate, something for which mobility offers “possibilities for different kinds of literacies, shaped by communication forms which are richer, more diverse and more flexible than before and supports multimodality, linguistic innovation, remix, playfulness, participation and connection in the production and consumption of texts” (23). He thus defines mobile literacies as “the use and interpretation of written or symbolic representation in texts and practices mediated by mobile digital technologies” (23).

The interview with a student he takes as a case study stresses the haptic aspect of producing a text using a mobile phone, stressing the enjoyment and responsiveness which can be found through the necessity of continually manipulating a screen. I remember discovering this when I first got an iPad and my absolute delight in using mind mapping software: it felt like the structure of my ideas was flowing into the device in a way I hadn’t experienced properly before. In this sense, touch is crucial to how we manipulate symbols through mobile computing. It constrains in some ways, providing a smaller and less powerful interface than a keyboard and mouse, but opens up new modes of engagement which are important to recognise.

This is combined with a capacity to work anywhere and at any time, which increases the immediacy of the creative activity. The student describes the active working this facilitates, undertaken in the immediate moment of the lecture theatre rather than being displaced until a later date when the student would sit down at a computer. Learning can take place through text, voice and image. It involves rapidly moving between apps in an agile fashion, working outside of the institutional provision of computer labs and taught sessions.

 

My notes on Kember, S. (2016). Why publish?. Learned Publishing, 29, 348-353.

This short piece is based on Sarah Kember’s inaugural professorial lecture at Goldsmiths, its writing timed to coincide with the launch of Goldsmiths Press. Its establishment was explicitly motivated by a sense of “the opportunities afforded by digital technologies and the new DIY spirit of scholarly publishing”, as well as the challenges raised by contemporary scholarly communication. As Kember puts it, it was informed by “a stubborn refusal to accept the constraints of genre, style, and format; and a conviction that there is more to the future of publishing than it being online and open access” while also reflecting the specificity of Goldsmiths as an institution (348).

Racism and sexism are rife in publishing, “if not at the level of editorial decision making, then at the level of infrastructure (through marketing strategies; publishing systems that classify and categorize like with like; through policies that privatize higher education, introduce exorbitant fees, and preclude those from more diverse ethnic and social backgrounds from becoming students and practitioners of writing and publishing)” (349). This is matched by discrimination reproduced through citation and review practices in a scholarly publishing culture driven by audit, metrics and professionalisation. These control mechanisms favour the already established academics, with their unsurprising demographic profile, as well as the already established ideas. Goldsmiths Press joined other new presses (UCL, Westminster, Open Humanities Press, Open Books, Mattering Press, Mute and Meson) constituting a “collective manifesto for future publishing” (pg 349). This is Kember’s account of what that entails:

  1. Digital first, not digital only: digital first for Goldsmiths means being digitally led rather than solely digital. It is a context for publishing rather than an end point. People still like print books and digital can’t provide a magic bullet to solve the problems of publishing: books are sensory things. Unfortunately, the enormous changes in how books are produced and distributed haven’t been matched by a change in what they are. Being digitally led can help prompt this reevaluation: “looking again, in a digital context, at once new, provisional, provocative but largely analogue forms like the essay, the pamphlet, and the manifesto” (350).
  2. Open out from open access: a terrifying percentage of journal articles and books are published but not cited and hardly read. However the solution to this fast publishing, taking place without much concern for demand, isn’t simply to go more slowly. Kember takes issue with the open access movement which “rightly challenges the spiralling costs and price barriers put up by commercial journal publishers in particular and the fact that they are draining library budgets while profiting from academic free labour” but increasingly encourages a “pay-to-say model of publishing” which is “not only exploitative but also dangerous because it makes the ability to say contingent on the ability to pay” (350). Furthermore, openness is too often openness to commercialisation, redesigning the public sector on behalf of the private sector. Both the top-down and bottom-up open access movements “conflate access and accessibility”: mistaking something being freely available online for it being readable. Instead, we need a research commons in which universities invest in an infrastructure to support grassroots publishing against the offerings of private platforms.
  3. Intervene below the line: established practices of scholarly publishing reproduce inequality off the page, through the mechanisms identified earlier. This is why alternatives need to intervene ‘below the line’ and explore new techniques, norms and routines which can avoid this careless reproduction of inequities.
  4. Crisis, what crisis? Crisis talk is psychologically enticing but it has little practical value and we should avoid it, not least of all because it gets in the way of recognising how new initiatives inevitably prop up existing power structures in some ways while resisting them in others.
  5. Take responsibility for companion species: a failure to recognise the particular circumstances facing different groups and career stages is a failure to recognise the opportunities which these differences offer for rethinking the forms and practices of publishing e.g. “Our forthcoming poetry pamphlet series, which puts undergraduate and postgraduate work alongside that of established poets, is just a start”
  6. Work harder there, unwork there: the criteria built into promotions mean that withdrawal from the journal system is impossible for most, leaving us with the question of how to reroute labour into less harmful and exploitative outlets. This is why academic-run presses are so exciting, creating opportunities to work as publishers rather than for them.
  7. Write! It’s necessary to resist the pervasive instrumentalisation of writing, reclaiming a sense of what it can do. The professionalisation of academic writing has forced us “to substitute the more writerly, discursive forms, such as the essay, for the more measured and measurable – largely unread and unreadable – quasi-scientific journal article” (352). We need to make contact with a sense of writing as something that evades and exceeds the possibility of measurement.

My notes on Betta, M., & Swedberg, R. (2018). Heuristics and Theorizing as Work on the Self. Sociologica, 12(1), 21-25.

Heuristics are commonly seen as either rules of thumb, simple tricks used under conditions of uncertainty, or tools for discovery, practical steps facilitating knowledge about what was previously concealed. However in this short paper, Betta and Swedberg suggest a third meaning, connected to the increasingly apparent area of theorising. From pg 22:

During the last ten or so years a new field of knowledge has slowly begun to open up; and the knowledge of tricks, moves and advice is part of this field. This new field of knowledge is theorizing. Theorizing is about sociologists becoming aware of what they are actually doing when they work with theory, and also being aware of how they can use this knowledge to shape their work.

The concern of social scientists with their own practice has led to the development of the sociology of knowledge, the sociology of sociology and the sociology of ideas. However they argue that these have been prone to blunt and oversocialised generalisations about the objective determinants of ideas. Furthermore, they are unable to inform an account of how to improve that practice. There are books which aim to do this explicitly, providing guidance about the practical steps involved in common intellectual activities. But these, argue Betta and Swedberg, mainly focus on justification and say little about discovery. The focus on theorising aims to correct this deficit, as they explain on pg 22-23:

Theorizing represents an attempt to portray how things are actually done, and how theory is actually used in research. The search light is directed straight at what the social scientist does for two reasons. First, by proceeding in this way, social scientists will become aware of what they are currently doing; and second, they will also learn what they should be doing.

This involves relating to the self as an object of knowledge, in a way analogous to moral action which seeks to ensure more than mere conformity to external rules.  Heuristics should be seen alongside metaphors, induction, deduction, explanation and generalizations as part of theory work. But if I understand them correctly, their point is that beginning to talk about theorising in these terms helps constitute oneself as a theorising subject, relating reflexively to activities which would once have been (largely) tacit in a way guided by these concepts. In doing so, it contributes to constituting theorising as an area of knowledge with a direct connection to practice.

Interestingly, they relate this to the progress of methods and the consequent impoverishment of theory in (American, though they don’t qualify it as such) sociology. They also suggest that the findings of the cognitive sciences could be brought in to help inform the theory and practice of theorising. They conclude by linking this to Kant’s “project of the thinking self, which can be described as persons who act on themselves by teaching themselves how to think” (pg 24): if I understand them correctly, ‘owning’ theorising in the way they suggest involves having the courage to use your own understanding in the Kantian sense.

My notes on Eshet, Y. (2004). Digital literacy: A conceptual framework for survival skills in the digital era. Journal of educational multimedia and hypermedia, 13(1), 93-106.

There is widespread agreement that the ubiquity of digital technology presents a whole range of challenges to the people living within these newly digital environments, but there is little agreement about what competencies are involved in meeting those challenges. The term ‘digital literacy’ has often been used as a blanket term to cover a range of competencies (technical, cognitive, psychological, sociological) but this ambiguity about what exactly it refers to has created problems. Eshet-Alkalai is seeking to address this problem, as described on pg 94:

Development of a more clear-cut conceptual framework may improve the understanding of the skills encompassed by the term “digital literacy,” and provide designers of digital environments with more precise guidelines for effective planning of learner-oriented digital work environments

His new conceptual framework incorporates five types of literacy which “encompass most of the cognitive skills applied when using digital environments” (pg 94). It’s interesting to note the studies he references that suggest young people show higher photo-visual literacy and branching literacy than adults but adults show higher reproduction literacy and information literacy.

– Photo-visual literacy: whereas writing became more abstract over time in its transition from visual symbols to abstract letters, the opposite trajectory holds with digital technology, as text-based interfaces have given way to increasingly sophisticated graphical user interfaces which rely on visual language that is familiar and resonant with the user. Photo-visual literacy is what is necessary to “‘read’ intuitively and freely, and to understand the instructions and messages represented visually” (pg 95). It’s a responsiveness to visual cues for practical action and a capacity to form associations on this basis. In its most pronounced form this is a synchronic literacy, in which different modalities contribute simultaneously to the understanding of a multimedia text.
– Reproduction literacy: reproduction became possible in a meaningful way with the invention of the printing press, as opposed to simple manual copying or oral reproduction. This went through its next revolution with digitalisation: “new and unlimited possibilities for reproducing and distributing digital information have opened new horizons for scholars and artists, but they have also required the development of a new set of criteria for originality, creativity, and talent in art or academic work” (pg 97). This literacy involves “the ability to create a meaningful, authentic, and creative work or interpretation, by integrating existing independent pieces of information” (pg 98).
– Branching literacy: the replacement of the scroll with the codex book changed how information could be processed, facilitating navigation to particular points in the text rather than confining the reader to working through it row-by-row as in a scroll. It made non-linear reading possible for the first time. Digital media offers a radicalisation of this process, providing users “with a high degree of freedom in navigating through different domains of knowledge, but also presents them with problems arising from the need to construct knowledge from large quantities of independent pieces of information, reached in a nonlinear, “unordered” manner” (pg 99). It should be stressed this is a function of a particular digital environment, as opposed to the technology itself. Early computing imposed a linearity on information retrieval (e.g. absence of hypertext, insularity of databases, paucity of metadata) which seems remarkable from the vantage point of the contemporary digital environment. Branching literacy is the skill of retaining orientation when navigating a complex information environment in a multidimensional way.
– Information literacy: even if the challenge of evaluating information isn’t unique to digital technology, the quantity of information which individuals have to evaluate is. Given what he describes as “the unlimited exposure to digital information, which can be published easily and manipulated without difficulty”, “the ability to evaluate and assess information properly has become a ‘survival skill’ for scholars and information consumers” (pg 101). This involves assessing the credibility, originality and presentational integrity of information encountered online. Information literacy encompasses the cognitive skills used to evaluate information and their efficacy at filtering the torrents of information online for that which is biased, untrustworthy or erroneous.
– Socio-emotional literacy: the capacities for communication and collaboration opened up by digital media also present all manner of challenges about managing interactions through these new means. As he puts it, “Socially-literate users of the cyberspace know how to avoid ‘traps’ as well as derive benefits from the advantages of digital communication”; he suggests these are users “who are willing to share data and knowledge with others, capable of information evaluation and abstract thinking, and able to collaboratively construct knowledge” (pg 102).