This morning I typed ‘generative AI’ into Spotify search to explore how the topic is being covered in tech podcasts more mainstream than the ones I tend to listen to. The search unsurprisingly returned a range of podcasts far more extensive than my available time, or my interest in the topic, would allow me to get through. I could spend the rest of the weekend doing nothing other than listening to these and still only scratch the surface of the content on offer. Past experience suggests I would quickly encounter repetition if I did: the common assumptions of received wisdom, recurring guests in high demand, talking points which reflect the interests of tech firms, familiar tropes from prior technological booms. These podcasts are the thin end of a (slightly terrifying) wedge of articles, essays, blog posts, tweets, videos and papers.
There is already more material produced about generative AI than any one person could meaningfully engage with. We often experience this overload as a dilemma because of how the surplus of content is indexed, made searchable and circulated. It’s easier than ever to find what is available on a particular topic, and it often finds its way to you through online networks. If these are networks we have cultivated on the basis of shared interests and professional outlook, then this social filtering can make the abundance tractable. Rather than imagining we might engage with all the material out there, we restrict ourselves to a practicable subset of what is circulating within our network. If we are ourselves specialists, then we contribute to this process by searching through the material in more focused ways before circulating what we find within our networks.
If these filtering mechanisms work effectively then we experience an enjoyable serendipity, in which we consistently encounter thought-provoking material without needing to make a deliberate effort. To the extent that we have shared epistemic standards (even if tacit), this filtering exercise constitutes a collective activity. The problem is that we often don’t have shared epistemic standards, even tacit ones, reflecting the relative triviality with which social media is regarded in spite of its objective centrality to the communications infrastructure of higher education. The less consistently people engage with a platform like Twitter and experience themselves as meaningfully bound to others on it, the more scattergun this filtering will tend to be.
If I’m right that we’re in the early stages of a fracturing of the social media landscape, one which will leave less engaged and further removed audiences congregating in a partial manner across multiple platforms, this will have serious implications for the filtering function which social media can and should serve within knowledge production. This worries me because the explosion of ChatGPT illustrates how institutionally valuable this function can be, as well as its structural limitations. On the one hand, academics around the world share reflections on their practice in view of these new developments, circulating material which informs this dialogue as well as its practical outgrowths. On the other hand, there is a tendency to uncritically reproduce tropes, to express ideas as attention-grabbing hot takes, and to focus on the shiniest new technology to the exclusion of the broader trend. It’s exactly when something is new and significant in its ramifications that we need this filtering architecture to be working at its best.
The irony is that the explosion of content about generative AI obviously prefigures an explosion of content by generative AI. The political economy which explains the former points towards the inevitability of the latter: the volume and speed of content production around high-profile topics increase in an oversaturated market as producers fight for a share of declining overall advertising revenues. There are a range of ways in which generative AI can be used in content production, from automating routine writing tasks, through dispensing with graphic designers, to the wholesale generation of content. What these uses have in common is reducing the need for labour in any unit of cultural output. For skilled, established professionals this might free up time for creative work; those working in content farms, by contrast, could be partially or entirely replaced.
For these reasons I expect generative AI will lead to an acceleration of content production. There will be more produced because it will cost less to produce, with the underlying mechanism being a struggle to win and hold attention for the advertising revenue it makes possible. To the extent that this acceleration intensifies the underlying struggle for attention, the problem itself is going to get worse. This prospect reminds me of Neal Stephenson’s depiction of the near-future internet in Fall; or, Dodge in Hell, summarised here in Wired:
AR glasses are by now as widespread as those of us in the 2010s assumed they would be, delivering newsfeeds directly to people’s eyes; however, unless you can afford a human editor or you pool your money with others to subscribe to a feed with decent AI filters, that feed is algorithmically determined to keep you in what one character calls a “personalized hallucination stream”—one synced with your pulse, how fast you blink, and other markers.
Not surprisingly, the attention economy has run roughshod over concepts like verifiable truth. In Iowa near the Forthrast family home, REMEMBER MOAB bumper stickers appear next to Confederate flag stickers. This is Ameristan, where a Leviticus-literalizing Christian sect refers to Jesus as a “beta” and the Crucifixion as “a conspiracy by the elites to keep people meek and passive.” (That explains why its members burn crosses while decrying “the KKK libel” and shoot anyone who comes on their property wearing blended textiles.)
For 80 pages or so, Stephenson steeps us in this world, this aftermath. Where Snow Crash imagined how humans might use virtual reality, here VR is almost nowhere to be seen, supplanted by the immediacy and ubiquity of a data-painted world—data which for many is a “torrent of porn, propaganda, and death threats.” The vision is grim and unsettling and all too imaginable. It’s where you want the author to stay for a while, if only to pose a solution that’s not simply a matter of money. Instead, it’s a stop on his road.
https://www.wired.com/story/neal-stephenson-fall-or-dodge-in-hell-book-review/
This is a dystopian depiction which illustrates a real possibility inherent in our present moment: what Lambros Fatsis and I described as the epistemological chaos of platform capitalism. We have the tools available to mitigate this chaos, but doing so requires us to take responsibility for it at individual, collective and institutional levels. What will happen to the capacity to establish ‘facts’ if we see a rapid acceleration of content production while legacy norms of authorship fade into obsolescence? How will attributions of automation operate as part of repertoires of contention, used to dismiss inconvenient ideas in the way ‘troll’ increasingly is at present? How will the chaos of malicious uses of text-to-image and text-to-video be managed over time? What happens to the distinction between knowing and unknowing subjects under conditions of ubiquitous generative AI?
There are so many questions we need to ask about a technology which will become a routine feature of knowledge work within the year. But the infrastructure through which we ask them is falling into disrepair at exactly the time we need it most, with further crumbling likely to be caused by the very development we seek to address through it.