It occurred to me while watching this talk yesterday that what Tristan Harris describes as the breakdown of content verification has significant implications for digital methods. As Aza Raskin puts it in the video, “you do not know who you are talking to via audio or video”. The same is true of any digital artefact encountered online: you do not know that it depicts whom it claims to depict, or that it was created by whom it claims to have been created. This has significant epistemological implications because it introduces an unresolvable uncertainty into the implied relationship between online behaviour and digital artefacts, breaking the chain of inference that runs from digital content to states of affairs in the world. Obviously this is more of a problem for representational methodologies than non-representational ones, but even for, say, speculative methods there is an implied link between an intervention and its outcomes which is now subject to the same doubt. Does the epistemology of digital methods need to be reconstructed on this basis? If so, how? The fact that digital methods could themselves be automated by generative AI (particularly on a speculative basis) further complicates the picture.