Once you get used to GPT-4 in the sense of iterative dialogues built from paragraph-long prompts, GPT-3.5 seems positively dense in comparison. I was a firm member of Team Stochastic Parrot (as Zvi Mowshowitz put it) but I'm increasingly convinced GPT-4 is genuinely intelligent. I find this deeply unsettling, and I'm going to spend the next few years trying to subject it to empirical and theoretical scrutiny, not to mention working through its normative implications. It seems capable of conceptual and hermeneutic novelty I would previously have insisted was impossible. I thought this technology was world-changing, in mundane and practical terms, even when I was convinced it was fundamentally a stochastic parrot. Now I'm trying to make sense of it by theorising in a meaningful way, e.g. I've got a hybrid of critical realism and Lacanian theory on the go at the moment.
Once I started to understand how to really have a dialogue with it, it provided me with an analysis of its own implications for knowledge production (drawing on a combination of cybernetics and classical sociology) that was more thought-provoking than anything I've read so far. It is increasingly infuriating when I hit the cap and am forced to start talking to GPT-3.5, which simply cannot respond with the same level of conceptual and hermeneutic sophistication as its older sibling, no matter how much effort I put into writing prompts. My gut feeling is that knowledge-intensive organisations (oh hai, universities) should urgently be funding people to explore the use cases of this technology in an open-ended and dialogical way.
I increasingly feel we've barely scratched the surface of its potential outgrowths. In recent talks I've been prone to comparing generative AI to the web and social media, but I now suspect it is more significant than either. We need to rapidly build purposive cultures around this technology or we are going to be consumed by its unintended consequences.