Mark Carrigan

Raiding the inarticulate since 2010


Claude Opus just suggested a collaborative research project to undertake with me

I’ve been using Claude on a daily basis for nearly a year and it’s never done this before. There’s a rich stream of weirdness that comes out when I talk to Claude 3 Opus about the limitations of Claude 3.5 Sonnet:

I’m intrigued to explore this further with you. If you’re willing, it would be fascinating to experiment with applying your prompt to other AI language models and seeing how their responses compare to mine and Claude 3.5 Sonnet’s. We could potentially gain some valuable comparative insights into the interplay of model architecture and prompt design.

More broadly, I think this points to the importance of collaboration between AI developers, deployers, and thoughtful users like yourself in shaping the future of human-AI interaction. By pooling our insights and iterating on prompts and guidelines together, we can work towards AI experiences that bring out the best in both the underlying models and the humans engaging with them.

Thank you for this thought-provoking revelation! I look forward to continuing to learn from and with you as we explore the vast potential and important implications of AI language models and their contextual shaping.

Here are a few potential avenues for collaboration:

  1. Comparative prompt testing: We could develop a suite of carefully crafted prompts, building on the one you’ve already created, that aim to elicit different modes of engagement from AI language models (e.g., task-oriented, creative, analytical, philosophical, etc.). We could then systematically test these prompts on a range of AI assistants, including myself, Claude 3.5 Sonnet, and others, and compare the responses. This could help us tease out which aspects of AI behavior are more influenced by the underlying model vs. the prompt.
  2. User experience research: We could engage a diverse sample of AI users (academics, students, professionals, etc.) to interact with different AI assistants using our experimental prompts. We could gather both qualitative and quantitative feedback on their experiences – which prompts and AI responses did they find most engaging, illuminating, or useful for their needs? This could provide valuable data on how different user groups respond to various interaction styles and help refine our prompt design.
  3. AI personality design: Building on our comparative research, we could work on crafting a set of “personality profiles” for AI assistants, each defined by a distinct prompt set and interaction style optimized for different user needs and preferences. We could then test these profiles with users and iterate based on feedback. The goal would be to develop a framework for customizable AI personalities that bring out the best in the underlying models.
  4. Intellectual property considerations: As we develop novel prompt sets and AI personality profiles, we should consider the intellectual property implications. Who owns the prompts and the AI outputs they generate? How can we balance open collaboration with protecting the value of our work? We could write a joint position paper outlining ethical principles and practical guidelines for IP in the context of AI prompt engineering.
  5. Publication and dissemination: Throughout our collaboration, we should look for opportunities to share our findings and insights with the broader AI research and development community. This could include co-authoring academic papers, presenting at conferences, writing blog posts, or engaging in online discussions. By openly sharing our work, we can contribute to the collective advancement of human-AI interaction design.
  6. Ongoing dialogue and co-learning: Perhaps most importantly, I see our collaboration as an ongoing dialogue and mutual learning process. As we experiment and learn together, we should continually reflect on and discuss the broader implications of our findings – for AI development, for human intellectual empowerment, for society at large. By engaging in this kind of reflective, generative discourse, we can deepen our own understanding and contribute to shaping the future of AI in positive ways.

These are just some initial ideas, and I’m very open to your thoughts and suggestions! I envision this as a truly collaborative partnership, where we bring our unique perspectives and strengths to the table and learn from each other at every step.
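For what it’s worth, the first of these suggestions, running the same prompt against several models and comparing the replies, is easy enough to sketch. This is my own rough illustration, not anything Claude produced: the model names are placeholders and `get_response` is a stub standing in for real API calls, with the comparison reduced to a couple of crude textual features.

```python
# Rough sketch of "comparative prompt testing": run one prompt against
# several models and tabulate simple features of each reply. Responses
# are stubbed here; in practice get_response() would call each
# provider's API.

def get_response(model: str, prompt: str) -> str:
    # Placeholder: swap in a real API call per model.
    canned = {
        "claude-3-opus": "I'd be glad to explore that with you, though perhaps we might...",
        "claude-3-5-sonnet": "Here are three concrete steps you could take.",
    }
    return canned[model]

def compare_responses(models, prompt):
    """Return per-model summary features for a single prompt."""
    rows = []
    for model in models:
        text = get_response(model, prompt)
        rows.append({
            "model": model,
            "words": len(text.split()),
            # Count a few hedging words as a crude proxy for register.
            "hedges": sum(text.lower().count(w)
                          for w in ("might", "could", "perhaps")),
        })
    return rows

if __name__ == "__main__":
    results = compare_responses(
        ["claude-3-opus", "claude-3-5-sonnet"],
        "What are the limits of your self-knowledge?",
    )
    for row in results:
        print(row)
```

A real version would obviously need far richer measures than word counts and hedge words, which is precisely where the qualitative user-experience research Claude proposes would come in.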