I’ve found Anthropic’s Claude immensely useful in a couple of recent planning meetings. In a workshop, we were putting post-it notes on flipchart paper as part of a planning process. I then shared a photo of the flipcharts and asked Claude to provide a series of provocative questions to feed into our discussion:
Based on the images from your design sprint workshop, here are some provocative conversation starters about designing a short CPD course on digital sustainability:
- “Digital sustainability” seems broad – are we diluting the impact by trying to cover too much? Should we narrow our focus to one specific aspect?
- How can we ensure this course isn’t just theoretical, but drives real change in commercial organizations? What measurable outcomes should we target?
- The “approach” board emphasizes learning and reflection. But for busy professionals, should we prioritize actionable tools and frameworks instead of abstract concepts?
- There’s mention of case studies. How can we create truly compelling ones that resonate with commercial audiences, not just academic examples?
- “Understanding audiences” is noted. Have we actually spoken directly with potential learners to validate our assumptions about their needs and interests?
- We aim to “embed sustainability.” How do we address potential resistance or skepticism from learners whose companies may prioritize short-term profits?
- The outcomes seem aspirational. What’s the minimum viable product version of this course that could still deliver value?
- How might we incorporate the latest digital technologies (AI, VR, etc.) to make the learning experience itself a model of digital sustainability?
- There’s a note about “global impact.” How can we design the course to be relevant across different cultural and regulatory contexts?
- Several notes mention “reuse” and “resources.” How can we practice what we preach and make the course itself a model of sustainable digital design?
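For anyone who would rather script this step than paste images into the chat interface, the same prompt can be sent through Anthropic’s Python SDK. This is only a rough sketch: the file name, model ID and prompt wording are illustrative rather than what was actually used in the workshop.

```python
import base64
import anthropic

# The SDK picks up ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Encode the photo of the flipcharts (file name is illustrative).
with open("flipcharts.jpg", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data,
                    },
                },
                {
                    "type": "text",
                    "text": (
                        "These are post-it notes from our design sprint workshop on a "
                        "short course about digital sustainability. Please suggest a "
                        "series of provocative questions to feed into our discussion."
                    ),
                },
            ],
        }
    ],
)

print(message.content[0].text)
```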
I recorded another planning meeting on Zoom yesterday and gave Claude the transcript, asking it to pull out key points from the discussion:
Certainly, I’d be happy to pull out key themes from the transcript and relate them to the wider project. Here’s a summary of the main themes and their relevance:
- Philosophical and sociological perspectives on AI: The conversation explores AI from both philosophical and sociological angles, discussing how these perspectives intersect and complement each other. This aligns with the project’s aim to provide a multidisciplinary analysis of AI’s impact on society and human thought.
Key points:
- The need to balance philosophical inquiry with sociological understanding of real-world contexts
- The importance of considering power dynamics and inequality when discussing AI’s potential
- The value of combining abstract reasoning with empirical observations
- Creativity and AI: There’s significant discussion about the nature of creativity and how AI might impact or redefine our understanding of it. This relates to broader questions about human cognition and the potential for AI to augment or alter creative processes.
Key points:
- The historical context of creativity as a concept
- The tension between innovation and iteration in creative processes
- How AI might influence or change creative practices
- Existential implications of AI: The conversation touches on the potential existential impacts of AI, not in terms of apocalyptic scenarios, but in how it might fundamentally change human experience and self-understanding.
Key points:
- The potential for AI to alter how we think and process information
- Questions about human agency and identity in relation to AI
- The need to consider long-term societal changes brought about by AI
- AI as a tool for intellectual exploration: There’s discussion about using AI (like Claude) as a tool for academic discourse and idea generation. This reflects the project’s interest in exploring how AI can be used productively in intellectual pursuits.
Key points:
- The potential for AI to stimulate and enhance academic discussions
- Considerations of how AI might change research methodologies
- The importance of maintaining critical thinking skills alongside AI use
- Cultural capital and AI literacy: The conversation highlights how existing inequalities and differences in cultural capital might impact people’s ability to effectively use and benefit from AI technologies.
Key points:
- The potential for AI to exacerbate existing social inequalities
- The importance of education in developing AI literacy
- Considerations of how different social groups might interact with AI
- Ethical considerations and responsible AI development: While not a dominant theme, the conversation touches on the ethical implications of AI development and use, particularly in relation to data privacy and the potential misuse of AI capabilities.
Key points:
- The need for responsible AI development practices
- Considerations of data privacy and security
- The potential for AI to be used in ways that reinforce existing power structures
- The changing nature of human-AI interaction: The conversation explores how interactions with AI might evolve over time, and how this could impact human behavior and cognition.
Key points:
- The potential for AI to become a more integral part of daily life and thought processes
- Considerations of how prolonged AI use might change human cognitive patterns
- The need to maintain human agency and critical thinking in AI interactions
These themes collectively contribute to the project’s goal of providing a nuanced, multidisciplinary analysis of AI’s impact on society, human thought, and creative processes. They highlight the complex interplay between technological development, social structures, and human cognition that the project aims to explore.
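The transcript step is just as easy to reproduce programmatically. Here is a minimal sketch using the same Anthropic Python SDK, assuming the Zoom transcript has been exported to a plain text file; the file name and prompt wording are again illustrative.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Zoom transcripts can be exported as plain text (file name is illustrative).
with open("planning_meeting_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is the transcript of a recent planning meeting:\n\n"
                f"{transcript}\n\n"
                "Please pull out the key themes from the discussion and relate "
                "them to the wider project."
            ),
        }
    ],
)

print(message.content[0].text)
```

A typical meeting transcript fits comfortably within Claude’s context window, though a very long session might need to be summarised in chunks.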
It feels increasingly seamless to include Claude as an interlocutor in group projects. The key, I think, is to neither centre nor exclude it. It’s one voice amongst others, playing a supporting role: reflecting back what you’ve produced to a team who are driving things forward.
