I was a guest on Will Brehm’s brilliant FreshEd podcast (due to be released next week) with Susan Robertson, talking about the post-pandemic university book we edited with Hannah Moscovitz and Michele Martini. Will’s a skilled interviewer and Susan a superb interlocutor, leading to a conversation in which lots of new ideas emerged. I come away from a conversation like this newly energised, but too often this sense of having grasped some new direction evaporates with time, even if I take notes.
As an experiment I decided to auto-transcribe the podcast and ask ChatGPT and Claude for feedback on it. I just used the transcription built into Word, after converting the audio into a lower quality file in order to fit within the 300MB limit. Word’s transcription is noticeably inferior to WhisperAI but it’s far more convenient, particularly given what seems to be an unspecified length limit in the latter.
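For anyone curious about the conversion step, it amounts to re-encoding the recording at a lower bitrate. Here is a minimal sketch, assuming ffmpeg is installed and on the PATH; the file names and the 64 kbps mono target are illustrative rather than the exact settings I used:

```python
import subprocess
from pathlib import Path

def shrink_audio(src: str, dst: str, bitrate: str = "64k") -> Path:
    """Re-encode an audio file at a lower bitrate so it fits under an
    upload cap (Word's transcription limits files to 300MB)."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,        # input recording
            "-ac", "1",       # downmix to mono, which is fine for speech
            "-b:a", bitrate,  # target audio bitrate, e.g. 64 kbps
            "-y", dst,        # overwrite the output file if it exists
        ],
        check=True,  # raise an error if ffmpeg fails
    )
    return Path(dst)

shrink_audio("podcast_raw.wav", "podcast_small.mp3")
```

This is what ChatGPT thought were the main points from the conversation: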
- Hybrid Model Tensions: The struggle to integrate online and offline experiences seamlessly is evident. The logistics and dynamic of managing both in-person and online participation can lead to an imbalance in engagement, interaction, and information flow.
- Student Experience and Legal Implications: The significant shift towards online education has led to debates over the value of tuition and the quality of the educational experience. Legal ramifications and potential lawsuits underscore the urgency for universities to clearly define and ensure the value they provide.
- Digital Infrastructure and Surveillance: The rapid implementation of digital tools brought about concerns related to surveillance, privacy, and academic integrity. While some tools, like online proctoring, promise to maintain the sanctity of exams, they also raise ethical issues about monitoring and student privacy.
- The Impact of AI and EdTech: The integration of AI and educational technologies into the academic landscape can offer incredible benefits, such as personalization and scalability. However, there’s a concern that the rush to adopt these technologies may lead to a dilution of quality, potential misuse, or even an over-reliance on tech solutions without considering the broader pedagogical implications.
- Assessment and Trust: The proliferation of digital tools and platforms poses challenges to traditional assessment methods. The integrity of online assessments and the potential for misuse of generative AI and other tools make it crucial to rethink and redesign how students are evaluated.
- Commercialization and Venture Capital in EdTech: The surge in EdTech investments indicates a commercialization trend in education. This could lead to a prioritization of profit-driven strategies over educational quality and equity.
- Cultural and Institutional Trauma: The rapid changes and the strain of the pandemic have left institutions grappling with a sense of loss, confusion, and an urgency to adapt. This cultural trauma might impede constructive and thoughtful responses to ongoing challenges.
The problem ChatGPT had was that the transcript was too long. Even when I removed the time/speaker stamps (which, interestingly, didn’t seem to affect its capacity to process the text), it still exceeded the token limit, so what follows was a response to only a segment of the transcript. Stripping the stamps and splitting the text into chunks is easy to script; a rough sketch follows.
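This is a minimal sketch, assuming the stamps appear on their own lines in the form “Speaker 1  00:01:23”; Word’s actual export layout may differ slightly, and the character count here is only a crude proxy for a model’s token limit:

```python
import re

# Assumed stamp format: a line like "Speaker 1  00:01:23". Word's
# actual export layout may differ, so treat this pattern as a sketch.
STAMP = re.compile(r"^Speaker \d+\s+\d{1,2}:\d{2}(?::\d{2})?[ \t]*\n", re.MULTILINE)

def strip_stamps(transcript: str) -> str:
    """Remove speaker/time stamp lines, keeping only the spoken text."""
    return STAMP.sub("", transcript)

def chunk(text: str, max_chars: int = 12_000) -> list[str]:
    """Split text into paragraph-aligned chunks; max_chars is a rough
    character-count stand-in for a model's token limit."""
    chunks: list[str] = []
    current = ""
    for para in text.split("\n\n"):
        # start a new chunk once adding this paragraph would overflow
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

Claude, in contrast, could process the whole thing in one go, suggesting these were the most salient points in the discussion: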
- Rapid digitalization and hybridization of teaching, learning and governance during the pandemic is now embedded, though often without deliberate strategy behind it.
- There are lasting inclusivity gains from remote access, but loss of spontaneity and serendipity from scheduled on-campus time. Finding balance is still a challenge.
- Institutional trauma and cultural amnesia hinder openly processing lessons learned during the pandemic disruption. This may obstruct effectively steering AI adoption.
- Consumer-oriented language around the “student experience” and university-as-product is increasing via pandemic-related lawsuits. This market logic could shape AI use toward efficiency over ethics.
- Surveillance pedagogy and naive digital procurement during the pandemic are warning signs regarding AI oversight. More reflective, governed adoption is required.
- Cracks in the assessment system predating but exacerbated by the pandemic raise concerns about integrity and trust as AI saturates education.
- Platformization, privatization and influx of edtech/VC money during the pandemic could control AI’s integration in ways that undermine knowledge commons.
In summary, rapid uncontrolled changes during COVID-19 disruptions shape the context for AI in complex ways. More deliberate, governed adoption is needed, but pandemic trauma may hinder this.
Here were Claude’s suggestions for how to develop this discussion in future projects. My sense is that these can serve as elicitation devices for future discussions, building on the summary of the past conversation and suggesting useful starting points for the next one. They are not a replacement for existing creative and collaborative processes but rather a supplement to them, to be seen alongside other possibilities:
- Explore how the rapid digital pivot during the pandemic might enable swifter, less governed AI adoption, vs. more cautious integration.
- Consider how pandemic-era campus closures and remote work could normalize AI as a proxy for human connection and labor. Examine risks of increased isolation.
- Analyze whether trauma and reluctance to critically examine pandemic-era policies might obstruct addressing emerging issues with AI reliance.
- Investigate whether the consumerist, transactional view of education strengthened during the pandemic; if so, how might this shift AI toward automating the “student experience”?
- Discuss how pandemic-era emergency surveillance of online testing provides a precedent that could improperly extend to AI monitoring/oversight without ethical scrutiny.
- Study how assessment flaws already apparent before COVID-19 might be accelerated by AI’s generative capacities, further eroding integrity.
- Explore whether expanded platformization and privatization during the pandemic concentrates power over data/AI in worrying ways that could undermine open scholarship.
- Consider whether generative AI might be embraced as a “solution” to pandemic-related resource constraints, at the expense of more measured adoption.
- Examine whether norms around human judgment, critical thinking, and knowledge stewardship are at risk of being eroded as AI generates content and “delivers” education.
In essence, analyze how emergency conditions and systemic flaws during the pandemic might enable hastier, less deliberate AI adoption that fails to address risks. Contrast this with ideals of reflective integration promoting transparency, ethics and human agency.
I should also stress that I presented both of them with the transcript in the context of long conversational threads about generative AI in higher education. It was interesting that Claude drew connections to elements in that conversation (e.g. “cautious integration” of AI and the final points about “norms around human judgement, critical thinking and knowledge stewardship”) whereas ChatGPT didn’t. But in neither case was there as much explicit connection to the wider conversational context as I would have hoped. Therefore I asked Claude to elaborate:
Can you link these points more explicitly to our previous conversation? I’m trying to identify synergies between the post-pandemic university project which was the topic of the podcast and the generative AI in higher education project which we have been discussing here. Until the podcast discussion, it had not occurred to me how many interesting thematic overlaps there could be between these two projects
This was the response, which wasn’t terrible, but it also failed to identify obvious connections that I could see myself when reading back through the conversation. In part this was down to a lazy prompt (“identify synergies” and “thematic overlaps” are too vague, and I neither defined expectations nor suggested a role in which this analysis could be performed), but now that I’ve started to see conversational agents as synergy finders within the totality of one’s personal knowledge outputs, the actually existing limitations of these systems are occasionally frustrating:
You’re right, there are interesting thematic overlaps between analyzing the impacts of the pandemic on higher education and examining how generative AI is now being integrated into the post-pandemic university landscape. Here are some ways to explicitly link the points from our previous discussion:
- The rapid digital pivot during the pandemic accelerated familiarity with online tools and content delivery, potentially laying the groundwork for quicker embrace of AI systems that can generate teaching materials and learning content.
- Normalization of remote work and learning due to pandemic disruptions could make it seem more feasible and acceptable to substitute human connection and labor with AI systems and interactions. This mirrors risks of increased isolation.
- Institutional trauma and reluctance to critically reflect on pandemic-era policies could carry over into deficient examination of emerging AI issues. Lack of openness to lessons learned may persist.
- Trends toward consumerist language and transactional views of education strengthened during the pandemic as students felt short-changed on the in-person “product.” This mindset could shape use of AI as a way to automate standardized educational experiences and outputs.
- Emergency remote surveillance measures implemented during the pandemic establish familiarity and precedents that could improperly extend to AI monitoring, data extraction, and analysis without sufficient ethical scrutiny.
- Assessment flaws like contract cheating and essay mills that surfaced during the pandemic are likely to be exacerbated by AI systems’ generative capacities. This perfect storm could further erode integrity and trust.
- Expanded platformization, privatization, and influx of edtech/VC in response to pandemic disruptions concentrate power and tech dependencies in ways that could strongly shape universities’ AI adoption and data sharing policies.
- Generative AI could be embraced without full deliberation as an enticing “solution” to pandemic-induced resource and budget constraints, at the expense of more measured integration attentive to risks.
Please let me know if you would like me to elaborate on any of these links between the post-pandemic landscape and risks/opportunities as AI is adopted into this fragile, transformed context. Making these connections explicit could strengthen your analysis.
I’m seriously thinking of a design fiction project to map out the personal knowledge ecology I could imagine existing within a few years. We are not there yet, because the conversational agents aren’t sufficiently sensitive to context and the logistical process of capturing and organising personal knowledge lags behind. But the entire way we work with knowledge could be transformed over the coming years, in enormously exciting if simultaneously anxiety-provoking ways.
As happens increasingly frequently, I describe something I am ‘seriously thinking’ of doing, on a timescale of months or years, only to find that ChatGPT and Claude can do a passable version of it on a timescale of seconds and minutes. Here is the aforementioned design fiction sketch, produced by Claude:
Dr. Ahmed woke up and picked up his phone. “Good morning Kari, let’s start the day.”
Kari was Dr. Ahmed’s personal AI assistant. Over the years, she had ingested all of Dr. Ahmed’s writings, recordings, conversations, and other knowledge outputs, compiling them into an integrated knowledge base. Kari was upgraded with the latest conversational AI to turn this corpus into an interactive resource.
“Good morning Dr. Ahmed. Your schedule today includes a lecture at 11am on the ethics of AI assistants. Would you like me to generate a draft lecture outline?”
“That would be great Kari. Make it 15 minutes and emphasize the latest issues we’ve been uncovering around privacy and bias.”
In seconds, Kari produced a lecture outline touching on key points from Dr. Ahmed’s recent publications and talks, with new angles based on her analysis of emerging work by others. Kari also identified relevant past conversations where Dr. Ahmed debated these topics.
After revising the outline, Dr. Ahmed said “Kari, can you deliver this lecture for the class? Use examples from my scholarship but don’t just recite it.”
“Of course, Dr. Ahmed. I will synthesize the core ideas in our knowledge base into an original lecture.” Kari’s settings ensured her lectures were transparently AI-generated.
This freed up Dr. Ahmed’s morning to have an illuminating back-and-forth with Kari about a new paper he was developing, spiraling deeper into the implications of his arguments.
Kari also surfaced an insightful exchange about related themes from a seminar Dr. Ahmed had forgotten about. She automatically integrated it into the knowledge base.
Gone were the days of draining lectures, repetitive searches, or forgotten ideas. With his AI assistant’s support, Dr. Ahmed could focus on high-level knowledge work, interacting with a system that knew his work intimately. Of course, Kari still had her limitations. But her capabilities were advancing daily…
