There’s a lovely description in Anna Wiener’s Uncanny Valley of the initial rush of possibility produced in the first stages of learning to code:

But what happens if the capacity to write functional code through ChatGPT’s Code Interpreter means learners never have this experience? Is it reproduced at a higher level through the capacity to exercise control over the machine in natural language, or is something foreclosed by relying on the conversational agent to do this work for you? There’s a fundamental question for the philosophy of education here about generative AI and the zone of proximal development:
The zone of proximal development (ZPD) is a concept in educational psychology. It represents the space between what a learner is capable of doing unsupported and what the learner cannot do even with support. It is the range where the learner is able to perform, but only with support from a teacher or a peer with more knowledge or expertise (a “more knowledgeable other”). [1] The concept was introduced, but not fully developed, by psychologist Lev Vygotsky (1896–1934) during the last three years of his life.[2] Vygotsky argued that a child gets involved in a dialogue with the “more knowledgeable other” such as a peer or an adult and gradually, through social interaction and sense-making, develops the ability to solve problems independently and do certain tasks without help. Following Vygotsky, some educators believe that the role of education is to give children experiences that are within their zones of proximal development, thereby encouraging and advancing their individual learning, such as skills and strategies.
https://en.wikipedia.org/wiki/Zone_of_proximal_development
Are conversational agents ‘more knowledgeable others’ who are scaffolding the process? Or are they an extension of what a learner is capable of doing unsupported? Or are they a hybrid of the two, or something else entirely?
