Ethan Mollick has captured in a paragraph what I’ve spent thousands of words, arguably a whole book, trying to say:
> This creates a trap when learning to use AI: naive prompting leads to bad outcomes, which convinces people that the LLM doesn’t work well, which in turn means they won’t put in the time to understand good prompting. This problem is compounded by the fact that I find that most people only use the free versions of LLMs, rather than the much more powerful GPT-4 or Gemini Advanced. The gap between what experienced AI users know AI can do and what inexperienced users assume is a real and growing one. I think a lot of people would be surprised about what the true capabilities of even existing AI systems are, and, as a result, will be less prepared for what future models can do.
Source: https://www.oneusefulthing.org/p/captains-log-the-irreducible-weirdness
