While this is a small effect reported in Anthropic’s recent paper, it’s a bit of a challenge to an argument I’ve offered in an upcoming book: that transactional use (asking an LLM to do something) will tend to slide into affective use (talking to it about personal things) over time, because transactional use necessarily inculcates the habit of sharing what’s going on in your life with the chatbot:
In general, the most seasoned Claude users employ it more often for higher education tasks and less often for personal use cases. For example, people who have been using Claude for 6 months or more have 10% fewer personal conversations and a 6% higher education level reflected in their inputs. Most strikingly, people in this higher-tenure group have a 10% higher success rate in their conversations, an association that is not explained by their task selection, country of origin, or other factors. While this could reflect sophistication of early adopters, it could also be evidence of learning-by-doing, where people get better at using Claude through experience.
The more you talk about your work context (necessary for transactional tasks to be effective), the more you talk about your context, full stop. I still think I’m right about this as a mechanism, but the finding offers a useful caution against overclaiming its empirical outcomes.
It’s interesting how Anthropic are carving up the interaction types, because these categories are liable to prove extremely sticky if they end up structuring the initial years of data collection:
Since our first report, we have classified conversations into one of five interaction types—directive, feedback loop, task iteration, validation, and learning—which we group into two broader categories: automation and augmentation.⁴ Figure 1.3 shows that augmentation in Claude.ai increased slightly. This was driven by small bumps in validation and learning patterns. In Appendix Figure A.3, we show that automation decreased sharply in the 1P API data.
