I think this is spot on from Casey Newton about the vision guiding OpenAI’s recent development. It would be easy to read their developments as throwing a million things at the world to see what sticks (social video, online shopping, Pulse, ad tech, etc.), but they are explicitly saying these are all part of a more or less unified vision:
OpenAI seems more likely to monetize its platform through revenue-sharing deals or auctioning off placement. Maybe you ask for help with algebra, OpenAI loops in the Coursera app, and takes a finder’s fee if you become a paid user of the latter.
To OpenAI executives, the move helps them pursue what they describe as the goal they had before they got sidetracked by ChatGPT’s success: building a highly competent assistant.
“What you’re gonna see over the next six months is an evolution of ChatGPT from an app that is really useful into something that feels a little bit more like an operating system,” Nick Turley, the head of ChatGPT, told reporters in a Q&A session on Monday. “Where you can access different services, you can access software — both the existing software that you’re used to using, but … most exciting to me, new software that has been built natively on top of ChatGPT.”
https://www.platformer.news/openai-dev-day-2025-platform-chatgpt/?ref=platformer-newsletter
What will optimisation look like for them under this model? It’s not quite user engagement in the same way as on social media platforms, but there will nonetheless be an incentive structure facing the firm and a range of data-intensive methods through which to act on those incentives.
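To make that a little more concrete, here’s a minimal, entirely hypothetical sketch of what such an incentive structure might look like if placement were auctioned off in the way Newton describes: candidate apps ranked by expected revenue, where the conversion estimate is the data-intensive part, learned from the conversation itself. Every name and number here (AppBid, predicted_conversion, the bids) is my own illustrative assumption, not anything OpenAI has announced.

```python
from dataclasses import dataclass

# Hypothetical illustration only: nothing here reflects OpenAI's actual systems.

@dataclass
class AppBid:
    app_name: str
    bid: float                   # what the developer pays per converted user (the finder's fee)
    predicted_conversion: float  # model's estimate that this user converts, given the conversation

def expected_revenue(candidate: AppBid) -> float:
    """Expected value to the platform of surfacing this app in the current conversation."""
    return candidate.bid * candidate.predicted_conversion

def rank_placements(candidates: list[AppBid]) -> list[AppBid]:
    """Order candidate apps by expected revenue -- the platform's incentive,
    which may or may not coincide with what is most useful to the user."""
    return sorted(candidates, key=expected_revenue, reverse=True)

# Example: a tutoring request where two apps compete for placement.
candidates = [
    AppBid("Coursera", bid=12.0, predicted_conversion=0.12),
    AppBid("SomeTutorApp", bid=4.0, predicted_conversion=0.30),
]
for app in rank_placements(candidates):
    print(app.app_name, round(expected_revenue(app), 2))
```

Even this toy version makes the tension visible: the app paying the larger finder’s fee can outrank the one the user is more likely to actually convert with.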
And I think he’s right that there’s a real risk of a massive data privacy scandal:
At launch, OpenAI is promising a more rigorous approach to data privacy. OpenAI will share only what it needs to with developers, executives said. (They essentially hand-waved through the details, though, so the actual mechanics will bear scrutiny.) Unlike Facebook, though, OpenAI has no friend graph to worry about — whatever might go wrong between you, ChatGPT, and a developer, it will likely not involve giving away the contact information of all of your friends.
At the same time, the AI graph may prove even riskier. ChatGPT stores many users’ most private conversations. Leaky data permissions, either intentional or accidental, could prove disastrous for users and the company. It only took one real privacy disaster to end Facebook’s platform ambitions; I can’t imagine it would take much more to end OpenAI’s.
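For what it’s worth, here is a minimal sketch of what “sharing only what it needs to” could mean in practice, versus a leaky permission. It assumes a simple allow-list model that is entirely my own illustration rather than OpenAI’s actual mechanics; the scope and field names are made up.

```python
# Hypothetical sketch of scoped data sharing with a third-party app.
# None of these field names or scopes reflect OpenAI's actual API.

ALLOWED_SCOPES = {
    "coursera": {"topic", "skill_level"},   # the minimum the app plausibly needs
}

def share_with_app(app_id: str, conversation_context: dict) -> dict:
    """Pass the developer only the fields their declared scope allows.
    A 'leaky' implementation would return conversation_context unfiltered."""
    scope = ALLOWED_SCOPES.get(app_id, set())
    return {k: v for k, v in conversation_context.items() if k in scope}

context = {
    "topic": "algebra",
    "skill_level": "beginner",
    "raw_conversation": "…",       # the private material that must never leak
    "health_disclosures": "…",
}
print(share_with_app("coursera", context))   # {'topic': 'algebra', 'skill_level': 'beginner'}
```

The scandal scenario Newton gestures at lives entirely in the gap between that filtered dictionary and the unfiltered one.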
