I’m in total agreement with Carlo Perrotta here that custom GPTs and AI agents constitute a familiar platform economy being cultivated by OpenAI:
In all scenarios, from the lowest API access tier to the highest enterprise one, proprietary assets and infrastructure must be hired from OpenAI’s closed development environment according to a Software as a Service (SaaS) model. Consistent with this model, monetisation may occur in two ways: on a revenue share basis and/or through the payment of licensing fees. In the case of custom GPTs, OpenAI operates as a traditional intermediary platform retaining total control over a single point of access: a paywall. Users pay directly OpenAI to use a Custom GPT and a portion of that revenue goes to the developer. In the case of fully custom AI assistants developed through an enterprise license, organisations pay OpenAI for API access and data control but are then free to either charge directly their customers for usage, or in the case of the universities mentioned previously, to offer custom affordances for administrative staff, research and teaching staff, and students.
https://automatedonline.org/2024/07/12/the-platform-economy-of-genai-in-education/
But I think Carlo’s observation about the lack of uptake of education GPTs is more broadly true. As far as I can see, OpenAI aren’t publishing usage data, and the developer forums seem to be full of conversations in which people are asking for less opaque metrics. My experience of trying GPTs has been that unless they serve an extremely specialised function (e.g. producing diagrams), usually involving calling on an external service, it’s quicker and easier to just use the core model, at least if you’re familiar and comfortable with prompting. But if you’re not familiar and comfortable with prompting, you’re unlikely to be delving into an aspect of ChatGPT which likely seems quite arcane to many end users. Furthermore, the rapid development cycles mean that specialised functions are quickly being incorporated into the main models: GPT-4o can produce a flow chart just as well as a specialised GPT I used to rely on. It’s an accelerated version of the familiar tendency for platform operators to use their epistemic privilege to see what works and steal it for the core product, even if that might not be an intentional strategy in this case.
For this reason I think we should be careful about saying this is a platform economy. It has features which suggest one is emerging, but it also has aspects which don’t fit this picture. I’m not sure we really know what the model is yet, nor do the firms themselves. They’re throwing things at the wall in the hope something will stick, while being so overflowing with capital that there’s no real pressure yet to define a longer-term commercial strategy. This means I think Carlo’s analysis is astute, but that it perhaps overstates how defensive OpenAI are being in their current moves:
Despite being the place where the memo originated, Google is arguably a case apart because its interest in AI, while enormous, is somewhat ancillary to its core businesses: search and cloud infrastructure. However, as far as Open AI is concerned, a moat is definitely being built following a textbook implementation of platformed and infrastructural monopolism: the tiered licensing structures, the timid attempts to launch an “app store” of custom GPTs based on revenue sharing, and the creation of an enterprise-level ecosystem where large and medium-sized organisations become invested in – and dependent on – a proprietary environment.
Open AI’s retrenchment into the comfort of familiar platform economics can therefore be read as a defensive and conservative move that hides a growing anxiety about the real-world viability of generative AI, with companies and users beginning to realise the limitations of a technology that promised to deliver “magic” through universal applicability and knowledge but is proving tricky and laborious to tame
https://automatedonline.org/2024/07/12/the-platform-economy-of-genai-in-education/
It will be interesting to see how differentiation happens across the competing firms, because Claude, Copilot and Gemini appear to be developing in slightly different directions, reflecting the operators’ varied positions and their differing relationships to end users. I think we should be sensitive to the emerging platform economy cultivated by OpenAI, but there’s a risk that applying the conceptual framework of platformisation at this stage could close down as much as it opens up analytically. For example, I’m not sure I see how this constitutes a moat for a particular platform, or at least not an effective one, as much as a rapid institutionalisation of a cluster of technologies:
The universities inviting research and teaching staff to identify and test application scenarios for generative AI; the scores of custom GPTs dedicated to various aspects of education, from language learning to research literature summarisation and essay writing; the tech-savvy educators and consultants developing curricula and models of professional practice. All of it represents “epistemological” free labor that creates the much-needed network effects underpinning crowdsourced value creation – value which will be captured and monetised when the time is right.
It matters analytically, among other reasons, because of the space for agency left by these competing perspectives. There’s little room for professional steering of moats, whereas there’s a lot of room for professional steering of institutionalisation processes. There’s a broader issue here, which I’ve intended to write about for ages, in which the structuralist tendencies of the platform studies literature are being exacerbated by how it’s taken up within education, often in ways which intersect with the epistemological apparatus of ‘critique’ in a manner that renders agency opaque. This is a major theme in the monograph on the Platform University I’m working on with Susan Robertson, Michele Martini and Hannah Moscovitz, but I’m increasingly keen to put together a paper on this in the meantime, identifying a much broader tradition of platform studies which would be very fruitful for digital education researchers.
