- LLM enshittification mechanism #1: model memory sometimes confuses the shit out of GPT 5.2
- American intelligence is building an AI-driven central hub for purchasing, linking and analysing commercially available personal data
- The more I consider the world, the more I realize that it’s supposed to have a cohesion that no longer exists
- Claude 3 onwards is considerably less prone to hallucinate than other LLMs
- An example of how GAI hallucinations can be generative
- Why do generative AI systems hallucinate?
- The strange performances of the brain
