This is an extremely interesting conversation with Noam Chomsky about the limitations of large language models: “if the system doesn’t distinguish the actual world from the non-actual world then it’s not telling us anything”. He distinguishes sharply between a scientific contribution and an engineering contribution, suggesting that these systems fail to make either. Gary Marcus builds on this, discussing how these systems “lie indiscriminately” because they lack models of the world, creating the risk that we will soon be inundated by the industrial-scale production of misinformation. He says “it’s autocomplete on steroids”, but it gives the illusion of being more than this, which encourages people to take advice from these systems.
HT Julian Williams for sending this video
This is an interesting piece about the water footprint of AI systems: https://themarkup.org/hello-world/2023/04/15/the-secret-water-footprint-of-ai-technology