I do not believe that LLMs are intelligent. That being said, I have no real understanding of how they work. I hear and often regurgitate phrases like “language prediction,” but I want a more specific grasp of what's going on.
I’ve read great articles/posts about the environmental impact of LLMs, their dire economic situation, and their dumbing-down effects on people/companies/products. But the articles I’ve read that ask questions like “can AI think?” basically just go “well, it's just language, and language isn't the same as thinking, so no.” I haven’t been satisfied with this argument.
I guess I’m looking for something that examines that type of assertion, that “LLMs are just language,” with a critical lens. (I am not looking for a comprehensive lesson on the technical side of LLMs because I am not knowledgeable enough for that; some Goldilocks zone would be great.) If you have any resources you would recommend, please let me know. Thanks!


Rob Miles has some stuff that I think aligns with what you’re after. Technical discussion, but he tries to keep it reined in and understandable.
Specifically, this one has some good explanations: https://youtu.be/viJt_DXTfwA
More from Rob on AI Safety:
https://youtube.com/playlist?list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps
https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg
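If a concrete toy helps with the “language prediction” part of your question, here's a minimal sketch in Python (my own illustration, not something from Rob's videos). It builds a tiny bigram model, i.e., a count table of which word follows which in a corpus, then samples continuations from those counts. Real LLMs replace the count table with a transformer network over subword tokens, but the training objective has the same shape: given the context so far, assign probabilities to the next token.

```python
# Toy "next-token prediction": count which word follows which in a tiny
# corpus, then sample continuations from those counts. This is a bigram
# model, the simplest possible stand-in for what an LLM's training
# objective looks like; real models use neural networks, not count tables.

import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build a conditional distribution: P(next word | current word).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one predicted token at a time.
word = "the"
sentence = [word]
for _ in range(6):
    if not following[word]:  # dead end: word never seen mid-corpus
        break
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))
```

Running it prints something like “the cat ate the fish” — locally plausible continuations with no model of cats or fish behind them. That's the intuition the “it's just language” argument is pointing at, stripped to its simplest form; whether scaling that objective up produces something more is exactly the question Rob's videos dig into.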