To me, one fundamental aspect of life (much less consciousness) is reacting to stimuli, and current LLMs don’t. Their weights, their “state”, are completely static within a conversation. Nothing changes them.
They are incredibly intelligent tools, but any conversation you have with one about its own consciousness is largely a hallucination, often drawing on our sci-fi/theoretical machinations about AI, brought out by a sycophancy bias trained into most models.
A textual prompt is stimulus for an LLM in almost exactly the same way that a verbal prompt is stimulus for your language center. Your language center alone is not capable of conscious thought, nor is it plastic over the course of a single stimulus response; it has static “weights” as well while computing its response. The language system is just one of many interacting systems that lead to conscious thought. There’s no magic here making consciousness happen in the human brain but not on silicon. It will emerge as the systems we build grow more complex.
Yeah, that system doesn’t exist yet, though, and the parts of the brain responsible for language aren’t static. They change over time, as they’re used, based on the inputs they get. They adapt. They react to the environment they’re in.
The line is getting blurrier, especially if LLMs “train” themselves during inference and become parts of larger systems, but it’s not there yet.
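The static-weights point above can be made concrete with a minimal sketch (a hypothetical toy stand-in, nothing like a real transformer): across a whole chat, the only thing that grows or changes between turns is the context; the parameters loaded at startup are read, never written.

```python
# Toy illustration (hypothetical, not a real LLM): inference reads the
# weights but never mutates them; only the context accumulates.

class ToyLLM:
    def __init__(self):
        # Frozen "weights": fixed at load time, untouched during inference.
        self.weights = (0.5, -1.2, 3.0)

    def respond(self, context):
        # Stand-in for a forward pass: the output depends on weights and
        # context, but computing it mutates neither.
        return f"reply#{len(context)}"

model = ToyLLM()
context = []
for user_turn in ["hello", "are you conscious?"]:
    context.append(user_turn)
    context.append(model.respond(context))

# The model's state after the conversation is identical to before it.
assert model.weights == (0.5, -1.2, 3.0)
```

Any "learning" within a conversation lives entirely in that growing context list; discard it and the system is exactly as it was before the chat began. Test-time training would mean writing back to `self.weights` during the loop, which is the step current deployments omit.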
It’s not though.
I’m not the person you responded to, but:
I agree with not-me here.