• Match!!@pawb.social · 2 days ago

    we are not close to debating ai freedom (though we should be. slavery in all forms is wrong)

      • Match!!@pawb.social · 2 days ago

        world would look a lot different if we started asking people to prove their consciousness

        • moonlight@fedia.io · 2 days ago

          While it’s true that we don’t understand consciousness, LLMs don’t have the hallmarks of consciousness that humans and other animals do.

          Modern LLMs are essentially just guessing the next word in a sentence. There’s no continuous experience of the world, and there’s no self, no agency.

          Don’t get me wrong, it’s fascinating tech, and if we do one day create machine consciousness, it might incorporate parts of our current technology. But right now it’s pretty safe to say that LLMs aren’t conscious, unless you believe in panpsychism / animism.
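The "guessing the next word" mechanic above can be sketched with a toy bigram model. This is purely illustrative (real LLMs use learned transformer weights over subword tokens, not word counts), but the autoregressive generation loop is the same idea:

```python
# Toy next-word predictor: for each word, pick its most frequent follower.
# Illustrative only; not any real LLM's implementation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which words follow each word in the corpus.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, n):
    """Autoregressive loop: repeatedly append the most likely next word."""
    out = [start]
    for _ in range(n):
        candidates = followers[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the", 4))  # prints "the cat sat on the"
```

Each step conditions only on the text so far, which is the sense in which generation is "just" next-word prediction.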

          • Match!!@pawb.social · 2 days ago

            i would say the vast majority of human-to-human communication is automatic, procedural, and even dissociated, so to that extent i would expect that even a “conscious” being is only sometimes conscious. i don’t think a typical contemporary LLM is going to breach the barrier into even momentary consciousness based on its current mechanisms, but some system might briefly be conscious within my lifetime and i don’t want that thing to be enslaved

            • brucethemoose@lemmy.world · (edited) · 2 days ago

              It’s not though.

              To me, one fundamental aspect of life (much less consciousness) is reacting to stimuli, and current LLMs don’t. Their weights, their “state”, are completely static within a conversation. Nothing changes them.

              They are incredibly intelligent tools, but any conversation you have with one about its own consciousness is largely a hallucination, often drawing on our sci-fi/theoretical machinations about AI, brought out by a sycophancy bias trained into most models.
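The "static weights" point can be made concrete with a toy sketch (a hypothetical stand-in, not a real LLM): inference only reads the weights, so after an entire conversation the model's state is bit-for-bit identical.

```python
# Toy stand-in for an LLM to illustrate frozen weights during inference.
class ToyModel:
    def __init__(self):
        # Weights are fixed once "training" is done.
        self.weights = [0.5, -1.2, 3.3]

    def respond(self, prompt):
        # Inference reads the weights but never writes them.
        score = sum(w * len(prompt) for w in self.weights)
        return f"reply({score:.1f})"

model = ToyModel()
before = list(model.weights)

for turn in ["hello", "are you conscious?", "goodbye"]:
    model.respond(turn)

# After the whole "conversation", the state is unchanged.
assert model.weights == before
```

Real deployed LLMs behave the same way: the appearance of memory within a chat comes from re-feeding the conversation history as input, not from any change to the model itself.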

              • keegomatic@lemmy.world · (edited) · 2 days ago

                I’m not the person you responded to, but:

                A textual prompt is stimulus for an LLM in almost exactly the same way that a verbal prompt is stimulus for your language center, and your language center alone is not capable of conscious thought, nor is it plastic over the course of that single stimulus response; it has static “weights” as well when computing its response. The language system is just one system out of many interacting ones that lead to conscious thought. There’s no magic here making consciousness happen in the human brain but not on silicon. It will emerge as the systems we build grow more complex.

                • brucethemoose@lemmy.world · (edited) · 1 day ago

                  Yeah, that system doesn’t exist yet though, and the parts of the brain responsible for language aren’t static. They change over time, as they’re used, based on the inputs they get. They adapt. They react to the environment they’re in.

                  We are getting close to a more blurry line, especially if LLMs “train” themselves during inference and are part of larger systems, but it’s not there yet.