• anotherandrew@mbin.mixdown.ca · 23 hours ago

    I don’t know if it’s just my age/experience or some kind of innate “horse sense,” but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth. I don’t see that as bad news; I see it as understanding the limitations of the system. Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?

    • mbtrhcs@feddit.org · 18 hours ago

      > I don’t know if it’s just my age/experience or some kind of innate “horse sense,” but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth

      I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as the LLM takes the exact same authoritative tone whether or not the information is real.

      > Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?

      So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.

      • anotherandrew@mbin.mixdown.ca · 2 hours ago

        > I’m not sure how you would do that if you are asking about something you don’t have expertise in yet, as the LLM takes the exact same authoritative tone whether or not the information is real.

        I agree – that’s why I’m chalking it up to some kind of healthy sense of skepticism when it comes to trusting authoritative-sounding answers by themselves. e.g. “ok, that sounds plausible; let’s see if we can find supporting information on this answer elsewhere, or maybe ask the same question a different way to see if the new answer(s) seem to line up.”
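
        Something like this, roughly, is what I have in mind (a quick sketch only; `ask_llm` is just a stand-in for whatever client you actually use, and comparing answers by exact match is obviously crude):

        ```python
        # Quick sketch of the "ask the same question a few different ways" check.
        # ask_llm is a stand-in: pass in whatever function actually calls your model.

        from collections import Counter
        from typing import Callable

        def cross_check(ask_llm: Callable[[str], str],
                        question: str,
                        rephrasings: list[str]) -> tuple[str, float]:
            """Ask the same question several ways; return the most common answer
            and how often it showed up (1.0 = every phrasing agreed)."""
            prompts = [question, *rephrasings]
            # Exact-match comparison is crude; real answers would need normalising
            # or fuzzy matching, but it illustrates the idea.
            answers = [ask_llm(p).strip().lower() for p in prompts]
            best, count = Counter(answers).most_common(1)[0]
            return best, count / len(prompts)

        # Usage (hypothetical client):
        # answer, agreement = cross_check(
        #     my_llm_client.ask,
        #     "What year did Ariane 5 first fly?",
        #     ["When was the maiden flight of Ariane 5?",
        #      "Ariane 5's first launch was in which year?"],
        # )
        # if agreement < 0.67:
        #     print("answers don't line up; go find a real source")
        ```

        Low agreement doesn’t prove the answer is wrong, of course – it just tells me where to spend my skepticism.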

        > So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as certainty or justifications for decisions, do not map closely to the LLM’s actual internal state.

        Interesting – I still see them largely as black boxes, so reading about how people smarter than me describe the processes is fascinating.