• nomad@infosec.pub · 1 day ago

    Anybody watch House MD? There’s an episode where House factors race into a medical decision and explains why that’s a good idea: the patient refuses to take the medicine “for blacks” and insists on getting the “good shit” until House tricks him into taking it.

    I’m not a doctor, and I’m not saying there isn’t plenty of racism in medicine (and therefore in the training data), but instructing a medical “AI” to ignore race could be just as wrong. How would you even prompt that? “Treat every patient the same regardless of ethnicity and religion” would not really work, as outlined above. And there is no training data without bias, because there are no humans without bias. All we can do is build continually learning AI, catch the wrong decisions, and correct them, working toward the goal of LLMs making better decisions than humans ever will.
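    A minimal sketch (mine, not anything from the article; the feature names and the data-generating process are invented) of why “just ignore race” is hard to enforce: even with the race column dropped, a correlated proxy feature lets a model reconstruct the biased pattern baked into the labels.

    ```python
    # Toy illustration: biased historical labels leak through a proxy
    # feature even when the protected attribute is withheld from the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    race = rng.integers(0, 2, n)             # protected attribute (0/1)
    proxy = race + rng.normal(0, 0.5, n)     # e.g. a zip-code-like signal
    symptom = rng.normal(0, 1, n)            # legitimate clinical feature

    # Biased labels: the recorded outcome depends on the symptom AND on race.
    logits = 1.5 * symptom - 1.0 * race
    y = rng.random(n) < 1 / (1 + np.exp(-logits))

    # Train WITHOUT the race column -- the naive "ignore race" fix.
    X = np.column_stack([symptom, proxy])
    model = LogisticRegression().fit(X, y)

    # The model still scores the groups differently, via the proxy.
    p = model.predict_proba(X)[:, 1]
    print(f"mean predicted risk, group 0: {p[race == 0].mean():.3f}")
    print(f"mean predicted risk, group 1: {p[race == 1].mean():.3f}")
    ```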

      • titanicx@lemmy.zip · 19 hours ago

        This is a great exercise for a layman in understanding what this bot did and why it wasn’t as big a deal as the headline makes it out to be.

    • Don Piano@feddit.org · 22 hours ago

      This isn’t about a chatbot; it’s about one of the many more legitimate types of machine learning.

    • altkey (he\him)@lemmy.dbzer0.com · 1 day ago

      There are cases where race/ethnicity can legitimately factor into what medical treatment a person should receive. Here, though, race was unrelated, yet the LLM produced differing results for different demographics. Skewed training-data distributions, spurious correlations, etc. are not unheard of, but they become more and more of an issue as machine learning turns from an engineering specialty for professionals into an everyday tool injected everywhere, by and for everyone, without much scrutiny, and, in the case of LLMs, as an impenetrable, unpredictably biased black box.
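      As a rough sketch of the missing scrutiny (assumed on my part, not from the article): even a black box can be audited from the outside by comparing its outputs across demographic groups. `predictions` and `group` below are hypothetical stand-ins for real model output and patient metadata.

      ```python
      # Minimal black-box audit: per-group recommendation rates, plus a
      # flag when the gap between groups exceeds a chosen tolerance.
      import numpy as np

      def audit_by_group(predictions, group, tolerance=0.05):
          rates = {g: predictions[group == g].mean() for g in np.unique(group)}
          for g, r in rates.items():
              print(f"group {g}: recommendation rate {r:.3f}")
          gap = max(rates.values()) - min(rates.values())
          status = "FLAG: gap exceeds tolerance" if gap > tolerance else "within tolerance"
          print(f"gap = {gap:.3f} ({status})")

      # Fabricated data for demonstration only.
      rng = np.random.default_rng(1)
      group = rng.integers(0, 2, 1_000)
      predictions = (rng.random(1_000) < np.where(group == 0, 0.6, 0.4)).astype(float)
      audit_by_group(predictions, group)
      ```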