Anybody watch House MD? There's an episode where House factors race into a medical decision and explains why that's a good idea. The patient refuses to take the medicine for “blacks” and insists on getting the “good shit” until House tricks him into taking it.
I’m not a doctor, and I’m not saying there isn’t plenty of racism in medicine (and therefore in the training data), but instructing a medical “AI” to ignore race could be wrong as well. How would you even prompt that? “Treat every patient the same regardless of ethnicity and religion” would not really work, as outlined above. Also, there is no training data without bias, because there are no humans without bias. All we can do is develop continually learning AI, catch the wrong decisions, and correct them, with the goal of LLMs eventually making better decisions than humans ever will.
I don’t think we should use House, or any TV show, as a guiding tool or example for actual, real-life medicine.
This is a great exercise for a layman in understanding what this bot did and why it isn’t as big a deal as the headline makes it out to be.
This isn’t about a chatbot; it’s about one of the many more legitimate types of machine learning.
There are cases where race/ethnicity can factor into what medical treatment a person should receive. Here, though, race was unrelated, yet the LLM produced different results for different demographics. Skewed training data distributions, spurious correlations, etc. are not unheard of, but it becomes more and more of an issue as machine learning turns from an engineering specialty for professionals into an everyday tool injected everywhere, by and for everyone, without much scrutiny, and, in the case of LLMs, as an impenetrable, unpredictably biased black box.
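To make that failure mode concrete, here's a minimal toy sketch (my own example, nothing from the article; the feature names and numbers are made up) of how skewed data collection alone can teach a model to act on a medically irrelevant demographic feature:

```python
# Toy illustration: if a demographic feature is spuriously correlated
# with the label in the training sample, a model will happily use it
# even though it is medically irrelevant.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "biomarker" actually drives the outcome; "group" is medically irrelevant.
group = rng.integers(0, 2, n)
biomarker = rng.normal(0, 1, n)
outcome = (biomarker + rng.normal(0, 0.5, n) > 0).astype(int)

# Simulate skewed data collection: group-1 patients are mostly recorded
# when the outcome is positive, creating a fake group-outcome correlation.
keep = (group == 0) | (outcome == 1) | (rng.random(n) < 0.3)
X, y = np.column_stack([biomarker, group])[keep], outcome[keep]

model = LogisticRegression().fit(X, y)

# Two identical patients who differ only in the irrelevant feature
# now get different risk estimates.
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

The biomarker is the only real signal here, yet the fitted model gives two otherwise identical patients different risk scores purely because of how the data was sampled. With an LLM you get the same effect, except you can't even inspect the coefficients.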
Not what this is about.