- cross-posted to:
- hackernews@lemmy.bestiver.se
I thought this was interesting, because there are a few ways to think about it. One is that of course you should never just blindly do that sort of thing. Is this guy at fault, though, for making a stupid decision? Is this the same as people drinking radium water until their jaws fell off?
There’s also the fact that he was likely using an older model (3.5) that is much more likely to hallucinate. I just tried getting gpt-5 to recommend bromide to me, and it refused several times until I gave up; in fact, it went and searched for authoritative health information and relayed it to me instead of relying on its own knowledge. Is the answer just better models?
In this case, this man was searching for medical answers. He should have asked a doctor or other qualified healthcare professional. But, being American, he likely avoided that due to lack of access. The barrier was most likely cost, but it could also have been the prospect of waiting weeks or months, or having been dismissed by overworked doctors in the past.
Either way, I think the real answer is that there shouldn’t have been systemic barriers in the way that made him turn to this potentially dangerous tool for this application.
I’m not sure it’s the same as Radithor, since doctors at the time seemed to think it was an effective treatment and it was harder to verify this sort of thing in the early 1900s. If you can’t trust a doctor to give you medical advice, who can you trust?
I’m sure the more modern models make this less likely to happen, but with enough determination, the tool can still be misused and might still give harmful advice.
I personally do not use AI, and couldn’t answer this from a quick search. Does it give you warnings about taking medical advice from it?
Then again, you can still get bad advice about non-medical matters. (Like putting glue on pizza.) It’s interesting to think about where the line is for when output should come with a warning, when output shouldn’t be made at all, or when output should just be checked with “common sense.”
If this weren’t mass-marketed as a cure-all, I would have just said “you have to independently verify everything, and it’s on you if you don’t.” But with how hard it’s being marketed and pushed by tech companies as the answer to everything, I hesitate to say that. At this point, I want to place some of the liability on them.