

A study asked 50 doctors to make six different diagnoses for medical conditions. "Doctors who did the project without AI got an average score of 74%, doctors who used AI got an average score of 76%, and ChatGPT itself got an average score of 90%." AI didn't help doctors as much as anticipated because physicians "didn't listen to AI when AI told them things they didn't agree with. Most doctors couldn't be convinced a chatbot knew more than them." #LLM #AI #ChatGPT qz.com/chatgpt-beat-doctors-at…
in reply to Chi Kim

Who trained ChatGPT on that data though? This sounds a bit contradictory!
in reply to victor tsaran

@vick21 They mentioned that the researchers intentionally used cases that had never been published, specifically so that ChatGPT couldn't cheat by recalling them from its training dataset.
in reply to Chi Kim

But how would it make its conclusions if it has never seen the data? Can we verify this somewhere?
in reply to victor tsaran

@vick21 The research article was published in JAMA Network. I don't have access to it, but I think it's a pretty serious peer-reviewed medical journal.
in reply to Chi Kim

Believe me, it does sound amazing. It’s just when you see all these articles claiming this and that with no easy way to verify what they say, the questions arise! At the end of the day though it matters little until we see an impact in real people’s lives.
in reply to victor tsaran

@vick21 I guess you have to read the actual article to find more info. :) jamanetwork.com/journals/jaman…
in reply to victor tsaran

@vick21 Maybe I'm not understanding the question. Isn't that the point of AI, though? AI systems recognize patterns and draw conclusions from similar patterns they've seen before. They aren't limited to simply regurgitating exact patterns.
in reply to Chi Kim

You did understand my question. I understand the premise. I just don’t want to assume, hence always asking… I’ll look at the article. Thanks for the link!
in reply to victor tsaran

@vick21 Of course, asking instead of assuming is always a great choice! :) Having said that, JAMA is one of the most respected medical journals in the world, with a rigorous peer-review process, so I'm sure that if the reviewers had noticed anything suspicious, they would have raised red flags before publishing. :)
in reply to victor tsaran

@vick21 From New York Times: "The cases intentionally have never been published so that medical students and others could be tested on them without any foreknowledge. That also meant that ChatGPT could not have been trained on them."