Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
Anyone who has knowledge about a specific subject says the same thing: LLMs are constantly incorrect and hallucinate.
Everyone else just thinks the output looks right.
It is insane to me that anyone can trust LLMs when their information is incorrect 90% of the time.