If you tell a professional that the answer is “B” while the professional had “A” in mind, you will have to convince them of why “B” is the correct answer, or they will ignore your suggestion. I think a good LLM should be able to tell which features it weighted most in its reasoning. It would be much easier to adopt as a tool that way.
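For instance (a minimal sketch, purely illustrative: the feature names, data, and model here are all made up, and a real diagnostic system would need a proper attribution method such as SHAP), a model could surface which inputs contributed most to its answer:

```python
# Minimal sketch: report which input features contributed most to a
# prediction, using logistic regression coefficients as a stand-in for
# a real attribution method. All names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["heart_rate", "wbc_count", "temperature", "age"]

# Hypothetical training data: 200 patients, 4 features, binary diagnosis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = rng.normal(size=(1, 4))
prediction = model.predict(patient)[0]

# Per-feature contribution for this patient: coefficient * feature value.
contributions = model.coef_[0] * patient[0]
ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))

print(f"Predicted class: {prediction}")
for name, c in ranked:
    print(f"{name}: {c:+.2f}")
```

A report like that gives the professional something concrete to agree or disagree with, instead of a bare “B”.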
Or, just have it as part of the x-ray software.
Analysis determines this could be X; here’s a link to more info on this rare condition. Please confirm diagnosis and report.
We don’t need AI to make a diagnosis. It’s a tool. The health professional can be trained in its use, just as they are for any other test.
I agree, though they will be skeptical at first. However, research data over time should show sensitivity and specificity, just like any other test.
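As a quick illustration (a minimal sketch with made-up counts, not real study data), both numbers fall straight out of a confusion matrix:

```python
# Minimal sketch: compute sensitivity and specificity from a confusion
# matrix. The counts below are hypothetical, for illustration only.
true_positives = 90   # sick patients correctly flagged
false_negatives = 10  # sick patients missed
true_negatives = 850  # healthy patients correctly cleared
false_positives = 50  # healthy patients wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2%}")  # 90.00%
print(f"Specificity: {specificity:.2%}")  # 94.44%
```

Once those numbers are published for an AI tool, clinicians can weigh its output the same way they weigh any other test result.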