Breaking from the cult

New Scientist mag continues to break away from current fashions required for tenure and grants. In this article they point out a deal-breaking problem with AI crap when used for medical purposes.

Computers have been visually reading text for a LONG time. For instance, the Post Office was routinely using OCR in the 1970s. So OCR shouldn’t be hard for AI, since supposedly AI can do everything better than previous forms of data processing.

Nope. Turns out that AI fails to catch obvious distinctions like Symptoms vs No symptoms when reading medical images like X-rays.

The researchers tried two approaches. First, some images contained two types of object while others contained only one; the AI was asked to pick out the images with only one type. Its success rate was only slightly better than random guessing. Second, some images carried a caption like “Indication of pneumonia” while others carried “No indication of pneumonia”. Again the models did a poor job of separating yes from no.

“Such results show how vision-language models have an affirmation bias. In other words, they ignore negation or exclusion words such as ‘no’ and ‘not’ in descriptions and simply assume they are being asked to always affirm the presence of objects.”
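
For the curious, here is a minimal sketch of the kind of probe being described, using an off-the-shelf vision-language model (CLIP, via the Hugging Face transformers library). The model, the image file, and the captions are my own illustrative assumptions; the article doesn't say which models or datasets the researchers actually used.

```python
# Toy probe for "affirmation bias": does a vision-language model actually
# distinguish an affirmed caption from its negation?
# NOTE: CLIP, the image path, and the captions are illustrative assumptions,
# not the models or data from the study discussed above.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("chest_xray.png")  # hypothetical scan with no findings
captions = [
    "a chest x-ray showing pneumonia",
    "a chest x-ray showing no pneumonia",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")
# If the model is ignoring the word "no", the two captions score about the
# same for every image -- the affirmation bias described in the quote above.
```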

It’s werrrry interestink that the AI models coincidentally happen to reinforce the current bias of medicine toward finding symptoms and indications everywhere. As I’ve noted before, doctors used to discourage hypochondria. Now doctors REQUIRE hypochondria. All indications must lead to more drugs and more surgery and more muzzles and more lockdowns. In some countries all indications must lead to euthanasia. It’s required by law.

Walker Percy saw it coming.