A tweet from ardent promptfondler Misha Saul (archive):

A friend sent me MRI brain scan results and I put it through Claude.

No other AI would provide a diagnosis, Claude did.

Claude found an aggressive tumour.

The radiologist report came back clean.

I annoyed the radiologists until they re-checked. They did so with 3 radiologists and their own AI. Came back clean, so looks like Claude was wrong.

But looks how convincing Claude sounds! We’re still early…

Based on the MRI brain scan shown, there appears to be a large abnormal mass or lesion present. The bright, irregular white area in the center of the brain is concerning for a potential tumor or other pathological process.

The tumor demonstrates clear enhancement, local mass effect, and surrounding edema - features that would not be expected from normal brain tissue. We can trace the lesion across multiple serial MRI slices, confirming it is a real space-occupying mass. The location is also typical for a primary glioma.

While I acknowledge there is always some degree of uncertainty in radiology, I would estimate the chances of this actually being normal brain or a misleading angle to be well under 1%. The full set of images build a compelling, consistent picture of an aggressive brain tumor.

I asked it to review more angles:

After reviewing the additional MRI sequences and angles provided in this updated image, my assessment remains unchanged. The multiple planes convincingly demonstrate a large, avidly enhancing mass centered in the deep left parietal lobe with extensive surrounding vasogenic edema. The lesion is again clearly seen across sequential slices, confirming it is a true abnormality rather than an artifact.

Key observations:

1. Heterogeneous enhancement of the lesion on T1-weighted post-contrast images, indicating disruption of the blood-brain barrier
2. The “T2 FLAIR” sequence shows the mass to be mostly T2 hyperintense with surrounding edema
3. Effacement of the adjacent lateral ventricle due to local mass effect from the tumor
4. Midline shift to the right from the expansile effect of the lesion

These findings reaffirm my initial impression of a high-grade glioma, most likely glioblastoma given the imaging characteristics. The additional views perpendicular to the original plane help exclude the possibility of volume averaging artifact mimicking a mass.

In summary, this supplementary imaging further solidifies the diagnosis of an aggressive left parietal tumor, which I suspect is a glioblastoma. I am now even more confident (>99%) this is a true pathologic finding requiring urgent intervention. Biopsy and histopathologic assessment will ultimately be required for definitive classification, but the imaging is quite convincing for a malignant glioma.

Multiple replies point out that the LLM is literally just putting together MRI-sounding phrases, and that Saul wasted the valuable time of three expensive professionals on nonsense he should have known was nonsense.

Saul is unfazed:

Impressed with the vitriol my post has attracted

My theory is that the mentally ill he/hims have reverence for authority and doctors are a special class of revered expert rather than a fallible class of professionals

Or maybe trying to use tech is inherently suspicious? 🤷‍♂️

He then doubles down on the fabulous promptfondling future:

Obviously the ideal state is doctors are entirely obsolete

Instant diagnosis, instant treatment

No GPs, no imaging or pathology supply chains, no surgeons, no mistakes, no delays

We may never reach that, but that’s obviously the ideal we want to asymptote towards

and a magical flying unicorn pony with the wrong number of legs

  • gerikson@awful.systems

    Normal person: an LLM is trained on publicly available MRI images, most of which show tumors, so feeding it any MRI will naturally generate text describing a brain tumor.

    Brain-addled prompt fondlers: clearly this response proves Claude is more intelligent than any doctor.
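
    To make that base-rate point concrete, here is a toy sketch (mine, with invented numbers, not anything from the thread): a model whose learned prior matches a tumour-heavy training set will sound near-certain about "tumour" even when, at realistic prevalence, the same call is almost certainly a false alarm.

    ```python
    # Toy illustration with invented numbers: suppose ~90% of publicly shared
    # MRIs show pathology (they get posted because they are interesting cases),
    # while a random person's scan has maybe ~1% real-world prevalence.

    def posterior_tumour(prior, sensitivity=0.8, specificity=0.8):
        """P(tumour present | model calls tumour) for a given prevalence prior."""
        p_call = sensitivity * prior + (1 - specificity) * (1 - prior)
        return sensitivity * prior / p_call

    print(posterior_tumour(prior=0.90))  # ~0.97: the training-set prior does the work
    print(posterior_tumour(prior=0.01))  # ~0.04: at realistic prevalence the call is noise
    ```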

    • Soyweiser@awful.systems

      Yes, the story gets crazier when you realize he didn’t even use an ML model trained on MRIs; he used a general-purpose one. And apparently he now refuses to believe radiologists actually use the old-style ML models trained only on MRIs.

      • David Gerard@awful.systemsOPM

        it’s important to notice that this is an actual real-life use case for ML - an AI that will quickly flag positives in a batch of scans. This is real and is used. With due caution by people who know what the fuck they’re doing. None of these fucks have heard of it, of course.
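
        As a minimal sketch of that workflow (hypothetical model and threshold, not any particular product): a purpose-built classifier scores a batch of scans and the high-scoring ones go to a radiologist first; nothing gets auto-diagnosed.

        ```python
        # Minimal triage sketch. `tumour_score` stands in for the output of a
        # model trained and validated specifically on MRIs; the threshold would
        # come from that validation. All names here are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Scan:
            scan_id: str
            tumour_score: float

        FLAG_THRESHOLD = 0.3  # set from the model's validated operating point

        def triage(scans: list[Scan]) -> list[Scan]:
            """Return flagged scans, most suspicious first, for human review."""
            flagged = [s for s in scans if s.tumour_score >= FLAG_THRESHOLD]
            return sorted(flagged, key=lambda s: s.tumour_score, reverse=True)

        for scan in triage([Scan("a", 0.05), Scan("b", 0.92), Scan("c", 0.41)]):
            print(f"flag {scan.scan_id} ({scan.tumour_score:.2f}) for review")
        ```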

        • self@awful.systems

          With due caution by people who know what the fuck they’re doing.

          this is one of the tip-offs I use to quickly differentiate AI crackpottery from legitimate ML. anything legitimate will prominently display its false positive and negative rates, known limitations, and procedures for fucking checking the data by hand (with accompanying warnings and disclaimers if you fail to do this). AI bullshit very frequently skips all that, because the numbers don’t look good and you’re more likely to get VC funding if you hide them
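
          A quick sketch of what reporting those numbers looks like (the validation labels below are made up, purely illustrative):

          ```python
          # Compute and publish false positive / false negative rates from a
          # labelled validation set instead of hiding them. Data is invented.

          def error_rates(labels, flags):
              """labels/flags: parallel bools (True = tumour present / flagged)."""
              fp = sum(1 for y, f in zip(labels, flags) if not y and f)
              fn = sum(1 for y, f in zip(labels, flags) if y and not f)
              return fp / labels.count(False), fn / labels.count(True)

          labels = [True, True, False, False, False, True, False, False]
          flags  = [True, False, True, False, False, True, False, False]
          fpr, fnr = error_rates(labels, flags)
          print(f"false positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")
          ```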

        • froztbyte@awful.systems

          Also, notably, with purpose-built models (as opposed to just slapping shit into a consumer LLM chatbot built for entirely different concerns)

    • jacksilver@lemmy.world

      God, I didn’t even think about the imbalance of the publicly available MRI images. That’s a great point, beyond the fact that it isn’t trained on this specific task and so is unlikely to become capable in that area.