Breast Cancer (mander.xyz)
submitted 1 month ago by [email protected] to c/[email protected]
[-] [email protected] 206 points 1 month ago

They said something similar about detecting cancer from MRIs, and it turned out the AI was just judging whether it was cancer or not based on how old the MRI was, and got it right in more cases because of it.

Therefore I am a bit skeptical about this one too.

[-] [email protected] 124 points 1 month ago* (last edited 1 month ago)

Using AI for anomaly detection is nothing new though. Haven't read any article about this specific 'discovery' but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.

[-] [email protected] 74 points 1 month ago

That's why I hate the term AI. Say it is a predictive LLM or a pattern recognition model.

[-] [email protected] 64 points 1 month ago

Say it is a predictive LLM

According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch's implementation of ResNet18, a deep convolutional neural network that isn't specifically designed to work on text. So this term would be inaccurate.

or a pattern recognition model.

Much better term IMO, especially since it uses a convolutional network. But the article is from a news publication, not a serious academic journal; the author knows the term "AI" gets clicks and positive impressions (which is what their job actually is), and otherwise we wouldn't be here talking about it.
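
For the curious, here's a minimal sketch of what a ResNet-18-based classifier looks like in PyTorch. To be clear, this is my own illustration, not the paper's code; the single-channel input and the 5-output risk head are assumptions:

    # Minimal sketch, not the paper's code: torchvision's ResNet-18 with a
    # hypothetical head that emits one risk logit per year for 5 years.
    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights=None)  # randomly initialized, no pretraining
    # mammograms are grayscale, so swap the stock 3-channel stem for 1 channel
    backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # assumed 5-year risk head

    x = torch.randn(8, 1, 224, 224)           # a fake batch of single-channel scans
    risk_probs = torch.sigmoid(backbone(x))   # (8, 5) per-year risk estimates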

[-] [email protected] 39 points 1 month ago

That performance curve seems terrible for any practical use.

[-] [email protected] 20 points 1 month ago

Yeah, that's an unacceptably low ROC curve for a medical use case
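
For anyone who hasn't worked with them: an ROC curve just plots true positive rate against false positive rate as the decision threshold sweeps. A quick sketch with scikit-learn, on made-up data that has nothing to do with the paper:

    # Synthetic illustration of an ROC curve; the numbers mean nothing medically.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)            # fake ground-truth labels
    y_score = 0.3 * y_true + 0.7 * rng.random(1000)   # noisy fake model scores

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print("AUC:", roc_auc_score(y_true, y_score))
    # A medical screen usually picks a threshold far toward the sensitive end:
    # tolerate many false positives to keep false negatives near zero.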

[-] [email protected] 23 points 1 month ago

The correct term is "Computational Statistics"

[-] [email protected] 24 points 1 month ago* (last edited 1 month ago)

Stop calling it that, you're scaring the venture capital

[-] [email protected] 18 points 1 month ago

It's really difficult to clean that data. In another case they kept the markings on the training data, and the result was that the images from people who had cancer carried a doctor's signature, so the AI could always tell the cancer images from the not-cancer images by the presence or absence of a signature. However, these people are also getting smarter in picking their training data, so it's not impossible for this to work properly at some point.

[-] [email protected] 14 points 1 month ago

Citation please?

[-] [email protected] 116 points 1 month ago

Why do I still have to work my boring job while AI gets to create art and look at boobs?

[-] [email protected] 51 points 1 month ago

Because life is suffering and machines dream of electric sheep.

[-] [email protected] 18 points 1 month ago

I’ve seen things you people wouldn’t believe.

[-] [email protected] 85 points 1 month ago* (last edited 1 month ago)

Now make mammograms not cost $500, not have a 6-month waiting time, and be available to women under 40. Then this'll be a useful breakthrough.

[-] [email protected] 73 points 1 month ago

It's already this way in most of the world.

[-] [email protected] 40 points 1 month ago* (last edited 1 month ago)

Oh for sure. I only meant in the US where MIT is located. But it's already a useful breakthrough for everyone in civilized countries

[-] [email protected] 76 points 1 month ago

Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it intends to prevent.

[-] [email protected] 63 points 1 month ago

Another big thing to note: we recently had a different but VERY similar headline about a model that found typhoid early and was able to point it out more accurately than doctors could.

But when they examined the AI to see what it was doing, it turned out that it was weighing the specs of the machine being used to do the scan... An older machine means the area was likely poorer and therefore more likely to have typhoid. The AI wasn't pointing out whether someone had typhoid; it was just telling you whether they were in a rich area or not.

[-] [email protected] 18 points 1 month ago

That's actually really smart. But that info wasn't given to doctors examining the scan, so it's not a fair comparison. It's a valid diagnostic technique to focus on the particular problems in the local area.

"When you hear hoofbeats, think horses not zebras" (outside of Africa)

[-] [email protected] 46 points 1 month ago

That's why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

Keep the human in the loop!

[-] [email protected] 16 points 1 month ago

Breast imaging already relies on a high false positive rate. False positives are way better than false negatives in this case.

[-] [email protected] 12 points 1 month ago

Not at all, in this case.

A false positive rate of even 50% can mean telling the patient "you are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years".

Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it's about a 2 percent chance per year, so a machine with a 50% false positive rate on a 5-year prediction would still only be telling like 15% of women to be screened more often.
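
Rough numbers, taking the figures above at face value (this is my arithmetic, not data from the paper):

    # Back-of-the-envelope check of the numbers above (all assumed):
    five_year_risk = 0.02 * 5           # ~2% per year over the 5-year window
    false_pos = 0.5 * five_year_risk    # "50% false positive": one wrong flag per two right ones
    flagged = five_year_risk + false_pos
    print(f"women told to screen more often: {flagged:.0%}")   # 15%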

[-] [email protected] 60 points 1 month ago

The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI's methods are bullshit. Under no circumstance should we accept a "black box" explanation.

[-] [email protected] 23 points 1 month ago

Good luck reverse-engineering millions, if not billions, of seemingly random floating point numbers. It's like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels the input image has.

Under no circumstance should we accept a "black box" explanation.

Go learn at least the basic principles of neural networks, because this sentence alone makes me want to slap you.

[-] [email protected] 13 points 1 month ago

Don't worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

[-] [email protected] 20 points 1 month ago

IIRC it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

[-] [email protected] 13 points 1 month ago

IMO, the "black box" thing is basically ML developers hand-waving and saying "it's magic" because they know it would take way too long to explain all the underlying concepts needed to even start explaining how it works.

I have a very crude understanding of the technology. I'm not a developer, I work in IT support. I have several friends that I've spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they've explained a few of the concepts to me, and I'd be lying if I said that none of it went over my head. I've done programming and development, I'm senior in my role, and I have a lifetime of technology experience and education... And it goes over my head. What hope does anyone else have? If you're not a developer or someone ML-focused, yeah, it's basically magic.

I won't try to explain. I couldn't possibly recall enough about what has been said to me, to correctly explain anything at this point.

[-] [email protected] 22 points 1 month ago

The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

For instance, the cutting edge in protein folding (at least as of a few years ago) is Google's AlphaFold. I'm sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is "the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions". Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.

In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
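
To make that concrete, here's a minimal sketch of one common interpretability technique, gradient saliency; the model and input are stand-ins, not anything from the paper:

    # Gradient saliency sketch: which input pixels most move the model's score?
    import torch
    from torchvision import models

    model = models.resnet18(weights=None).eval()          # stand-in model
    x = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in input image

    score = model(x)[0].max()    # the logit we want explained
    score.backward()             # backprop the score down to the pixels
    saliency = x.grad.abs().max(dim=1).values   # (1, 224, 224) importance map
    # Bright regions influenced the score most: a heatmap a human can inspect,
    # though it still falls well short of a real explanation.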

[-] [email protected] 45 points 1 month ago

If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors' time when the scan is so clean that even the AI doesn't see anything fishy.

Alternatively, if it has a lower false positive rate, have doctors check the negative results only. If the AI sees something then it's DEFINITELY worth a biopsy. Then have a human doctor check the negative readings just to make sure they don't let anything that's worth looking into go unnoticed.

Either way, as long as it isn't worse than humans on both kinds of failures, it's useful for saving medical resources.
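
The triage idea in plain code (the function names are made up, just to pin down the logic):

    # Sketch of AI-first triage: the model reads everything, humans read only
    # what it flags. Safe only if the model's false negative rate is very low.
    def triage(scans, ai_flags, radiologist_reads):
        for scan in scans:
            if ai_flags(scan):                 # suspicious: escalate to a human
                yield radiologist_reads(scan)
            else:
                yield "clear"                  # AI-cleared, no human read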

[-] [email protected] 23 points 1 month ago

An image recognition model like this is usually tuned specifically to have a very low false negative rate (well below human, often) in exchange for a high false positive rate (overly cautious about cancer)!

[-] [email protected] 37 points 1 month ago

Ok, I'll concede. Finally a good use for AI. Fuck cancer.

[-] [email protected] 26 points 1 month ago

It's got a decent chunk of good uses. It's just that none of those are going to make anyone a huge ton of money, so they don't have a hype cycle attached. I can't wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it's good at and not shoving it indiscriminately into everything.

[-] [email protected] 24 points 1 month ago

And if we weren't a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.

[-] [email protected] 23 points 1 month ago

This is similar to what I did for my masters, except it was lung cancer.

Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple of months, but it wasn't until almost 2 years later that they got to do their first actual trial.

[-] [email protected] 22 points 1 month ago

This is a great use of tech. With that said, I find that the lines are blurred between "AI" and machine learning.

Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying "Here's a picture of Billy (maybe)", it's saying "Here's a picture of some precancerous masses (maybe)".

That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

[-] [email protected] 17 points 1 month ago

I've been looking at the paper; some things about it:

  • the paper and article are from 2021
  • the model needs to be able to use optional data (age, family history, etc.), but not be reliant on it (see the sketch below)
  • it needs to combine information from multiple views
  • it predicts risk for each year in the next 5 years
  • it has to produce consistent results with different sensors and diverse patients
  • it's not the first model to do this, and it is more accurate than previous methods
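
On the optional-data point, a hypothetical sketch (not the paper's actual architecture) of how image features can be combined with risk factors when they happen to be available:

    # Hypothetical fusion head, not the paper's design: image features plus
    # optional tabular risk factors (age, family history, ...).
    import torch
    import torch.nn as nn

    class RiskHead(nn.Module):
        def __init__(self, image_dim=512, meta_dim=8):
            super().__init__()
            self.meta = nn.Linear(meta_dim, 32)
            self.head = nn.Linear(image_dim + 32, 5)  # one logit per year, years 1..5

        def forward(self, img_feats, metadata=None):
            if metadata is None:  # metadata is optional and may be missing
                metadata = img_feats.new_zeros(img_feats.shape[0], self.meta.in_features)
            m = torch.relu(self.meta(metadata))
            return self.head(torch.cat([img_feats, m], dim=1))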
[-] [email protected] 21 points 1 month ago

I can do that too, but my rate of success is very low

[-] [email protected] 18 points 1 month ago* (last edited 1 month ago)

Yes, this is "how it was supposed to be used for".

The sentence construction quality these days is in freefall.

[-] [email protected] 18 points 1 month ago

Well, in Turkish, "meme" means boob/breast.

[-] [email protected] 13 points 1 month ago

The AI we got is the meme

[-] [email protected] 16 points 1 month ago

pretty sure iterate is the wrong word choice there

[-] [email protected] 15 points 1 month ago

They probably meant reiterate

[-] [email protected] 13 points 1 month ago

AI should be used for this, yes; however, advertising is more profitable.

[-] [email protected] 12 points 1 month ago

Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery; they were not easily detectable by eye, and in any case one human cannot scan 15k images in one hour. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.
