
Radiologists will be able to use AI to detect brain tumors in the near future

 

19 November 2024, 11:43

A paper titled "Deep learning and transfer learning for brain tumor detection and classification," published in Biology Methods and Protocols, reports that scientists can train artificial intelligence (AI) models to distinguish brain tumors from healthy tissue. These AI models can already detect brain tumors in MRI images almost as well as a human radiologist.

Researchers have made steady progress in applying AI to medicine. AI is particularly promising in radiology, where waiting for technicians to process medical images can delay patient treatment. Convolutional neural networks are powerful tools that allow researchers to train AI models on large sets of images for recognition and classification.

In this way, networks can "learn" to distinguish between images. They are also capable of "transfer learning": scientists can reuse a model trained on one task for a new but related project.
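To make the idea concrete, here is a minimal transfer-learning sketch in PyTorch. It illustrates the general technique only, not the authors' code: the ImageNet-pretrained ResNet backbone, the two-class (healthy vs. tumor) head, and the dummy batch are all assumptions for the example.

```python
# Illustrative transfer-learning sketch (PyTorch); not the study's actual code.
# Assumes a generic ImageNet-pretrained backbone and a hypothetical
# two-class (healthy vs. tumor) fine-tuning task.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on a different task (here: ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head learns.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for the new, related task: 2 classes.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One fine-tuning step on a dummy batch (a stand-in for MRI slices).
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB 224x224 images
labels = torch.randint(0, 2, (8,))     # 0 = healthy, 1 = tumor
logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

The reused backbone carries over low-level visual features learned on the first task, so the new task can be learned from far less data than training from scratch would require.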

Although detecting camouflaged animals and classifying brain tumors involve very different kinds of images, the researchers suggested a parallel between an animal hidden by natural camouflage and a group of cancer cells blending in with the surrounding healthy tissue.

The learned process of generalization—grouping different objects under a single identifier—is important for understanding how the network can detect camouflaged objects. Such learning could be particularly useful for detecting tumors.

In this retrospective study of publicly available MRI data, the researchers examined how neural network models could be trained on brain cancer data, introducing a unique transfer-learning step, first training the networks to detect camouflaged animals, to improve their tumor-detection skills.

Using MRIs from publicly available online cancer data sources and control images of healthy brains (including Kaggle, the NIH Cancer Image Archive, and the VA Health System in Boston), the researchers trained networks to distinguish healthy from cancerous MRIs, to identify the area affected by cancer, and to recognize the prototypical appearance of each tumor type.

The researchers found that the networks were nearly perfect at identifying normal brain images, with only one or two false negatives, and at distinguishing cancerous from healthy brains. The first network detected brain cancer with an average accuracy of 85.99%; the second, 83.85%.

A key feature of the networks is that their decisions can be explained in multiple ways, which increases the trust that medical professionals and patients place in the models. Deep models often lack transparency, and as the field matures, the ability to explain network decisions is becoming important.

Thanks to this research, the network can now generate images that highlight the specific areas that led it to classify a scan as tumor-positive or tumor-negative. This would allow radiologists to check their own decisions against the network's results, adding confidence, as if a second, "robotic" radiologist were nearby, pointing to the area of the MRI that indicates a tumor.
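A common way to produce such evidence maps is a gradient-based saliency map, which highlights the pixels that most influence the classification. The sketch below illustrates that general idea in PyTorch; it assumes a classifier like the one sketched earlier and is not necessarily the specific explainability method used in the paper.

```python
# Illustrative gradient-saliency sketch (PyTorch); the paper's own
# explainability methods may differ. The model here is an untrained-on-tumors
# stand-in for a classifier like the one sketched above.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# A stand-in for a preprocessed MRI slice; gradients are tracked w.r.t. pixels.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
score = logits[0, logits.argmax()]
score.backward()

# Pixels with large gradient magnitude influenced the decision most;
# overlaying this map on the scan points to the suspicious region.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```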

In the future, the researchers believe it will be important to focus on creating deep network models whose decisions can be described in intuitive ways so that AI can play a transparent supporting role in clinical practice.

Although the networks did not distinguish between brain tumor types correctly in all cases, it was clear that the tumor types were represented differently within the network. Accuracy and clarity improved when the networks were first trained to recognize camouflage; in other words, transfer learning led to increased accuracy.

Although the best model tested was 6% less accurate than standard human detection, the study successfully demonstrates the quantitative improvement achieved through this learning paradigm. The researchers believe that this paradigm, coupled with the comprehensive application of explainability methods, will help bring needed transparency to future clinical AI research.

"Advances in AI make it possible to detect and recognize patterns more accurately," said lead author of the paper, Arash Yazdanbakhsh.

"This, in turn, improves image-based diagnostics and screening, but also requires more explanation about how the AI performs a task. The push for AI explainability improves human-AI interactions in general. This is especially important between medical professionals and AI designed for medical purposes.

"Clear and explainable models are better suited to aid diagnosis, track disease progression, and monitor treatment."
