Artificial intelligence can help reduce the amount of gadolinium used as a contrast agent in MRI (magnetic resonance imaging) scans, researchers suggest. Lowering the dose would limit the agent’s potential toxicity, as small amounts accumulate in body tissue, including the brain and bones, each time it is used.
The finding is of particular importance for patients with health conditions that require regular MRI assessments, such as multiple sclerosis (MS).
The data were reported in the study “Evaluation of Deep-Learning-Based Technology for Reducing Gadolinium Dosage in Contrast-Enhanced MRI Exams,” presented at the 104th Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA) Nov. 25-30 in Chicago.
Gadolinium is a heavy metal commonly used as a contrast agent to enhance images collected by MRI scans. Its use has been shown to be safe and to improve MRI’s diagnostic accuracy, as it aids in the visualization of the body’s internal structures.
Gadolinium-enhanced scans are often used to visualize inflammation, tumors, and blood vessels. The approach has proven to be a powerful tool for diagnosing several diseases, including MS.
However, studies have recently shown that for each dose of gadolinium given, about 1% is retained in the tissues. The impact of this is still unknown, but researchers are working to minimize potential toxicity effects and optimize patient safety while preserving the diagnostic accuracy of MRI scans.
“There is concrete evidence that gadolinium deposits in the brain and body,” Enhao Gong, PhD, a researcher at Stanford University and lead author of the study, said in a press release. “While the implications of this are unclear, mitigating potential patient risks while maximizing the clinical value of the MRI exams is imperative.”
With this in mind, researchers explored the potential of artificial intelligence to help reduce the amount of gadolinium used in routine practice. The team reviewed the protocol and scan images of about 200 patients who had undergone contrast-enhanced MRI exams for several medical indications.
They used deep learning — an artificial intelligence technique that teaches computers by example — to assess the data and compute an optimized protocol for gadolinium use. With this approach, the team could identify very small differences among images that would otherwise remain undetectable to the eye.
The team collected three sets of images for each patient: one to define baseline values (before contrast administration), a low-dose scan (acquired after 10% of the gadolinium dose had been administered), and a full-dose scan (acquired after 100% of the gadolinium dose had been administered).
Using this artificial intelligence-based analysis strategy, the team built a computer algorithm that, from the baseline and low-dose scans, could create images of similar quality to those obtained with full-dose contrast-enhanced MRI.
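The three-scan setup can be illustrated with a toy sketch. The study itself trained a deep neural network on real patient scans; here, purely as a hypothetical illustration with simulated numbers, a simple per-pixel linear model learns to predict a full-dose image from the pre-contrast baseline and the faint 10%-dose signal.

```python
import numpy as np

# Hypothetical simulation: none of these values come from the study.
rng = np.random.default_rng(0)
n_pixels = 1000

pre = rng.uniform(0.2, 0.8, n_pixels)           # pre-contrast baseline scan
enhancement = rng.uniform(0.0, 0.5, n_pixels)   # true contrast uptake per pixel
# Low-dose scan: only 10% of the enhancement is visible, plus scanner noise.
low = pre + 0.1 * enhancement + rng.normal(0, 0.005, n_pixels)
# Full-dose scan: the training target the model must reproduce.
full = pre + enhancement

# Stand-in for the deep network: least-squares fit from (baseline, low-dose)
# intensities to the full-dose intensity.
X = np.column_stack([np.ones(n_pixels), pre, low])
coef, *_ = np.linalg.lstsq(X, full, rcond=None)

predicted_full = X @ coef
error = np.mean(np.abs(predicted_full - full))
print(f"mean absolute error vs. true full-dose image: {error:.4f}")
```

In this toy setup the model effectively recovers the relation full ≈ pre + 10 × (low − pre), amplifying the subtle low-dose signal; the real network does something analogous but nonlinearly, across whole images rather than independent pixels.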
Researchers also demonstrated that it may be possible to create images of equivalent diagnostic quality to those obtained with full-dose contrast agent administration without using any contrast agent at all.
“We’re not trying to replace existing imaging technology,” Gong said. “We’re trying to improve it and generate more value from the existing information while looking out for the safety of our patients. Low-dose gadolinium images yield significant untapped, clinically useful information that is accessible now by using deep learning and [artificial intelligence].”
Additional studies are warranted to further explore the use of the algorithm across a broader range of MRI scanners and with different types of contrast agents.