An ongoing study by IBM Research, together with Sage Bionetworks, Kaiser Permanente Washington Health Research Institute, and the University of Washington School of Medicine, has shown how combining machine learning algorithms with radiologists' assessments could improve the overall accuracy of breast cancer screenings.
Mammogram screenings, commonly used by radiologists for the early detection of breast cancer, often rely on a radiologist's expertise to visually identify signs of cancer, which is not always accurate, according to IBM researcher Stefan Harrer.
“Through the current state of human interpretation of mammography images, two things happen: misdiagnosis in terms of missing cancer, and also diagnosing cancer when it’s not there,” Harrer told ZDNet.
“Both cases are highly undesirable — you never want to miss cancer when it’s there, but also if you’re diagnosing cancer and it’s not there, it creates enormous pressure on patients, on the healthcare system, that could be avoided.
“That is exactly where we aim to improve things through the incorporation of AI (artificial intelligence) to decrease the rate of false positives, which is the diagnosis of cancer and also to decrease missing cancer when there is one.”
The research used over 310,800 de-identified mammograms and clinical data from Kaiser Permanente Washington (KPWA) and the Karolinska Institute (KI) in Sweden. Of the combined datasets, KI contributed around 166,500 examinations from 6,800 women, of which 780 were cancer positive, while the remaining 144,200 examinations were provided by KPWA from 85,500 women, of which 941 were cancer positive.
“We had hundreds of thousands of mammograms that were annotated. That means medical practitioners looked at them and placed a label on the piece of information that said, ‘Yep, there is tumor’, or ‘No, there is no’ … and what we did was we took a portion of that data — or what we called training data — and used that data and we trained the algorithms on recognizing tumors,” Harrer said.
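The supervised-learning workflow Harrer describes — splitting labeled examinations into a training portion and a held-out portion, then fitting a model to recognize the positive class — can be sketched as follows. This is a minimal illustrative example, not the study's actual pipeline: synthetic numbers stand in for mammogram image features, and the model and split parameters are assumptions.

```python
# Minimal sketch of supervised training on annotated data: a portion of
# the labeled examples becomes training data, a held-out portion is used
# to measure how well the model recognizes the positive ("tumor") class.
# Synthetic features stand in for real mammogram images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 "examinations", each reduced to 10 numeric features.
# Label 1 = "tumor present", 0 = "no tumor"; positives are rare,
# as in real screening data.
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 2.0).astype(int)

# Hold out a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# class_weight="balanced" counteracts the class imbalance so the model
# does not simply predict "no tumor" for everything.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
```

The `stratify=y` argument keeps the (rare) positive cases proportionally represented in both splits, which matters when positives make up well under 1% of examinations, as in the datasets above.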
Harrer noted that while using AI to interpret mammograms isn't new, the study was significant because of its scale.
“What we did here was to create a benchmark of the most advanced algorithm against by far the largest dataset of any kind,” he said.
“We expect this study is the start of any future work … the algorithms from this study will be publicly available for research purposes and can be used by anyone.”
Harrer added that the research also enabled the team to build a secure ecosystem, giving researchers access to datasets that were previously unavailable for research activities.
“What we’ve done is create an ecosystem that allows us to keep that dataset … behind a secure firewall … to allow researchers to build models and submit these models to us, as the organizers of this ecosystem,” he said.
“These models can then comb through the data and be tested, trained, and validated inside this secure environment by us, and then the performance of these models be returned to the researchers and they can keep on running and improving the model.”
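The "model-to-data" pattern Harrer describes can be sketched in miniature: the dataset never leaves the secure environment, a researcher submits a model, the organizer runs it against the private data, and only aggregate performance scores are returned. Everything here — the function names, data, and metric — is a hypothetical illustration, not IBM's actual system.

```python
# Hypothetical sketch of a model-to-data evaluation service: submitted
# models are run against private data behind the firewall, and only an
# aggregate performance report crosses back out.
from typing import Callable, Dict, Tuple

# Private labeled data, visible only inside the secure environment
# (invented stand-in values: a score per examination plus its label).
_PRIVATE_DATA: Tuple[Tuple[float, int], ...] = (
    (0.9, 1), (0.1, 0), (0.8, 1), (0.3, 0), (0.7, 1), (0.2, 0),
)

def evaluate_submission(predict: Callable[[float], int]) -> Dict[str, float]:
    """Run a submitted model on the private data; return only metrics,
    never the underlying records."""
    correct = sum(predict(x) == label for x, label in _PRIVATE_DATA)
    return {"accuracy": correct / len(_PRIVATE_DATA)}

# A researcher's submitted model: a simple threshold classifier.
def my_model(x: float) -> int:
    return 1 if x > 0.5 else 0

report = evaluate_submission(my_model)
```

The key design choice is that `evaluate_submission` is the only channel out of the secure environment, so researchers can iterate on their models using the returned scores without ever seeing the patient data itself.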
He also took the opportunity to debunk the myth that AI could take over jobs.
“AI will not replace all doctors. AI will replace doctors who don’t use AI,” Harrer said, acknowledging that the technology would “lead to a change in the field of radiology”.
The study built on the results of the Digital Mammography (DM) DREAM Challenge, a crowd-sourced competition held in 2016 that engaged the international scientific community to evaluate whether AI algorithms could meet or beat radiologists' interpretive accuracy.