
One second to detect signs of haemorrhage in an entire head scan by an artificial neural network

M3 India Newsdesk Oct 31, 2019

An algorithm which scientists at University of California (UC) at San Francisco and UC Berkeley developed recently did better than two out of four expert radiologists at finding tiny brain haemorrhages in head scans. This advance may one day help doctors treat patients with traumatic brain injuries (TBI), strokes and aneurysms.


Artificial Intelligence (AI) technology has been making impressive strides over the past few years. The US Food and Drug Administration (FDA) has already approved more than 30 AI algorithms for healthcare. “Artificial intelligence and machine learning have the potential to fundamentally transform the delivery of health care. As technology and science advance, we can expect to see earlier disease detection, more accurate diagnosis, more targeted therapies and significant improvements in personalised medicine,” the FDA Commissioner, Dr Scott Gottlieb, stated on April 2, 2019, while highlighting steps toward a new, tailored FDA review framework for artificial intelligence-based medical devices.


Presently, radiologists carry out diagnostic medical imaging studies, including complex 3D modalities such as computed tomography (CT) scanning, reviewing thousands of images every day in a careful search for tiny abnormalities that may signal life-threatening emergencies. Artificial Intelligence technology may make life a tad easier for them.

“The number of images from each brain scan can be so large that on a busy day, radiologists may opt to scroll through some large 3D stacks of images using mice with frictionless wheels, almost like viewing a movie. But it could be much more efficient--and potentially more accurate--if AI technology could pick out the images with significant abnormalities, so radiologists could examine them more closely,” noted a press release from the University of California at San Francisco (UCSF), reporting on the study published in Proceedings of the National Academy of Sciences (PNAS) on October 21, 2019.


Detecting tiny abnormalities: the goal is the best accuracy in the least time

"We wanted something that was practical, and for this technology to be useful clinically, the accuracy level needs to be close to perfect," the press release quoted a cautionary note from Dr Esther Yuh, associate professor of radiology at UCSF and co-corresponding author of the study.

"The performance bar is high for this application, due to the potential consequences of a missed abnormality, and people won't tolerate less than human performance or accuracy," she added.

The speed of the algorithm the team developed was truly impressive. The press release stated that it took just one second to determine whether an entire head scan contained any sign of haemorrhage. Additionally, it traced the detailed outlines of the abnormalities it found--demonstrating their location within the brain's three-dimensional structure.

“Some spots may be on the order of 100 pixels in size, in a 3D stack of images containing over a million of them, and even expert radiologists sometimes miss them, with potentially grave consequences,” the press release from the university added.

The algorithm minimised the amount of time that physicians would need to spend reviewing its results. It found some small abnormalities that the experts missed, noted their location within the brain, classified them according to subtype and provided vital information that physicians need to determine the best treatment.


According to Professor Yuh, one of the hardest things to achieve with the AI technology was the ability to determine whether an entire exam, consisting of a 3D "stack" of approximately 30 images, was normal.

"Achieving 95 percent accuracy on a single image, or even 99 percent, is not OK, because in a series of 30 images, you'll make an incorrect call on one of every 2 or 3 scans," she said. "To make this clinically useful, you have to get all 30 images correct--what we call exam level accuracy. If a computer is pointing out a lot of false positives, it will slow the radiologist down, and may lead to more errors.” she cautioned.

"The haemorrhage can be tiny and still be significant, that’s what makes a radiologist's job so hard, and that's why these things occasionally get missed. If a patient has an aneurysm, and it's starting to bleed, and you send them home, they can die." The press release quoted Pratik Mukherjee, professor of radiology at UCSF and another co-author.

According to Professor Jitendra Malik, the Arthur J. Chick Professor of Electrical Engineering and Computer Sciences at Berkeley and a co-corresponding author of the study, the key was choosing which data to feed into the model. The new study used a type of deep learning known as a fully convolutional neural network, or FCN, which can be trained on a relatively small number of images--in this instance, 4,396 CT exams.

“But the training images used by the researchers were packed with information, because each small abnormality was manually delineated at the pixel level. The richness of this data--along with other steps that prevented the model from misinterpreting random variations or "noise" as meaningful--created an extremely accurate algorithm,” the UCSF press release noted.

Instead of feeding the network an entire stack of images, or even one complete image, at a time, the scientists fed it only a portion, or "patch", of an image, along with the slices that directly preceded and followed it in the stack for context. Viewing an image in patches is also how people read text or look at a computer screen, and it enabled the network, which the scientists called PatchFCN, to learn the relevant information in the data without "overfitting"--that is, without drawing conclusions from insignificant variations also present in the data.
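For readers who want a concrete picture, the sketch below shows a toy patch-based fully convolutional network in PyTorch. It is a minimal illustration written for this article, not the authors' code: the layer sizes, names and data are invented, and it reflects only the two ideas described above--the input stacks a patch with its neighbouring slices as channels, and the targets are pixel-level delineations.

```python
import torch
import torch.nn as nn

class TinyPatchFCN(nn.Module):
    """Toy FCN: input is a CT patch with the slices directly above and
    below stacked as 3 channels; output is a per-pixel haemorrhage
    probability map of the same spatial size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # 1x1 conv -> per-pixel score
        )

    def forward(self, x):  # x: (batch, 3, height, width)
        return torch.sigmoid(self.net(x))

# One training step against pixel-level labels (random stand-ins here,
# in place of the study's manually delineated abnormalities).
model = TinyPatchFCN()
patches = torch.randn(8, 3, 64, 64)                   # 8 patches with context slices
labels = torch.randint(0, 2, (8, 1, 64, 64)).float()  # per-pixel 0/1 masks
loss = nn.functional.binary_cross_entropy(model(patches), labels)
loss.backward()
```

Because the network is fully convolutional, the same weights slide over patches of any size, and training on many small patches exposes the model to far more examples than whole images would--one reason such designs resist overfitting.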

"We took the approach of marking out every abnormality--that's why we had much, much better data," said Malik.

"Then we made the best use possible of that data. That's how we achieved success." Dr. Malik confided.


What experts behind the algorithm have to say

According to the UCSF release, Dr Malik, a noted computer vision expert, receives many more requests to collaborate on research than he can honour, but he agreed to work on Yuh and Mukherjee's project because of its great potential to help patients.

Noting that the US FDA has thus far approved more than 30 AI algorithms, we asked Professor Esther Yuh about the current status of the algorithm she and her co-workers have developed.

“How long might it take to get FDA approval?” we asked her in an e-mail query.

“We will try for FDA approval based on our results. The FDA will need to see proof that the algorithm aids radiologists. It could do this by

  • increasing accuracy (reducing human error),
  • increasing the efficiency of radiologists (e.g., they can read more scans in the same amount of time), and
  • reducing the time to get results to the ordering emergency department physician, so they can make treatment decisions more quickly.

Based on current performance, we think the algorithm could help in many of these ways, and hope that the FDA will agree,” she responded confidently.

“You and your co-workers are now applying the algorithm to CT scans from trauma centers across the country that are enrolled in a research study led by Geoffrey Manley, UCSF professor and vice chair of neurosurgery. What are the aims of this further study?” we asked.

“Rigorous validation of the performance of an algorithm across many centers is important. This is something that we plan to do with TRACK-TBI,” she revealed.


An alert, independent reader may notice clear differences between the press release on the study from UCSF and that from the PNAS journal. “AI rivals expert radiologists at detecting brain haemorrhages”, the headline of the UCSF press release, carried a promotional flavour.

"Brain haemorrhage detection by artificial neural network” the headline of the PNAS press release was an uninspiring, prosaic statement! When asked for a comment, Professor Yuh said, “The headlines were written by the press offices of PNAS and UC San Francisco. PNAS may be more conservative and wish to prefer to avoid inflammatory statements."


What is the present status of Artificial Intelligence Technology in healthcare?

A review of FDA approvals of AI algorithms by The Medical Futurist in June 2019 shows that radiology and cardiology seem to be heavily populated by AI-based solutions (already seven approved algorithms in cardiology and 16 in radiology). “However, geriatrics, orthopaedics or pathology seem to be less prone to AI. Certain medical specialties do not even appear in the list yet, such as pulmonology, dermatology, surgery, OB/Gyn or forensic medicine,” the review noted.


According to the first systematic review and meta-analysis synthesising all the available evidence from the scientific literature, published in The Lancet Digital Health journal on September 24, 2019, Artificial Intelligence (AI) appears to detect diseases from medical imaging with levels of accuracy similar to those of health-care professionals.

A press release from the journal stated: “only a few studies were of sufficient quality to be included in the analysis, and the authors caution that the true diagnostic power of the AI technique known as deep learning--the use of algorithms, big data, and computing power to emulate human learning and intelligence--remains uncertain because of the lack of studies that directly compare the performance of humans and machines, or that validate AI's performance in real clinical environments.”

"We reviewed over 20,500 articles, but less than 1% of these were sufficiently robust in their design and reporting that independent reviewers had high confidence in their claims. What's more, only 25 studies validated the AI models externally (using medical images from a different population), and just 14 studies actually compared the performance of AI and health professionals using the same test sample," the press release quoted Professor Alastair Denniston from University Hospitals Birmingham NHS Foundation Trust, UK, who led the research.

"Within those handful of high-quality studies, we found that deep learning could indeed detect diseases ranging from cancers to eye diseases as accurately as health professionals. But it's important to note that AI did not substantially out-perform human diagnosis.” he clarified.


Dr Maggie Cheang, Team Leader in Genomic Analysis, The Institute of Cancer Research, London (ICR), differed with this view.

“This review article demonstrated the potential of applying AI to the analysis of radiological images in the diagnostic setting, but the promise is a bit premature."

“The AI algorithms remain a “black box” in how they are trained, optimised and validated. I am looking forward to a real “head-to-head” comparison between human assessment and AI diagnostics in terms of accuracy and, most importantly, added clinical benefit, in a multivariate analysis in a randomised trial--the same standard we have applied to other genomic biomarker IVDs (in vitro diagnostic devices).”


The jury is still out on the use of Artificial Intelligence algorithms in healthcare.

Interestingly, Cochrane Reviews, a trusted resource for high-quality information for health decisions, is now using artificial intelligence and machine learning to screen thousands of trial reports and identify those most likely to be relevant for inclusion in Cochrane Reviews. “This reduces the workload considerably for Cochrane Review authors, freeing their time to focus on more in-depth analysis work,” the publishers of the Reviews said.

 

Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of M3 India.

Dr K S Parthasarathy is a freelance science journalist and a former Secretary of the Atomic Energy Regulatory Board. He is available at ksparth@yahoo.co.uk
