A foundational research roadmap for artificial intelligence (AI) in medical imaging was published this week in the journal Radiology. The report was based on the outcomes of a workshop on the future of AI in medical imaging, which gathered experts in the field at the National Institutes of Health in Bethesda, Maryland. The workshop was co-sponsored by the National Institute of Biomedical Imaging and Bioengineering, the Radiological Society of North America, the American College of Radiology, and the Academy for Radiology and Biomedical Imaging Research.
The collaborative report underscores the commitment by standards bodies, professional societies, governmental agencies, and private industry to work together to accomplish a set of shared goals in service of patients, who stand to benefit from the potential of AI to bring about innovative imaging technologies.
The report describes innovations that would help to produce more publicly available, validated and reusable data sets against which to evaluate new algorithms and techniques, noting that to be useful for machine learning these data sets require methods to rapidly create labeled or annotated imaging data. The roadmap of priorities for AI in medical imaging research includes:
new image reconstruction methods that efficiently produce images suitable for human interpretation from source data,
automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting,
new machine learning methods for clinical imaging data, such as tailored, pre-trained model architectures, and distributed machine learning methods,
machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence), and
validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets.
Langlotz, CP, et al. A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology. April 16, 2019.
Co-authors of the report with Curtis P. Langlotz were Bibb Allen, M.D.; Bradley J. Erickson, M.D., Ph.D.; Jayashree Kalpathy-Cramer, Ph.D.; Keith Bigelow, B.A.; Tessa S. Cook, M.D., Ph.D.; Adam E. Flanders, M.D.; Matthew P. Lungren, M.D., M.P.H.; David S. Mendelson, M.D.; Jeffrey D. Rudie, M.D., Ph.D.; Ge Wang, Ph.D.; and Krishna Kandarpa, M.D., Ph.D.
Holography: From Science Fiction to Scientific Fact
To the average person, holography is the stuff of science fiction. Many people were first exposed to the concept of practical holography in the original “Star Wars” film, released in 1977. Although the apparent 3D images represented in the film were of relatively low resolution, the possibilities were undeniably intriguing — and undoubtedly inspirational to a generation of budding scientists. Subsequent portrayals of the inherent possibilities of this technology were explored on television series, such as “Star Trek: The Next Generation,” in the late 1980s and early 1990s.
In that imagined world, holography was vastly superior to the grainy, static-filled images portrayed in “Star Wars.” Entire interactive worlds were recreated in a special space. The unimaginably advanced technology was primarily used for recreation. This fictional technology more closely resembled the 3D interactive “worlds” promised by various recently introduced virtual reality (VR) systems. Although actual VR technology is arguably in its infancy, and interactive content is still largely lacking, these systems come closest to reproducing the experience of entering a “holodeck,” where fully realized, interactive, imagined worlds can be explored at will.
A Brief History
Of course, none of these imagined uses of holographic technology reflect present, real-world applications. That’s not to say holography doesn’t exist. It does, and has done since before the time of the original “Star Trek” series, which debuted in 1966. Although that seminal science fiction series made no mention of holography, the technology already existed in the real world, having begun conceptual development as early as the 1940s. In 1971, the Hungarian-British physicist Dennis Gabor was awarded the Nobel Prize in Physics for his invention of the holographic method. His success with optical holography was made possible only by the invention of the laser, in 1960.
In essence, a hologram is a photographic recording of a light field. The recording is subsequently projected to create a faithful 3D representation of the holographed subject. Technically speaking, it involves the encoding of a light field as an interference pattern. The pattern diffracts light to create a reproduction of the original light field. Any objects present in that original light field appear to be present, viewable from any angle.
Depth cues — such as parallax and perspective — are retained, changing as expected depending on the viewpoint of the observer. Holograms have been compared to sound recordings. When a musician performs, the vibrations they produce are encoded, recorded, stored and later reproduced to evoke the original vibrations a listener would have experienced.
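The interference-pattern description above can be made a little more concrete. In the standard textbook notation (not given in the article), let O be the light wave scattered by the object and R the reference beam from the laser. The recording medium captures only intensity, but because the two waves interfere, the object wave is encoded in the cross terms:

```latex
% Recorded intensity of the interference pattern:
I = |O + R|^2 = |O|^2 + |R|^2 + O R^{*} + O^{*} R

% Re-illuminating the developed hologram (transmittance proportional
% to I) with the reference beam R reproduces the object wave:
R\,I = R\left(|O|^2 + |R|^2\right) + |R|^2 O + R^2 O^{*}
```

The term |R|²O is a scaled copy of the original object wave, which is why a viewer looking through the hologram sees the object with full parallax and perspective, as if it were still there.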
Of course, other forms of practical holography have been in common usage for decades. The so-called embossed hologram, which appears on many credit cards and even paper checks, was widely introduced in the mid-1980s. National Geographic magazine, which featured an image of a holographic eagle on its cover in 1984, marks the event among its most notable milestones.
The 2D embossed hologram image retains some of the characteristics of a traditional hologram, in that the image changes somewhat depending on one’s angle of view. It’s primarily used as a security measure, or as a marketing novelty (these mass-produced holograms have even appeared on boxes of children’s cereal). However, these illusions are not true holograms. While the National Geographic eagle was impressive, one could not simply examine the animal from any conceivable angle.
Guest post by Karen Holzberger, vice president and general manager, diagnostic solutions, Nuance Healthcare.
A few years ago, there was a witty car commercial advertising an alert feature that took the guesswork out of filling your tires by gently beeping to signal the appropriate pressure had been reached. It featured a series of vignettes where the car horn would beep, cautioning the owner to reconsider just as he was about to overdo something (for instance, betting all of his money on one roll of the dice). The concept of getting a reminder at the point of a decision is a compelling one, particularly if it can save you time or aggravation and guide you to do the right thing. In healthcare, any technology that can provide that level of support will have a profound impact on patient care.
Humor aside, that car commercial wasn’t far off the mark when it comes to healthcare challenges. Unnecessary medical imaging exposes patients to additional radiation doses and results in approximately $12 billion wasted each year, but it also has another unintended downstream effect. It has fueled a culture of medical certainty, where tests are ordered in hopes of shedding light on some of the grey areas of diagnostic imaging, including incidental findings. The reality is that incidental findings are almost always a given, but not always a problem. So how do you know what to test further and what to simply monitor? One radiologist may choose the former for a patient with an incidental nodule finding, while another might choose the latter — so who is right?
Beep! It’s important
When a radiologist sees a nodule with certain characteristics, it is important that he or she make a recommendation for follow-up imaging, which is why the American College of Radiology (ACR) has released clinical guidelines on incidental findings. By offering standard clinical decision support on findings covering eleven organs, the ACR is helping radiologists protect their patients through established best practices for diagnostic testing.
The results of having this information at radiologists’ fingertips are impressive. Studies show that when these clinical guidelines are built into existing workflows, radiologists align with them 90 percent of the time, compared with roughly 50 percent concordance for alternative methods such as paper printouts.