Improving Medical Image Processing With AI

Faster, smarter imaging opens doors to everything from 4D modeling to higher-resolution images with less noise.


Machine learning is being integrated with medical image processing, one of the most useful technologies for medical diagnosis and surgery, greatly expanding the amount of useful information that can be gleaned from a scan or MRI.

For the most part, ML is being used to augment manual processes that medical personnel use today. While the goal is to automate many of these functions, it’s not clear how quickly that will happen for clinical use. Automated medical diagnosis is still a new field, and today it’s in the hands of startups and university researchers. Nevertheless, it is expected to grow rapidly. In fact, IDTechEx forecasts that AI-enabled image-based medical diagnostics will be a big business, exceeding $3 billion by 2030 across five segments — cancer, cardiovascular, respiratory, retinal, and neurodegenerative diseases.

Image processing has been deployed in the medical field for decades, but the kind of data that could be obtained from those devices has been limited to a manual review of the results. “Much of DSP and signal processing technology have found a good home in medical but those have mostly been around capturing images, digitally being able to store them,” said Sam Fuller, director of marketing at Flex Logix.

Image processing is the ingestion and classification of 2D, 3D, and even 4D digital images made up of pixels of varying intensity in an array. The images come from a variety of medical imaging systems, such as MRI, CT, micro-CT, or FIB-SEM scanners. Images can be enhanced and put through morphological and segmentation processes that identify and label parts of the image as specific body parts, for instance. The resulting data and image may then be passed through a machine learning algorithm trained to find areas of interest.
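The enhancement and segmentation steps described above can be sketched with two textbook operations: intensity thresholding to pick out a tissue of interest, and a morphological erosion to clean up the resulting mask. This is a minimal NumPy illustration, not any vendor's actual pipeline; real medical software uses far more sophisticated methods.

```python
import numpy as np

def threshold_segment(image, lo, hi):
    """Return a binary mask of pixels whose intensity falls in [lo, hi]."""
    return (image >= lo) & (image <= hi)

def erode(mask):
    """One step of binary erosion with a 3x3 cross structuring element:
    a pixel survives only if it and its four neighbors are in the mask."""
    m = mask.copy()
    m[1:, :]  &= mask[:-1, :]   # neighbor above
    m[:-1, :] &= mask[1:, :]    # neighbor below
    m[:, 1:]  &= mask[:, :-1]   # neighbor left
    m[:, :-1] &= mask[:, 1:]    # neighbor right
    return m

# Toy 2D "scan": a bright 4x4 square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 200.0
mask = threshold_segment(img, 100, 255)   # selects the 4x4 square
core = erode(mask)                        # erosion keeps only the 2x2 interior
```

Erosion is one of the morphological operations mentioned above; paired with its dual (dilation), it removes speckle and smooths mask boundaries before the mask is handed to later stages.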

Highly trained radiologists read those images — and would do so with or without help from image processing systems. Add in machine learning and they can pinpoint areas of interest more quickly, and correlate data across prior and subsequent images. And that’s just the beginning.

3D printing for medical, surgical guides
For example, Simpleware software (acquired by Synopsys in 2016) is being used in clinical and educational settings to make 3D printed models of a body part specific to a single person. “We go from images to models,” said Kerim Genc, a biomedical engineer and the business development manager for the Simpleware Product Group at Synopsys.

The software takes 3D and 4D medical image data (DICOM) from scanning sources such as MRI, CT, and micro-CT, and cleans up the image data. Then it runs a segmentation on the images for specific anatomies.

“We do the training and it’s pretty lightweight,” said Genc. “It can be done on any laptop or PC locally. We do all the training, we do all the inferencing for that, and can deploy that at the customer. 3D is what you typically get. 4D, for example, is when you can get scans of a beating heart. So it’s not only in three dimensions of the space, but also in time.”

The images come in stacks from the clinical scans. Segmentation is the process that takes the stack of scans and puts them together in a 3D digital model. Simpleware software does this segmentation faster than human technicians and without heavy-duty AI accelerators. Its software runs on a desktop computer GPU. The process generates a file that can be used for 3D printing or imported into CAD and CAE programs.
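The stack-to-model step can be illustrated in a few lines: 2D slices are assembled into a 3D voxel array, a mask marks the anatomy of interest, and voxel spacing converts the voxel count into a physical volume. All values here are made up for illustration; the slice data, threshold, and spacings are placeholders for what a real DICOM series would provide.

```python
import numpy as np

# Hypothetical stack of 2D slices from a clinical scan (values are illustrative).
n_slices, h, w = 5, 4, 4
slices = [np.full((h, w), float(i)) for i in range(n_slices)]

# Assemble the slice stack into a single 3D volume, then mask the voxels
# that belong to the target tissue (a toy intensity threshold stands in
# for real segmentation).
volume = np.stack(slices, axis=0)     # shape (n_slices, h, w)
anatomy = volume >= 3                 # voxels bright enough to be "tissue"

# Voxel count times voxel size gives a physical volume estimate.
voxel_mm3 = 0.5 * 0.5 * 1.0           # assumed in-plane spacing and slice thickness
volume_mm3 = anatomy.sum() * voxel_mm3
```

The resulting binary volume is what a surface-extraction step (e.g. marching cubes) would turn into a printable mesh or a CAD-ready model.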

“You can do things like take pre-operative virtual measurements, carry out kind of virtual surgical planning,” said Genc. “You can develop plans for the surgeon before they actually go see the patient.” The 3D models make possible point-of-care (POC) 3D printing — printing a 3D model in the hospital or clinic to obtain a physical model of a patient’s anatomy for pre-surgical planning, training, and education. “For example, if you need to fit a plate to the patient’s bone, typically what surgeons would do is they’d go in, open up a patient, and then fit it on that patient. With 3D printing, you can all do that ahead of time.”

Understanding when to gain regulatory agency approvals can get complicated. When clinicians rely on the accuracy of the model to make clinical decisions, that software needs FDA 510(k) clearance in the U.S. for specific uses. Synopsys has FDA and CE (European) clearances through its Simpleware ScanIP Medical software, but it also makes non-FDA-approved versions available for research, education, and parts design.

“The AI tools are not FDA-cleared,” he noted. “That’s a different process.”

Image processing in diagnosis
Using AI image processing for diagnosis is the next big step, and it promises to speed up disease diagnosis. But this is still early days for AI-based diagnosis systems. “Now the advent of AI allows for the understanding of the image, or being able to start to take load off of medical staff, such as radiologists, to make them more productive,” said Flex Logix’s Fuller.

Image processing with AI algorithms that can interpret images and give a diagnosis — or at least hint at one — has garnered a lot of interest, and medical customers are looking at AI accelerators. “AI in health care/medical is one of the strongest AI areas that’s growing, and everyone’s hopping on that train,” said Subh Bhattacharya, Healthcare & Sciences lead at Xilinx.

Imaging processing is an important tool in medical diagnosis. “That’s our niche,” said Dana McCarty, vice president of sales and marketing at Flex Logix, which is developing an edge AI inference accelerator, the InferX X1, along with compilers and other support options. “We’re very focused on high-definition real-time image processing. We’ve developed our chip to be optimized for that space — machine vision, computer vision-type stuff.”

There is a lot of work underway to train and refine the algorithms used in these applications. “Taking and converting that into a product, that’s where we can bring a lot of help,” said Flex Logix’s Fuller. “If the algorithms exist, that training has occurred, but you want to build it into something that is robust and cost-effective. That’s where this whole kind of service makes a lot of sense, because converting that scientific development into an engineered product is a process that still needs to be done.”

Fig. 1: An AI image processing inference chip, the InferX X1, uses a dynamic TPU array. Source: Flex Logix

A number of chipmakers see this as a big market opportunity. “Our products are very good and useful in implementing AI algorithms for the purpose of inferencing,” said Xilinx’s Bhattacharya. “The reasons, in a nutshell, are availability of the dedicated AI processor blocks, the software stacks, and support for popular networks and models. We also have open-source reference designs. There are very ambitious tasks targeted in the health care industry today, from anatomical geometric measurements, to cancer detection, to radiology, surgery, drug discovery, and genomics. Most uses today center around diagnostic assistance of some kind from medical imaging, and not actual diagnosis or procedure. It’s meant to speed up menial tasks and improve accuracy and efficiency.”

Xilinx created an X-ray classification inferencing engine that uses its Zynq UltraScale+ MPSoC-based SOM platform to classify chest X-rays as normal, pneumonia, or COVID-19. The engine can be deployed as an edge medical appliance and can run the inferencing independently.
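The final step of a classifier like this can be illustrated with a softmax over the three classes. The logit values below are invented for the example; a real deployment would obtain them from the trained network running on the FPGA fabric.

```python
import numpy as np

CLASSES = ["normal", "pneumonia", "covid-19"]

def softmax(logits):
    """Convert raw scores to probabilities (shifted for numerical stability)."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical raw scores the model might emit for one chest X-ray.
logits = np.array([0.2, 2.1, 0.4])
probs = softmax(logits)
label = CLASSES[int(np.argmax(probs))]
```

In a triage setting the probabilities matter as much as the top label, since low-confidence results are the ones routed to a radiologist for review.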

Interest in inferencing
Many customers are asking for similar inferencing tasks. Flex Logix’s McCarty regularly sees three segments:

  • Image de-noising. Cleaning up the initial image. 3D image processing uses image filtering to remove or reduce unwanted noise or artifacts from the images. “It’s an analog image going to digital, so you clean it up,” said McCarty. “Our customers say it makes the image much more visible to the user. Radiologists can recognize images five times faster once it’s cleaned up.” Noise can make a digital image very hard to read. “If somebody has a hip implant and they get a CT scan, all sorts of noise come into the image data. It’ll look like a bright white star with like rays coming out.”
  • Object detection. Highlighting what is different in the images, a machine learning inference can provide hints for where to look. “After they clean the image, they do object detection on the image,” said McCarty. “It can be up to three times faster when they highlight that for somebody to be able to see it and say, ‘Go look at this area,’ or ‘Yes, this is a problem.’”
  • Pose estimation. “This one we’re just starting. Changes in your gait can provide medical data back to a potential user,” said McCarty. “In sports — and again, this is very preliminary — we’re talking to a medical facility about changes in your gait and what that means for an athlete.”
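The de-noising segment above can be as simple in principle as a median filter, which suppresses isolated bright artifacts like the streaks a metal implant leaves in CT data. This pure-NumPy sketch shows the idea; production systems use much more sophisticated, often learned, filters.

```python
import numpy as np

def median_denoise(image):
    """3x3 median filter: replace each interior pixel with the median of its
    neighborhood. Isolated bright outliers vanish because they never win
    a 9-value median; borders are left unchanged in this simple sketch."""
    out = image.copy()
    h, w = image.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(image[i-1:i+2, j-1:j+2])
    return out

# Uniform tissue with one bright noise spike.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0              # isolated artifact
clean = median_denoise(img)    # the spike's neighborhood median is 10
```

Unlike a simple blur, the median filter removes the outlier without smearing it into surrounding pixels, which is why variants of it remain a standard pre-processing step before object detection.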

Another target for AI in medical imaging is segmentation of organs. The heart is especially hard to segment. “It’s a moving organ, and it was based off customers who were doing work on that internally,” said Synopsys’ Genc. “We said, ‘Okay, this would be a good product to automate that process because it’s very difficult and time-consuming. For a novice user, it can take days to segment one. Even for experts, it’s a half-day to a full-day process to segment a heart. So automating that is really important. We’ve made over 40 automations. You can see the heart actually beating, which is really cool.”

Out of the comfort zone
But machine learning in image processing is not always cut-and-dried. It is subject to errors even if the training data set was very good. Humans make mistakes, but when machines make mistakes it is the subject of much greater concern. “Our expectations are different,” Fuller said.

So while capabilities improve, they often are coupled with extra caution in the medical field. “One thing we’ve stayed away from is any diagnostic AI,” said Genc, referring specifically to Synopsys’ Simpleware product, which produces guides for doctors doing certain procedures. “We’re just staying out of diagnostic AI — that’s really the Wild West. That’s where you see a lot of algorithms coming out. Google is ready. These companies get the data sets. But then how do you validate them? How do you structure them? It still seems fairly scattered, and there’s a lot of questions, such as, ‘Is this really useful? How do we correct it?’ You’ll see a lot of articles like that coming out, so we just kind of stay out of that space because it’s very much, ‘Does this person have cancer, yes or no?’ It is a big responsibility to put on an AI tool.”

Fig. 2: Creating models from images. Source: Synopsys

Even if automated diagnosis using image processing is a ways off, machine learning and AI systems are still being devised for other areas of health care, such as the electronic health record (EHR). EHRs, in some studies and in a limited way, are being mined for clues that could lead to better detection and treatment of cancer, especially if DNA information is part of the record.

“The health care industry is evolving rapidly to become more digital, especially during the COVID-19 pandemic, where remote consultations have become more commonplace,” said Peter Ferguson, director of Healthcare Technologies at Arm. “As part of this evolution, many countries have adopted electronic storage of patient records and are able to use that information to help treat patients quicker and more effectively — for example, using AI to help triage information within medical records such as X-ray images, or CT scans. This is generating massive amounts of data. It is estimated a single patient generates close to 80 megabytes each year in imaging and electronic medical records.”

Now, the question is what else can be done with that data, and which companies are willing to try.

