Medical Equipment - Monitoring Equipment

2 Key Benefits Of Deploying Edge AI At The Point Of Care

April 2023

The point of care is shifting, with monitoring, diagnosis, and treatment taking place increasingly outside the conventional hospital setting and moving into patients’ homes. This is an exciting shift in the paradigm of healthcare.

The arrival of cloud computing has allowed industries to accelerate data processing by leveraging its immense scalability. Medtech companies have deployed artificial intelligence and machine learning algorithms in the cloud, where there is virtually unlimited processing power. This has improved patient outcomes, for example, through improved diagnostics and digital biomarkers. However, cloud computing’s reliance on the transfer of data to and from remote servers has some limitations, preventing many applications from shifting to the home setting.

The first issue, latency, stems from the remoteness of the servers. Real-time processing and fast turnaround are often critical for rapid clinical decision-making, yet unpredictable load on shared cloud resources leads to unpredictable turnaround times and, where processing depends on timely data, can even alter the outcome.

The second issue is that cloud computing often requires a continuous connection to remote servers to exchange information. This reliance on an uninterrupted data pipe means any healthcare solution depending on cloud compute for AI algorithms will only work effectively in well-connected locations.

Last is data security: under laws such as HIPAA and GDPR, a patient's medical information is subject to strict handling and privacy requirements. This raises the question of how to safely train and run algorithms on third-party hardware and send data securely over networks. Such concerns have slowed the adoption of cloud-based AI in the medical field.

With growing demand for AI in medtech applications, technology that can mitigate these risks must be considered for future applications. Edge inference is the practice of deploying machine learning (ML) models directly onto devices, allowing data to be captured and processed at the point of care rather than in the cloud. Processing data at the edge enables real-time results, reduces reliance on network quality, and improves data security, and the growing availability of specialist hardware makes edge inference viable today. By identifying situations where these benefits can be obtained, medtech companies can use edge inference to shift the point of care.
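
As a rough sketch of what deploying a model directly onto a device can look like, the example below loads a pre-converted model with ONNX Runtime and classifies an image frame entirely on the local CPU, with no network round trip. The model file name, input shape, and retinal-screening framing are illustrative assumptions, not details of any product mentioned in this article.

```python
import numpy as np
import onnxruntime as ort  # lightweight runtime commonly used for on-device inference

# Hypothetical model exported for on-device use; the path and shape are placeholders.
MODEL_PATH = "retina_classifier.onnx"
INPUT_SHAPE = (1, 3, 224, 224)  # batch, channels, height, width

# Bind the session to the local CPU so inference never leaves the device.
session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def classify(frame: np.ndarray) -> int:
    """Run the model on one preprocessed frame and return the top class index."""
    scores = session.run(None, {input_name: frame.astype(np.float32)})[0]
    return int(np.argmax(scores))

# Dummy frame standing in for real camera input.
print(classify(np.zeros(INPUT_SHAPE, dtype=np.float32)))
```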

 

Bring Care To The Front Line

Devices running edge inference enable real-time diagnostics outside specialist centers, reducing the time to treatment. The biggest opportunities exist for conditions that can already be diagnosed relatively accurately but where patient outcomes are poor because diagnosis is not timely.

For example, a handheld retinal camera developed in Taiwan uses edge inference to diagnose diabetic eye disease, allowing primary caregivers to perform diagnoses that typically would be done by an ophthalmologist. Its diagnostic accuracy is comparable to that of a specialist, yet it is 10 times faster than a competing cloud-based offering and does not require referring the patient to an ophthalmologist. Bringing these capabilities to the point of care not only reduces the risk and time involved in patient transport but also makes specialist care accessible in primary care facilities without resident specialists.

Edge inference can also act as a real-time decision support tool for medical procedures. Virgo Surgical Video Solutions used edge inference in an endoscopy demo to detect pre-cancerous growths with a latency of 17 ms, which is unachievable by beaming the data to and from the cloud. GE HealthCare's X-ray machines equipped with its Critical Care Suite ML algorithms automatically measure endotracheal tube positioning within seconds, allowing physicians to correct positioning errors in real time. This could feasibly extend to other procedures involving the insertion of medical devices into the body, such as stents, as well as more advanced procedures such as surgery.
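
A simple way to check whether an on-device model actually meets a real-time budget is to benchmark it on representative frames, as in the sketch below. The 33 ms budget (roughly one frame of 30 fps video) and the `run_inference` callable are illustrative assumptions rather than figures from the products described above.

```python
import time
import statistics
from typing import Callable, Sequence

def latency_report(run_inference: Callable[[object], object],
                   frames: Sequence[object],
                   budget_ms: float = 33.0) -> None:
    """Time each local inference call and compare the tail latency against a budget."""
    timings_ms = []
    for frame in frames:
        start = time.perf_counter()
        run_inference(frame)  # any on-device model call, e.g. an ONNX Runtime session
        timings_ms.append((time.perf_counter() - start) * 1000.0)

    p95 = statistics.quantiles(timings_ms, n=20)[18]  # 95th-percentile latency
    print(f"median: {statistics.median(timings_ms):.1f} ms, p95: {p95:.1f} ms")
    print("within real-time budget" if p95 <= budget_ms else "exceeds real-time budget")
```

Running the same report against a cloud round trip makes the latency gap between local and remote processing concrete.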

Reduce Reliance On High-Quality Networks

While solutions that process all the data in the cloud exist, getting high-quality data from the edge to the cloud requires sufficient network bandwidth and reliability. This is by no means guaranteed, as the last mile of connectivity is governed by local internet service providers (ISPs). By processing data locally, edge inference reduces reliance on bandwidth, allowing care to be delivered in primary care facilities and in the home (e.g., on ambulatory monitoring systems) where network quality is not guaranteed.

Applications that capture data from body-worn devices are a prime example. Transmitting all raw data for cloud processing takes time and power, leading to delayed diagnostics, reduced battery life, and increased data costs. Engineers should consider whether the early stages of the AI pipeline could run locally on the device, as this significantly mitigates these challenges; a sketch of this idea follows.
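
One way to run early pipeline stages on a body-worn device, sketched below under purely illustrative assumptions, is to screen raw signal windows locally with a cheap check and transmit only the windows that look abnormal, rather than streaming the full raw signal. The window length, sampling rate, and energy threshold are placeholders, not clinical values or any vendor's algorithm.

```python
import numpy as np

WINDOW_SAMPLES = 250     # e.g., one second of signal at 250 Hz (illustrative)
ENERGY_THRESHOLD = 4.0   # placeholder screening threshold, not a clinical value

def windows_to_transmit(signal: np.ndarray) -> list[np.ndarray]:
    """Split a raw signal into windows and keep only those flagged by a local screen.

    Running this first stage on the device means the radio sends only the flagged
    windows onward for deeper (cloud or clinician) review, not the full raw stream.
    """
    flagged = []
    for i in range(len(signal) // WINDOW_SAMPLES):
        window = signal[i * WINDOW_SAMPLES:(i + 1) * WINDOW_SAMPLES]
        energy = np.mean((window - window.mean()) ** 2)  # cheap stand-in for a small model
        if energy > ENERGY_THRESHOLD:
            flagged.append(window)
    return flagged

# Example: nine quiet windows plus one noisy burst; only the burst is transmitted.
rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 0.5, WINDOW_SAMPLES * 9)
burst = rng.normal(0.0, 5.0, WINDOW_SAMPLES)
print(len(windows_to_transmit(np.concatenate([quiet, burst]))))  # -> 1
```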

Additionally, opportunities to leverage edge inference can be found where care capabilities are not reaching as many people as they could due to a lack of strong network infrastructure. For the handheld retinal camera, on-device processing allows diagnoses to be made in facilities without a reliably high-bandwidth internet connection. The reduced reliance on network connectivity improves the scalability of solutions and allows smaller-scale primary care centers, or even clinicians visiting patients at home, to deploy the same diagnostic capabilities as hospitals with stronger network infrastructure.

By Geoffrey Sheir, Nick Warrington, and Dan Talmage, PA Consulting

www.meddeviceonline.com