
The Changes Needed to Effectively Bring AI to Healthcare

Jan. 30, 2019

Artificial intelligence is exciting in its possibilities for the diagnosis and management of patients, but it still often seems far off.

A new report from the Duke University Margolis Center for Health Policy seeks to shed light on the current state of AI technology in healthcare and how to effectively drive adoption of AI-based clinical support tools, Kevin Truong writes in MedCity News.

Changes to How FDA Evaluates & Approves AI Technology
Key changes needed to further the use of AI in healthcare center on how the Food and Drug Administration (FDA) operates.

The regulation of AI-based decision guidance tools falls mainly to the FDA, which is developing a pre-certification regulatory pathway that better fits the iterative software development process.

The pre-cert program relies on an excellence appraisal of developers who can reliably create high-quality, safe software; streamlined pre-market review of these products; and the identification of real-world performance analytics to help judge how these tools perform after they receive approval.

Importantly, the report highlights that the FDA needs to examine how software updates are verified and validated before being sent to the field, and which changes rise to the level of requiring a new regulatory submission.

Another area of potential improvement for the FDA would be more clearly delineating how much explanation of machine learning systems is needed for regulatory approval and how much must be made available to end users. These decisions will likely determine how liability is distributed if these technologies fail.

Building Clinician Trust
Among the priorities researchers identified to better drive adoption were building up a clinical evidence base proving that AI software tools actually improve care, providing transparent labeling and information to help both patients and clinicians understand the potential risk factors, and ensuring that AI systems themselves are ethically trained and able to protect patient privacy.

Demonstrating the value of these tools to health systems is key. Potential methodologies suggested by the report include verifying the accuracy of the product with data that reflects the provider’s specific patient population and engaging front-line physicians to help design products that fit into the clinician workflow.

Securing Payment & Coverage From Health Plans
Another key driver of adoption is securing coverage and payment from health plans, which can be supported by positive performance and ROI outcomes. While AI-based diagnostic support tools, like existing diagnostic tests, have a clear pathway for reimbursement, the report said that software that integrates more closely into EHRs and is used for all patients needs stronger guidelines from payers on the validation required for coverage.

The report also suggests that the FDA take a stronger role in labeling requirements around AI systems to display algorithmic safety and efficacy, as well as factors like input data requirements and applicable patient populations.

Ensuring Technology Works Well in Different Sites & Workflows
To drive adoption of AI-based systems in healthcare, developers also have an important role: taking increased responsibility for mitigating bias in data sets and evaluating the ability of their algorithms to adapt to different workflows and test sites.

Protecting Patient Privacy
Patient privacy also plays a role in ethical data usage. Researchers identified potential solutions including increased security standards, the establishment of certified third-party data holders, regulatory limits on downstream uses of data, and integrating cybersecurity into the initial design and development of the system.
