Sept. 11, 2019
Artificial intelligence in healthcare is a major area of opportunity and excitement. But what if it harms patients in the process? Here are pitfalls to watch for as this technology rolls out.
Potential for Discrimination
AI involves the analysis of large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come from electronic health records and health insurance claims, but also from several other sources. AI can draw upon purchasing records, income data, criminal records and even social media for information about an individual’s health.
Researchers are already using AI to predict a multitude of medical conditions. These include heart disease, stroke, diabetes, cognitive decline, future opioid abuse and even suicide. As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you OK?” paired with “Goodbye” and “Please don’t do this.”
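The co-occurrence pattern described above can be illustrated with a deliberately simple sketch. Facebook's actual system is proprietary and far more sophisticated; the phrase lists and function below are hypothetical, assumed only for illustration.

```python
# Hypothetical illustration of a naive phrase co-occurrence rule.
# Not Facebook's actual algorithm, which is proprietary and uses
# machine-learned models rather than fixed keyword lists.

CONCERN_PHRASES = {"are you ok", "please don't do this"}  # assumed examples
FAREWELL_PHRASES = {"goodbye"}                            # assumed examples

def flags_for_review(post: str, comments: list) -> bool:
    """Flag a post when a farewell phrase in the post co-occurs
    with concerned phrases in the comments on it."""
    post_text = post.lower()
    comment_text = " ".join(comments).lower()
    has_farewell = any(p in post_text for p in FAREWELL_PHRASES)
    has_concern = any(p in comment_text for p in CONCERN_PHRASES)
    return has_farewell and has_concern

# A post saying goodbye, with worried replies, is flagged:
print(flags_for_review("Goodbye, everyone.",
                       ["Are you OK?", "Please don't do this."]))  # True
```

Even a toy rule like this shows why such systems raise privacy concerns: the signal comes entirely from content users posted for other purposes.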
Lack of Protections
The Americans with Disabilities Act does not prohibit discrimination based on future medical problems; it applies only to current and past ailments. In response to the rise of genetic testing, Congress enacted the Genetic Information Nondiscrimination Act, which prohibits employers and health insurers from considering genetic information and from making decisions based on related assumptions about people's future health conditions. No law imposes a similar prohibition with respect to non-genetic predictive data.
AI health prediction can also lead to psychological harm. For example, many people could be traumatized by learning that they are likely to suffer cognitive decline later in life. It is even possible that individuals will obtain health forecasts directly from commercial entities that bought their data. An individual could thus first learn of a dementia risk through an electronic advertisement urging the purchase of memory-enhancing products.