While AI/ML-powered algorithms can provide great assistance in combating the virus, they cannot avoid the negative impact of underlying bias in data
The COVID-19 pandemic has driven the rapid adoption, integration and development of Artificial Intelligence (AI) and Machine Learning (ML) in the healthcare industry. AI is being widely used for early detection of COVID-19 and contact tracing, as well as to support vaccine research and development. According to GlobalData forecasts, the market for AI/ML platforms will reach USD 52bn in 2024, up from USD 29bn in 2019. However, while AI/ML-powered algorithms can provide great assistance in combating the virus, they cannot avoid the negative impact of underlying bias in data.
Kamilla Kan, Medical Device Analyst at GlobalData, comments: “According to GlobalData’s Global Emerging Technology Trends Survey 2020, more than three-quarters of companies believe AI has played a role in helping them survive the COVID-19 pandemic. While the rapid adoption of AI/ML platforms is particularly beneficial for the healthcare industry, the lack of regulation and underlying data bias concern many healthcare professionals.
“Without strong policies and procedures to prevent bias in ML algorithms, there is a possibility that underlying bias in training data and existing human biases will be embedded in ML-powered algorithms. In the healthcare industry, where patients’ lives are on the line, biased ML algorithms could have serious consequences. For instance, some algorithm designs could ignore how factors such as sex, gender, age or the presence of pre-existing diseases affect a patient’s current state of health. Understandably, many healthcare specialists are concerned that AI/ML-powered algorithms could negatively influence patient care.
“Currently, the FDA regulatory framework is not designed to handle adaptive algorithms. Without proper regulation, AI/ML-powered algorithms could be trained on one demographic and used on a different one, producing biased and improper results.”
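The failure mode described above, a model fitted on one demographic and deployed on another, can be illustrated with a deliberately simple sketch. The example below is not from GlobalData or any real clinical system: the biomarker values, group labels and thresholds are all synthetic assumptions, chosen only to show how accuracy can silently degrade when the training and deployment populations differ.

```python
# Toy illustration of demographic dataset shift (all data is synthetic).
# A threshold classifier is fitted on "group A" and then applied,
# unchanged, to "group B", whose disease onset occurs at a different
# biomarker level.

def fit_threshold(samples):
    """Pick the cutoff that best separates labels in the training group."""
    best_cut, best_acc = None, -1.0
    for cut in sorted({x for x, _ in samples}):
        acc = sum((x >= cut) == y for x, y in samples) / len(samples)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

def accuracy(samples, cut):
    """Fraction of samples the fitted cutoff classifies correctly."""
    return sum((x >= cut) == y for x, y in samples) / len(samples)

# Synthetic populations: in group A the condition appears around
# biomarker level 5; in group B (a different demographic) around 8.
group_a = [(x, x >= 5) for x in range(11)]
group_b = [(x, x >= 8) for x in range(11)]

cut = fit_threshold(group_a)      # learned from group A only
print(accuracy(group_a, cut))     # perfect on the training demographic
print(accuracy(group_b, cut))     # noticeably worse on the unseen one
```

The model is flawless on the population it was trained on, yet misclassifies every group B patient whose biomarker falls between the two onset levels, which is exactly the kind of silent demographic failure that regulation and bias audits aim to catch.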