The field of artificial intelligence (AI) has evolved considerably since the end of the 20th century. While the technology shows great promise for solving everyday tasks, the fairness of decisions made by AI models needs to be addressed. There have been examples of AI models making unfair and prejudiced decisions, which has led to a growing need to understand ‘why’ and ‘how’ these models reach their conclusions. This is particularly important in healthcare, where the outcomes of AI models play a decisive role in the well-being of patients. In addition, systems for detecting and mitigating bias need to be developed so that the advantages of AI can be realized in healthcare. A scoping review was carried out to study the sources, nature and impact of bias in AI models. Results showed that bias can be data-driven, algorithmic or introduced by humans. These biases propagate deeply rooted societal inequalities, lead to the misdiagnosis of patient groups, and further perpetuate global health inequity. Mitigation strategies are proposed at the various stages of the machine learning pipeline, including scrutinizing how data are collected, improving the representation of patient groups, training models appropriately and evaluating model performance across patient subgroups. In conclusion, it must be ascertained that AI decisions are free of unwarranted bias and demonstrably fair. To this end, AI models should be embedded in systems through which bias can be predicted, measured, explained and then mitigated.
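The evaluation-stage step described above — checking model performance separately for each patient group — can be sketched as follows. This is a minimal illustration, not a method from the review itself; the data, group labels and function names are hypothetical, and recall (true positive rate) stands in for whatever clinical performance metric is appropriate.

```python
# Minimal sketch of a subgroup performance audit: compare a model's recall
# across patient groups and report the largest gap. All data are hypothetical.

def recall(y_true, y_pred):
    """True positive rate: fraction of actual positives correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    pos = sum(y_true)
    return tp / pos if pos else 0.0

def recall_gap_by_group(y_true, y_pred, groups):
    """Per-group recall and the largest difference between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = recall([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical labels and predictions for two patient groups, "A" and "B"
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates, gap = recall_gap_by_group(y_true, y_pred, groups)
print(rates, gap)
```

A large gap flags a group the model serves poorly, which would then trigger the earlier pipeline stages (data collection and representation) for remediation.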