Rubén González-Sendino, Emilio Serrano, Javier Bajo, Paulo Novais. "A Review of Bias and Fairness in Artificial Intelligence." International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), vol. 9, pp. 5-17, December 2024. ISSN 1989-1660. https://www.ijimai.org/journal/bibcite/reference/3390

Keywords: Bias; Fairness; Responsible Artificial Intelligence

Abstract: Automating decision systems has led to hidden biases in the use of artificial intelligence (AI). Consequently, explaining these decisions and identifying responsibilities has become a challenge. As a result, a new field of research on algorithmic fairness has emerged. In this area, detecting and mitigating biases is essential to ensure fair, discrimination-free decisions. This paper contributes: (1) a categorization of biases and how they are associated with the different phases of an AI model's development (including the data-generation phase); (2) a review of fairness metrics for auditing the data and the AI models trained on them (considering model-agnostic approaches when focusing on fairness); and (3) a novel taxonomy of procedures for mitigating biases in the different phases of an AI model's development (pre-processing, training, and post-processing), together with transversal actions that help produce fairer models.
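The abstract refers to fairness metrics used to audit data and trained models. As a hedged illustration only (not drawn from the paper itself), the sketch below computes the demographic parity difference, one commonly used group-fairness metric; the function name and toy data are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred : binary predictions (0/1) from the audited model
    group  : binary protected-attribute indicator (0 = unprivileged, 1 = privileged)
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_privileged = y_pred[group == 1].mean()
    rate_unprivileged = y_pred[group == 0].mean()
    return abs(rate_privileged - rate_unprivileged)

# Toy audit: a value of 0.0 indicates equal positive-prediction rates;
# larger values flag a disparity between the two groups.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(demographic_parity_difference(y_pred, group))  # 0.5 -> strong disparity
```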