TY - JOUR
TI - A Review of Bias and Fairness in Artificial Intelligence
AU - Rubén González-Sendino
AU - Emilio Serrano
AU - Javier Bajo
AU - Paulo Novais
T2 - International Journal of Interactive Multimedia and Artificial Intelligence
KW - Bias
KW - Fairness
KW - Responsible Artificial Intelligence
AB - Automating decision systems has led to hidden biases in the use of artificial intelligence (AI). Consequently, explaining these decisions and identifying responsibilities has become a challenge. As a result, a new field of research on algorithmic fairness has emerged. In this area, detecting and mitigating biases is essential to ensure fair and discrimination-free decisions. This paper makes three contributions: (1) a categorization of biases and how they are associated with the different phases of an AI model’s development (including the data-generation phase); (2) a review of fairness metrics to audit the data and the AI models trained with them (considering model-agnostic approaches when focusing on fairness); and (3) a novel taxonomy of procedures to mitigate biases in the different phases of an AI model’s development (pre-processing, training, and post-processing), together with transversal actions that help to produce fairer models.
PY - 9998
VL - In press
IS - In press
M1 - In press
SE - 1
SP - 1
EP - 13
SN - 1989-1660
UR - https://www.ijimai.org/journal/sites/default/files/2023-11/ip2023_11_001.pdf
ER -