02375nas a2200253 4500
9998 d
260    $c 07/2023
653 10 $a Alzheimer's Disease
653 10 $a Classification
653 10 $a Cognitive Computing
653 10 $a Convolutional Neural Network (CNN)
653 10 $a Deep Learning
653 10 $a Electromagnetic Optimization
100 1  $a Muhammad Irfan Khattak
700 1  $a Seyed Shahrestani
700 1  $a Mahmoud ElKhodr
245 00 $a The Application of Deep Learning for Classification of Alzheimer's Disease Stages by Magnetic Resonance Imaging Data
856    $u https://www.ijimai.org/journal/sites/default/files/2023-07/ip2023_07_009.pdf
300    $a 1-8
490 0  $v In Press
520 3  $a Detecting Alzheimer's disease (AD) in its early stages is essential for effective management, and screening for Mild Cognitive Impairment (MCI) is common practice. Among the many deep learning techniques applied to assess structural changes in the brain, Magnetic Resonance Imaging (MRI) combined with Convolutional Neural Networks (CNN) has attracted considerable research attention because of its excellent efficiency in automated feature learning across a variety of multilayer perceptron architectures. In this study, various CNNs are trained to predict AD on three different views of MRI images: Sagittal, Transverse, and Coronal. This research uses three years of T1-Weighted MRI data composed of 2182 NIFTI files, each presenting a single patient's Sagittal, Transverse, and Coronal views. The T1-Weighted MRI images from the ADNI database are first preprocessed to achieve a better representation. After preprocessing, the large number of slices imposes a substantial computational cost during CNN training. To reduce the number of slices for each view, this research proposes an intelligent probabilistic approach to slice selection such that the total computational cost per MRI is minimized. With hyperparameter tuning, batch normalization, and intelligent slice selection and cropping, accuracies of 90.05%, 82.4%, and 78.5% are achieved with the Transverse, Sagittal, and Coronal views, respectively. Moreover, when the views are stacked together, an accuracy of 92.21% is achieved for the combined views. In addition, the results are compared with those of other studies to demonstrate the performance of the proposed approach for AD detection.
022    $a 1989-1660
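
The abstract above describes training one CNN per MRI view (Sagittal, Transverse, Coronal) with batch normalization and then combining the views for a higher joint accuracy. The paper's code is not part of this record, so the following is only a minimal sketch under stated assumptions: the layer sizes, slice shape, number of classes, and the probability-averaging fusion step are illustrative choices, not the authors' implementation.

    # Hypothetical sketch of a per-view CNN classifier with late fusion.
    # All sizes, class labels, and the fusion rule are assumptions made
    # for illustration; they are not taken from the paper.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 3              # assumed AD stages, e.g. CN / MCI / AD
    SLICE_SHAPE = (128, 128, 1)  # assumed slice size after cropping

    def build_view_cnn(input_shape=SLICE_SHAPE, num_classes=NUM_CLASSES):
        """Small 2D CNN with batch normalization, trained on one view."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # One independent CNN per anatomical view.
    view_models = {v: build_view_cnn()
                   for v in ("sagittal", "transverse", "coronal")}

    def fuse_predictions(slices_by_view):
        """Combine the three views by averaging class probabilities.
        The paper reports a higher accuracy for the combined views, but
        its exact fusion method is not specified in this record."""
        probs = [view_models[v].predict(x, verbose=0)
                 for v, x in slices_by_view.items()]
        return np.mean(probs, axis=0).argmax(axis=1)

Each per-view model would be fitted separately on the selected slices for that view (e.g. view_models["transverse"].fit(x_train, y_train)); the fusion helper then yields one predicted stage per patient from the three view-specific probability vectors.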