Date: 06/2021
Keywords: Motivic Patterns; Convolutional Neural Network (CNN); Data Augmentation; Audio; Music; Information Retrieval
Authors: Aitor Arronte Alvarez; Francisco Gómez
Title: Motivic Pattern Classification of Music Audio Signals Combining Residual and LSTM Networks
URL: https://www.ijimai.org/journal/sites/default/files/2021-05/ijimai_6_6_21.pdf
Pages: 208-214
Volume: 6
ISSN: 1989-1660
Abstract: Motivic pattern classification from music audio recordings is a challenging task, especially for a cappella flamenco cantes, which are characterized by complex melodic variations, pitch instability, timbre changes, extreme vibrato oscillations, microtonal ornamentations, and noisy recording conditions. Convolutional Neural Networks (CNN) have proven to be very effective in image classification, and recent work in large-scale audio classification has shown that CNN architectures originally developed for image problems can be applied successfully to audio event recognition and classification with little or no modification. In this paper, CNN architectures are tested on a more nuanced problem: flamenco cantes intra-style classification using small motivic patterns. A new architecture is proposed that combines the advantages of residual CNNs as feature extractors with a bidirectional LSTM layer that exploits the sequential nature of musical audio data. We present a full end-to-end pipeline for audio music classification that includes a sequential pattern mining technique and a contour simplification method to extract relevant motifs from audio recordings; mel-spectrograms of the extracted motifs are then used as input to the different architectures tested. We investigate the usefulness of motivic patterns for the automatic classification of music recordings and the effect of audio length and corpus size on overall classification accuracy. Results show a relative accuracy improvement of up to 20.4% when CNN architectures are trained on acoustic representations of motivic patterns.