Voice assignment in vocal quartets using deep learning models based on pitch salience

Authors

Cuesta H, Gómez E

Type

Scholarly article

Journal title

Transactions of the International Society for Music Information Retrieval

Publication year

2022

Volume

5

Number

1

Pages

99-112

ISSN

2514-3298

Publication State

Published

Abstract

This paper deals with the automatic transcription of audio performances of four-part a cappella singing. In particular, we exploit an existing deep-learning-based multiple-F0 estimation method and complement it with two neural network architectures for voice assignment (VA) in order to create a music transcription system that converts an input audio mixture into four pitch contours. To train our VA models, we create a novel synthetic dataset by collecting 5381 choral music scores from public-domain music archives, which we make publicly available for further research. We compare the performance of the proposed VA models on different types of input data, as well as against a hidden Markov model (HMM)-based baseline system. In addition, we assess the generalization capabilities of these models on audio recordings with differing pitch distributions and vocal music styles. Our experiments show that the two proposed models, a CNN and a ConvLSTM, perform very similarly, and both outperform the HMM-based baseline. We also observe a high confusion rate between the alto and tenor voice parts, whose pitch ranges commonly overlap, while the bass voice obtains the highest scores in all evaluated scenarios.
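The task the abstract describes is mapping a time-frequency pitch-salience representation to four per-voice pitch contours (SATB). As a non-authoritative sketch of the task only (not the paper's learned CNN/ConvLSTM models), a naive frame-wise baseline could assign the strongest salience peaks to voices by descending pitch; the function name, shapes, and threshold below are illustrative assumptions:

```python
import numpy as np

def assign_voices_naive(salience, n_voices=4, threshold=0.3):
    """Toy voice assignment from a salience map of shape (n_bins, n_frames).

    Per frame, keep the strongest bins above `threshold` and assign them
    to voices in descending frequency order (soprano, alto, tenor, bass).
    Returns an (n_voices, n_frames) array of bin indices, 0 = unvoiced.
    Illustrative only: the paper's VA models learn this mapping instead,
    which handles the alto/tenor range overlap far better than rank order.
    """
    n_bins, n_frames = salience.shape
    contours = np.zeros((n_voices, n_frames), dtype=int)
    for t in range(n_frames):
        frame = salience[:, t]
        # strongest n_voices bins, then drop those below the salience threshold
        peaks = [p for p in np.argsort(frame)[::-1][:n_voices]
                 if frame[p] >= threshold]
        # highest bin -> soprano, ..., lowest -> bass
        for v, p in enumerate(sorted(peaks, reverse=True)):
            contours[v, t] = p
    return contours
```

This rank-by-pitch heuristic fails exactly where the abstract reports the learned models also struggle most: when adjacent voices (e.g. alto and tenor) cross or overlap in pitch.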

Complete citation

Cuesta H, Gómez E. Voice assignment in vocal quartets using deep learning models based on pitch salience. Transactions of the International Society for Music Information Retrieval 2022; 5(1): 99-112.