A Comparative Analysis of Modeling and Predicting Perceived and Induced Emotions in Sonification


Sonification is the use of sound to convey information about data or events. Two types of emotion are associated with sounds: (1) “perceived” emotions, which listeners recognize as expressed by a sound, and (2) “induced” emotions, which listeners actually feel in response to a sound. Although listeners may widely agree on the emotion a given sound expresses, they often disagree about the emotion it induces, which makes induced emotions difficult to model. This paper describes the development of several machine learning and deep learning models that predict the perceived and induced emotions associated with particular sounds, and it analyzes and compares the accuracy of those predictions. The results reveal that models built to predict perceived emotions are more accurate than those built to predict induced emotions; however, the gap in predictive power can be narrowed substantially by optimizing the machine learning and deep learning models. This research has applications in the automated configuration of hardware devices and their integration with software components in the Internet of Things, where security is of utmost importance.
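The core claim of the abstract — that induced emotions are harder to model than perceived ones because listeners agree less about them — can be illustrated with a minimal synthetic sketch. The setup below is entirely hypothetical (it is not the paper's model or data): both emotion targets share the same dependence on acoustic features, but the induced-emotion labels carry more rater noise, so a simple regressor recovers the perceived target more accurately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 sounds described by 10 acoustic features
# (stand-ins for, e.g., spectral and loudness summaries).
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
signal = X @ w_true

# Perceived labels: listeners largely agree -> low label noise.
y_perceived = signal + rng.normal(scale=0.5, size=n)
# Induced labels: listeners often disagree -> high label noise.
y_induced = signal + rng.normal(scale=3.0, size=n)

def ridge_fit_predict(X_tr, y_tr, X_te, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    k = X_tr.shape[1]
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(k), X_tr.T @ y_tr)
    return X_te @ w

def r2(y, y_hat):
    """Coefficient of determination on held-out data."""
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

split = 400  # simple train/test split
r2_perceived = r2(y_perceived[split:],
                  ridge_fit_predict(X[:split], y_perceived[:split], X[split:]))
r2_induced = r2(y_induced[split:],
                ridge_fit_predict(X[:split], y_induced[:split], X[split:]))
print(f"R^2 perceived: {r2_perceived:.2f}, induced: {r2_induced:.2f}")
```

Under this assumption the perceived-emotion model scores a markedly higher test R^2 than the induced-emotion model, even though the underlying feature-to-emotion mapping is identical; the ceiling on induced-emotion accuracy comes from label disagreement, not from the features.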


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).


Keywords: sonification; security alarm; acoustic features; sound analysis; Internet of Things; emotion prediction; IADSE; EmoSoundscape


Citation: Abri F, Gutiérrez LF, Datta P, Sears DRW, Siami Namin A, Jones KS. A Comparative Analysis of Modeling and Predicting Perceived and Induced Emotions in Sonification. Electronics. 2021; 10(20):2519. https://doi.org/10.3390/electronics10202519