Studies

Learn more about working with emotions through the following English-language articles:

Cespedes-Guevara, J. & Eerola, T. (2018). Music Communicates Affects, Not Basic Emotions – A Constructionist Account of Attribution of Emotional Meanings to Music
https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00215/full

Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars suggested that these constraints to musical expressivity are explained by the existence of a shared acoustic code to the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music’s ability to express core affects and the influence of top-down and contextual information in the listener’s mind. We start by reviewing the problems with the concept of Basic Emotions, and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings to conclude that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation, where musical expressivity is grounded on two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music does not arise from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as using stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal of a constructionist account of perception of emotions in music, and spell out the ways in which this approach is able to make sense of past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be vital to move beyond the popular Basic Emotion paradigm and start untangling the emergence of emotional experiences with music in the actual contexts in which they occur.

Cowen, A. S., Fang, X., Sauter, D. & Keltner, D. (2020). What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures
https://www.pnas.org/doi/10.1073/pnas.1910704117

What is the nature of the feelings evoked by music? We investigated how people represent the subjective experiences associated with Western and Chinese music and the form in which these representational processes are preserved across different cultural groups. US (n = 1,591) and Chinese (n = 1,258) participants listened to 2,168 music samples and reported on the specific feelings (e.g., “angry,” “dreamy”) or broad affective features (e.g., valence, arousal) that they made individuals feel. Using large-scale statistical tools, we uncovered 13 distinct types of subjective experience associated with music in both cultures. Specific feelings such as “triumphant” were better preserved across the 2 cultures than levels of valence and arousal, contrasting with theoretical claims that valence and arousal are building blocks of subjective experience. This held true even for music selected on the basis of its valence and arousal levels and for traditional Chinese music. Furthermore, the feelings associated with music were found to occupy continuous gradients, contradicting discrete emotion theories. Our findings, visualized within an interactive map (https://www.ocf.berkeley.edu/~acowen/music.html), reveal a complex, high-dimensional space of subjective experience associated with music in multiple cultures. These findings can inform inquiries ranging from the etiology of affective disorders to the neurological basis of emotion.

Eerola, T. & Vuoskoski, J. K. (2011). A comparison of the discrete and dimensional models of emotion in music
https://doi.org/10.1177/0305735610362821

The primary aim of the present study was to systematically compare perceived emotions in music using two different theoretical frameworks: the discrete emotion model, and the dimensional model of affect. A secondary aim was to introduce a new, improved set of stimuli for the study of music-mediated emotions. A large pilot study established a set of 110 film music excerpts: half were moderately and highly representative examples of five discrete emotions (anger, fear, sadness, happiness and tenderness), and the other half moderate and high examples of the six extremes of three bipolar dimensions (valence, energy arousal and tension arousal). These excerpts were rated in a listening experiment by 116 non-musicians. All target emotions of highly representative examples in both conceptual sets were discriminated by self-ratings. Linear mapping techniques between the discrete and dimensional models revealed a high correspondence along two central dimensions that can be labelled as valence and arousal, and the three dimensions could be reduced to two without significantly reducing the goodness of fit. The major difference between the discrete and dimensional models concerned the poorer resolution of the discrete model in characterizing emotionally ambiguous examples. The study offers systematically structured and rich stimulus material for exploring emotional processing.
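To make the "linear mapping" idea concrete, here is a minimal sketch (not the authors' code or data): discrete-emotion ratings are regressed onto valence and arousal ratings with ordinary least squares, and the goodness of fit of that mapping is read off as R^2. All matrices and dimensions below are random placeholders.

    # Hypothetical illustration only: a linear mapping from discrete-emotion ratings
    # (anger, fear, sadness, happiness, tenderness) to dimensional ratings
    # (valence, arousal), fitted with ordinary least squares on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    discrete = rng.normal(size=(110, 5))      # 110 excerpts x 5 discrete-emotion ratings
    dimensional = rng.normal(size=(110, 2))   # 110 excerpts x (valence, arousal) ratings

    mapping = LinearRegression().fit(discrete, dimensional)
    print(f"R^2 of the discrete-to-dimensional mapping: {mapping.score(discrete, dimensional):.2f}")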

Juslin, P. N. (2013). From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotion
https://doi.org/10.1016/j.plrev.2013.05.008

The sound of music may arouse profound emotions in listeners. But such experiences seem to involve a ‘paradox’, namely that music – an abstract form of art, which appears removed from our concerns in everyday life – can arouse emotions – biologically evolved reactions related to human survival. How are these (seemingly) non-commensurable phenomena linked together? The key is to understand the processes through which sounds are imbued with meaning. It can be argued that the survival of our ancient ancestors depended on their ability to detect patterns in sounds, derive meaning from them, and adjust their behavior accordingly. Such an ecological perspective on sound and emotion forms the basis of a recent multi-level framework that aims to explain emotional responses to music in terms of a large set of psychological mechanisms. The goal of this review is to offer an updated and expanded version of the framework that can explain both ‘everyday emotions’ and ‘aesthetic emotions’. The revised framework – referred to as BRECVEMA – includes eight mechanisms: Brain Stem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Contagion, Visual Imagery, Episodic Memory, Musical Expectancy, and Aesthetic Judgment. In this review, it is argued that all of the above mechanisms may be directed at information that occurs in a ‘musical event’ (i.e., a specific constellation of music, listener, and context). Of particular significance is the addition of a mechanism corresponding to aesthetic judgments of the music, to better account for typical ‘appreciation emotions’ such as admiration and awe. Relationships between aesthetic judgments and other mechanisms are reviewed based on the revised framework. It is suggested that the framework may contribute to a long-needed reconciliation between previous approaches that have conceptualized music listeners’ responses in terms of either ‘everyday emotions’ or ‘aesthetic emotions’.

Juslin, P. N. & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms
https://doi.org/10.1017/S0140525X08005293

Research indicates that people value music primarily because of the emotions it evokes. Yet, the notion of musical emotions remains controversial, and researchers have so far been unable to offer a satisfactory account of such emotions. We argue that the study of musical emotions has suffered from a neglect of underlying mechanisms. Specifically, researchers have studied musical emotions without regard to how they were evoked, or have assumed that the emotions must be based on the “default” mechanism for emotion induction, a cognitive appraisal. Here, we present a novel theoretical framework featuring six additional mechanisms through which music listening may induce emotions: (1) brain stem reflexes, (2) evaluative conditioning, (3) emotional contagion, (4) visual imagery, (5) episodic memory, and (6) musical expectancy. We propose that these mechanisms differ regarding such characteristics as their information focus, ontogenetic development, key brain regions, cultural impact, induction speed, degree of volitional influence, modularity, and dependence on musical structure. By synthesizing theory and findings from different domains, we are able to provide the first set of hypotheses that can help researchers to distinguish among the mechanisms. We show that failure to control for the underlying mechanism may lead to inconsistent or non-interpretable findings. Thus, we argue that the new framework may guide future research and help to resolve previous disagreements in the field. We conclude that music evokes emotions through mechanisms that are not unique to music, and that the study of musical emotions could benefit the emotion field as a whole by providing novel paradigms for emotion induction.

Randall, W. M., Baltazar, M. & Saarikallio, S. (2023). Success in reaching affect self-regulation goals through everyday music listening
https://doi.org/10.1080/09298215.2023.2187310

While music listening on mobile phones can serve many affect-regulatory goals, success in reaching these goals is yet to be empirically assessed. This study aimed to determine how frequently listeners successfully reach their affect-regulatory goals, and the predictors of this success. Data were collected using the experience sampling app MuPsych, from 293 Finnish participants. Goals were successfully reached in less than half of cases, with adults more successful than adolescents. Success was determined largely within contexts, and strongly predicted by an initial low-valenced emotional state of the listener, suggesting that music listening is particularly useful for those in negative states.

Saarikallio, S., Nieminen, S. & Brattico, E. (2012). Affective reactions to musical stimuli reflect emotional use of music in everyday life
https://doi.org/10.1177/1029864912462381

Music is a common means for regulating affective states in everyday life, but little is known about the individual differences in this behaviour. We investigated affective reactions to musical stimuli as an explanatory factor. Forty-four young adults rated self-selected music regarding perceived and felt emotions, preference, pleasantness and beauty. The ratings were reduced into five factors representing affective response tendencies. The participants also filled in the Music in Mood Regulation (MMR) questionnaire assessing seven music-related mood regulation strategies in everyday life. High beauty and pleasantness ratings for liked music correlated with the use of music for inducing strong emotional experiences, while ratings reflecting high agreement with the emotional content of preferred musical stimuli correlated with using music as a means for dealing with personal negative emotions. Regarding musical background, informal engagement through listening, but not formal musical training, correlated with increased use of music for mood regulation. The results clarify the link between the affective reactivity to music and the individual ways of using music as a tool for emotional self-regulation in everyday life.

Scherer, K. R. (2004). Which Emotions Can be Induced by Music? What Are the Underlying Mechanisms? And How Can We Measure Them?
https://doi.org/10.1080/0929821042000317822

The study of emotional effects of music is handicapped by a lack of appropriate research paradigms and methods, due to a dearth of conceptual-theoretical analyses of the process underlying emotion production via music. It is shown that none of the three major assessment methods for emotion induction – lists of basic emotions, valence-arousal dimensions, and eclectic emotion inventories – is well suited to the task. By focusing on a small number of evolutionarily continuous basic emotions one downplays the more complex forms of emotional processes in humans, especially affective feeling states produced by music which do not serve adaptive behavioral functions. Similarly, a description of emotional effects of music limited to valence and arousal gradations precludes assessment of the kind of qualitative differentiation required by the study of the subtle emotional effects of music. Finally, eclectic lists of emotions generated by researchers to suit the needs of a particular study may lack validity and reliability and render a comparison of research results difficult. A second problem consists in the tendency to assume that “emotions” and “feelings” are synonyms. It is suggested that “feelings” can be profitably conceptualized as a central component of emotion, which integrates all other components and serves as the basis for the conscious representation of emotional processes and for affect regulation. It is proposed that a radical paradigm change is required to free research on the emotional effects of music from the excessive constraints imposed by these two common misconceptions. Concretely, it is suggested that affect produced by music should be studied as (more or less conscious) feelings that integrate cognitive and physiological effects, which may be accounted for by widely different production rules. Suggestions for new ways of measuring affective states induced by music are made.

Vuoskoski, J. K. & Eerola, T. (2011). Measuring music-induced emotion: A comparison of emotion models, personality biases, and intensity of experiences
https://doi.org/10.1177/1029864911403367

Most previous studies investigating music-induced emotions have applied emotion models developed in other fields to the domain of music. The aim of this study was to compare the applicability of music-specific and general emotion models – namely the Geneva Emotional Music Scale (GEMS), and the discrete and dimensional emotion models – in the assessment of music-induced emotions. A related aim was to explore the role of individual difference variables (such as personality and mood) in music-induced emotions, and to discover whether some emotion models reflect these individual differences more strongly than others. One hundred and forty-eight participants listened to 16 film music excerpts and rated the emotional responses evoked by the music excerpts. Intraclass correlations and Cronbach alphas revealed that the overall consistency of ratings was the highest in the case of the dimensional model. The dimensional model also outperformed the other two models in the discrimination of music excerpts, and principal component analysis revealed that 89.9% of the variance in the mean ratings of all the scales (in all three models) was accounted for by two principal components that could be labelled as valence and arousal. Personality-related differences were the most pronounced in the case of the discrete emotion model. Personality, mood, and the emotion model used were also associated with the intensity of experienced emotions. Implications for future music and emotion studies are raised concerning the selection of an appropriate emotion model when measuring music-induced emotions.
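As a rough illustration of the variance-explained analysis mentioned above, the sketch below fits a two-component principal component analysis to a matrix of mean ratings and reports how much variance the two components capture. The rating matrix is a synthetic stand-in, so the result will not match the 89.9% reported in the article.

    # Hypothetical illustration only: variance captured by two principal components
    # in a matrix of mean emotion ratings (excerpts x rating scales). Synthetic data.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    mean_ratings = rng.normal(size=(16, 12))  # 16 excerpts x 12 pooled rating scales

    pca = PCA(n_components=2).fit(mean_ratings)
    print(f"Variance explained by two components: {pca.explained_variance_ratio_.sum():.1%}")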

Zentner, M., Grandjean, D. & Scherer, K. R. (2008). Emotions Evoked by the Sound of Music: Characterization, Classification, and Measurement
https://doi.org/10.1037/1528-3542.8.4.494

One reason for the universal appeal of music lies in the emotional rewards that music offers to its listeners. But what makes these rewards so special? The authors addressed this question by progressively characterizing music-induced emotions in 4 interrelated studies. Studies 1 and 2 (n = 354) were conducted to compile a list of music-relevant emotion terms and to study the frequency of both felt and perceived emotions across 5 groups of listeners with distinct music preferences. Emotional responses varied greatly according to musical genre and type of response (felt vs. perceived). Study 3 (n = 801), a field study carried out during a music festival, examined the structure of music-induced emotions via confirmatory factor analysis of emotion ratings, resulting in a 9-factorial model of music-induced emotions. Study 4 (n = 238) replicated this model and found that it accounted for music-elicited emotions better than the basic emotion and dimensional emotion models. A domain-specific device to measure musically induced emotions is introduced: the Geneva Emotional Music Scale.
