Universitas Psychologica

Do faces and body postures integrate similarly for distinct emotions, kinds of emotion and judgment dimensions?*

¿Las caras y posturas corporales se integran de forma similar para distintas emociones, las clases de emoción y dimensiones del juicio?

Ana Duarte Silva **
University of Coimbra, Portugal
Armando M. Oliveira ***
University of Coimbra, Portugal


Universitas Psychologica, vol. 15, no. 3, 2016

Pontificia Universidad Javeriana

Received: 10 March 2016

Accepted: 20 June 2016

Abstract: Faces and bodies are typically seen together in most social interactions, making it probable that facial and bodily expressions are perceived, and possibly processed, simultaneously. The methodology of Information Integration Theory and Functional Measurement was used here to address the following questions: Under what rules do facial and bodily information integrate in judgments over different dimensions of so-called basic and self-conscious emotions? How does the relative importance of face and body vary across emotions and judgment dimensions? Does the relative importance of face and body afford a basis for distinguishing between basic and self-conscious emotions? Three basic emotions (happiness, anger, sadness) and two social self-conscious emotions (shame and pride) were considered in this study. Manipulated factors were 3-D realistic facial expressions (varied across 5 levels of intensity) and synthetic 3-D realistic body postures (3 levels of intensity). Different groups of participants judged expressed intensity, valence, or arousal of the combined presentations of face and body, meaning that judgment dimension was varied between subjects. With the exception of arousal judgments, averaging was the predominant integration rule. Relative importance of face and body was found to vary as a function of judgment dimension, specific emotions and, for judgments of arousal only, type of emotion (basic versus self-conscious).

Keywords: facial expressions, body postures, functional measurement, relative importance, information integration theory.

Resumen: Caras y cuerpos son típicamente observados en conjunto en muchas de las interacciones sociales, haciendo probable que tanto las expresiones faciales como las corporales sean percibidas y eventualmente procesadas simultáneamente. La metodología de la Teoría de Integración de la Información y la Medición Funcional fue usada en este estudio para contestar las siguientes preguntas: ¿bajo qué reglas se integran las informaciones faciales y corporales en los juicios sobre diferentes dimensiones de las llamadas emociones básicas y autoconscientes?, ¿cómo varía la importancia relativa de la cara y del cuerpo a través de las emociones y las dimensiones de los juicios?, ¿la importancia relativa de la cara y del cuerpo ofrece una base para diferenciar entre las emociones básicas y las autoconscientes? En este estudio se consideraron tres emociones básicas (felicidad, ira y tristeza) y dos emociones autoconscientes (vergüenza y orgullo). Los factores manipulados fueron expresiones faciales realistas en modelos 3D (variadas a través de 5 niveles de intensidad) y posturas corporales realistas en modelos 3D (variadas en 3 niveles de intensidad). Diferentes grupos de participantes juzgaron la intensidad expresada, la valencia o la activación (arousal) de las presentaciones combinadas de cara y cuerpo, de modo que la dimensión de juicio fue variada entre sujetos. Con excepción de los juicios de activación, la regla de integración predominante fue el promedio. La importancia relativa de la cara y del cuerpo varió en función de la dimensión de juicio, de las emociones específicas y, solo en el caso de los juicios de activación, del tipo de emoción (básicas versus autoconscientes).

Palabras clave: expresiones faciales, posturas corporales, medición funcional, importancia relativa, teoría de integración de información.

To cite this article:

Duarte Silva, A., & Oliveira, A. (2016). Do faces and body postures integrate similarly for distinct emotions, kinds of emotion and judgment dimensions? Universitas Psychologica, 15 (3). http://dx.doi.org/10.11144/Javeriana.upsy15-3.fbis

In face-to-face interaction, facial expressions of emotion are typically accompanied by other nonverbal signals, among which we count prosody, gesticulation, and body postures (Gallois & Callan, 1986; Hess, Kappas, & Scherer, 1988; Scherer & Ellgring, 2007). Yet, emotion perception research has mainly rested on the presentation of stand-alone faces (de Gelder, 2009; Fernández-Dols & Carroll, 1997). In the minority of cases where more than one expressive channel was considered, face-voice combinations got the most attention, neglecting body cues (Dael, Mortillaro, & Scherer, 2012a; Hess et al., 1988). Two factors seemingly contributed to that: (1) the belief that the body can only inform on vague affective states (Ekman, 1965); and (2) the early availability of reliable measurement systems for the face (e.g., the Facial Action Coding System-FACS: Ekman & Friesen, 1978) and the voice (Scherer, 1986), contrasting with the lack of practicable systems for the coding of body movements (Dael, Mortillaro, & Scherer, 2012b; Harrigan, 2005).

This overall picture no longer holds, as a growing number of studies in roughly the last decade suggest that body postures can communicate specific affective dimensions and emotional states (e.g., Atkinson, Dittrich, Gemmell, & Young, 2004; de Gelder, 2009; de Gelder, Snyder, Greve, Gerard, & Hadjikhanide, 2004; Winters, 2005). Certain emotions, such as pride, have even been reported as better conveyed by the body than by the face (Tracy & Robins, 2008). In tandem, several corpora of bodily expressions were assembled —e.g., the UC Davis Set of Emotion Expressions (UCDSEE; Tracy, Robins, & Schriber, 2009) and the Geneva Multimodal Emotion Portrayals corpus (GEMEP; Bänziger, Mortillaro, & Scherer, 2012)—, and analytical coding systems for body movements were developed with a view to studying emotion expression. Noteworthy among these is the Body Action and Posture Coding System (BAP: Dael et al., 2012a) which, much like FACS (Ekman, Friesen, & Hager, 2002) does for the face, provides a descriptive protocol for segmenting skeletal movements into posture and action units based on human anatomy (body articulators, such as trunk, arms, neck, and their movements following muscle contractions).

Altogether, these developments fostered research on the body as a medium for emotional expression, both in isolation (see de Gelder & de Borst, 2015) and in conjunction with other sources, faces in particular (App, Reed, & MacIntosh, 2012; Hietanen & Leppänen, 2008; Meeren, van Heujnsbergen, & de Gelder, 2005; Van den Stock, Righart, & de Gelder, 2007). However, bodies are seldom given the same status as faces in these multichannel studies. Illustrating just that, all but one of the cited studies investigated whether bodies in congruent and incongruent face-body pairs influence the categorization of emotions in the face, which was taken as the target variable. While this allows asserting that body cues can alter the way facial expressions are perceived, it does not reveal how the two sources contribute jointly to emotion perception.

Recognizing this, App et al. (2012) attempted to address the integration of body and face by assessing their relative importance to different emotion-related judgments (of motion intention, towards or away from the observer, and of emotional state). Congruent and incongruent face-body pairs were still used as stimuli. Congruent pairs were photos of an angry body with an angry face, or of a fearful body with a fearful face (posed by five female and five male models); incongruent pairs combined each model’s angry face with his/her fearful body, or vice-versa. However, rather than judging the face in the compound, participants now judged the entire compound. The rationale for the interpretation of results was as follows: For emotion-state judgments, perceiving “angry face-fearful body” stimuli as angrier than the “fearful face-angry body” ones would mean greater reliance on the face than on the body. For motion-intention judgments, a larger percentage of “away” judgments for “angry face-fearful body” than for “fearful face-angry body” would mean larger reliance on the body.

Although the hypothesis that relative importance depends on the type of judgment is well taken, the study by App et al. (2012) is inadequate to fulfill its purposes. One critical unchecked assumption is that the angry and fearful expressions in both the face and the body are of equal magnitude. Were that not the case, any outcomes found might simply reflect the different arbitrary levels at which the emotions were conveyed. The adopted procedure of selecting for each model the one photo (out of two) conveying the most anger, and similarly for fear, falls far short of meeting the stringent measurement conditions required, namely that all expressions across both channels be measured on a common unit scale with a known true zero (Anderson, 1982, pp. 273-274).

The flaws of this “equal-and-opposite” method have long been recognized in the context of Information Integration Theory (IIT) (see Anderson, 1981, p. 271; 1989, pp. 165-167; 2008, pp. 349-351), but they have been pervasively ignored in the literature on emotion perception. The unsettled debate over the relative importance of face and context offers a parallel to the debate on the relative importance of expressive channels. Since the early studies of Goodenough and Tinker (1931), it has revolved around the methodological need to equate the “clarity” of face and context as competing information sources (see Fernández-Dols & Carroll, 1997, for an overview), with no explicit recognition of the fundamental measurement problem involved. In both cases, the consequence was an inability to operationally measure the importance of the medium independently from its content, or in other words, the weight of the source separately from the scale value (magnitude) of the conveyed information.

Besides diagnosing the problem, IIT also provided a way out of it. The first step to a solution resides in acknowledging the weight-value distinction as dependent on model analysis, rather than simply empirical. Unless weights and scale values are operationally identifiable parameters within a measurement model, the very meaningfulness of their distinction can be doubted (Anderson, 1981, p. 271). The second step rests on the averaging model of the IIT, which provides a unique basis for the independent estimation of weight and scale value parameters (Anderson, 1981; 1982; 1989, pp. 165-167).

Both points can be simply illustrated by contrasting the averaging and the additive IIT models. The averaging equation embodies an explicit two-parameter representation, with ω’s standing for weights and ψ’s for scale values. For two information dimensions A and B (e.g., face and body) it can be written as:

$$\rho_{ij} = \frac{\omega_{0}\,\psi_{0} + \omega_{Ai}\,\psi_{Ai} + \omega_{Bj}\,\psi_{Bj}}{\omega_{0} + \omega_{Ai} + \omega_{Bj}} \qquad (1)$$

Here ω0 and ψ0 stand for the weight and scale value of the initial state, subscripts i and j denote the levels of A and B, and ρij is the resultant of integrating level i of A with level j of B. The important feature to notice is the occurrence of the weights in the denominator, separately from the scale values, which allows estimating them independently. By contrast, if room is made for weight parameters in the adding equation, writing it as:

$$\rho_{ij} = \omega_{Ai}\,\psi_{Ai} + \omega_{Bj}\,\psi_{Bj} \qquad (2)$$

the weights remain confounded with the scale values and cannot be identified. For all practical purposes, the concept of weight is thus not an integral part of the adding model, and equation (2) is practically equivalent to the standard adding equation:

$$\rho_{ij} = \psi_{Ai} + \psi_{Bj} \qquad (3)$$

It follows from here that proposed measures of importance embodying an additive model are generally inappropriate (Anderson, 1982, pp. 262-272; Anderson, 2001, pp. 551-559). As most attempts at assessing the relative importance of face and voice (e.g., Mehrabian & Ferris, 1967), or of face, body, and voice (e.g., O’Sullivan, Ekman, Friesen, & Scherer, 1985), have relied on regression weights, assuming an additive model, and correlation coefficients - which do not allow a weight-value distinction - their outcomes are unwarranted and possibly not meaningful (Anderson, 1989). Other indices employed in multichannel research, such as percentage of explained variance (e.g., Hess et al., 1988) or the relative shift ratio (e.g., Frijda, 1969) share similar problems with the regression-correlation methods and change nothing in the situation (see Anderson, 1982, pp. 271-277).
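To make the contrast concrete, the following is a minimal numerical sketch of equations (1) and (3), assuming hypothetical scale values, equal weights, and a negligible initial state (all values are illustrative, not taken from the study). Both models predict parallel curves within the full face × body design, but only averaging predicts a steeper curve when one source is presented alone, which is the diagnostic later used with the subdesigns:

```python
import numpy as np

# Hypothetical scale values for 5 face levels and 3 body levels (arbitrary units)
psi_face = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
psi_body = np.array([3.0, 6.0, 9.0])
w_face, w_body = 1.0, 1.0            # equal weights; initial state assumed negligible (w0 ~ 0)

# Averaging model, eq. (1): weighted mean of the informers actually present
avg_full = (w_face * psi_face[:, None] + w_body * psi_body[None, :]) / (w_face + w_body)
avg_face_alone = psi_face            # body absent: its weight drops out of the denominator

# Adding model, eq. (3): sum of the scale values
add_full = psi_face[:, None] + psi_body[None, :]
add_face_alone = psi_face

# Effect of the face factor = difference between its extreme levels
print("averaging, full design:", (avg_full[-1] - avg_full[0])[0])         # 4.0
print("averaging, face alone: ", avg_face_alone[-1] - avg_face_alone[0])  # 8.0 -> steeper line
print("adding, full design:   ", (add_full[-1] - add_full[0])[0])         # 8.0
print("adding, face alone:    ", add_face_alone[-1] - add_face_alone[0])  # 8.0 -> same slope
```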

The present study was designed to investigate the integration of facial and bodily emotion expressions with IIT methodology and to assess their relative importance with functional measurement (FM). Differently from most of the previously cited studies, it relies on continuous response dimensions and not discrete choices between emotions. Rating responses are central in IIT to directly reflect the subtleties of the combination of factors in the patterns of data, something that nonmetric choice responses fall short of doing. Both the validation of these ratings as linear scales (equal-interval) and the estimation of the parameters of the model (weights and/or scale values assigned to the stimuli) depend on the observed integration patterns (Anderson, 1981; 1982). Hence, while studies such as App et al. (2012) seek to address face-body integration by first assessing their relative importance, a reversed direction is pursued here: Arriving at measuring importance by first establishing an integration model.

As the averaging model affords the basis for an operational weight-value distinction, the first required task is to check whether the averaging rule governs face and body integration. This cannot be guaranteed, and has to be empirically determined. A second concern involves the probable lack of outcome generality of relative importance (Anderson, 1982, p. 276; 1989, p. 167). Just as any functional parameter, importance cannot be expected to preexist in the stimulus independently from contextual goals. Asking in general for the relative importance of face and body is thus very likely meaningless. Accordingly, the more precise goal set for the study was investigating how judgment dimensions (emotional intensity, valence, and arousal), type of emotion (basic and self-conscious), and emotion category (anger, happiness, sadness, shame, and pride) affect the relative importance of face and body in integration tasks.

One long acknowledged problem of multichannel studies involves the production and presentation of adequate stimuli (Hess et al., 1988). Separate control of the stimulus in each channel is required; additionally, stimuli should be parametrically varied, avoiding arbitrariness in their chosen levels and range of variation. For both facial and bodily expressions, models (usually actors) are unable to provide that, let alone meeting the demands of factorial combinations of expressions across channels. On the other hand, the merging of information from distinct channels should be as natural as possible, that is, free from extraneous effects of the presentation media (Hess et al., 1988). As a compromise between both demands, the approach taken here was to use synthesized 3-D realistic combinations of facial and bodily expressions.

Method

Participants

A total of 291 college undergraduates (246 female, 45 male), aged 18-33 (M = 19.6; SD = 3.49), participated in the several tasks included in the study. All took part in exchange for course credit and were naïve regarding the goals of the study. Each participant was assigned to one of 11 tasks (see details under “Design and procedure”). Although an even distribution of participants across tasks was attempted, seasonal fluctuations in the availability of participants and logistical constraints on data collection led to variations in sample size. Five of the tasks had samples of 27 to 36 participants (M = 32, SD = 3.55), three had samples of 25, and the remaining three had samples of 22, 21, and 19. Reflecting the marked overall prevalence of female participants, the number of females was larger than that of males in every sample. Samples did not differ statistically in either gender composition (p = 0.966, two-tailed Fisher’s Exact Test) or mean age, F(10, 280) = 0.412, p = 0.940.

Stimuli

3-D realistic facial expressions and body postures were synthesized with Poser 7 (E-Frontier, 2006), taking as a basis the polygon mesh geometry of a male character. Faces and bodies belonging to the same character can be separately modeled in Poser, which allows varying them independently in a full-body context.

Facial expressions were modeled at the level of FACS-defined action units (AUs), which correspond to visually distinguishable changes in the face caused by the action of a specific muscle or group of facial muscles. For basic emotions (happiness, sadness, anger), the selection of AUs rested on the description of prototype expressions in the FACS Investigator Guide (Ekman & Friesen, 1978; Ekman et al., 2002), with a focus on AUs featuring in all prototypes of a given emotion (Waller, Cray Jr, & Burrows, 2008). For self-conscious emotions, FACS-based research on shame and pride provided similar guidelines (Keltner, 1995; Tracy & Robins, 2004; Tracy et al., 2009). Each AU was modeled as a local deformation to the character’s head geometry and was parametrically varied in strength according to the FACS intensity scoring (Ekman et al., 2002): A (trace), B (slight), C (marked-pronounced), D (severe-extreme), and E (maximum). Whole expressions for a given emotion were obtained as a combination of its associated AUs. Moreover, full expressions were varied across five levels by having their AUs jointly rendered at each of the FACS-specified intensities (A to E). Intensity of the AUs was thus positively correlated and not orthogonalized as in previous studies (A. M. Oliveira, Teixeira, M. Oliveira, Breda, & Da Fonseca, 2007; Silva et al., 2010). This reflects the fact that whole expressions, not their constituent AUs, were now the factor of interest to be combined with body postures as another manipulated factor (see Figure 1).

Body postures were modeled for the same set of emotions following the guidelines of the BAP (Dael et al., 2012a; 2012b), with further reference to video materials from the Geneva Multimodal Emotion Portrayals (GEMEP: Bänziger et al., 2012) and photos of full-body expressions from the UC Davis Set of Emotion Expressions (UCDSEE: Tracy et al., 2009). One fundamental distinction in the BAP is that between posture units (positioning of body parts in space) and action units (sudden excursions of articulators, with a clear onset and offset, and returning to a resting position). Besides descriptions at the anatomical level (which anatomical articulators are moving), the BAP provides a supplementary coding of the form of movement (direction and orientation of the implied body parts) and, specifically for action units, a functional level of description. Body expressions were synthesized on the basis of the coding for posture units and at the first anatomical level only.

Since there are no intensity codes proposed in the BAP, three levels of intensity were obtained by morphing between an invariable neutral posture and the final postural configuration for each emotion (maximum intensity) at three equal (33 %) steps. While a reasonably neutral baseline is available for facial expressions (the resting geometry of the character’s head, with no activated AUs), a neutral body posture is a harder notion to define (Huis in ‘t Veld, Van Boxtel, & De Gelder, 2014). The choice, consistent with the BAP coding, was to use the “standard anatomic position” (back straight, feet slightly separated, head facing forward, arms at the side slightly out from the body) as a neutral baseline. All instances of full-body neutral expressions illustrated in the UCDSEE (Tracy et al., 2009) are actually pretty close to this standard posture.
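As a rough illustration of this morphing step (a conceptual sketch, not the actual Poser procedure; the pose parameters and values below are hypothetical), intermediate intensities can be obtained by linear interpolation between the neutral baseline and the maximum-intensity postural configuration:

```python
import numpy as np

def morph_levels(neutral_pose, full_pose, steps=3):
    """Linearly interpolate posture parameters between a neutral baseline and the
    maximum-intensity configuration, in equal steps (here 33% each)."""
    neutral_pose, full_pose = np.asarray(neutral_pose), np.asarray(full_pose)
    return [neutral_pose + (k / steps) * (full_pose - neutral_pose)
            for k in range(1, steps + 1)]

# Hypothetical example: two posture parameters (e.g., trunk lean, head tilt) in degrees
neutral = [0.0, 0.0]
shame_full = [25.0, 40.0]                      # illustrative values only
level_1, level_2, level_3 = morph_levels(neutral, shame_full)
```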

For each of the considered emotions, all combinations of the 5 levels of facial expression with the 3 intensity levels of body posture were implemented, yielding a set of 15 full-body synthetic expressions. In addition, all combinations of the character’s neutral face with the 3 levels of body posture, and of the character’s neutral posture with the 5 levels of facial expression, were also rendered for each emotion.


Figure 1

Examples of synthesized faces and bodies used as stimuli. Illustrations refer to the prototypical expression of each emotion represented at its maximum intensity in both the face and the body posture (middle row: basic emotions; bottom row: self-conscious emotions). The figure at the top illustrates the neutral baseline composed of a neutral face (no activated facial AU) and a neutral posture (no activated postural configuration).



Source: own work

Design and procedure

All integration tasks obeyed a 5 (face) × 3 (body) × 2 (replications) full factorial design expanded with the two one-way subdesigns (isolated presentations of emotional information from either the face or the body). Rather than wiping out the face (or blurring its content) or the body, subdesigns were obtained by having bodily expressions combined with a neutral face and facial expressions of emotion combined with a neutral body posture. This option agrees with the definition of facial AUs as observable changes in the face (from a baseline), and of body postures as changes from a standard anatomic posture.
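For concreteness, the resulting trial structure can be sketched as follows (labels are hypothetical, and the subdesign trials are assumed here to be presented once each):

```python
from itertools import product

face_levels = [f"F{i}" for i in range(1, 6)]       # 5 facial-expression intensities
body_levels = [f"B{j}" for j in range(1, 4)]       # 3 body-posture intensities

main_design = [(f, b) for f, b in product(face_levels, body_levels)] * 2  # 5 x 3 x 2 replications
face_subdesign = [(f, "neutral_body") for f in face_levels]               # face alone
body_subdesign = [("neutral_face", b) for b in body_levels]               # body alone

trials = main_design + face_subdesign + body_subdesign                    # 30 + 5 + 3 = 38 trials
```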

In every task, the presentation of emotional expressions was preceded by a full-body neutral expression, which remained visible for 1000 ms and gave way to an emotional expression after a 500 ms interval. This induced an apparent movement between the baseline and the emotional expressions, which constituted the relevant emotional information. Having the neutral-baseline face as part of the emotional expression thus resulted in the isolated presentation of a body change (body subdesign); keeping the baseline posture as part of the emotional expression isolated, in turn, the occurring change in the face (face subdesign).

Stimuli were randomly presented at the centre of a computer screen (15.6” LCD, 1600 × 900 px resolution, 60 Hz refresh rate), with a viewing distance of about 60 cm. Depending on the task, participants judged either “conveyed intensity of the emotion”, “degree of conveyed positive-negative valence” or “conveyed arousal-activation”. Answers were given by locating a mouse cursor and clicking on a horizontal 400 px graphic rating scale, and were automatically converted to a 0-40 scale. Each participant performed singly on one task only and judged all conditions determined by the factorial design (repeated measures design).

There were 11 tasks. Five of them involved judging the expressed intensity of emotions (one emotion per task). Participants were specifically asked to assess “how intense/strong” the emotional state expressed by the character was. The rating scale was left- and right-end anchored with “no intensity at all” and “maximum intensity” respectively. Participants were instructed not to use the extreme points of the scale, reserved for an entirely neutral (non-emotional) expression and for intensities higher than the highest shown in the task. A block of training trials, always comprising the lowest and highest intensity expressions, was run before the experiment proper.

Three of the eleven tasks involved judging valence. Participants were specifically asked to assess “how positive/negative” the expressed emotional state was. So that there were instances of both positive and negative valence, each of these tasks included the factorial designs corresponding to two emotions of different valence: sadness-happiness, anger-happiness, and pride-shame. Trials pertaining to the two designs were interspersed in the task. The response scale was bipolar, anchored on “extremely negative” and “extremely positive”. Instructions urged participants not to use the end-points of the scale. As they appeared in two of the tasks, expressions embodying the factorial design for happiness were judged by two samples of participants and in two different contexts.

The 3 remaining tasks were similar to the preceding ones, except that they asked for judgments of conveyed arousal-activation. Participants were asked to assess “how emotionally activated/excited/energized” the character was. The response scale was unipolar, left-anchored on “very low activation” and right-anchored on “very high activation”. As happened with valence, happiness-related expressions were thus evaluated for arousal by two different groups of participants.

Data analysis

Data analysis proceeded in two stages. The first addressed the cognitive algebra underlying the integration of facial and bodily cues. Analysis was focused on disclosing the graphical and statistical signatures of integration models (Anderson, 1981; 1982). It rested on visual inspection of factorial plots aided by repeated measures ANOVAs. As a means to handle heterogeneity in the data, cluster analyses were also performed, largely following the indications provided in Hofmans and Mullet (2013). When meaningful clusters were found, separate graphical and statistical analyses were conducted for each.
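A minimal sketch of this clustering step, assuming a participants-by-cells matrix of mean ratings (data, variable names, and the two-cluster choice are hypothetical, and this is a generic illustration rather than the exact procedure of Hofmans and Mullet, 2013):

```python
import numpy as np
from scipy.stats import zscore
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.uniform(0, 40, size=(30, 15))        # 30 participants x 15 design cells (fake data)

z = zscore(ratings, axis=1)                        # z-standardize within each participant
ward_labels = fcluster(linkage(z, method="ward"), t=2, criterion="maxclust")
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
```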

FM analyses were subsequently performed to estimate the parameters of the established models (Anderson, 1981; 1982). When averaging was the case, the rAverage program (Vidotto, Massidda, & Noventa, 2010; Vidotto, Noventa, Massidda, & Vicentini, 2011) was used for independently estimating weights and scale values. Goodness-of-fit of the model was always evaluated by repeated measures ANOVAs over the residuals. Correctness of the model entails the absence of sources of systematic variance, and thus statistical null results in the ANOVAs (see “method of replications” in Anderson, 1982; Zalinski & Anderson, 1991). As relative importance of face and body was the main focus of interest, when weights varied within factors (differential weighting model: see Anderson, 1981; 1982) an overall index of relative importance was also calculated. To that end, the ratio of every weight of one factor (the face) to every weight of the other factor was computed, and the geometric mean of these ratios (GMR) was used to express an overall ratio: $GMR = \left(\prod_{j}\prod_{k} w_{Fj}/w_{Bk}\right)^{1/(JK)}$, with $w_{Fj}$ and $w_{Bk}$ denoting the variable weights of face (F) and body (B), and J and K the number of levels of each factor. For a more intuitive expression, GMR was additionally converted to a percentage index of relative importance by having $w_{B\%} = 100/(1 + GMR)$ and $w_{F\%} = 100 \cdot GMR/(1 + GMR)$, with $w_{B\%}$ and $w_{F\%}$ the percentage shares of importance of body and face.
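As a worked sketch of this conversion (the weights below are hypothetical normalized estimates for one participant, not values from the study):

```python
import numpy as np

w_face = np.array([0.10, 0.12, 0.14, 0.16, 0.18])   # hypothetical weights, 5 face levels
w_body = np.array([0.09, 0.10, 0.11])                # hypothetical weights, 3 body levels

ratios = w_face[:, None] / w_body[None, :]           # every face weight over every body weight
gmr = np.exp(np.log(ratios).mean())                  # geometric mean of the ratios (GMR)

w_face_pct = 100 * gmr / (1 + gmr)                   # percentage share of the face
w_body_pct = 100 / (1 + gmr)                         # percentage share of the body
print(f"GMR = {gmr:.2f}  face = {w_face_pct:.1f}%  body = {w_body_pct:.1f}%")
```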

Results

Judgments of intensity

Cognitive algebra. Figure 2 presents the 5 × 3 factorial plots (solid lines) of the mean ratings of intensity obtained for each of the five considered emotions, with face in the abscissa and body as the curve parameter (replications aggregated). Dashed lines stand for the face subdesign.


Figure 2

Factorial 5 (face) × 3 (body) plots obtained in the intensity judgment tasks. Mean ratings of intensity are on the ordinate, levels of face on the abscissa, and body is the curve parameter. The line corresponding to the face subdesign was added in all graphs (dashed line). Basic emotions appear in the top row of graphs and self-conscious emotions in the bottom row.



Source: own work

All graphs illustrate the contribution of both factors to the intensity judgments, as seen in the vertical spreading of lines (reflecting the operation of body) and their positive slope (reflecting the operation of face). Near parallelism of lines in the main design is suggested in the happiness and, to a lesser degree, anger and pride plots, whereas sadness and more noticeably shame exhibit a detectable upward convergence to the right. Assuming linearity of the response scale, these trends are consistent with an averaging rule with extremity weighting (weights increasing with increasing levels of the stimuli: see Anderson, 1981; 1982). In all plots, the dashed lines have a steeper slope than the solid lines. While near-parallelism is consistent with both adding and equal-weighting averaging models (constant weights within each factor), only the latter predicts increased slopes of the lines for the subdesigns (see Anderson, 1981; 1982). Hence, the behavior of the dashed lines favors an averaging model (against adding) for the happiness, anger, and pride plots.

Statistical analyses buttressed the visual inspection. The results of repeated measures ANOVAs concerning the main effects and interactions of the factors are presented in Table 1. Both face and body had significant main effects in all tasks (ps < 0.001). No significant Face × Body interactions were found for happiness and anger, concurring with graphical parallelism. By virtue of the parallelism theorem of IIT (Anderson, 1981, pp. 15-16; 1982, pp. 58-59), these results support linearity of the response scale. The convergence of lines for sadness and shame was captured by significant interaction terms, associated with significant linear × linear components: F(1,35) = 11.447, p = 0.002 for sadness; F(1,24) = 13.43, p = 0.001 for shame. A significant interaction was also found for pride (p = 0.045), concentrated on the significant linear × quadratic component, F(1,24) = 10.423, p = 0.004. This interaction reflects the z-shaped pattern arising from an augmented effect of face when combined with level 2 of body, and is consistent with a differential averaging model with a decreased weight of this particular level of body.
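As an illustration of how such a linear × linear (bilinear) component can be tested in a repeated-measures design (the cell means and sample size below are hypothetical; the actual analyses used standard ANOVA software):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
cell_means = rng.uniform(5, 35, size=(25, 5, 3))     # 25 participants x 5 face x 3 body (fake data)

c_face = np.array([-2, -1, 0, 1, 2])                  # linear contrast, 5 equally spaced levels
c_body = np.array([-1, 0, 1])                         # linear contrast, 3 equally spaced levels
bilinear = np.outer(c_face, c_body)                   # 5 x 3 linear x linear contrast weights

scores = (cell_means * bilinear).sum(axis=(1, 2))     # one contrast score per participant
t, p = ttest_1samp(scores, 0)                         # F(1, n-1) = t**2 for this component
print(f"t({len(scores) - 1}) = {t:.2f}, p = {p:.3f}")
```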

Table 1

Results of the repeated measures ANOVAs performed for the intensity judgment tasks. Data corresponding to the subdesigns were not included in these analyses. Fractional df are Greenhouse-Geisser corrected for the violation of sphericity.


Confirming the steeper slope of the dashed lines, the interaction term changed from nonsignificant to statistically significant for both happiness and anger when data from the face subdesign were included in the ANOVAs: F(12, 248) = 2.509, p = 0.004, for happiness; F(6.35, 209.70) = 3.298, p = 0.003, for anger. Examination of individual patterns and cluster analyses using both agglomerative hierarchical methods (single-linkage; complete-linkage; centroid and Ward’s methods; data z-standardized by participants) and K-means clustering did not suggest meaningful heterogeneity in the integration rules at the level of subgroups of participants.

Functional measurement of importance. As cognitive algebra suggested an averaging model in all tasks, weights and scale values were estimated per subject with the rAverage program (version 0.3-6). The equal weighting model (EAM) was used with happiness and anger, given parallelism in the plots and the lack of statistically significant interactions. The Information Criterion procedure (IC), which starts from the EAM estimated parameters and iteratively checks the usefulness of introducing new weight parameters (see Vidotto et al., 2010), was also used to allow for some degree of differential weighting (as estimation proceeded on a single-subject basis, it thus became possible to have participants with variable weights and others with constant weights in each factor). For the other emotions the differential weighting model (DAM), which poses no particular constraints on weights, was used in addition to the IC procedure. As indicated before, goodness-of-fit was evaluated with ANOVAs performed on the residuals. EAM-based estimates were kept when the EAM residuals did not include systematic sources of variance; IC-based estimates were kept if active sources were left by the EAM but not by the IC procedure; DAM-based estimates were kept if the DAM exhausted all sources of variance and the IC procedure did not. This rationale was followed in all tasks.

For happiness, anger, sadness, and pride, the IC procedure allowed capturing all systematic variance in the data. For shame, this was achieved with the DAM. Since weights are the parameters of interest in this study, scale values will not be considered hereafter. Because weights are estimated from the averaging model on a ratio scale with an arbitrary unit (see Anderson, 1982), they allow direct comparisons within and across factors in each task. To eliminate differences in unit, all weights were normalized per participant by dividing each weight by their total sum, so that they add up to 1. Under this 0-1 form, they can be compared without restrictions across participants and tasks. Figure 3 presents graphically the mean normalized weights estimated in each task. The w0 parameter of the “initial state” component (w0ψ0) of the averaging equation (see Anderson, 1981, pp. 63-64) was also estimated but is not reported, having always been found negligible (close to 0).

A tendency for extremity weighting (higher weighting of the more intense levels) is visible in most graphs (with the exception of pride), which is sometimes confined to the face, as in anger, or to the body, as in happiness. However, differences between weights within the factors (assessed with RM ANOVAs followed by Bonferroni-adjusted pairwise comparisons) were only statistically significant between levels 1 and 4 of face in the anger task (p = 0.014), and levels 1 and 3, and 2 and 3, of body in the shame task (ps = 0.005 and 0.002). This suggests that an equal weighting averaging model would allow a reasonable approximation to the measurement of importance of body and face in judging emotional intensity.


Figure 3

Estimated weights for the levels of face (1 to 5, from left to right) and body (1 to 3, left to right) in each emotion. Weights were estimated and normalized per participant. Values on the ordinate correspond to the mean of normalized weights, aggregated across participants.



Source: own work

In order to evaluate the overall relative importance of the two factors in each task, their percentage share of importance was calculated as indicated before (section data analysis). Figure 4 provides a graphical representation of those percentages.

Figure 4

Percentage share of importance of body and face to judgments of expressed emotional intensity.


Source: own work

Emotional information in the face was overall more important than emotional information in the body, with the exception of happiness, where both sources contributed evenly. In spite of a slight advantage of the face (54%), a close to even contribution of both sources was also the case for anger. Supporting these differences, the relative importance of the face did not depart from 50% in either happiness, t(29) = 0.094, p = 0.926, or anger, t(29) = 1.25, p = 0.221, but differed significantly from that reference value in the other emotions (largest p = 0.005, for shame).

Judgments of valence

Cognitive algebra. Figure 5 presents the 5 (face) × 3 (body) factorial plots for the valence judgment tasks. Mean ratings of valence for the face subdesign are represented by the dashed line. Higher values on the ordinate correspond to more positive judgments, lower values to more negative ones. Although plots are presented separately for each emotion, it should be recalled that data were collected from three tasks, each including two opposite-valence emotions (happiness-anger, happiness-sadness, shame-pride). Regardless of the task, combined face and body expressions in each trial were always valence-congruent (i.e., valence was only varied across, not within, trials). Since happiness appeared in two tasks, a mixed ANOVA with face and body as within-subject factors and task as a between-subjects factor was initially performed. No significant effects of task were found, either main, F(1,45) = 0.014, p = 0.905, or interactions (lowest associated p value = 0.07, for the second-order interaction Task × Body × Face). Data collected for happiness in the two tasks were thus treated in aggregate.

The two plots for happiness on the leftmost column correspond to two subgroups (CL 1 and CL 2) suggested by cluster analyses performed over participants (data z-standardized per participant). The K-means, Ward’s, and complete-linkage methods closely converged on the identification of the two clusters. The K-means solution was the one retained. As expected, positively-valenced emotions (happiness and pride) are associated with increasing effects of the levels of both factors, and negatively-valenced emotions (anger, sadness, and shame) with decreasing effects of both face and body. A pattern of near parallelism in the main design (solid lines) is the case for anger, pride, and happiness in CL 1. A slight convergence towards the right is suggested for shame and less markedly for sadness, consistent with averaging with extremity weighting. The pattern for happiness in CL 2 is dissimilar to any other in Figure 5, displaying a rightward fanning trend. With the exception of happiness in CL 2, all dashed lines appear steeper than the solid lines, whether they slope upward or downward, which favors averaging against adding in the signaled cases of parallelism.

Statistical analyses concurred with the visual inspection. Results of repeated measures ANOVAs are reported in Table 2. In all cases, face and body had significant main effects. No significant interactions were found for anger and pride, agreeing with parallelism in the plots. These results support the linearity of the response scale and thus the psychological validity of the observed patterns. Despite apparent parallelism, a significant interaction was found for happiness in CL 1. This interaction rested on two higher-order components (cubic × quadratic and order 4 × quadratic) and, thus, did not involve differences in the overall slopes of lines. Confirming the observed downward convergence of lines, a significant interaction was found for shame, concentrated on the linear × linear component, F(1, 25) = 7.67, p = 0.01. The Face × Body interaction did not reach significance in sadness, but a significant bilinear component was present, F(1,21) = 6.30, p = 0.02. Finally, happiness in CL 2 presented a significant interaction, which, differently from CL 1, included a significant bilinear component, F(1, 12) = 7.29, p = 0.019.


Figure 5

Factorial 5 (face) × 3 (body) plots obtained in the valence judgment tasks. Mean ratings of valence are on the ordinate, levels of face on the abscissa, and body is the curve parameter. The line corresponding to the face subdesign was added in all graphs (dashed line). The two graphs on the leftmost column represent the ratings of happiness expressions of two subgroups of participants (Cl 1 and Cl 2) distinguished by cluster analyses.



Source: own work

Confirming the steeper slope of the dashed lines, when data from the face subdesign were included in the ANOVAs, the interaction term changed from non-significant to significant for pride, F(12, 312) = 2.069, p = 0.019, and anger, F(7.22, 187.73) = 3.069, p = 0.004, and a significant linear × linear component emerged for happiness in the CL 1, F(1,33) = 18.94, p < 0.001. The interaction remained non-significant for sadness (p = 0.109), which could reflect insensitivity of the ANOVA to the departure from parallelism of the subdesign curve. This line had the highest slope (modulus) among all lines, and a one-tailed paired t-test between the slope computed for the pooled curves of the main design and the slope for the subdesign revealed a significant difference, t(22) = 2.467, p = 0.011. Happiness in CL 2 was the only case where the curve for the subdesign was less steep than the other curves. As it might involve other rules than averaging, CL 2 was not considered for the purposes of the functional measurement of importance.

Table 2

Results of the repeated measures ANOVAs performed for the valence judgment tasks. Data corresponding to the subdesigns were not considered in these analyses.


Functional measurement of importance. Based on the findings of the cognitive algebra, weights were estimated as before with the rAverage program. For anger and pride the EAM captured all systematic variance in the data. This was also achieved for sadness and shame with the IC procedure. The best model adjustment for happiness (CL 1 only) was obtained with the IC procedure, but it still left an active interaction in the residuals, F(4.87, 155.76) = 2.75, p = 0.022, ηp² = 0.079. This interaction rested on two higher-order components and was essentially dependent on level 2 of face (removing it from the ANOVA made the interaction disappear). The adjustment was considered good enough to support the weight parameters derived from the model.

Figure 6 presents the mean estimated weights after normalization of their sum. When the EAM was the adjusted model, weights are constant across levels of each factor, showing that for both anger and pride the face has higher importance than the body. More generally, higher importance of the face is apparent overall, except for happiness, where this pattern is inverted. When differential weighting is the case, some tendency for extremity weighting is observable in both factors. However, differences between weights within factors were never significant, suggesting that an equal-weighting model would afford a reasonable enough basis for weight estimation.

To compare the relative importance of the factors, the estimated weights were turned into a percentage share of importance of face and body. For pride and anger, the ratio between factors was simply the constant weight of the face divided by the weight of the body (then converted to percentages). These percentages are given in Figure 7. As in the intensity tasks, information in the face is in general more important, with the exception of happiness, where the opposite is true. For anger and pride, the allocation of importance between the two factors deviated from the reference value of 50% (respectively t(30) = 3.654, p = 0.001, and t(28) = 8.096, p < 0.001), while for shame (p = 0.058) and sadness (p = 0.068) the difference was at best marginally significant (p < 0.1).


Figure 6

Estimated weights for the levels of face (1 to 5) and body (1 to 3) in each emotion. Weights were estimated and normalized per participant. Values on the ordinate are the means of normalized weights.



Source: own work

Figure 7

Percentage share of importance of body and face to judgments of expressed valence.


Source: own work

On the whole, results were quite similar to those obtained with intensity judgments, with only a slight decrease in the relative importance of the face in all emotions except anger. One-way ANOVAs with percentage of importance as the dependent variable and type of judgment (valence versus intensity) as a factor did not produce statistically significant results for any emotion. The same happened when the aggregated relative importance of the face across all emotions was compared between judgments, F(1, 285) = 0.759, p = 0.384. No evidence for differences between basic and self-conscious emotions emerged. Only happiness (CL 1) differed from other emotions, both basic and self-conscious, namely sadness, t(18) = 2.471, p = 0.024 (paired), shame, t(54) = 2.280, p = 0.027, and pride, t(57) = 3.848, p = 0.001.

Judgments of arousal

Cognitive algebra. The 5 (face) × 3 (body) factorial plots for the arousal judgment tasks are presented in Figure 8, together with the curves for the face subdesigns (dashed line). Tasks were the same used for valence judgments, so that happiness expressions were evaluated twice, in two distinct tasks. As no effects of task, either main (F(1, 39 ) = 2.716, p = 0.107) or interactions (lowest p = 0.71, found for the Body × Task interaction), were disclosed in a mixed ANOVA with task as a between-subjects factor, data for happiness were combined across tasks.

Two plots for sadness and two for shame are presented, corresponding to subgroups suggested by cluster analyses over the participants (data z-normalized per participant). Ward’s, single-linkage, and complete-linkage methods all provided the same clustering solution for sadness, which was retained. Close solutions were provided for shame by Ward’s, single-linkage, complete-linkage, and K-means methods. Given full agreement between the Ward’s and complete-linkage solutions, that was the one retained. For both these emotions, the minor clusters differed from the major ones in the way the two factors operate: increasingly for the major clusters, decreasingly for the minor ones. As more intense sadness and shame are expectedly associated with less activation/arousal, the fact that only a minority of participants displayed a decreasing effect of expression intensity on arousal may signal a difficulty in distinguishing between the two dimensions (or, alternatively, some specificity of emotional arousal relative to unspecific arousal).

The two noticeable graphical trends in Figure 8 are: (1) with the exception of the minor cluster for sadness (Sadness_CL 2), near-parallelism in the main designs; (2) with the exception of pride and the minor clusters for sadness and shame, near-parallelism between the dashed line and the solid lines. Overall, this is consistent with an adding rule for the integration of facial and bodily information. Results of the associated repeated measures ANOVAs are presented in Table 3. Except for the face in Sadness_Cl 1 (p = 0.078), body and face had significant main effects in all other cases. Only one significant interaction was found, for Sadness_Cl 2 (p = 0.048), concurring with general near-parallelism in the plots.


Figure 8

Factorial 5 (face) × 3 (body) plots obtained in the arousal judgment tasks. Mean ratings of arousal are on the ordinate, levels of face on the abscissa, and body is the curve parameter. The line corresponding to the face subdesign was added in all graphs (dashed line).



Source: own work

Table 3

Results of the repeated measures ANOVAs performed for the arousal judgment tasks. Data corresponding to the subdesigns were not considered in these analyses.


When data for the face subdesigns were included in the analyses, only pride presented a significant interaction, F(5.93, 142.26) = 5.542, p < 0.001, ηp² = 0.188. Paired t-tests were additionally performed for all other emotions between the computed slope of the (pooled) curves of the main design and the slope of the subdesign, which also did not reveal significant differences. Taken together, the graphical and the statistical analyses were thus supportive of averaging for pride and adding for the other emotions.

Functional measurement of importance. Unlike averaging, adding models do not allow proper separation of weights and scale values. Under certain conditions, however, some appreciation of relative importance can be obtained with the Relative Range Index (RRI) (Anderson, 1981, pp. 266-270). This index corresponds to the ratio of the range of one factor to the range of the other(s). The range of a factor is the effect it has on the response scale, computed as the difference between the marginal means of its highest and lowest levels. There are three conditions for the RRI to afford a measure of relative importance: (1) the response scale is linear; (2) the model is of an additive type; (3) variation in the stimuli is not arbitrary and corresponds to the maximum or to some natural (representative) range of variation. The first two conditions were empirically validated by the preceding analyses, and the third was implemented at the stage of stimulus construction (see Method). The RRI was thus computed on a single-subject basis for all emotions except pride. As averaging applies in the latter case, proper weights were estimated for pride with the rAverage program.
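A minimal sketch of the RRI computation for one participant, assuming a 5 × 3 matrix of mean arousal ratings with levels ordered by increasing intensity (all values are hypothetical):

```python
import numpy as np

# Hypothetical 5 (face) x 3 (body) matrix of mean arousal ratings for one participant
ratings = np.array([[10, 14, 18],
                    [13, 17, 21],
                    [16, 20, 24],
                    [19, 23, 27],
                    [22, 26, 30]], dtype=float)

face_marginals = ratings.mean(axis=1)                 # marginal means of the 5 face levels
body_marginals = ratings.mean(axis=0)                 # marginal means of the 3 body levels

range_face = face_marginals[-1] - face_marginals[0]   # effect range of the face (12.0)
range_body = body_marginals[-1] - body_marginals[0]   # effect range of the body (8.0)

rri = range_face / range_body                          # Relative Range Index (1.5)
face_pct = 100 * rri / (1 + rri)                       # 60% share of importance for the face
body_pct = 100 / (1 + rri)                             # 40% for the body
```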

Figure 9 graphically presents the relative importance of face and body for judgments of expressed arousal. Differently from valence and intensity judgments, a divide between basic and self-conscious emotions is now apparent, with more relative importance of the body for basic emotions and of the face for social emotions.


Figure 9

Percentage share of importance of body and face to judgments of arousal. For sadness and shame only the major (additive) clusters are presented. RRI means that percentages were calculated on the basis of the Relative Range Index (range of the face divided by range of the body).



Source: own work

Although not presented in the figure, the RRI and its percentage translation were also calculated for the seven participants in Shame_CL 2, providing values of relative importance similar to those of Shame_CL 1 (39% for body and 61% for face). Since the integration operation in CL 2 is subtractive, this suggests that the greater relative importance of the face in the self-conscious emotions is not specific to participants adopting an additive view (and thus potentially mistaking arousal for intensity).

The share of importance of face deviated significantly from the reference value of 50% in all basic emotions: t(38) = 7.554, p < 0.001 for happiness; t(18) = 2.338, p = 0.031 for anger; t(14) = 3.109, p = 0.008 for Sadness_CL 1. For social emotions, this was also the case with pride, t(24) = 5.628, p < 0.001. One-way ANOVAs were performed for each emotion with relative importance as the dependent variable and type of judgment as a three-level factor (intensity, valence, and arousal). No significant results were found for the social emotions (lowest p = 0.151), but all basic emotions were associated with significant Fs (minimum F and largest p found for anger: F(2, 77) = 5.327, p = 0.007). Follow-up pairwise comparisons carried out for the basic emotions disclosed in all cases significant differences between arousal, on the one hand, and valence and intensity on the other (largest Bonferroni-corrected ps = 0.021 for the intensity-arousal comparisons, and 0.009 for the valence-arousal comparisons). These results document a significant increase in the relative importance of the body for judgments of arousal targeting basic emotions, opening up the possibility that this may afford a distinguishing criterion in regard to self-conscious emotions.

Discussion

The present study set as a goal to examine the dependencies of the relative contribution of facial and bodily information to emotion perception on distinct emotions, emotion types, and emotion-related judgments. It relied on IIT and Functional Measurement, which allowed circumventing the conflation of importance and scale value that afflicts attempts at measuring psychological importance.

Averaging was the most commonly observed rule for the integration of facial and bodily information. It was found for every emotion when expressions were judged for emotional intensity or conveyed valence. When expressed arousal was judged, however, adding became the predominant rule, with pride (still obeying an averaging rule) as the sole exception. That adding is structurally simpler than averaging might suggest that integrating arousal across the face and body is more straightforward for a perceiver than integrating valence or emotional intensity. The specificity of pride in this regard is unclear. Pride has been suggested to be a heterogeneous construct, comprising two distinct aspects: authentic and hubristic pride (Tracy & Robins, 2008; Carver & Johnson, 2010). To the extent that these aspects bear an impact on the evaluation of arousal (with hubristic pride reportedly more related to impulsivity and aggression), evaluating arousal from pride expressions might be conjectured to involve additional complexities.

Based on the established integration rules, functional measures of importance were derived. In the case of averaging, these were proper weights estimated independently from scale values. When adding was the rule, the relative range index (RRI) was used, as the required conditions were satisfied. Arousal judgments provided again a distinctive profile of results. While the face was more important than the body for judgments of intensity and valence in all emotions except happiness, the body was on the contrary more important for judgments of arousal in all basic emotions. This result appears convergent with the notion of a chief role of the body in conveying arousal (Kleinsmith & Bianchi-Berthouze, 2007; 2012) and of the face in conveying valence (Hess, Blairy, & Kleck, 1997; Willis, Burke, & Palermo, 2011). Yet, it simultaneously qualifies and limits that claim by illustrating a steady preponderance of the face in the self-conscious emotions. Whether this difference between basic and self-conscious emotions is general or contingent on the particular set of emotions considered here cannot be assessed without further research (including, for example, fear, surprise, and disgust as additional basic emotions, and embarrassment or guilt as additional self-conscious emotions).

A more specific contention for a key involvement of the body in valence perception at high intensities of facial expression was put forward by Aviezer, Trope, and Todorov (2012). In the present measurement framework, this could be understood in two ways: Either as a form of differential weighting, with weights for the face diminishing at high expression intensities (resulting in increased relative weight of the body), or as a configural effect whereby absolute weights of the levels of body change (get larger) when combined with high levels of facial expression. The first interpretation disagrees with the overall trend of extremity weighting observed for valence (see Figure 7). The second is not compatible with an algebraic model, which requires invariable parameters, and thus disagrees with the finding of an averaging rule. Aviezer et al.’s proposal remains valid, we surmise, for the domain where they tested it – extreme/paradoxical facial expressions devoid of a context in a setting where recognition accuracy is at issue (we further surmise that if equivalent paradoxical body postures were produced, it would then be up to the face to differentiate between the valence of expressions).

Evidence for a dependency of the relative importance of the face and body on specific emotions was essentially limited to happiness, associated overall with a larger contribution of the body irrespective of whether valence or arousal was being judged. Several studies in the literature have contrasted specific emotions as regards their ease of recognition from the body (e.g., Atkinson et al., 2004; Van den Stock et al., 2007) and the number of reliably associated body movements (e.g., Meijer, 1989). Drawing on this, a reasonable general hypothesis would be that the importance of the body grows larger for emotions more easily recognizable from the body or more strongly associated with body postures. While happiness is typically found among the latter, other emotions such as shame (Meijer, 1989), sadness, or anger (Atkinson et al., 2004; Van den Stock et al., 2007) share a similar profile or even outperform happiness. This is at variance with the distinctive character of happiness in the present study, possibly signaling a disconnection between the contribution of the body to emotion recognition and to non-classificatory judgments of emotion-related dimensions (e.g., intensity, arousal, valence, action tendencies, appraisal dimensions, etc.).

One particular issue in this study concerns the distinction between emotional intensity and arousal/activation. While these dimensions might largely overlap in high arousal emotions, they could be expected to vary inversely for low arousal emotions (see Larsen & Diener, 1992, pp. 46-47). In partial agreement with this, two clusters of participants were found for both shame and sadness (low arousal emotions), differing in the direction of the effect of stimulus intensity on perceived arousal. For the major clusters in each emotion, increases in intensity in either the face or the body led to increased ratings of arousal (additive functioning); for the minor clusters, the opposite happened (subtractive functioning). One possible interpretation would be that only a minority of participants makes sense of the distinction and that a majority of participants mistakes arousal for intensity. However, the shift from a clear predominance of the face when judging intensity of sadness (see Figure 4) to a predominance of the body when judging arousal (see Sadness_CL 1 in Figure 9) does not harmonize with a mere overlap, suggesting instead that some form of distinction was kept among these dimensions in the major clusters.

This study has, of course, limitations. Besides the particular choice of emotions and judgment dimensions (e.g., action readiness/tendencies were not evaluated; see Frijda, 1987), both facial and bodily information were considered only at the level of whole expressions. However, constituent facial action units (AUs) and anatomical articulators (e.g., neck, trunk, upper and lower arms) could themselves be taken as factors: although this would impose more complex designs, it should yield important analytical insights into the relative importance of the body and the face in emotion perception. Similar considerations apply to the exclusive use of static expressions (even though apparent movement was induced between baseline and emotional expressions). This is a potentially significant constraint, insofar as the temporal dynamics of facial expressions is a relevant source of emotional information (Wehrle, Kaiser, Schmidt, & Scherer, 2000) and the strength and velocity of body movements contribute to their expressive value (Meijer, 1989). As synthetic faces and bodies allow precise control of the timing of expressions (e.g., onset, apex, offset), turning temporal dynamics into an additional factor may be worth considering.

The circumstance that facial and bodily expressions were varied across five and three levels of intensity, respectively, may have exerted an extraneous influence on the results. The finding of a larger relative importance of the body for arousal judgments and of the face for valence and intensity judgments seems to exclude a determining effect of the number of variation levels, but a partial effect cannot be ruled out. This potential confound should thus be addressed in future studies employing the same number of levels in both factors (ideally also matched for discriminability).

An additional obvious limitation is the use of a single head and body geometry, featuring a young male character, as the basis for modeling emotional expressions. This limits the generality of the results as regards variables such as gender, age, ethnicity, and even the particular morphology of the face and body. Some evidence has been obtained that, for the integration of facial AUs, similar results are found with distinct head geometries (doctoral dissertation of the first author, in preparation), but no equivalent studies have been conducted for the integration of face and body expressions. Systematic replication with different synthetic characters should therefore be carried out to assess the generality of the findings. Systematic consideration should also be given to the perceivers’ characteristics (e.g., gender, age, ethnicity) as possible influences on judgments of emotions expressed by distinct characters.

One final qualification should be offered. Resorting to the taxonomic nomenclature of basic (Ekman, 1999) and self-conscious social emotions (Tracy et al., 2009) entails no commitment to a categorical view of emotions and is inessential to the illustrated approach. It merely reflects the need for some convenient emotion labeling (desirably relatable to ordinary discourse) to which FACS-defined action units and BAP-defined body postures can keep an operational link. For all that matters here, the wording “modal emotions” (Scherer, 1994), bound to a rather distinct multi-componential view, could be used in place of “basic emotions”. And, as illustrated by the use of valence and arousal as judgment dimensions, dimensional theories can also be straightforwardly accommodated. Rather than a drawback, this ability to bridge operationally between contending theoretical views within a unified quantitative framework should be counted among the advantages of the functional measurement approach.

References

Anderson, N. H. (1981). Foundations of information integration theory. New York: Academic Press.

Anderson, N. H. (1982). Methods of information integration theory. New York: Academic Press.

Anderson, N. H. (1989). Information integration approach to emotions and their measurement. In R. Plutchik & H. Kellerman (Eds.), Emotion: Theory, research, and experience. Volume 4: The measurement of emotions (pp. 133-186). San Diego, CA: Academic Press.

Anderson, N. H. (2008). Unified social cognition. New York: Psychology Press.

App, B., Reed, C. L., & McIntosh, D. N. (2012). Perceiving emotional state and motion intention in whole body displays: Relative contributions of face and body configurations. Cognition and Emotion, 26, 690-698.

Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33(6), 717-746.

Aviezer, H., Trope, Y., & Todorov, A. (2012). Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science, 338(6111), 1225-1229. doi:10.1126/science.1224313

Bänziger, T., Mortillaro, M., & Scherer, K. R. (2012). Introducing the Geneva Multimodal Expression corpus for experimental research on emotion perception. Emotion, 12(5), 1161-1179.

Carver, C. S., & Johnson, S. L. (2010). Authentic and hubristic pride: Differential relations to aspects of goal regulation, affect, and self-control. Journal of Research in Personality, 44(6), 698-703. doi:10.1016/j.jrp.2010.09.004

Dael, N., Mortillaro, M., & Scherer, K. R. (2012a). Emotion expression in body action and posture. Emotion, 12(5), 1085-1101.

Dael, N., Mortillaro, M., & Scherer, K. R. (2012b). The Body Action and Posture coding system (BAP): Development and reliability. Journal of Nonverbal Behavior, 36, 97-121.

de Gelder, B. (2009). Why bodies? Twelve reasons for including bodily expressions in affective neuroscience. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 3475-3484. http://dx.doi.org/10.1098/rstb.2009.0190

de Gelder, B., Snyder, J., Greve, D., Gerard, G., & Hadjikhani, N. (2004). Fear fosters flight: A mechanism for fear contagion when perceiving emotion expressed by a whole body. Proceedings of the National Academy of Sciences, 101(47), 16701-16706.

Ekman, P. (1965). Differential communication of affect by head and body cues. Journal of Personality and Social Psychology, 2(5), 726-735.

Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 45-60). Sussex, UK: Wiley.

Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P., Friesen, W., & Hager, J. (Eds.). (2002). Facial Action Coding System [E-book]. Salt Lake City, UT: Research Nexus.

Fernández-Dols, J. M., & Carroll, J. M. (1997). Is the meaning perceived in facial expression independent of its context? In J. A. Russell & J. M. Fernández-Dols (Eds.), The psychology of facial expression (pp. 275-294). Cambridge: Cambridge University Press.

Frijda, N. H. (1969). Recognition of emotion. In L. Berkowitz (Ed.), Advances in experimental social psychology, Vol. 4 (pp. 167-224). New York: Academic Press.

Frijda, N. H. (1987). Emotion, cognitive structure, and action tendency. Cognition and Emotion, 1(2), 115-143.

Gallois, C., & Callan, V. (1986). Decoding emotional messages: Influence of ethnicity, sex, message type, and channel. Journal of Personality and Social Psychology, 51, 755-762.

Goodenough, F. L., & Tinker, M. A. (1931). The relative potency of facial expression and verbal description of stimulus in the judgment of emotion. Journal of Comparative Psychology, 12, 365-370.

Harrigan, J. A. (2005). Proxemics, kinesics, and gaze. In J. A. Harrigan, R. Rosenthal, & K. Scherer (Eds.), The new handbook of methods in nonverbal behavior research (pp. 137–198). New York: Oxford University Press.

Hess, U., Blairy, S., & Kleck, R. E. (1997). The intensity of emotional facial expressions and decoding accuracy. Journal of Nonverbal Behavior, 21(4), 241-257.

Hess, U., Kappas, A., & Scherer, K. R. (1988). Multichannel communication of emotion: Synthetic signal production. In K. R. Scherer (Ed.), Facets of emotion: Recent research (pp. 161-182). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hietanen, J. K., & Leppänen, J. M. (2008). Judgment of other people's facial expressions of emotions is influenced by their concurrent affective hand movements. Scandinavian Journal of Psychology, 49, 221-230.

Hofmans, J., & Mullet, E. (2013). Towards unveiling individual differences in different stages of information processing: A clustering-based approach. Quality & Quantity, 47, 555-564.

Huis in ’t Veld, E. M., Van Boxtel, G. J., & de Gelder, B. (2014). The Body Action Coding System II: Muscle activations during the perception and expression of emotion. Frontiers in Behavioral Neuroscience, 8, 1-13. http://dx.doi.org/10.3389/fnbeh.2014.00330

Keltner, D. (1995). The signs of appeasement: Evidence for the distinct displays of embarrassment, amusement and shame. Journal of Personality and Social Psychology, 68, 441-454.

Kleinsmith, A., & Bianchi-Berthouze, N. (2007). Recognizing affective dimensions from body posture. In A. R. Paiva, R. Prada, & R. Picard (Eds.), Affective computing and intelligent interaction (Vol. 4738, pp. 48-58). Berlin: Springer. http://dx.doi.org/10.1007/978-3-540-74889-2_5

Kleinsmith, A., & Bianchi-Berthouze, N. (2012). Affective body expression perception and recognition: A survey. IEEE Transactions on Affective Computing, 4(1), 15-33. http://dx.doi.org/10.1109/T-AFFC.2012.16

Larsen, R., & Diener, E. (1992). Promises and problems with the circumplex model of emotion. In M. S. Clark (Ed.), Emotion (pp. 25-59). Newbury Park, CA: Sage.

Meeren, H. K. M., van Heijnsbergen, C., & de Gelder, B. (2005). Rapid perceptual integration of facial expression and emotional body language. Proceedings of the National Academy of Sciences of the USA, 102, 16518-16523.

Mehrabian, A., & Ferris, S. R. (1967). Inference of attitudes from nonverbal communication in two channels. Journal of Consulting Psychology, 31(3), 248-252.

Meijer, M. (1989). The contribution of general features of body movement to the attribution of emotions. Journal of Nonverbal Behavior, 13 (4), 247-268.

Oliveira, A. M., Teixeira, N., Oliveira, M., Breda, S. J., & Da Fonseca, I. (2007). Algebraic integration models of facial features of expression: A case made for pain. Rivista di Storia e Metodologia della Psicologia, 12(1-2), 155-166.

O’Sullivan, M., Ekman, P., Friesen, W., & Scherer, K. (1985). What you say and how you say it: The contribution of speech content and voice quality to judgment of others. Journal of Personality and Social Psychology, 48, 54-62.

Poser 7 [Computer software]. (2006). E-frontier America, Inc.

Scherer, K. R. (1986). Voice, stress, and emotion. In M. H. Appley & R. Trumbull (Eds.), Dynamics of stress (pp. 159-181). New York: Plenum.

Scherer, K. R. (1994). Toward a concept of ‘modal emotions’. In P. Ekman & R. J. Davidson (Eds.), The nature of emotion: Fundamental questions (pp. 25-31). New York: Oxford University Press.

Scherer, K. R., & Ellgring, H. (2007). Multimodal expression of emotion: Affect programs or componential appraisal patterns? Emotion, 7(1), 158-171.

Silva, A. D., Oliveira, A. M., Viegas, R., Oliveira, M., Lourenço, V., & Gonçalves, A. (2010). The cognitive algebra of prototypical expressions of emotion in the face: One or many integration rules? In A. Bastianelli & G. Vidotto (Eds.), Fechner Day 2010: Proceedings of the 26th Annual Meeting of the International Society for Psychophysics (pp. 339-344). Padova, Italy: ISP.

Tracy, J. L., & Robins, R. W. (2004). Show your pride: Evidence for a discrete emotion expression. Psychological Science, 15, 194-197.

Tracy, J. L., & Robins, R. W. (2008). The nonverbal expression of pride: Evidence for cross-cultural recognition. Journal of Personality and Social Psychology, 94(3), 516-530.

Tracy, J. L., Robins, R. W., & Schriber, R. A. (2009). Development of a FACS-verified set of basic and self-conscious emotion expressions. Emotion, 9, 554-559.

Van den Stock, J., Righart, R., & de Gelder, B. (2007). Body expressions influence recognition of emotions in the face and voice. Emotion, 7(3), 487-494.

Vidotto, G., Massidda, D., & Noventa, S. (2010). Averaging models: Parameters estimation with the R-Average procedure. Psicologica, 31(3), 461-475.

Vidotto, G., Noventa, S., Massidda, D., & Vicentini, M. (2011). rAverage: Parameter estimation for the averaging model of Information Integration Theory [Computer program]. Retrieved from http://www.rproject.org

Waller, B. M., Cray, J. J., & Burrows, A. M. (2008). Selection for universal facial emotion. Emotion, 8(3), 435-439.

Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105-119.

Willis, M. L., Burke, D., & Palermo, R. (2011). Judging approachability on the face of it: The influence of face and body expressions on the perception of approachability. Emotion, 11(3), 514-523.

Winters, A. M. (2005). Perceptions of body posture and emotion: A question of methodology. The New School Psychology Bulletin, 3(2), 35-45.

Zalinski, J., & Anderson, N. H. (1991). Parameter estimation for averaging theory. In N. H. Anderson (Ed.), Contributions to Information Integration Theory (Vol. I: Cognition, pp. 353-394). Hillsdale, NJ: Lawrence Erlbaum Associates.

Notes

* Research article.

Author notes

** Dr., Institute of Cognitive Psychology, University of Coimbra, Portugal. E-mail: acduarte@fpce.uc.pt

*** PhD, Institute of Cognitive Psychology, University of Coimbra, Portugal. E-mail: acduarte@fpce.uc.pt