Affective Processing of Loved Familiar Faces: Contributions from Electromyography



Introduction
Human faces are among the most significant stimuli in social and emotional communication. By looking at faces we can access a broad range of information about the other person, such as personal identity, emotional state (facial expression), sex, age, race, attractiveness, attitudes (whether they are friendly or hostile), intentions, and thoughts (Dekowska et al., 2008; Adolphs, 2009). Because of its relevance in everyday interactions, the face has been the target of much study throughout the past decades. Most studies on the psychology of face perception and recognition have focused on emotional facial expressions, following the pioneering work of Tomkins (1962), Izard (1971, 1977, 1994), and Ekman (1984, 1992) (see Russell, 1994; Dimberg, 1997; Whalen et al., 1998; Öhman et al., 2001; Adolphs, 2002; Eimer & Holmes, 2007; Vuilleumier & Pourtois, 2007; Li et al., 2010), based on the evolutionary perspective outlined by Darwin in his book The Expression of the Emotions in Man and Animals (1872). More recently, a large body of research has been devoted to delineating the brain mechanisms involved in face perception and identity recognition (see Adolphs, 2002; Fairhall & Ishai, 2006; Dekowska et al., 2008; Li et al., 2010). In this context, studies using central electrophysiological techniques, such as electroencephalography (EEG), event-related potentials (ERPs), or magnetoencephalography (MEG), and metabolic techniques, such as positron emission tomography (PET) or functional magnetic resonance imaging (fMRI), have proven particularly useful in describing the chain of events taking place in the processing of facial information and the neural circuitry responsible for face recognition, respectively.
Thus, several event-related components, such as P1, N170, N250r, P300, and N400, have been used to determine the temporal sequence that goes from pictorial/structural encoding to retrieval of biographical/emotional information (Bentin et al., 1996; Bruce & Young, 1986; Eimer, 2000a; Herrmann et al., 2005; Schweinberger, 2011). Imaging techniques, for their part, have helped identify the specific brain areas involved in face perception and recognition, including recognition of emotional expression and personal identity (Adolphs, 2002; Gobbini & Haxby, 2007; Zeki, 2007; Haxby & Gobbini, 2011). These brain areas constitute a distributed network with a core system responsible for the analysis of visual appearance (posterior superior temporal sulcus, inferior occipital and fusiform gyri), and an extended system that underlies the retrieval of person knowledge (medial prefrontal cortex, temporo-parietal junction, anterior temporal cortex, precuneus, and posterior cingulate), action understanding (inferior temporal and frontal operculum, and intraparietal sulcus), and emotion (amygdala, insula, and striatum/reward system).

www.intechopen.com
In the context of this broad literature, a subset of studies has looked into the emotional processing associated with recognition of familiar loved faces (i.e., relatives, friends, own children, or romantic partner) by using electrophysiological or fMRI indices of brain activity (Başar et al., 2008; Bartels & Zeki, 2000; Bobes et al., 2007; Fisher et al., 2005; Grasso et al., 2009; Grasso & Simons, 2011; Herzmann et al., 2004; Langeslag et al., 2007; Xu et al., 2011). A major shortcoming in this literature, besides the need to integrate it with research on the neural mechanisms of social and cultural cognition (Tomasello et al., 2005; Amodio & Frith, 2006; Herrmann et al., 2007), is the lack of evidence regarding the elicitation of a genuine positive emotional response to loved familiar faces. This is in part due to the presence of two significant confounding factors in most experimental studies: emotional arousal and familiarity. Emotional arousal refers to the intensity of an emotion, regardless of its affective valence (positive or negative). Arousal-related ERP modulation is a robust phenomenon that primarily affects long-latency components (Olofsson et al., 2008). Studies focusing on the modulation of brain responses (ERPs) associated with the processing of loved familiar faces have consistently found enhanced P300 and Late Positive Potential (LPP) amplitudes when comparing loved familiar faces to control familiar (newly learned) and neutral faces. However, similar amplitude increases have been reported in response to highly unpleasant stimuli such as mutilated faces or attacking animals, thus casting doubt on whether the larger late positivity evoked by loved faces reflects activation of positive emotional mechanisms or plain emotional arousal (Palomba et al., 1997; Cuthbert et al., 2000; Schupp et al., 2004; Sabatinelli et al., 2007; Bradley, 2009).
Familiarity, for its part, has been defined as a form of explicit or declarative memory (Gobbini & Haxby, 2007; Voss & Paller, 2006, 2007) that is affected by factors such as the length of time spent with an individual, the number of previous encounters, the duration of the relationship, or the information accumulated about that individual. Attempts to control for familiarity effects include viewing faces of acquaintances, famous people, or newly learned faces. However, the familiarity of loved faces will always exceed that of control faces because of the amount of time spent with them (Grasso et al., 2009). Studies on ERP modulation associated with familiarity have consistently found increases in late positivities at posterior locations (Eimer, 2000a, b; Voss & Paller, 2006; Yovel & Paller, 2004). Interestingly, these same effects have been reported in studies of loved familiar faces (Bobes et al., 2007; Grasso et al., 2009; Herzmann et al., 2004; Langeslag et al., 2007).
Here we propose that studies on the processing of loved familiar faces would benefit from the tools developed in the context of emotional processing research, with special emphasis on the potential contribution of electromyographic recordings. The picture-viewing paradigm developed by Lang and colleagues (Lang, 1995; Lang & Davis, 2006; Lang, Davis, & Öhman, 2000) allows us to disentangle the effects attributable to picture valence (positive or negative) from those elicited by emotional arousal (the intensity of the emotion, regardless of its affective valence). In this paradigm, a broad set of central and peripheral measures are simultaneously recorded while participants view pleasant, neutral, and unpleasant pictures taken from the International Affective Picture System (IAPS; Lang, Bradley, & Cuthbert, 2008), an instrument that provides normative ratings of valence, arousal, and dominance for each picture. Research conducted within this framework has consistently shown that highly pleasant pictures are associated with (a) a pattern of accelerative changes in heart rate, (b) reduced eye-blink startle responses, (c) increases in zygomatic muscle activity, and (d) decreases in corrugator supercilii muscle activity. The inverse pattern is observed with highly arousing unpleasant pictures. Moreover, highly arousing pleasant and unpleasant pictures, when compared to neutral ones, provoke (a) increases in skin conductance and (b) larger amplitudes for both the P3 and LPP components obtained at central-parietal locations (Lang & Bradley, 2010). Therefore, two different sets of measures can be identified: one that consistently responds to affective valence, and a second one that is most sensitive to variations in emotional arousal. Both the eye-blink startle and facial EMG have been extensively investigated in the context of emotional processing, and their ability to differentiate affective states has been repeatedly established.
The startle reflex consists of a chain of extensor-flexor movements that spread throughout the body (Landis & Hunt, 1939) in response to intense and abrupt stimulation (e.g., a loud burst of white noise). Rapid closing of the eyes constitutes one of the most stable components of this reflex in humans, and it can be easily measured by placing electrodes over the orbicularis oculi muscle under the eye. In studies of the emotional modulation of the startle reflex using the picture-viewing paradigm, the startle eye-blink is consistently potentiated when viewing highly arousing unpleasant pictures, and inhibited when viewing highly arousing pleasant pictures, compared to neutral ones. This valence-mediated effect has been interpreted in terms of the motivational priming hypothesis (Lang et al., 1997). According to this model, perceptual stimuli that engage the aversive motivational system potentiate the reflex, whereas stimuli that engage the appetitive motivational system provoke startle inhibition. The same linear relationship between startle amplitude and affective valence is observed in a large variety of perceptual settings: with pictures, films (Jansen & Frijda, 1994), sounds, and odors (Miltner, 1994). Facial EMG also provides reliable indices of affective engagement. Special attention has been given to the activation of the corrugator supercilii and zygomatic major muscles in different induction contexts, such as perception, anticipation, and imagery. The corrugator supercilii muscles are located above and between the eyes and are responsible for drawing down the eyebrows. The zygomatic major muscles originate in the cheekbones and exert their action by lifting the corner of the mouth obliquely upwards. Activity in these two sets of muscles is associated with frowning and smiling, respectively.
Since the pioneering work of Schwartz and colleagues (Brown & Schwartz, 1980; Schwartz et al., 1976a, b), corrugator supercilii and zygomatic major EMG have often been used to study emotional processing. Processing of unpleasant material elicits larger corrugator activity than neutral or pleasant stimuli (Cacioppo et al., 1986; Lang et al., 1993; Larsen et al., 2003). Lang and colleagues (1993) found a linear relationship between the activity of the corrugator muscle and the hedonic valence of the pictures, with unpleasant slides eliciting the largest contraction and pleasant pictures causing inhibition of the corrugator muscles. This same linear gradation was obtained by Larsen, Norris, and Cacioppo (2003) in a study that measured facial EMG in response to three different kinds of stimuli: pictures, sounds, and words. When corrugator activity is examined as a function of emotional picture content, slides depicting mutilations and contamination-related scenes elicit the largest activity in this muscle compared to other unpleasant material. The relationship between zygomatic activity and affective valence is somewhat more complex. Lang and colleagues (1993) obtained a quadratic relationship between emotional valence and zygomatic activity, with highly pleasant pictures eliciting the largest responses when compared to either neutral or unpleasant material. However, significant zygomatic activation was also found for unpleasant pictures compared to neutral stimuli. Larsen and colleagues (2003) confirmed this finding for emotional pictures and sounds, but not for affective words. Because the zygomatic major is involved in smiling, and smiling has an unquestionable communicative function in humans, it has been argued that its activity is malleable according to rules of social display.
Ekman, Davidson, and Friesen (1990), however, noted that a true smile (the so-called Duchenne smile) is accompanied by simultaneous activation of the zygomatic and orbicularis oculi muscles, whereas social smiles devoid of positive emotion involve only zygomatic activity.

In this chapter, we summarize the results of three studies aimed at unraveling the mechanisms underlying the processing of loved familiar faces, while distinguishing the relative effects of affective valence, undifferentiated emotional arousal, and familiarity, with special emphasis on zygomatic and orbicularis activity. All three studies combine the following elements: (a) an experimental paradigm that allows differentiation of valence versus arousal effects, (b) simultaneous recording of a broad set of psychophysiological measures, and (c) different sets of stimuli that differ in valence, arousal, and familiarity ratings.

Experiment 1
Experiment 1 was aimed at investigating the neurophysiological mechanisms associated with the affective processing of loved familiar faces by combining peripheral and central electrophysiological measures while separating the relative contributions of valence (i.e., positive vs. negative affect), arousal (i.e., more vs. less activation), and familiarity (i.e., known vs. unknown faces). Event-related potentials, skin conductance, heart rate, and electromyography of the zygomatic major muscle were obtained along with subjective ratings of valence, arousal, and dominance for each picture. Because of the scope of the present chapter, we will focus on the activity of the zygomatic muscle.

Participants
Thirty female undergraduate students, ranging between 20 and 27 years of age, took part in the study. Subjects were required to have a current romantic relationship (between 6 months and 6 years in duration) and to live in close proximity to five loved ones, including the partner, so as to be able to take their photographs. None of the participants reported current physical or psychological problems and none were under pharmacological treatment. All were right-handed and had normal or corrected-to-normal vision. All subjects signed informed consent forms and received course credit for their participation.

Materials and design
Participants viewed faces belonging to one of five categories in two separate blocks that differed in picture presentation rate and were administered in counterbalanced order. The slow block, whose results are presented here, consisted of an initial 5-min baseline period, followed by 50 trials with the following structure per trial: 4-sec baseline, 4-sec picture presentation, and 4-sec post-picture period. Inter-trial intervals (ITIs) varied randomly between 8 and 12 seconds. The study included 5 photographs of faces in each of 5 categories: loved ones, neutral, unknown, famous, and baby faces. In addition to the partner, faces of loved ones could include parents, siblings, second-degree relatives, and friends. Neutral faces were selected from Ekman and Friesen's (1978) "Facial Action Coding System". Unknown faces were selected among the photographs of loved ones provided by other participants, after ensuring that those pictures did not depict anyone known to the participant. Famous faces were selected by each participant prior to the experimental session, from a set of 9 photographs portraying people who appeared frequently in the Spanish media. Participants were asked to identify the famous people as such, and were expected not to feel strong attraction or rejection towards them. Finally, faces of babies were selected from the IAPS (Lang et al., 2008) among those with valence scores over 7 (rating scale ranging from 1 to 9) according to the Spanish norms (Moltó et al., 1999; Vila et al., 2001). Pictures of loved ones were provided by each participant a few days before the experiment, after receiving detailed instructions on how to take the photographs (i.e., loved ones had to look straight at the camera with a neutral expression in front of a light background). All pictures were edited using Corel-Draw Photopaint software (version 7) and matched in size, color, and background.
All pictures were cropped to 650 x 650 pixels, converted to gray scale (8 bits), and surrounded by a black background. Each picture was presented twice following a double Latin Square design. The first Latin Square was a 5x5 matrix in which each row and each column contained exactly one picture from each category. The second Latin Square was an inverted mirror of the first one. Participants, in groups of 5, were randomly assigned to 5 different sequences (each starting with a different picture category) to ensure that all categories were equally distributed across the complete sample. In addition, 15 participants viewed the slow presentation block followed by the fast presentation block, while the other 15 participants viewed them in the inverse order. Subjects were instructed to simply look at the pictures for the entire time they were on the screen, while trying to refrain from blinking or, if necessary, to blink during the ITI. After the second block, participants rated the valence, arousal, and dominance of each picture using the Self-Assessment Manikin (Bradley & Lang, 1994).
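The double Latin Square counterbalancing described above can be sketched in a few lines of Python. This is an illustrative reconstruction under stated assumptions (a cyclic construction for the first square and mirrored rows for the second; the category labels are ours), not the authors' actual stimulus-presentation code:

```python
import numpy as np  # not required below, but typical in this analysis context

# Sketch of the counterbalancing scheme: two 5x5 Latin squares in which each
# face category appears exactly once per row and per column. The second square
# mirrors the first, so each category occupies every serial position twice.

def latin_square(n):
    """Cyclic n x n Latin square: row i is [i, i+1, ..., i+n-1] mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(square):
    """Check the defining property: each symbol once per row and per column."""
    n = len(square)
    rows_ok = all(sorted(row) == list(range(n)) for row in square)
    cols_ok = all(sorted(col) == list(range(n)) for col in zip(*square))
    return rows_ok and cols_ok

CATEGORIES = ["loved", "neutral", "unknown", "famous", "baby"]  # assumed labels

square1 = latin_square(5)
square2 = [row[::-1] for row in square1]  # inverted mirror of the first square

# One participant's 10-trial category sequence (each category shown twice):
sequence = [CATEGORIES[k] for k in square1[0] + square2[0]]
```

Reversing each row of a Latin square yields another Latin square (its columns are simply the original columns in reverse order), which is what makes the mirrored second presentation a valid counterbalance.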

Apparatus
A Coulbourn polygraph, model V-Link, was used to record zygomatic activity. Stimulus control and peripheral physiological data acquisition were accomplished using an IBM-compatible computer running VPM data acquisition and reduction software (Cook III, 1997). E-Prime software (Psychology Software Tools, Pittsburgh) controlled the actual presentation of the pictures using a second Pentium 4 computer connected to the master computer through the serial port. Finally, a digital camera, model Kodak EasyShare DX6490, was used to obtain the photographs of loved ones.

Zygomatic EMG recording
Zygomatic EMG activity was recorded using miniature In Vivo Metrics silver/silver chloride electrodes. The raw EMG was amplified by 5000 and band-pass filtered (90-1000 Hz) by means of a Coulbourn V75-04 bioamplifier. The signal was subsequently rectified and integrated using a Coulbourn V75-23A integrator, with a time constant of 500 ms. The integrated EMG was sampled at 100 Hz and averaged every half-second for the length of the trial. Each value was expressed as a differential score with respect to the averaged EMG activity during the 4 sec preceding stimulus onset.
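The data-reduction step described above (half-second averaging plus baseline correction) can be sketched as follows. The trial layout (4-sec baseline plus 8-sec scored period at 100 Hz) is taken from the text, but the code itself is an illustrative reconstruction, not the original VPM routine:

```python
import numpy as np

FS = 100          # sampling rate of the integrated EMG (Hz)
BASELINE_S = 4    # pre-stimulus baseline (s)
SCORED_S = 8      # picture + post-picture period (s)

def emg_bins(trace):
    """Reduce one trial's integrated-EMG trace (1200 samples) to 16
    half-second means expressed as change from the pre-stimulus baseline."""
    trace = np.asarray(trace, dtype=float)
    baseline = trace[:BASELINE_S * FS].mean()
    post = trace[BASELINE_S * FS:].reshape(SCORED_S * 2, FS // 2)
    return post.mean(axis=1) - baseline   # 16 differential scores
```

A trial whose post-stimulus activity never departs from baseline yields differential scores of zero, which is the reference level against which the zygomatic responses are plotted.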

Statistical analysis
Zygomatic EMG data were averaged across trials within the same face category and order of block presentation, and analyzed by means of a repeated measures ANOVA with a 5 x 16 design, the first factor being Face Category (Loved, Neutral, Unknown, Famous, and Baby faces) and the second factor being Time, expressed in 16 half-second bins following stimulus onset. The Greenhouse-Geisser epsilon correction was applied to correct for violations of sphericity. Significant interactions were analyzed by identifying the levels of the interacting factors explaining the significant effects, followed by post-hoc pairwise comparisons using the Bonferroni test.

Results

Figure 1 shows changes in zygomatic activity during face presentation when the slow block was presented in first and second place, respectively. Regardless of task order, only loved familiar faces evoked a clear response, starting right after stimulus onset and continuing until the end of the 8-sec recording period. Although less pronounced, a response can also be observed for baby faces. Analysis of the Face Category x Time interaction yielded significant Face Category effects at all time points from second 1.5 to second 8 (all p-values < 0.02). Except for seconds 6 and 6.5, loved familiar faces prompted a larger response than all other categories (all p-values < 0.04). Significant differences were also found between baby and neutral faces at seconds 2.5 and 3 (both p-values < 0.05). No differences were found between famous and unknown faces. Figure 2 displays the average zygomatic response across the four subcategories of loved faces. Separate ANOVAs for each pair of subcategories yielded significant differences between partner and friend (F(1, 18) = 4.015, p < 0.05, ηp² = 0.157), though only in the first trial.
Differences between partner and parent (F(1, 21) = 3.75, p < 0.07, ηp² = 0.160) and between partner and sibling (F(1, 19) = 3.346, p < 0.08, ηp² = 0.165), also during the first trial, were marginally significant. No other significant differences were found.

Fig. 2. Zygomatic activity as a function of subcategory of loved familiar faces.
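The Greenhouse-Geisser correction applied in the statistical analysis can be illustrated with a short sketch. The epsilon below is computed with the standard orthonormal-contrast formula from the sample covariance matrix of the k within-subject conditions; the function name and input layout are our own assumptions, not the authors' code:

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon for a k x k covariance matrix S of
    within-subject conditions. Ranges from 1/(k-1) (maximal sphericity
    violation) up to 1 (sphericity holds)."""
    k = S.shape[0]
    # Build an orthonormal basis of contrasts (columns orthogonal to ones).
    Q, _ = np.linalg.qr(np.column_stack([np.ones(k), np.eye(k)[:, :k - 1]]))
    C = Q[:, 1:]
    M = C.T @ S @ C
    return np.trace(M) ** 2 / ((k - 1) * np.trace(M @ M))
```

With a spherical covariance matrix (e.g., the identity) epsilon equals 1 and no correction is applied; otherwise the F-test's degrees of freedom, ε(k − 1) and ε(k − 1)(n − 1), shrink accordingly.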

Experiment 2
In Experiment 1, the familiarity of the faces was indirectly controlled by comparing loved faces with famous faces and, additionally, by comparing the responses to loved faces differing in familiarity (partner, parents, friends, and siblings) among themselves. The findings of that experiment suggest that familiarity is not a major determinant of the substantial differences in zygomatic activity obtained in that study. However, two problems affect the familiarity issue in Experiment 1: on the one hand, the degree of familiarity of loved faces and famous faces is not comparable, since the former will always exceed any other category; on the other hand, the first experiment did not equate subcategories of loved faces in number or order of presentation. In fact, of the 30 participants, 15 included both parents among the 5 loved faces, 6 included only the father, and 8 did not include any parent. Similarly uneven numbers were associated with the rest of the loved familiar faces (siblings, second-degree relatives, and friends) freely chosen by participants. In this second experiment, we examined psychophysiological responses in a design that controlled for familiarity differences among pictures and included five face categories: boyfriend, father, control-boyfriend, control-father, and baby faces.

Participants
Thirty-five female undergraduate students, ranging in age from 20 to 29 years, took part in this study. Inclusion criteria were the same as in Experiment 1. None of the participants reported current physical or psychological problems and none were under pharmacological treatment. All were right-handed and had normal or corrected-to-normal vision. All subjects signed informed consent forms and were given course credit for their participation.

Materials and design
Subjects viewed faces belonging to one of five categories in a passive picture-viewing paradigm: boyfriend, father, control-boyfriend, control-father, and baby. A few days before the experimental session, participants provided photographs of their boyfriend and their father according to detailed instructions provided by the experimenters (i.e., neutral expression, face looking straight at the camera, and subject standing in front of a light background). Control-boyfriend and control-father pictures were selected among the boyfriend and father photographs provided by other participants, once the experimenter had ensured that the pictures did not depict someone known to the participant. The baby picture was selected from the IAPS (Lang et al., 2008) and had a valence score over 7 (rating scale from 1 to 9) according to the Spanish norms (Moltó et al., 1999; Vila et al., 2001). Pictures provided by participants were taken using a Nikon D-60 camera. All pictures were edited using Corel-Draw Photopaint software (version 7) and matched in size, color, and background. All photographs were 650 x 650 pixels, converted to gray scale (8 bits), and inserted into a circle surrounded by a black background. Pictures were presented using Presentation software (Neurobehavioral Systems, CA) on a 19-in flat monitor located approximately 0.6 m from the participant. Each picture was presented 20 times following a quadruple Latin Square design. The first Latin Square was a 5x5 matrix that included one picture from each category in each row and each column. The remaining Latin Squares were inverted mirrors of the first one. Five different picture sequences were created, each starting with a different category, so that the sequences were equally distributed across the 35 participants. Participants, in groups of five, were randomly assigned to one of the five sequences.
After a 5-min baseline period, 100 picture trials were presented with the following structure per trial: 4-sec baseline, 4-sec picture presentation, and 4-sec additional recording period. The inter-trial interval varied randomly from 1 to 3 seconds (with an average of 2 sec). No recording was obtained during that period. Participants were instructed to simply look at the pictures for the entire time they were projected on the screen.

Physiological measures
Zygomatic major EMG activity was obtained using miniature In Vivo Metrics silver/silver chloride electrodes. The raw EMG signal was amplified by 5000 and band-pass filtered (13-1000 Hz) with a Coulbourn V75-04 bioamplifier. It was subsequently rectified and integrated using a Coulbourn V75-23A integrator, with a time constant of 500 ms. Activity in the orbicularis muscle was recorded by placing two miniature In Vivo Metrics silver/silver chloride electrodes under the left eye. The raw EMG signal was amplified by 5000 and band-pass filtered (28-500 Hz) with a Coulbourn V75-04 bioamplifier and a V75-48 High Performance Filter, and subsequently integrated by a V75-23A integrator with a time constant of 20 ms. Integrated EMG was sampled at 100 Hz and averaged every half-second during the period of face presentation. Each value was then expressed as a differential score with respect to the averaged EMG during the 4 sec preceding stimulus onset. Stimulus control and peripheral physiological data acquisition were accomplished using an IBM-compatible computer running VPM data acquisition and reduction software (Cook III, 1997).
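The rectify-and-integrate stage described for both channels can be approximated digitally with full-wave rectification followed by a first-order (RC) low-pass filter whose time constant matches the hardware setting. This is a sketch of the principle, not a model of the Coulbourn circuitry:

```python
import numpy as np

def integrate_emg(raw, fs=100.0, tau=0.5):
    """Contour-following integration: full-wave rectify, then smooth with a
    first-order filter of time constant tau (s). tau=0.5 mimics the zygomatic
    setting; tau=0.02 the much faster orbicularis setting."""
    rect = np.abs(np.asarray(raw, dtype=float))
    alpha = (1.0 / fs) / tau          # per-sample smoothing factor (dt/tau)
    out = np.empty_like(rect)
    acc = 0.0
    for i, x in enumerate(rect):
        acc += alpha * (x - acc)      # leaky accumulator
        out[i] = acc
    return out
```

The long zygomatic time constant yields a slowly rising envelope suited to sustained smiling activity, whereas the 20-ms orbicularis constant tracks the fast, transient startle blink.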

Data reduction and analysis
Both zygomatic and orbicularis EMG data were averaged across trials within the same face category. Zygomatic activity was analyzed by repeated measures ANOVA using a 5 x 16 design, with the first factor being Category (boyfriend, father, control-boyfriend, control-father, and baby) and the second factor being Time (the 16 half-second bins following picture onset). Post-hoc pairwise comparisons were performed using the same procedure as in the previous study. For the correlation analysis between zygomatic and orbicularis EMG, the maximum amplitude values during the whole period of picture presentation were obtained for each category and the Pearson bivariate correlation was calculated.

Results

Figure 3 shows changes in zygomatic activity for each of the 5 face categories. Only boyfriend and father pictures prompted visible responses, starting right after stimulus onset. In addition, the face of the romantic partner elicited a larger response than the face of the father. The ANOVA yielded significant effects of Category (F(4, 136) = 12.825, p < 0.0001, ηp² = 0.274), Time (F(15, 510) = 4.889, p < 0.017, ηp² = 0.126), and Category x Time (F(60, 2040) = 7.307, p < 0.001, ηp² = 0.177). Planned comparisons revealed the following results: (a) loved faces produced a significantly larger EMG response than control (p < 0.001) and baby (p < 0.001) faces, and (b) the face of the boyfriend produced a significantly larger EMG response than the face of the father (p < 0.02). Significant positive correlations were found between the zygomatic and orbicularis muscle activity evoked by loved familiar faces, but not by unknown or baby faces (see Figure 4), suggesting that the Duchenne smile effect was present in these data.

Fig. 4. Correlation between zygomatic major and orbicularis oculi activity for boyfriend and father faces.
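The correlational analysis behind Figure 4 can be sketched as follows: for each participant, take the maximum amplitude reached by each muscle during picture presentation, then correlate the two maxima across participants within a face category. The data layout (participants x samples) is an assumption of this illustration:

```python
import numpy as np

def max_amplitude_corr(zyg, orb):
    """Pearson correlation between per-participant maximum zygomatic and
    orbicularis amplitudes. zyg, orb: arrays (n_participants, n_samples)."""
    zmax = np.max(np.asarray(zyg, dtype=float), axis=1)
    omax = np.max(np.asarray(orb, dtype=float), axis=1)
    return float(np.corrcoef(zmax, omax)[0, 1])
```

A strong positive coefficient for loved faces, absent for control faces, is what signals the joint zygomatic-orbicularis recruitment characteristic of the Duchenne smile.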

Experiment 3
Experiments 1 and 2 have shown that processing of loved familiar faces elicits a pattern of responses indicative of a positive emotional state. Further evidence supporting the view that loved familiar faces engage the appetitive motivational system would be available if the same loved familiar faces proved to be efficient inhibitors of a defensive response (e.g., the startle reflex). Consequently, this third experiment was conducted to explore whether faces perceived as highly positive (i.e., loved familiar faces) were able to inhibit the startle reflex when compared to either neutral (i.e., unknown) or unpleasant (i.e., mutilated) faces. Additionally, Experiment 3 was designed so as to replicate previous findings regarding central and peripheral physiological reactions to loved familiar faces.

Participants
Forty-two undergraduate students (12 males) participated in this experiment. Inclusion criteria were the same as in Experiments 1 and 2. None of the participants reported current physical or psychological problems and none were under pharmacological treatment. All were right-handed and had normal or corrected-to-normal vision. All subjects signed informed consent forms and received course credit for their participation.

Materials and design
Participants viewed faces belonging to three categories in a passive picture-viewing paradigm: loved familiar faces (partner, father, mother, and best friend), neutral faces (control-boyfriend, control-father, control-mother, and control-best-friend), and unpleasant (mutilated) faces. This latter category included pictures from the IAPS (Lang, Bradley, & Cuthbert, 2008) that were present in the Spanish normative ratings (Moltó et al., 1999; Vila et al., 2001). Participants provided photographs of the loved familiar persons a few days before the experiment, following detailed instructions on how to take the photographs (i.e., pictures could not be taken by the participant him/herself, and the people in the pictures had to look straight at the camera with a neutral expression in front of a light background). Neutral pictures were selected among the loved familiar pictures provided by other participants, after ensuring that those pictures did not depict anyone known to that participant. Photographs of loved familiar persons were taken using a Nikon D-3000 camera. All pictures were edited using Corel Draw Photopaint (version 7) and matched in size, color, and background. All photographs were cropped to 650 x 650 pixels, converted to gray scale (8 bits), and inserted into a circle surrounded by a black background. Pictures were presented using Presentation software (Neurobehavioral Systems, CA) on a 19-in flat monitor located approximately 60 cm from the participant. Participants were randomly assigned to 6 different picture-presentation sequences created from a set of eight 3x3 Latin Squares. In each of those 3x3 matrices, one of the four pictures in each category was discarded so that, in total, each picture was presented 6 times during the whole task. To control for order effects, each sequence started with a different category.
The task started with a 5-min baseline period, followed by 72 trials, each with the following structure: a 4-sec baseline, 6-sec picture presentation, and a 4-sec post-picture interval. Two-thirds of the pictures (48 trials) were presented together with a startle probe (a 50-ms burst of white noise at 105 dB with nearly instantaneous rise time) delivered 4, 4.5, 5, or 5.5 sec after picture onset. The same number of startle probes was assigned to each picture (n = 4). In addition, 8 startle probes were delivered during inter-trial intervals in order to minimize probe predictability. The inter-trial interval varied randomly between 2 and 4 sec (average 3 sec). Throughout the task, a fixation point was presented in the center of the screen to minimize eye movements. Participants were instructed simply to look at the pictures for the entire time they were on the screen.
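The probe assignment just described can be sketched as below. This is a simplified illustration: it assigns probes to 48 of the 72 trials and to 8 randomly chosen inter-trial intervals, but omits the study's additional constraint of exactly 4 probes per picture; the function and variable names are assumptions for illustration only:

```python
import random

# probe onsets relative to picture onset, in seconds
PROBE_DELAYS_S = (4.0, 4.5, 5.0, 5.5)

def probe_schedule(n_trials=72, n_probed=48, n_iti_probes=8, seed=0):
    """Assign startle probes: 48 of the 72 trials carry a probe at
    4, 4.5, 5, or 5.5 s after picture onset; 8 additional probes fall
    in randomly chosen inter-trial intervals to reduce predictability."""
    rng = random.Random(seed)
    probed = set(rng.sample(range(n_trials), n_probed))
    schedule = [
        {"trial": t,
         "probe_delay_s": rng.choice(PROBE_DELAYS_S) if t in probed else None}
        for t in range(n_trials)
    ]
    # inter-trial intervals (indexed by the preceding trial) with a probe
    iti_probed = sorted(rng.sample(range(n_trials), n_iti_probes))
    return schedule, iti_probed
```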

Physiological measures
Zygomatic and orbicularis EMG activity were measured using miniature silver chloride electrodes and acquired with two Coulbourn V75-04 bioamplifiers. Band-pass filter settings for the raw zygomatic and orbicularis signals were 13-1000 Hz and 28-500 Hz, respectively. Both signals were amplified by a factor of 5,000 and then rectified and integrated using a Coulbourn V76-24 integrator, with time constants of 500 ms for the zygomatic and 20 ms for the orbicularis signal. Both signals were sampled at 100 Hz throughout the task; the sampling rate for orbicularis activity was increased to 1000 Hz from 500 ms before until 500 ms after the onset of each startle probe.
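The Coulbourn integrator is analog hardware, but the rectify-and-integrate step has a standard digital equivalent: full-wave rectification followed by a first-order leaky integrator whose smoothing coefficient is derived from the time constant. The sketch below is such a digital approximation, not the device's actual circuit:

```python
import math

def rectify_integrate(samples, fs_hz, tau_ms):
    """Digital analogue of full-wave rectification followed by a
    contour-following (leaky) integrator with time constant tau:
    y[n] = a * y[n-1] + (1 - a) * |x[n]|, where a = exp(-1 / (fs * tau)).
    A short tau (20 ms, orbicularis) tracks fast blink responses;
    a long tau (500 ms, zygomatic) yields a smooth activity envelope."""
    a = math.exp(-1.0 / (fs_hz * tau_ms / 1000.0))
    y, out = 0.0, []
    for x in samples:
        y = a * y + (1.0 - a) * abs(x)
        out.append(y)
    return out
```

For a steady input, the integrator output converges to the mean rectified amplitude, which is why the envelope can be read directly in microvolts.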

Data reduction and analysis
Zygomatic EMG activity was scored by subtracting the activity in the 3 sec before picture onset from that occurring in each half-second after picture onset. Startle reflex amplitude was defined as the difference in microvolts between the peak and the onset of the orbicularis muscle response in a time window of 20-120 ms after probe onset, computed following the algorithm described by Balaban et al. (1986). To control for between-subject variability, each subject's startle magnitudes were converted to t scores. Zygomatic activity was analyzed with a 3 x 12 repeated measures ANOVA, the first factor being Category (loved familiar, neutral, and unpleasant faces) and the second factor being Time (the 12 half-second increments following picture onset). Startle reflex magnitude was analyzed with a repeated measures ANOVA with Face Category as the single factor. Finally, the correlational analysis between zygomatic and orbicularis muscle activity was conducted as in the previous study, except that maximum amplitude values were taken from the first 4 seconds of picture presentation in order to avoid contamination of the orbicularis EMG by the startle probe.
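The two scoring steps for the startle data can be sketched as follows. This is a simplified version: response onset is taken here as the signal value at the start of the 20-120 ms window rather than via the Balaban et al. (1986) onset-detection algorithm the study actually used, and the function names are illustrative:

```python
from statistics import mean, stdev

def startle_amplitude(emg_uv, fs_hz, probe_idx, win_ms=(20, 120)):
    """Peak-minus-onset amplitude (microvolts) of the rectified,
    integrated orbicularis signal in a 20-120 ms window after the
    probe. `probe_idx` is the sample index of probe onset."""
    lo = probe_idx + int(win_ms[0] * fs_hz / 1000)
    hi = probe_idx + int(win_ms[1] * fs_hz / 1000)
    window = emg_uv[lo:hi + 1]
    return max(window) - window[0]

def to_t_scores(magnitudes):
    """Within-subject standardization to control between-subject
    variability: z-scores rescaled to mean 50, SD 10 (t-score
    convention)."""
    m, s = mean(magnitudes), stdev(magnitudes)
    return [50.0 + 10.0 * (x - m) / s for x in magnitudes]
```

Standardizing within subject means that each participant's mean blink magnitude is mapped to 50, so category effects are compared on a common scale regardless of individual differences in absolute EMG amplitude.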

Results
As can be observed in Figure 5, loved familiar faces prompted a clear zygomatic response, starting right after picture onset and continuing until the end of the picture-presentation interval. No activation of the zygomatic muscle was observed for any of the other picture categories.

Discussion
The results of the three studies highlight the relevance of including electromyographic measures to advance knowledge of the psychological and physiological mechanisms underlying the affective processing of faces associated with identity recognition. The three studies were highly consistent in showing a large zygomatic major response to faces of loved ones, a response that starts around one second after picture onset and lasts for several seconds after picture offset. This response was totally absent or greatly diminished for all other categories of faces used as controls in our three studies: unknown, famous, baby, neutral, and unpleasant faces. In all cases, the faces were black-and-white photographs with no emotional expression. Moreover, in the three studies, loved and unknown faces were physically identical, since they were interchangeable across participants, guaranteeing that the observed differences among faces in physiology and subjective ratings were due to identity recognition and not to differences in emotional expression or physical features of the faces. The sequence of the three studies was also highly consistent in confirming that the processing of loved familiar faces involves activation of an intense positive emotional response that cannot be explained by undifferentiated emotional arousal or familiarity. In the first study, the arousal effect was controlled by simultaneously recording heart rate and skin conductance. In the context of Lang's picture-viewing paradigm, heart rate is an unambiguous index of emotional valence (positive versus negative emotion), whereas skin conductance is an index of emotional arousal (the intensity of the emotion, regardless of its positive or negative valence). Both indices confirmed that the zygomatic response to the loved faces reflected the presence of an intense positive emotional response.
In the first study, the familiarity effect was indirectly controlled by comparing famous versus unknown faces, as well as loved faces with different levels of familiarity among themselves. No differences were found between famous and unknown faces, and the few differences found among loved faces were contrary to the familiarity explanation: larger responses to the less familiar (partner) than to the more familiar (parent and siblings) faces. In the second and third studies, the confounding arousal and familiarity effects were further controlled by including a second electromyographic measure (orbicularis muscle activity) and by using an experimental design that better controlled for familiarity. The simultaneous recording of the zygomatic and orbicularis muscles provides two additional specific indices of emotional valence: the Duchenne smile and the eye-blink startle response. As mentioned in the introduction, the Duchenne or true smile is characterized by simultaneous activation of the zygomatic and orbicularis muscles, whereas a social smile devoid of positive emotion involves only zygomatic activation (Ekman et al., 1980). The significant correlation found between zygomatic and orbicularis muscle activity only for loved faces in our second and third studies is another piece of evidence supporting the conclusion that viewing faces of loved ones induces a feeling of genuine positive affect that cannot be explained in terms of arousal or familiarity. On the other hand, the eye-blink startle response is perhaps the most robust physiological response showing emotional modulation by affective valence.
Numerous studies have consistently demonstrated that viewing highly pleasant pictures (e.g., erotica or sport images) reduces the magnitude of the eye-blink startle response, whereas viewing highly unpleasant pictures (e.g., mutilation or threatening images) increases its magnitude, compared to neutral pictures (e.g., household objects or non-emotional faces). In these studies, however, startle magnitude while viewing neutral pictures does not always occupy an intermediate position that differs significantly from both pleasant and unpleasant images. The results of our third study, showing significant differences between our three categories of faces (loved, neutral, and unpleasant), are not only an independent confirmation of Lang's motivational priming hypothesis (startle inhibition during activation of the appetitive motivational system by pleasant pictures and startle potentiation during activation of the aversive motivational system by unpleasant pictures, compared to neutral ones). They are also an unambiguous demonstration that viewing loved familiar faces induces an intense positive emotion capable of inhibiting defensive reflexes. The electromyographic results of our second and third studies also help disentangle the confounding effects of familiarity in the processing of loved familiar faces. The control of familiarity has always been a major methodological problem in studies of this type. Although difficult to separate, emotional valence and familiarity are not inevitably confounded. In studies using the picture-viewing paradigm with pleasant, neutral, and unpleasant pictures from the IAPS, the enhanced zygomatic response while viewing pleasant pictures has been interpreted in terms of positive affect, rather than familiarity, since all pictures are new and there is no explicit memory involvement.
But in studies on loved familiar faces, familiarity is necessarily confounded with emotion, since both memory and emotion are involved in their processing. Our second and third studies controlled for familiarity by comparing the face of the romantic partner (a relationship of less than 6 years) with the face of the same-sex parent (a relationship of more than 18 years). Therefore, two categories of loved familiar faces, differing in amount of familiarity, were compared. Both studies (although the details of the third one are not reported here) replicated the results of the first study, revealing larger zygomatic responses to the less familiar face (i.e., the romantic partner) than to the more familiar face (i.e., the parent), thus disconfirming the familiarity hypothesis. It should be noted, however, that the three studies reported here did not record only EMG responses. A broad set of autonomic (heart rate and skin conductance), electromyographic (zygomatic and orbicularis muscle activity), and brain (N100, P200, P3, and LPP) responses was recorded, in addition to subjective ratings of valence, arousal, and dominance, the three general dimensions of emotional reactions (see Osgood, Suci & Tannenbaum, 1957; Mehrabian & Russell, 1974). The results of the three studies revealed a very consistent pattern of physiological and subjective responses to loved familiar faces: (a) increases in heart rate, skin conductance, zygomatic activity, and the event-related potentials P3 and LPP; (b) decreases in eye-blink startle magnitude and the event-related potential N200; and (c) higher ratings of positive valence and arousal, but lower ratings of dominance. This set of physiological and subjective responses suggests that the processing of loved familiar faces involves a complex pattern of psychophysiological mechanisms that single physiological measures, whether central or peripheral, would fail to capture accurately.
The use of both central and peripheral measures to investigate the affective processing of familiar faces contrasts with the dominant trend in current cognitive and affective neuroscience, which is largely limited to central physiological measures (EEG, ERP, MEG, PET, or fMRI). In the context of studies on emotional processing, the neglect of peripheral physiological measures, such as facial EMG or heart rate, has the shortcoming of not providing unambiguous evidence of the type of response being elicited, confounding emotional valence, undifferentiated emotional arousal, and familiarity. But the shortcoming of a neuroscience approach that ignores peripheral physiological measures has much wider implications: it assumes a model of brain function apparently unrelated to efferent motor components. As Robert Sperry emphasized in 1952, the human brain evolved primarily not to contemplate and understand the world but to facilitate behavioral adaptation: "The primary function of the brain is essentially the transformation of sensory patterns into patterns of motor coordination" (Sperry, 1952, p. 297). "Cerebration, essentially, serves to bring into motor behavior additional refinement, increased direction toward distant, future goals, and greater over-all adaptiveness and survival value. The evolutionary increase in man's capacity for perception, feeling, ideation, imagination, and the like, may be regarded, not so much as an end in itself, but as something that has enabled us to behave, to act, more wisely and efficiently" (Sperry, 1952, p. 299). Sperry's neuroscience approach implies that mental and motor processes are intimately associated and that research on the brain mechanisms of psychological processes should integrate both central and peripheral physiological measures. This approach not only has distinguished roots in early behaviorism and the motor theory of thought and emotion (Watson, 1913; Washburn, 1928; Jacobson, 1929).
It also has prominent defenders among more recent psychophysiologists and cognitive neuroscientists. In 1978, McGuigan published a book entitled Cognitive Psychophysiology: Principles of covert behavior, bringing together a large body of experimental studies showing the close covariation between electromyographic measures at different muscle locations and numerous cognitive processes. Based on these data, he suggested that all brain circuits are basically neuromuscular circuits, a proposal not too different from Sperry's ideas on mind-body interactions. More recent developments within cognitive and affective neuroscience are also beginning to suggest the need to integrate brain and bodily responses in order to fully comprehend the brain mechanisms underlying psychological processes. The embodied cognition (Lenggenhager et al., 2006; De Vega, Glenberg & Graesser, 2008; Niedenthal et al., 2009) and social cognition (Tomasello et al., 2005; Amodio & Frith, 2006; Herrmann et al., 2007; Losin et al., 2010) movements are perhaps the most relevant representatives of this new approach within neuroscience research. The basic idea of both movements is that bodily responses and affective social interactions are an integral part of our cognitive representations in the brain and that both central and peripheral responses play a crucial role in successful behavioral adaptation and survival. In summary, the present chapter describes a set of three studies aimed at investigating the affective processing of loved familiar faces using a set of central and peripheral physiological measures, including electromyographic recordings of the zygomatic major and orbicularis oculi muscles, in the context of Lang's picture-viewing paradigm. The results of the three studies support the conclusion that viewing the faces of familiar loved ones elicits an intense positive emotional reaction that cannot be explained by either familiarity or arousal alone.
The three studies highlight the relevance of integrating central and peripheral measures to advance knowledge on the brain mechanisms underlying affective processing.

Acknowledgements
This research was supported by a grant from the Junta de Andalucía (Project P07-SEJ-02964).