Sunday, November 15, 2015

Taste the placebo

One of my students asked me if I make the memes that I use in this blog: the answer is "no" - truly, I am too busy grading papers. Fortunately there are a lot of memes floating around social media. Most of the time I start with a meme and then do a literature search to find recent research on that topic. Other times, like this week, I find a great article then search Google Images for a meme to match. Sometimes I stumble upon the perfect combination and reach article/meme perfection:


Would it be mean to trick a hurting person into thinking that a piece of candy is actually a tablet of pain reliever? The answer would be "yes" if swallowing that orange Skittle had no effect on her headache - but what if it helped just as much as a real Advil?

Faasse, Martin, Grey, Gamble, and Petrie (2015) conducted a simple experiment to answer a similar question. They recruited 87 New Zealander university students (83% female) who frequently suffered from headaches. The students were each given four identical-looking tablets: two were labeled "Nurofen," a brand name for Ibuprofen - like Advil in the United States, and the other two were labeled "Generic Ibuprofen." Thus, each of them believed that he or she received two brand name pills and two generic pain-relief pills.

However, the researchers had built some mild deception into the tablets! For each participant, one of the tablets labeled "Nurofen" was actually a placebo (a sugar pill) containing no medicine - much like a Skittle; likewise, one of the tablets labeled "Generic Ibuprofen" was also a placebo. The remaining two tablets were always identical doses (400 mg) of Ibuprofen; there was no difference beyond how they were labeled. This was a within-subjects experiment because all participants experienced all of the possible versions of the tablet: placebo labeled Nurofen; Ibuprofen labeled Nurofen; placebo labeled Generic; Ibuprofen labeled Generic.

The participants were instructed to use the tablets for their next four headaches, and the order of the pills was dictated by the researchers for counterbalancing. Each time a headache struck, participants rated the intensity of their pain, took the designated tablet, waited one hour, and then rated the intensity of any remaining pain and noted any side effects.
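For readers who like to see a design spelled out, here is a minimal Python sketch (my own illustration, not the authors' materials) of how the two labels cross with the two contents to give the four tablets, and how tablet order might be counterbalanced across participants; the participant number and the cycling scheme are hypothetical.

```python
from itertools import product, permutations

labels = ["Nurofen", "Generic Ibuprofen"]      # what the tablet's label says
contents = ["400 mg ibuprofen", "placebo"]     # what the tablet actually contains

# Crossing label x content gives the four tablets each participant received.
conditions = list(product(labels, contents))
for label, content in conditions:
    print(f"labeled {label!r}, actually {content}")

# Counterbalancing: vary the order across participants so that no tablet is
# always taken for the first (or last) headache. One simple scheme is to
# cycle through all 24 possible orderings.
orders = list(permutations(range(4)))
participant_id = 17                            # hypothetical participant number
assigned = [conditions[i] for i in orders[participant_id % len(orders)]]
print("Order of tablets for this participant:", assigned)
```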

As you would expect, the overall results showed that Ibuprofen did a better job than the placebo at resolving headache pain. What was more interesting to Faasse et al. was that placebo and Ibuprofen tablets labeled Nurofen were rated equally well on pain relief. In this case, the Placebo Effect was clear: just believing that you took brand name Ibuprofen was enough to cure your pain as well as the real medicine would!

In contrast, when participants took the tablets labeled "Generic Ibuprofen," pain relief was much higher from the actual Ibuprofen than from the placebo. Again participants' beliefs influenced their experiences: in this case a mistrust of off-brand medicines overcame the Placebo Effect.

Side effect data followed a similar trend: when participants believed that they were taking "Generic Ibuprofen" they reported more side effects from the placebo tablets than from the actual Ibuprofen (which really could produce side effects)! When placebos produce negative effects this is called the Nocebo Effect. This Nocebo Effect was limited to the Generic-labeled tablets: no difference in side effects was seen between the placebo and Ibuprofen when they were labeled Nurofen. So a mistrust of off-brand medicine encouraged the Nocebo Effect.

Faasse et al.'s experiment would suggest that orange Skittles stored in an old Advil bottle would probably fix your sister's headache just as well as actual medicine! On the other hand, if she saw the Skittles come out of a store brand (generic) bottle she might be complaining an hour later: her head would still hurt and the "pills" you gave her would be upsetting her stomach. Of course, the side effects of tricking your sister would be far worse than a headache, so please don't try this at home.

Further Reading:

The Faasse et al. (2015) article is available online or can be obtained through your local college library.

Curious about the placebo effect in medicine? Get two perspectives from an article in the New England Journal of Medicine by Drs. Ted Kaptchuk and Franklin Miller, and from a TED talk by Dr. Lissa Rankin.

Orange Skittles resemble Advil: similarities between candy and medicine can lead to accidental poisoning of children. Play the Pill vs Candy game from California Poison Control. Can you tell the difference between a sweet treat and a pharmaceutical?

BONUS:

Our suspicion of generic brands extends to other products as well. Watch some of the Buzzfeed crew attempt to tell the difference between off-brand and brand name cereals.


Sunday, November 8, 2015

Sharing is caring

Sharing with those less fortunate is a value that many parents wish to pass on to their children, and one that is emphasized in many traditional religions. Apparently it is valued by guinea pigs as well:


Recently, a study about sharing has received a lot of mention in the media. Decety, Cowell, Lee, Mahasneh, Malcolm-Smith, Selcuk, and Zhou (2015) examined the influence of religion on the sharing behavior of more than 1,000 school children (ages five to twelve) in six different countries (USA, Canada, South Africa, Turkey, Jordan, and China).

Children were tested individually in their schools using a modified version of The Dictator Game. Each child was offered 30 stickers and asked to choose their ten favorites. They were then informed that there would not be enough time to test all of the children at the school, so they should donate some of their stickers to the children who would otherwise receive none. The number of stickers that children chose to donate served as the measurement of sharing.

The parents of these participants also completed a questionnaire about their religious identities and religious practices, including the number of times per week the family attended religious services and the family's level of spirituality. The most commonly reported identities were: Muslim (43%); Non-religious (28%); Christian (24%).

The results grabbed media attention because the children from Non-religious families shared more stickers than the children from Muslim or Christian families. This effect was the same regardless of the level of religious practice or belief of the Muslim or Christian children. The trend was strongest among the older children in the sample (ages 8-12 years), which suggests that the religious children had had more time to internalize the values of their faiths. The authors noted that children's ages, socioeconomic statuses, and countries of origin were also predictive of sharing, but they did not explore these results in depth.

In the United States, headlines like "Religion makes children more selfish, say scientists," and, "Nonreligious children are more generous," splashed across the Internet. Some people delighted in the irony of reported selfishness from people whose religions teach generosity. Others, especially some practitioners of Islam and Christianity, were offended. But do Decety et al.'s results deserve these strong reactions?

It is true that the study demonstrated a statistically significant difference between the Non-religious children and the Muslim and Christian children. That means that the results are unlikely to be due to chance, luck, or accident. So this trend is worthy of further study in future investigations of how religion influences sharing and other prosocial behaviors.

On the other hand, the average numbers of stickers shared by these groups were very similar. Non-religious children shared on average 4.09 of their ten stickers; Muslim and Christian children shared 3.20-3.33 of their ten stickers. The difference was consistent enough to reach statistical significance, but small enough that we would be unlikely to see a practical difference in our daily lives. In other words, if an elementary school class was made up of children from these three groups, the teacher would be unlikely to notice more or less generosity from any one group: all of the children would donate about three or four of their stickers to students who had none.
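If you are curious how a difference of less than one sticker can still be "statistically significant" in a sample of about 1,000 children, here is a minimal simulation sketch. The group means come from the study as summarized above; the standard deviation and the group sizes are assumptions I chose only for illustration.

```python
# A minimal sketch of "statistically significant but practically small."
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_nonrel, n_rel = 280, 720        # assumed split of the ~1,000 children
sd = 2.5                          # assumed spread of stickers donated

nonreligious = rng.normal(4.09, sd, n_nonrel)   # mean from the study
religious = rng.normal(3.30, sd, n_rel)         # mean from the study

t, p = stats.ttest_ind(nonreligious, religious)
pooled_var = ((n_nonrel - 1) * nonreligious.var(ddof=1)
              + (n_rel - 1) * religious.var(ddof=1)) / (n_nonrel + n_rel - 2)
cohens_d = (nonreligious.mean() - religious.mean()) / np.sqrt(pooled_var)

print(f"t = {t:.2f}, p = {p:.4f}")    # small p: the difference is "significant"
print(f"Cohen's d = {cohens_d:.2f}")  # around 0.3: a modest, hard-to-notice effect
```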

At the same time, these data tell us that we cannot assume that religious children will share more than non-religious children. In the United States this is increasingly important to understand as the share of American adults reporting that they are atheist, agnostic, or of "no religion in particular" has grown to almost 23%.

Further Reading:

An online version of the Decety et al. (2015) study is temporarily available; the Current Biology publication can be accessed through your local college library.

Read Phil Zuckerman's opinion piece on "How secular family values stack up," from the Los Angeles Times.

Regardless of their religion or lack of religion, parents can get excellent tips from the Ask Dr. Sears article on "11 ways to teach your child to share." This list also includes when you should not force your child to share.

BONUS:

From the children's television program Sesame Street, a music video, "Share It Maybe" - an educational parody of Carly Rae Jepsen's "Call Me Maybe."



Monday, November 2, 2015

What we fear is a mirror

Along with trick-or-treating and attending parties, some people celebrate Halloween by paying to get scared: they walk through the spooky scenes of a haunted house while actors jump from the shadows, scream, and taunt them.


Why do we like to get a little scared? The James-Lange theory of emotion would suggest that we experience our bodies' fear response, racing hearts and sweaty palms, but then mislabel that arousal as "fun" or "excitement." In addition, haunted houses keep us in the present moment so we cannot worry about anything else at the same time. People who are stressed before going into a haunted house often report the greatest relaxation from the experience.

But do we all get scared by the same things? Muroff, Spencer, Ross, Williams, Neighbors, and Jackson (2014) wondered if White and African American men and women have different ideas of what causes fear. They used a cognitive interview technique, asking a convenience sample of almost 200 participants (ages 18-85) a single question: "What makes an object or a situation fearful?" (p. 156).

From their answers the researchers distilled five categories of what makes something scary:

External Locus of Control: feeling out of control; not knowing what is about to happen.

Harm/Danger: risk that something would hurt or threaten your health/life.

Phobias: an irrational, intense fear of something that provokes immediate anxiety and/or panic.

Past Experience: having had or knowing about someone else's past experience with something that was fear-inducing.

Self-Perception: your belief that you will find something scary or not be able to handle something.

Muroff et al. then reexamined the participants' responses to see how often these categories appeared in the original explanations, both for the sample as a whole and for participants from specific groups. From these qualitative data, they found that (p. 157):

*The most common category for all participants was External Locus of Control: it was mentioned in 35% of all explanations. This was followed by Harm/Danger (29%) and Phobias (18%).

*White women were 1.5 times more likely than African-American women to include External Locus of Control as part of their answers.

*External Locus of Control appeared most in the explanations given by White women and the least in explanations given by White men. 

*Older participants' responses were more likely to include External Locus of Control.

*White men were 3.5 times more likely than African-American men to include Past Experience in their responses.

*Men were more likely than women to include references to Self-Perception.

*Phobia was most common in the answers given by participants who had not completed high school.

This would suggest that these same individuals would react differently to various aspects of a haunted house. The majority, but especially White female and older participants, would be scared by unpredictable situations: dark passageways; rooms filled with disorienting lighting and mirrors; going around blind corners; not being able to find the way out of a room.

White male participants would be scared of things that remind them of bad situations that they, or people they know, have experienced. For example, an actor throwing a fake punch toward them, or a torture scene that includes an actor being kicked in the genitals or having a thumb hit by a hammer.

Males, in general, would also be afraid if they were led to believe that they could not handle a situation. This could be induced by playing recorded messages like, "You are helpless," "Others have come here but nobody leaves alive," or, "You can fight but you are too weak to escape," as they walk through the haunted house.

Finally, participants who had not completed high school might be particularly afraid if the horrific scenes included common phobia triggers: for example, spiders, needles, blood, tight spaces, or heights.

Muroff et al. were not interested in how to build the scariest haunted house. Instead they were curious whether people from different groups may have different interpretations of an assessment question that is commonly used to diagnose phobias. Their results suggest that age, education level, sex, and race can influence how people conceptualize fear, and this may affect the accuracy of that phobia assessment.

The authors emphasized that the results likely reflect truly scary aspects of American culture. For example, women tend to have less financial power relative to men, and money allows you to better control your experiences. I would also add that women are more often cautioned to control their environments to prevent dangerous situations (e.g., never walk alone; don't leave your drink unattended; walk with purpose and stay in well-lit areas). This may be particularly true for White women, as the media more often portrays them as victims of violence - even though it is African American women who actually experience more violence. Perhaps this is why White women were more likely to endorse External Locus of Control as something that induces fear.

More clearly, Muroff et al. wrote, "Research on coping mechanisms for discriminated and stigmatized groups suggests that African American men and women may depend on various mechanisms for coping such as external locus of control and social learning" (p. 158). In this case external locus of control means that discrimination is explained as being random or a product of the situation, and thereby not caused by the person experiencing this unfair treatment. The authors explained that this beneficial coping mechanism would be less likely to be part of African Americans' explanations of fear.

Additionally they wrote, "Our findings also suggest that White respondents in this sample mention past experiences in their conceptualizations of fear more frequently than their African American counterparts. African Americans' experiences may be influenced more by unpredictable events including acts of racism and discrimination." Because of this, "African Americans may focus on the present and future in order to cope with adverse life experiences" (p. 158). When your past is full of fear it might not help you distinguish if a new situation is scary or not.

We can all exit a haunted house, but we can rarely step outside of our own experience of biological sex, race, and culture. The first step is to realize that we are all different: the same environment may be fun for some and scary for others.

Further Reading:

The Muroff et al. (2014) article can be accessed through your local college library.

Listen to National Public Radio's "Hidden Brain" podcast on the Science of Fear.

Read about neuroscientist Dr. Lisa Feldman Barrett, who creates a yearly haunted house in her basement based on the neuroscience of fear. The proceeds of the event are donated to charity.

BONUS:

Watch when talk show host Ellen DeGeneres sends her executive producer, Andy, and "Modern Family" star Eric Stonestreet through a haunted house. These men are both White: does their behavior match the results of Muroff et al.?

Andy Goes to a Haunted House with Eric Stonestreet, Part 2 | EllenTV.com

Sunday, October 25, 2015

Revenge of the preteen nerds

Many people get an education because they want good careers that earn good money. Being studious may not make you cool but it should set you up to meet that goal; students who party instead of studying may not be so successful:


This quotation is actually from Bill Gates, even if Actual Advice Mallard seems to be stating it here. Did Bill Gates get it right? Are today's nerdy kids likely to be well-employed and financially stable as adults?

Spengler, Brunner, Damian, Lüdtke, Martin, and Roberts (2015) examined data from a large-scale, longitudinal study done in Luxembourg, a small country in Europe. In 1968 the MAGRIP study collected data on Luxembourgish youngsters around age 12. This included measures of the children's IQs, their parents' levels of wealth and education (SES), ratings from teachers on how studious the children appeared to be, and self-reports about feelings of inferiority, being a responsible student, defying their parents' rules, and talking back to their parents.

In 2008, the second wave of MAGRIP followed up with 745 of these same individuals when they were in their 40s. At that time the participants were asked about their educational attainment (years of education after elementary school), their current or most recent jobs, and their yearly incomes. In their analysis, Spengler et al. wanted to know what traits at age 12 were associated with greater educational attainment, occupational success (as measured by prestige and social class), and higher incomes in middle age.

In line with past research, the participants who had higher IQs and higher-SES families when they were 12 attained more education by middle age. However, even when those two variables were accounted for, three other attributes predicted educational attainment: feeling that you are not as good as others; studiousness; and talking back to parents and not always following the rules. Two of these are not surprising: self-reported feelings of inferiority at age 12 were related to lower levels of education by one's 40s, while teachers' higher ratings of studiousness at age 12 predicted higher levels of education by middle age. Interestingly, reporting at age 12 that you talk back to and do not always obey your parents was also related to higher educational attainment in adulthood.

Likewise, more occupational success in middle age was predicted by having a higher IQ and coming from a higher-SES family at age 12. Even when those two variables were held steady, seeing yourself as a responsible student at age 12 predicted occupational success in your 40s. Higher teacher ratings of studiousness during childhood also predicted occupational success; however, this relationship played out through the amount of education that the participants attained. Spengler et al. clarified that being seen as studious by your teachers in childhood suggests that you have traits that might lead you to choose more years of, and more challenging levels of, education. In fact, educational attainment was the best predictor of occupational success.

The analysis revealed an unexpected finding related to personal income. The participants who at age 12 admitted that they talked back to their parents and did not always follow their rules were more likely to have higher incomes during their 40s! This was true even when childhood IQ, family SES, and lifetime educational attainment were taken into account. Spengler et al. caution us that this finding needs to be replicated in future research - so this is not a green light for preteens around the globe to sass back at their parents. They also offer two explanations: these individuals may be more willing to argue for higher wages, or they may have broken rules as adults to get this higher income.

I would add that it is also possible that they were raised by good, Authoritative parents. These parents express love to their children and raise them with the correct amount of fair discipline; they are also willing to discuss household rules with their children, conversations which may start off with "talking back." Authoritative parents want their kids to be able to reason about their behavior choices instead of simply showing obedience to authority, so it stands to reason that their children may not always follow the household rules to the letter. Authoritative parenting also prepares children for white collar jobs that require independent decision making and less emphasis on following orders; these jobs may also offer higher levels of pay.

Another possibility that I can imagine is that some of these individuals are Gifted. In general, higher IQs are related to higher incomes, and Gifted children have IQs at least two standard deviations above the average (roughly 130 or higher). Parents of Gifted children often joke about them being "little lawyers," because they can be argumentative even with adults in authority. So Spengler et al.'s surprising finding may simply reflect a variable that is related to unusually high intelligence.

The authors admit that the predictive powers of pre-teen Luxembourgish children's behavior in 1968 may not generalize: it may be that educational attainment, occupational success, and income level are predicted by different traits today and these traits may vary from culture to culture. On the other hand, the Spengler et al. analysis demonstrated that personal factors like being a responsible student, coming across to teachers as being studious, and how you relate to people in positions of authority can predict achievement beyond their relationships with general intelligence and social class. In that case, being a studious (and perhaps argumentative) nerd may be enough: you don't have to be a genius like Bill Gates or have a wealthy parent like Bill Gates to become an adult with a good job and a good income.

Further Reading:

The Spengler et al. (2015) article can be accessed through your local college library.

Clinical Psychologist Kelly Flanagan blogged for The Huffington Post about why he thinks "...Every Kid Should Talk Back to Their Parents."

Serious students may have a good chance of landing a job found in U.S. News and World Report's list of "The 100 Best Jobs" of 2015.

BONUS: Watch an amusing promotional video titled "Is it true what they say about... Luxembourg?" made by the Luxembourg National Tourism Board. Spoiler Alert: no volcanoes; yes happy dogs.




Sunday, October 18, 2015

Phoning it in on exams

Even in a crowded lecture hall your professors know when you are using your smartphone during class.


Some instructors ignore this, some punish this, but most of them do not like it.


Although students might think that class rules limiting or banning the use of mobile phones are cruel, in truth they come from a good place: most faculty believe that your phones will hurt your grades by stealing your attention. A basic idea from cognitive psychology is that you have to pay attention to something if you want to commit it to memory or be able to use the idea effectively.


A study done by Bjornsen and Archer (2015) further supports this argument. These professors spent two semesters in their psychology classes surveying a total of 218 college students about their in-class cell phone use. At the end of each class the students answered a questionnaire about the number of times that they looked at their phones for: social networking (texting, email, social network apps like Facebook and Instagram); getting information (Googling, checking the online syllabus, checking a website related to class); organizing their lives (personal calendar); or playing a game. The students were assured that their answers would not influence how the professor graded their work, so the assumption is that these self-reports were honest.

The authors compared the amount of each type of mobile phone use to test scores in these classes. They found that both social media use and gaming during class were associated with lower test scores, with playing games being much worse than using social media. Because social media use was more common than playing games, Bjornsen and Archer looked at these data in a more detailed way. They divided the students who used social media during class into high (5x per class), medium (2.4x per class), and low (1x per class) groups and then looked at average test scores for each group. Across five exams the high in-class social media users scored an average of 74% and the low social media users scored an average of 76%. These scores are close, but they could be the difference between getting a C or a C+ in a class.

However, one of these effects changed when Bjornsen and Archer controlled for overall GPAs (grade point averages). As you might guess the students with the worst GPAs were more likely to play games in class, so the relationship between game playing and low test scores was likely influenced by this third variable: it disappeared once GPA was factored in. On the other hand, even when overall GPA was included as a factor, the relationship between in-class social media use and lower exam scores remained significant.
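Controlling for a third variable works roughly as in this minimal sketch, which uses simulated numbers rather than Bjornsen and Archer's data: when in-class gaming and exam scores are both driven by GPA, they show a sizable raw correlation that shrinks toward zero once GPA is regressed out of both.

```python
# A minimal sketch (simulated data, not the authors') of how a third
# variable such as GPA can explain away a raw correlation.
import numpy as np

rng = np.random.default_rng(1)
n = 218
gpa = rng.normal(3.0, 0.5, n)
# Assume in-class gaming is driven mostly by (low) GPA, plus noise:
gaming = 2.0 - 0.5 * gpa + rng.normal(0, 0.3, n)
# Assume exam scores are driven by GPA alone, plus noise:
exam = 60 + 8 * gpa + rng.normal(0, 4, n)

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print("raw r(gaming, exam):       ", round(np.corrcoef(gaming, exam)[0, 1], 2))
print("partial r, controlling GPA:", round(partial_corr(gaming, exam, gpa), 2))
```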

This implies that even students who have higher GPAs still score lower on exams when they use social media during those classes! From past research on college students we know that more than 90% admit to using cellphones during class and that 77% do not believe that this would cause problems with their learning. Bjornsen and Archer's results suggest that students, especially students who do well in school, may not be aware that their test scores might be a bit higher if they reduced or stopped using social media during class time.

Before you worry that this study will drive professors to ban cell phones, the authors suggest that this is not sensible given that today's university students are used to having their smartphones as constant companions. They cite past research about this population: college students spend 5-9 hours a day on their phones, which includes 95 minutes of texting; 95 minutes on social networking apps; 49 minutes emailing; and 34 minutes surfing the Web. Based on this, Bjornsen and Archer suggest some integration of phones into the classroom. For example, I will ask students to put away their phones if they are not participating or are distracting others in the class, but I don't mind if students take pictures of notes and images that I project onto the screen at the front of the room (even though there is good research behind the idea that taking notes by hand increases your memory and understanding of those notes).

 
Ultimately university students are adults who should weigh the benefits and costs of using their phones in class. For some, being connected to friends and family will be more important than scoring a few points higher on exams. For others, every point matters because their grades are crucial for getting into programs like Nursing, keeping scholarships, or having good GPAs for graduate school applications. If you aren't sure which group you fall into...ask Siri?

Further Reading:

The Bjornsen and Archer (2015) article can be accessed through your local college library.

There's an app for that! In 2014 college students Mitch Gardner and Rob Richardson created Pocket Points, an app that rewards students for not checking their phones during class. Find out if your university uses this system - if so you can get discounts at stores and restaurants. As if scoring 2% higher on average exam scores was not enough!

Cell or mobile phone addiction is addressed by WebMD. Find out the symptoms and suggestions for managing your smartphone time.

Sunday, October 11, 2015

When I'm 64

As a serious college student I remember feeling old before my time compared to some of my classmates who would blow off studying to socialize. Now as a middle-aged professor I don't "feel" as old as many of my peers (even though my daughter reminds me all of the time that I am "very old"). Apparently my experience is shared by Yoko Ono:


Chronological age, or how many years old you are, is just one way to think about aging. You can also consider: Biological age - how healthy you are; Social age - the habits you have and the roles that you take on; and Psychological age - how well you reason and think.

A recent study highlights how people's perceptions of their Biological and Social ages can influence their Subjective ages - or how old they feel. Stephan, Demulier, and Terracciano (2012) asked more than 1,000 French adults ages 18-91 to rate their physical health, to complete a version of the Big Five personality inventory, and to report their subjective ages.

The results support the idea that how old we feel is based on more than how old we are. These effects varied depending on the ages of the participants. The authors did not state the ranges of the age groups, but we would usually assume that their young adults were ages 18-39, middle-aged adults were 40-59, and older adults were 60-91.

A strong relationship between chronological age, health, and subjective age emerged. Middle aged and older adult participants who rated themselves as being in good health were more likely to say that they felt younger than they actually were. Stephan et al. clarify that, on average, these middle aged adults felt two years younger and these older adults felt four years younger. These results support the importance of considering a person's Biological age.

When the authors controlled for health and demographic factors a relationship between chronological age, certain Big Five traits, and subjective age appeared. Because our personality traits often relate to our behaviors and activities, these results support the importance of assessing a person's Social age. For example, young adults who were high in Conscientiousness felt older than they really were. Conscientiousness implies being responsible and organized. Motivated and reliable young adults might feel like they are older than their peers because these are not characteristics that Western culture often associates with that age.

Middle aged and older adults who were high in Openness to Experience, and older adults who were high in Extraversion felt younger than they were. Openness to Experience has to do with engaging in diverse interests and being open to new ideas; Extraversion is associated with being out-going and dynamic. Both are traits that Western culture does not associate with middle and older age, so people with these traits are likely to feel younger than they really are as they age.

These results explain the experience that Yoko Ono and I share. When I was a young adult, my conscientious behavior did not match the stereotype about my age: so I felt older. Now, as a middle aged person who has broad interests and loves creativity, my self-perception again runs contrary to the stereotype about my age: so I feel younger. In a few decades, I can predict that I will continue to feel younger because I am relatively high in Extraversion.

This research also raises a question about Western stereotypes about age. What does it mean that we view young adults as irresponsible, middle-agers and seniors as stuck in their ways, and senior citizens as being unsociable? If we can imagine a time that these negative assumptions are no longer part of our culture, it would have implications for subjective age. Instead of feeling older or younger than our chronological age we would simply be that old and recognize that at all ages individuals can differ on Big Five traits.

Further Reading:

A pre-publication version of the Stephan et al. (2012) article can be read here thanks to the National Institutes of Health. The Psychology and Aging article can be accessed through your local college library.

A blog post from the AARP about research done by Rippon and Steptoe (2015): "Feeling Old vs Being Old." More support for Biological age!

Know an Extraverted senior? The social networking site Meet Up has groups around the world for outgoing older people who want to socialize!

Sunday, October 4, 2015

In defense of Kristen Stewart

There are many memes mocking Kristen Stewart for not smiling. This week's meme is one of them:

 
However, if you type her name into a Google Images search, and then do a second search for her Twilight co-star Robert Pattinson you will see that the two actors are both pictured smiling and not smiling. When I did this (albeit unscientific) search and compared the first 25 images for both actors, I found that both of them were shown smiling 11 times and not smiling 14 times. So why the shade for Kristen while Robert's image remains sparkling?

Part of it is due to gender expectations in our culture: women smile more than men and are punished more for not smiling. The most recent review of this phenomenon occurred in 2003 when LaFrance, Hecht, and Paluck published a meta-analysis of 162 studies. A meta-analysis allows researchers to statistically combine the results of many studies to determine if a difference exists and how big (or meaningful) that difference may be. The overall results from LaFrance et al. confirm that across these studies there is a small to moderate, or noticeable in the real world, effect of men smiling less. Fans are used to seeing women smiling so they notice and react poorly when Kristen Stewart bucks that gender expectation. Likewise, fans are used to men not smiling so the same facial expressions from Robert Pattinson go unnoticed as if they were invisible.
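If you wonder what "statistically combine" means in practice, here is a minimal sketch of a fixed-effect meta-analysis. The effect sizes (Cohen's d, women smiling more than men) and sample sizes below are hypothetical; the real LaFrance et al. analysis pooled 162 studies.

```python
# A minimal sketch of a fixed-effect meta-analysis: average the effect
# sizes across studies, weighting larger studies more heavily.
import numpy as np

d = np.array([0.55, 0.30, 0.10, 0.45, 0.25])   # hypothetical per-study Cohen's d
n = np.array([40, 120, 60, 200, 90])           # hypothetical per-study sample size

# Approximate sampling variance of d (assuming equal group sizes) and
# inverse-variance weights:
var_d = 4 / n + d**2 / (2 * n)
w = 1 / var_d

d_combined = np.sum(w * d) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"combined d = {d_combined:.2f} "
      f"(95% CI {d_combined - 1.96 * se:.2f} to {d_combined + 1.96 * se:.2f})")
```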

The researchers also used statistics to examine the different contingencies of the studies to see what is associated with this difference getting smaller or larger. In some cases it would come out smaller than the overall difference: in these situations women and men were closer to smiling at similar rates. Many of these effects were very small or even close to zero, which means that in the real world these contexts would likely be associated with very few observable differences between women and men:
*when people are not aware of being observed
*when they are in a group of four or more people (so the focus is not on one person)
*when they are not interacting with the people around them
*when they are very familiar with each other
*if they are comfortable because there is low pressure to impress
*when they are talking to a younger person or an older person
*when they are paired with a woman
*when they are interacting with somebody of the opposite sex
*when they share equal power with the other person
*if they are asked to play a role that requires caretaking, like taking care of a baby
*if they are forced to argue against the other person
*if people are from England
*if people are African-American
*if people are middle aged or senior citizens

In other cases this gender difference would come out larger than the overall difference: in these situations women were even more likely to smile than men. These range from moderate to almost high effects, which means that in the real world these contexts would likely be associated with actual observable differences between women and men:
*when people are alone (and presumably self-conscious about being observed)
*when people are alone but asked to imagine another person being with them
*when they are paired with a man
*if they are asked to persuade somebody
*if they have to reveal personal information about themselves
*if they are made to feel embarrassed
*if people are Canadian
*if people are teenagers (a time of gender intensification)

Looking at the results, LaFrance et al. note that "...the extent of sex differences in smiling is highly contingent on social groups and social factors" (p. 326). In simpler language, men tend to smile less than women, but when this happens and how obvious it is depends on the characteristics of the situation. For example, there are personal and demographic factors like age, race, and culture. There is also the question of what is required in the situation: do they have to persuade, argue, or be in charge of the care of another being? Who are they interacting with - do they share the same age, sex, or level of power in the situation?

Notably for Kristen Stewart, the results also demonstrate that people are more likely to show this gender difference when they know that they are being watched, when they imagine that they are being watched, and when they feel like they need to make a good impression (or instead are facing embarrassment). So another reason that fans may be critical is that, by being an actor and a public figure, she is constantly in these contexts yet she does not do what most women would do in those situations: she does not smile. On the other hand, if Robert Pattinson reacts the same way on the red carpet he is actually doing what we expect men to do in those situations, so once again he escapes criticism. And that really bites.

Further Reading:

A pdf of the LaFrance et al. (2003) article can be accessed on Dr. Elizabeth (Betsy) Paluck's website.

 Kristen Stewart may wish to work on her smile - not for the fans - but for how smiling, even fake smiling, might help her deal with stress. Read the Association for Psychological Science (APS) coverage of research done by Kraft and Pressman (2012).

Kristen Stewart is not alone. Read Emily Matchar's article, "Memoirs of an Un-Smiling Woman," from The Atlantic.