Monday, November 30, 2015

Creativity may be a joke, but it is no accident

The late American artist and televised painting teacher, Bob Ross, was best known for his positive attitude summed up by his classic saying, "we don't make mistakes, we just have happy accidents." Because of this cheery outlook he has been much loved and often parodied in our culture:


If Bob Ross were creative, would he be likely to think aggressive memes like this one were funny? Was Bob Ross creative because he was so positive, or because he was a complex person? (He was also a Master Sergeant in the United States Air Force, a rescuer of injured squirrels in Muncie, Indiana, the father of three boys from two marriages, and a painter of - as he often said - "happy trees.")

Chang, Chen, Hsu, Chan, and Chang (2015) were interested in a similar question: does a person's style of humor relate to that person's level of creativity? To answer this, they conducted a large study (1,252 participants) with Chinese 13-year-olds living in Taiwan. The measures that they used were translated into Chinese and were chosen because they had worked well in cross-cultural settings (so the participants' nationality should not influence the results).

The teenagers completed a Humor Styles Questionnaire, and a cluster analysis of their answers placed them into groups. Similar to past research, four styles emerged:

1) Positive Humor: joking about yourself in a positive way (showing off) or joking about others in a positive way to show that you like them.

2) Negative Humor: joking about yourself in a negative way (putting yourself down) or joking about others in a negative way to be aggressive. For example, implying that nobody will date you!

3) General Humor: joking in both positive and negative ways about yourself and about others. In this case humor can be used to both befriend and to be aggressive at times.

4) Humor Deniers: below average use of any type of humor.

Among other measures, the participants took a modified figure drawing test of creative thinking. Each teenager received a piece of paper with 27 versions of the Chinese character for "Human" printed on it. They were asked to doodle upon each symbol to create drawings that incorporated that shape. The drawings were rated on: Fluency (how many of the 27 versions were drawn upon in ten minutes); Flexibility (the number of categories in the drawings' themes); Originality (how novel the drawings were compared to others); and Elaboration (the number of details in the drawings).
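The four ratings above can be sketched as a simple scoring function. This is a hypothetical illustration, not the authors' actual procedure: the drawing records and theme labels below are invented, and Originality is omitted because it can only be judged relative to the whole sample of drawings.

```python
# Hypothetical drawing records for one teenager; "theme" and "details"
# stand in for raters' judgments (not the study's actual coding scheme).
drawings = [
    {"theme": "animal", "details": 5},
    {"theme": "animal", "details": 2},
    {"theme": "building", "details": 7},
]

def creativity_scores(drawings):
    """Three of the four ratings; Originality would need the whole sample."""
    return {
        "fluency": len(drawings),                            # symbols drawn upon (max 27)
        "flexibility": len({d["theme"] for d in drawings}),  # distinct theme categories
        "elaboration": sum(d["details"] for d in drawings),  # total count of details
    }

scores = creativity_scores(drawings)
```

Under this toy data, the teenager would score 3 for Fluency, 2 for Flexibility (animal and building themes), and 14 for Elaboration.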

The most robust results strongly supported a theory of creativity: Intrapersonal Variability. In this perspective, "...individuals who hold opposing or conflicting traits...[or have] complex personalities..." are more creative (p. 307). The teenagers whose main humor style was General, or full of contradictions and complexity, scored noticeably higher than the other students on all four ratings of creativity.

On the other hand, one comparison did seem to support an alternate theory: the Positivity Perspective suggests that positive humor is linked to more creative thinking. Only in the ratings of Elaboration did participants with Positive Humor outscore the participants with Negative Humor.

If we accept that Bob Ross was a creative man, the most prominent results of this study support that his complex personality and conflicting life experiences may have increased that creativity. Chang et al. explain that creativity may be enhanced when people are able to accept their inner conflicts (self-integration) or when their complex personalities allow them to be psychologically flexible in their thinking.

At the same time, the Elaboration finding implies that Bob Ross' attention to detail in his paintings may have been, at least in part, related to his positive outlook on life. The Positivity Perspective assumes that using positive humor is associated with being in a positive mood, which is assumed to allow one to be more creative, or in this case, to embellish creative acts.

Although Bob Ross' humor clearly included positivity, his life experiences may have led him to embrace a General Humor style, which in turn may have fostered his creativity. In that case Bob Ross would probably find this week's meme funny, even though its humor is aggressive.

Further Reading:

The Chang et al. (2015) article can be accessed through your local college library.

Watch a talk by Dr. Robert Provine, a psychologist and neuroscientist, on laughter.

Read a press release from the Association for Psychological Science about a research study by Vohs, Redden, and Rahinel (2013) that links a messy work environment to increased creativity. 

BONUS:

Feel the positivity! Feel the creativity! The Public Broadcasting Service (PBS) Digital Studios created a really amazing musical remix of Bob Ross using manipulated clips from his television program.




Monday, November 23, 2015

Tryptophan saves the day?

This week in the United States we will celebrate Thanksgiving. In addition to feelings of gratitude, family gatherings can also stir up heated arguments about political issues and other differences between family members. Today's meme suggests that Tryptophan, an amino acid found in turkey meat, the centerpiece of most Thanksgiving feasts, may be an antidote to this problem:


Tryptophan naturally occurs in turkey meat and other common foods (including beans for vegetarians). Its presence in the brain has been linked to an increase in Serotonin, a neurotransmitter associated with positive mood and decreased aggression. Could eating Tryptophan-rich foods decrease fighting? An experiment with non-human subjects suggests that a touch of Tryptophan might help.

Walz, Stertz, Fijtman, dos Santos, and de Almeida (2013) divided male mice into five groups: four groups received Tryptophan at a concentration of 1%, 2%, 3%, or 10% in 30 ml of a carrier liquid; the fifth group was the control group and received the carrier liquid without any Tryptophan. Immediately after dosing, the mice were individually exposed to an intruder: a stranger male mouse. This encounter lasted for five minutes while the test mouse's behaviors were recorded. This experience occurred eight times for all male mice from the five groups.

When the animals' reactions were coded across these trials, Walz et al. found that the mice dosed with the 1% and 2% Tryptophan solutions were less likely than the control group to aggressively bite or to threaten the stranger mouse from the side. The other doses did not show a significant reduction in these aggressive behaviors, and none of the doses was related to changes in non-aggressive behaviors, such as activity levels or grooming. At a low dose, Tryptophan is speculated to raise Serotonin levels just enough to take the edge off this stressful experience. At larger doses, Serotonin may rise so high that it sends a feedback signal to the mouse's body to decrease Serotonin production, thereby undoing any good effect of supplementation.

The authors concluded, "...that low doses of [Tryptophan] are able to reduce aggressive behavior in male mice....Tryptophan supplementation may be an alternative treatment for aggression in groups that exhibit such behavior" (p. 400). Of course, we can't be sure that the same dose of Tryptophan (especially if it is combined with cranberry sauce, mashed potatoes, and stuffing) would have the same effect on humans or that it would diminish verbal aggression as it did physical aggression. Walz et al. encouraged further research: "To control aggressiveness, a person's diet may be an important factor" (p. 397).

Until then, when discussions get heated at your next family gathering, try changing the subject to something that everyone can agree on. Until we know the details about Tryptophan we will apparently have to rely on Adele.




Further Reading:

The Walz et al. (2013) article is available online and the Psychology and Neuroscience journal article can be accessed through your local college library.

People used to think that the Tryptophan in turkey was responsible for that sleepy feeling so many of us have after the big meal. Find out what is more likely to blame in this Live Science piece by Tanya Lewis.

Why do family celebrations so often turn into family fights? Read Olga Khazan's article in The Atlantic:  "Why families fight during the holidays."

Sunday, November 15, 2015

Taste the placebo

One of my students asked me if I make the memes that I use in this blog: the answer is "no" - truly, I am too busy grading papers. Fortunately there are a lot of memes floating around social media. Most of the time I start with a meme and then do a literature search to find recent research on that topic. Other times, like this week, I find a great article then search Google Images for a meme to match. Sometimes I stumble upon the perfect combination and reach article/meme perfection:


Would it be mean to trick a hurting person into thinking that a piece of candy is actually a tablet of pain reliever? The answer would be "yes" if the act of swallowing that orange Skittle had no effect on her headache - but what if it helped just as much as a real Advil?

Faasse, Martin, Grey, Gamble, and Petrie (2015) conducted a simple experiment to answer a similar question. They recruited 87 New Zealander university students (83% female) who frequently suffered from headaches. The students were each given four identical-looking tablets: two were labeled "Nurofen," a brand name for Ibuprofen - like Advil in the United States, and the other two were labeled "Generic Ibuprofen." Thus, each of them believed that he or she received two brand name pills and two generic pain-relief pills.

However, the researchers had tricked them with some mild deception! For each participant one of the tablets labeled "Nurofen" was actually a placebo (a sugar pill) containing no medicine - much like a Skittle; likewise, one of the tablets labeled "Generic Ibuprofen" was also a placebo. The remaining two tablets were always identical doses (400 mg) of Ibuprofen; there was no difference beyond how they were labeled. This was a within-subjects experiment because all participants experienced all of the possible versions of the tablet: placebo labeled Nurofen; Ibuprofen labeled Nurofen; placebo labeled Generic; Ibuprofen labeled Generic.

The participants were instructed to use the tablets for their next four headaches, and the order of the pills was dictated by the researchers for counter-balancing. Each time the participants got headaches they had to rate the intensity of their pain, take the designated tablet, wait one hour, and then rate the intensity of any remaining pain and note any side effects.
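The design described above can be sketched in a few lines. This is a hypothetical reconstruction: the four conditions come straight from the description, but the specific counterbalancing scheme shown here (a simple Latin square) is an assumption, since the paper's exact ordering procedure is not given in this post.

```python
from itertools import product

# The 2x2 within-subjects design: every participant takes all four
# label x content combinations across their next four headaches.
labels = ["Nurofen", "Generic Ibuprofen"]
contents = ["ibuprofen 400 mg", "placebo"]
conditions = [f"{label}: {content}" for label, content in product(labels, contents)]

def latin_square(items):
    """One simple counterbalancing scheme: rotate the condition list so
    each condition appears exactly once in every ordinal position."""
    n = len(items)
    return [[items[(start + i) % n] for i in range(n)] for start in range(n)]

orders = latin_square(conditions)  # assign participants to these 4 orders in turn
```

Cycling participants through these four orders guarantees that, across the sample, no condition is systematically taken for the first (or last) headache, which is the point of counterbalancing.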

As you would expect, the overall results showed that Ibuprofen did a better job than the placebo at resolving headache pain. What was more interesting to Faasse et al. was that placebo and Ibuprofen tablets labeled Nurofen were rated equally well on pain relief. In this case, the Placebo Effect was clear: just believing that you took brand name Ibuprofen was enough to cure your pain as well as the real medicine would!

Alternatively, when participants took the tablets labeled "Generic Ibuprofen," pain relief was much higher from the actual Ibuprofen than from the placebo. Again participants' beliefs influenced their experiences: in this case a mistrust of off-brand medicines overcame the Placebo Effect.

Side effect data followed a similar trend: when participants believed that they were taking "Generic Ibuprofen" they reported more side effects from the placebo tablets than from the actual Ibuprofen (which really could produce side effects)! When placebos produce negative effects this is called the Nocebo Effect. This Nocebo Effect was limited to the Generic-labeled tablets. No difference in side effects was seen between the placebo and Ibuprofen when they were labeled Nurofen, so a mistrust of off-brand medicine encouraged the Nocebo Effect.

Faasse et al.'s experiment would suggest that orange Skittles stored in an old Advil bottle would probably fix your sister's headache just as well as actual medicine! On the other hand, if she saw the Skittles come out of a store-brand (generic) bottle she might be complaining an hour later: her head would still hurt and the "pills" you gave her would be upsetting her stomach. Of course, the side effects of tricking your sister would be far worse than a headache, so please don't try this at home.

Further Reading:

The Faasse et al. (2015) article is available online or can be obtained through your local college library.

Curious about the placebo effect in medicine? Get two perspectives from an article in the New England Journal of Medicine by Drs. Ted Kaptchuk and Franklin Miller, and from a TED talk by Dr. Lissa Rankin.

Orange Skittles resemble Advil: similarities between candy and medicine can lead to accidental poisoning of children. Play the Pill vs Candy game from California Poison Control. Can you tell the difference between a sweet treat and a pharmaceutical?

BONUS:

Our suspicion of generic brands extends to other products as well. Watch some of the Buzzfeed crew attempt to tell the difference between off-brand and brand name cereals.


Sunday, November 8, 2015

Sharing is caring

Sharing with those less fortunate is a value that many parents wish to pass on to their children, and one that is emphasized in many traditional religions. Apparently it is valued by guinea pigs as well:


Recently, a study about sharing has received a lot of mention in the media. Decety, Cowell, Lee, Mahasneh, Malcolm-Smith, Selcuk, and Zhou (2015) examined the influence of religion on the sharing behavior of more than 1,000 school children (ages five to twelve) in six different countries (USA, Canada, South Africa, Turkey, Jordan, and China).

Children were tested individually in their schools using a modified version of The Dictator Game. Each child was offered 30 stickers and asked to choose their ten favorites. They were then informed that there would not be enough time to test all of the children at the school, so they should donate some of their stickers to the children who would otherwise receive none. The number of stickers that children chose to donate served as the measurement of sharing.

The parents of these participants also completed a questionnaire about their religious identities and religious practices, including the number of times per week the family attended religious services and the family's level of spirituality. The most commonly reported identities were: Muslim (43%); Non-religious (28%); Christian (24%).

The results grabbed media attention because the children from Non-religious families shared more stickers than the children from Muslim or Christian families. This effect was the same regardless of the level of religious practice or belief of the Muslim or Christian children. The trend was strongest among the older children in the sample (ages 8-12 years) which implies that the religious children had a longer time to internalize the values of their faiths. The authors noted that children's ages, socioeconomic statuses, and countries of origin were also predictive of sharing, but they did not explore these results in depth.

In the United States, headlines like "Religion makes children more selfish, say scientists," and, "Nonreligious children are more generous," splashed across the Internet. Some people delighted in the irony of reported selfishness from people whose religions teach generosity. Others, especially some practitioners of Islam and Christianity, were offended. But do Decety et al.'s results deserve these strong reactions?

It is true that the study demonstrated a statistically significant difference between the Non-religious children and the Muslim and Christian children. That means that the results are unlikely to be due to chance. So this trend is worthy of further study in future investigations of how religion influences sharing and other prosocial behaviors.

On the other hand, the average numbers of stickers shared by these groups were very similar. Non-religious children shared on average 4.09 of their ten stickers; Muslim and Christian children shared 3.20-3.33 of their ten stickers. The difference was consistent enough to reach statistical significance, but small enough that we would be unlikely to see a practical difference in our daily lives. In other words, if an elementary school class were made up of children from these three groups, the teacher would be unlikely to notice more or less generosity from any one group: all of the children would donate about three or four of their stickers to students who had none.
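The gap between "statistically significant" and "practically noticeable" can be made concrete with a rough calculation. The group means below are the ones reported above, but the standard deviation and group sizes are assumed values chosen only for illustration; they are not from the paper.

```python
import math

# Reported means (stickers shared out of 10); SD and n are assumptions.
mean_nonreligious = 4.09
mean_religious = 3.27        # within the reported 3.20-3.33 range
sd = 2.5                     # hypothetical pooled standard deviation
n_per_group = 500            # hypothetical group size, roughly the study's scale

diff = mean_nonreligious - mean_religious

# Cohen's d: the difference expressed in standard-deviation units.
cohens_d = diff / sd

# Approximate z statistic for a two-sample comparison of this size:
# with large samples, even a small d is statistically significant.
standard_error = sd * math.sqrt(2 / n_per_group)
z = diff / standard_error
```

Under these assumed numbers the comparison is highly significant (z well above the 1.96 cutoff) even though the effect is only about a third of a standard deviation, which matches the post's point: real, but too small for a teacher to notice in a classroom.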

At the same time, these data tell us that we cannot assume that religious children will share more than non-religious children. In the United States this is increasingly important to understand as the number of Americans reporting that they are atheist, agnostic, or of "no religion in particular" has increased to almost 23% of adults.

Further Reading:

An online version of the Decety et al. (2015) study is temporarily available; the Current Biology publication can be accessed through your local college library.

Read Phil Zuckerman's opinion piece on "How secular family values stack up," from the Los Angeles Times.

Regardless of their religion or lack of religion, parents can get excellent tips from the Ask Dr. Sears article on "11 ways to teach your child to share." This list also includes when you should not force your child to share.

BONUS:

From the children's television program, Sesame Street, a music video "Share It Maybe," an educational parody of Carly Rae Jepsen's "Call Me Maybe."



Monday, November 2, 2015

What we fear is a mirror

Along with trick-or-treating and attending parties, some people celebrate Halloween by paying to get scared: they walk through the spooky scenes of a haunted house while actors jump from the shadows, scream, and taunt them.


Why do we like to get a little scared? The Schachter-Singer two-factor theory of emotion would suggest that we experience our bodies' fear response, racing hearts and sweaty palms, but then relabel that arousal as "fun" or "excitement." As well, haunted houses keep us in the present moment so we cannot worry about anything else at the same time. People who were stressed before going into a haunted house often report the highest amount of relaxation from the experience.

But do we all get scared by the same things? Muroff, Spencer, Ross, Williams, Neighbors, and Jackson (2014) wondered if White and African American men and women have different ideas of what causes fear. They used a cognitive interview technique, asking a convenience sample of almost 200 participants (ages 18-85) a single question: "What makes an object or a situation fearful?" (p. 156).

From their answers the researchers distilled five categories of what makes something scary:

External Locus of Control: feeling out of control; not knowing what is about to happen.

Harm/Danger: risk that something would hurt or threaten your health/life.

Phobias: an irrational, intense fear of something that provokes immediate anxiety and/or panic.

Past Experience: having had or knowing about someone else's past experience with something that was fear-inducing.

Self-Perception: your belief that you will find something scary or not be able to handle something.

Muroff et al. then reexamined the participants' responses to see how often these categories appeared in the explanations given by the full sample and by specific groups. From these qualitative data, they found that (p. 157):

*The most common category for all participants was External Locus of Control: it was mentioned in 35% of all explanations. This was followed by Harm/Danger (29%) and Phobias (18%).

*White women were 1.5 times more likely than African-American women to include External Locus of Control as part of their answers.

*External Locus of Control appeared most in the explanations given by White women and the least in explanations given by White men. 

*Older participants' responses were more likely to include External Locus of Control.

*White men were 3.5 times more likely than African-American men to include Past Experience in their responses.

*Men were more likely than women to include references to Self-Perception.

*Phobia was most common in the answers given by participants who had not completed high school.

This would suggest that these same individuals would react differently to various aspects of a haunted house. The majority, but especially White female and older participants, would be scared by unpredictable situations: dark passageways; rooms filled with disorienting lighting and mirrors; going around blind corners; not being able to find the way out of a room.

White male participants would be scared of things that remind them of bad situations that they, or people they know, have experienced. For example, an actor throwing a fake punch toward them, or a torture scene that includes an actor being kicked in the genitals or having a thumb hit by a hammer.

Males, in general, would also be afraid if they were led to believe that they could not handle a situation. This could be induced by playing recorded messages like, "You are helpless," "Others have come here but nobody leaves alive," or, "You can fight but you are too weak to escape," as they walk through the haunted house.

Finally, participants who had not completed high school might be particularly afraid if the horrific scenes included common phobia triggers: for example, spiders, needles, blood, tight spaces, or heights.

Muroff et al. were not interested in how to build the scariest haunted house. Instead they were curious if people from different groups may have different interpretations of an assessment question that is commonly used to diagnose phobias. Their results suggest that age, education level, sex, and race can influence how people conceptualize fear, and this may affect the accuracy of that phobia assessment.

The authors emphasized that the results likely reflect truly scary aspects of American culture. For example, women tend to have less financial power relative to men, and money allows you to better control your experiences. I would also add that women are more often cautioned to control their environments to prevent dangerous situations (e.g., never walk alone; don't leave your drink unattended; walk with purpose and stay in well-lit areas). This may be particularly true for White women, as the media more often portray them as victims of violence - even though it is African American women who actually experience more violence. Perhaps this is why White women were more likely to endorse External Locus of Control as something that induces fear.

More clearly, Muroff et al. wrote, "Research on coping mechanisms for discriminated and stigmatized groups suggests that African American men and women may depend on various mechanisms for coping such as external locus of control and social learning" (p. 158). In this case external locus of control means that discrimination is explained as being random or a product of the situation, and thereby not caused by the person experiencing this unfair treatment. The authors explained that this beneficial coping mechanism would be less likely to be part of African Americans' explanations of fear.

Additionally they wrote, "Our findings also suggest that White respondents in this sample mention past experiences in their conceptualizations of fear more frequently than their African American counterparts. African Americans' experiences may be influenced more by unpredictable events including acts of racism and discrimination." Because of this, "African Americans may focus on the present and future in order to cope with adverse life experiences" (p. 158). When your past is full of fear it might not help you distinguish if a new situation is scary or not.

We can all exit a haunted house, but we can rarely step outside of our own experience of biological sex, race, and culture. The first step is to realize that we are all different: the same environment may be fun for some and scary for others.

Further Reading:

The Muroff et al. (2014) article can be accessed through your local college library.

Listen to National Public Radio's "Hidden Brain" podcast on the Science of Fear.

Read about Neuroscientist Dr. Lisa Feldman Barrett who creates a yearly haunted house in her basement based on the neuroscience of fear. The proceeds of the event are donated to charity.

BONUS:

Watch when talk show host Ellen DeGeneres sends her executive producer and "Modern Family" star, Eric Stonestreet, through a haunted house. These men are both White: does their behavior match the results of Muroff et al.?


Sunday, October 25, 2015

Revenge of the preteen nerds

Many people get an education because they want good careers that earn good money. Being studious may not make you cool but it should set you up to meet that goal; students who party instead of studying may not be so successful:


This quotation is actually from Bill Gates, even if Actual Advice Mallard seems to be stating it here. Did Bill Gates get it right? Are today's nerdy kids likely to be well-employed and financially stable as adults?

Spengler, Brunner, Damian, Lüdtke, Martin, and Roberts (2015) examined data from a large-scale, longitudinal study done in Luxembourg, a small country in Europe. In 1968 the MAGRIP study collected data on Luxembourgish youngsters around age 12. This included measures of the children's IQs, their parents' levels of wealth and education (SES), ratings from teachers on how studious the children appeared to be, and self-reports about feelings of inferiority, being a responsible student, defying their parents' rules, and talking back to their parents.

In 2008, the second wave of MAGRIP followed up with 745 of these same individuals when they were in their 40s. At that time the participants were asked about their educational attainment (years of education after elementary school), their current or most recent jobs, and their yearly incomes. In their analysis, Spengler et al. wanted to know what traits at age 12 were associated with greater educational attainment, occupational success (as measured by prestige and social class), and higher incomes in middle age.

In line with past research, the participants who had higher IQs and higher SES families when they were 12 attained higher amounts of education by middle age. However, even when those two variables were accounted for, three other attributes predicted educational attainment: feeling that you are not as good as others; studiousness; talking back to parents and not always following the rules. Two of these are not surprising: self-reported feelings of inferiority at age 12 were related to lower levels of education by one's 40s, while teachers' higher ratings of studiousness at age 12 predicted higher levels of education by middle age. Interestingly, reporting at age 12 that you talk back to and do not always obey your parents was also related to higher educational attainment in adulthood.

Likewise, more occupational success in middle age was predicted by having a higher IQ and coming from a higher SES family at age 12. Even when those two variables were held steady, seeing yourself as a responsible student at age 12 predicted occupational success in your 40s. Higher teacher ratings of studiousness during childhood also predicted occupational success, however this relationship played out through the amount of education that the participants experienced. Spengler et al. clarified that being seen as studious by your teachers in childhood suggests that you have traits that might lead you to choose more years of and more challenging levels of education. In fact, educational attainment was the best predictor of occupational success.

The analysis revealed an unexpected finding related to personal income. The participants who at age 12 admitted that they talked back to their parents and did not always follow their rules were more likely to have higher incomes during their 40s! This was true even when childhood IQ, family SES, and lifetime educational attainment were taken into account. Spengler et al. caution us that this finding needs to be replicated in future research - so this is not a green light for preteens around the globe to sass back at their parents. They also offer two explanations: these individuals may be more likely to argue to get higher wages; it is also possible that these individuals broke the rules as adults to get this higher income.

I would add that it is also possible that they were raised by good, Authoritative parents. These parents express love to their children and raise them with the correct amount of fair discipline; they are also willing to discuss household rules with their children, conversations which may start off with "talking back." Authoritative parents want their kids to be able to reason about their behavior choices instead of simply showing obedience to authority. So it stands to reason that their children may not always follow their parents' rules to the letter. Authoritative parenting also prepares children for white collar jobs that require independent decision making and less emphasis on following orders; these jobs may also offer higher levels of pay.

Another possibility that I can imagine is that some of these individuals are Gifted. In general, higher IQs are related to higher incomes, and Gifted children have IQs that are at least two standard deviations above the average child's. Parents of Gifted children often joke about them being "little lawyers," because they can be argumentative even with adults in authority. So Spengler et al.'s surprising finding may simply reflect a variable that is related to unusually high intelligence.

The authors admit that the predictive powers of pre-teen Luxembourgish children's behavior in 1968 may not generalize: it may be that educational attainment, occupational success, and income level are predicted by different traits today and these traits may vary from culture to culture. On the other hand, the Spengler et al. analysis demonstrated that personal factors like being a responsible student, coming across to teachers as being studious, and how you relate to people in positions of authority can predict achievement beyond their relationships with general intelligence and social class. In that case, being a studious (and perhaps argumentative) nerd may be enough: you don't have to be a genius like Bill Gates or have a wealthy parent like Bill Gates to become an adult with a good job and a good income.

Further Reading:

The Spengler et al. (2015) article can be accessed through your local college library.

Clinical Psychologist Kelly Flanagan blogged for The Huffington Post about why he thinks "...Every Kid Should Talk Back to Their Parents."

Serious students may have a good chance of landing a job found in U.S. News and World Report's list of "The 100 Best Jobs" of 2015.

BONUS: Watch an amusing promotional video titled "Is it true what they say about... Luxembourg?" made by the Luxembourg National Tourism Board. Spoiler Alert: no volcanoes; yes happy dogs.




Sunday, October 18, 2015

Phoning it in on exams

Even in a crowded lecture hall your professors know when you are using your smartphone during class.


Some instructors ignore this, some punish this, but most of them do not like it.


Although students might think that class rules limiting or banning the use of mobile phones are cruel, in truth they come from a good place: most faculty believe that your phones will hurt your grades by stealing your attention. A basic idea from cognitive psychology is that you have to pay attention to something if you want to commit it to memory or be able to use the idea effectively.


A study done by Bjornsen and Archer (2015) further supports this argument. These professors spent two semesters in their psychology classes surveying a total of 218 college students about their in-class cell phone use. At the end of each class the students answered a questionnaire about the number of times that they looked at their phones for: social networking (texting, email, social network apps like Facebook and Instagram); getting information (Googling, checking the online syllabus, checking a website related to class); organizing their lives (personal calendar); or playing a game. The students were assured that their answers would not influence how the professor graded their work, so the assumption is that these self-reports were honest.

The authors compared the amount of each type of mobile phone use to test scores in these classes. They found that both social media use and gaming during class were associated with lower test scores, with playing games being much worse than using social media. Because social media use was more common than playing games, Bjornsen and Archer looked at these data in more detail. They divided the students who used social media during class into high (5x per class), medium (2.4x per class), and low (1x per class) groups and then looked at average test scores for each group. Across five exams the high in-class social media users scored an average of 74% and the low social media users scored an average of 76%. These scores are close, but they could be the difference between getting a C or a C+ in a class.
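The grouping-and-averaging step above can be illustrated with a short sketch. The per-student rows and the group cutoffs below are invented for illustration; only the rough group means (about 5x, 2.4x, and 1x per class) and the 74% vs. 76% averages come from the study:

```python
from collections import defaultdict

# Hypothetical per-student data: (social media checks per class, avg test score).
# These rows are invented; Bjornsen and Archer report group averages of
# roughly 74% for high users and 76% for low users.
students = [(5.2, 72), (4.8, 76), (2.5, 75), (2.3, 77), (1.0, 75), (0.9, 77)]

def usage_group(checks_per_class):
    """Bin a student into a high/medium/low in-class social media use group.

    The cutoffs (4 and 2 checks per class) are hypothetical, chosen to
    mirror the study's group means of about 5x, 2.4x, and 1x per class.
    """
    if checks_per_class >= 4:
        return "high"
    if checks_per_class >= 2:
        return "medium"
    return "low"

# Collect scores by group, then average within each group.
scores_by_group = defaultdict(list)
for checks, score in students:
    scores_by_group[usage_group(checks)].append(score)

averages = {group: sum(s) / len(s) for group, s in scores_by_group.items()}
print(averages)  # {'high': 74.0, 'medium': 76.0, 'low': 76.0}
```

A two-point gap looks tiny in a table like this, which is exactly why the authors frame it in terms of letter-grade cutoffs.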

However, one of these effects changed when Bjornsen and Archer controlled for overall GPAs (grade point averages). As you might guess the students with the worst GPAs were more likely to play games in class, so the relationship between game playing and low test scores was likely influenced by this third variable: it disappeared once GPA was factored in. On the other hand, even when overall GPA was included as a factor, the relationship between in-class social media use and lower exam scores remained significant.

This implies that even students who have higher GPAs still score lower on exams when they use social media during those classes! From past research on college students we know that more than 90% admit to using cellphones during class and that 77% do not believe that this would cause problems with their learning. Bjornsen and Archer's results suggest that students, especially students who do well in school, may not be aware that their test scores might be a bit higher if they reduced or stopped using social media during class time.

Before you worry that this study will drive professors to ban cell phones, the authors suggest that this is not sensible given that today's university students are used to having their smartphones as constant companions. They cite past research about this population: college students spend 5-9 hours a day on their phones, which includes 95 minutes of texting; 95 minutes on social networking apps; 49 minutes emailing; and 34 minutes surfing the Web. Based on this, Bjornsen and Archer suggest some integration of phones into the classroom. For example, I will ask students to put away their phones if they are not participating or are distracting others in the class, but I don't mind if students take pictures of notes and images that I project onto the screen at the front of the room (even though there is good research behind the idea that taking notes by hand increases your memory and understanding of those notes).

 
Ultimately university students are adults who should weigh the benefits and costs of using their phones in class. For some, being connected to friends and family will be more important than scoring a few points higher on exams. For others, every point matters because their grades are crucial for getting into programs like Nursing, keeping scholarships, or having good GPAs for graduate school applications. If you aren't sure which group you fall into...ask Siri?

Further Reading:

The Bjornsen and Archer (2015) article can be accessed through your local college library.

There's an app for that! In 2014 college students Mitch Gardner and Rob Richardson created Pocket Points, an app that rewards students for not checking their phones during class. Find out if your university uses this system - if so you can get discounts at stores and restaurants. As if scoring 2% higher on average exam scores was not enough!

Cell or mobile phone addiction is addressed by Web MD. Find out the symptoms and suggestions for managing your smartphone time.

Sunday, October 11, 2015

When I'm 64

As a serious college student I remember feeling old before my time compared to some of my classmates who would blow off studying to socialize. Now as a middle-aged professor I don't "feel" as old as many of my peers (even though my daughter reminds me all of the time that I am "very old"). Apparently my experience is shared by Yoko Ono:


Chronological age, or how many years old you are, is just one way to think about aging. You can also consider: Biological age - how healthy you are; Social age - the habits you have and the roles that you take on; and Psychological age - how well you reason and think.

A recent study highlights how people's perceptions of their Biological and Social ages can influence their Subjective ages - or how old they feel. Stephan, Demulier, and Terracciano (2012) asked more than 1,000 French adults ages 18-91 to rate their physical health, to complete a version of the Big Five personality inventory, and to report their subjective ages.

The results support the idea that how old we feel is based on more than how old we are. These effects varied depending on the ages of the participants. The authors did not state the ranges of the age groups, but we would usually assume that their young adults were ages 18-39, middle-aged adults were 40-59, and older adults were 60-91.

A strong relationship between chronological age, health, and subjective age emerged. Middle-aged and older adult participants who rated themselves as being in good health were more likely to say that they felt younger than they actually were. Stephan et al. clarify that, on average, these middle-aged adults felt two years younger and these older adults felt four years younger. These results support the importance of considering a person's Biological age.

When the authors controlled for health and demographic factors a relationship between chronological age, certain Big Five traits, and subjective age appeared. Because our personality traits often relate to our behaviors and activities, these results support the importance of assessing a person's Social age. For example, young adults who were high in Conscientiousness felt older than they really were. Conscientiousness implies being responsible and organized. Motivated and reliable young adults might feel like they are older than their peers because these are not characteristics that Western culture often associates with that age.

Middle-aged and older adults who were high in Openness to Experience, and older adults who were high in Extraversion, felt younger than they were. Openness to Experience has to do with engaging in diverse interests and being open to new ideas; Extraversion is associated with being outgoing and dynamic. Both are traits that Western culture does not associate with middle and older age, so people with these traits are likely to feel younger than they really are as they age.

These results explain the experience that Yoko Ono and I share. When I was a young adult, my conscientious behavior did not match the stereotype about my age: so I felt older. Now as a middle-aged person who has broad interests and loves creativity, my self-perception again runs contrary to the stereotype about my age: so I feel younger. In the decades to come, I predict that I will continue to feel younger because I am relatively high in Extraversion.

This research also raises a question about Western stereotypes about age. What does it mean that we view young adults as irresponsible, middle-agers and seniors as stuck in their ways, and senior citizens as being unsociable? If we can imagine a time that these negative assumptions are no longer part of our culture, it would have implications for subjective age. Instead of feeling older or younger than our chronological age we would simply be that old and recognize that at all ages individuals can differ on Big Five traits.

Further Reading:

A pre-publication version of the Stephan et al. (2012) article can be read here thanks to the National Institute of Health. The Psychology and Aging article can be accessed through your local college library.

A blog post from the AARP about research done by Rippon and Steptoe (2015): "Feeling Old vs Being Old." More support for Biological age!

Know an Extraverted senior? The social networking site Meet Up has groups around the world for outgoing older people who want to socialize!

Sunday, October 4, 2015

In defense of Kristen Stewart

There are many memes mocking Kristen Stewart for not smiling. This week's meme is one of them:

 
However, if you type her name into a Google Images search, and then do a second search for her Twilight co-star Robert Pattinson you will see that the two actors are both pictured smiling and not smiling. When I did this (albeit unscientific) search and compared the first 25 images for both actors, I found that both of them were shown smiling 11 times and not smiling 14 times. So why the shade for Kristen while Robert's image remains sparkling?

Part of it is due to gender expectations in our culture: women smile more than men and are punished more for not smiling. The most recent review of this phenomenon occurred in 2003, when LaFrance, Hecht, and Paluck published a meta-analysis of 162 studies. A meta-analysis allows researchers to statistically combine the results of many studies to determine if a difference exists and how big (or meaningful) that difference may be. The overall results from LaFrance et al. confirm that across these studies there is a small to moderate - that is, noticeable in the real world - effect of men smiling less. Fans are used to seeing women smiling, so they notice and react poorly when Kristen Stewart bucks that gender expectation. Likewise, fans are used to men not smiling, so the same facial expressions from Robert Pattinson go unnoticed.
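The core move in a meta-analysis - averaging effect sizes across studies, with more precise studies counting more - can be sketched in a few lines. The effect sizes below are invented for illustration and are not LaFrance et al.'s data; the weighting shown is a simple fixed-effect (inverse-variance) scheme:

```python
# Toy fixed-effect meta-analysis: combine per-study effect sizes (Cohen's d)
# by weighting each study with the inverse of its variance, so larger and
# more precise studies count more toward the combined estimate.
# The (d, variance) pairs below are invented for illustration.
studies = [
    (0.55, 0.04),
    (0.30, 0.02),
    (0.10, 0.05),
]

weights = [1.0 / variance for _, variance in studies]
combined_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(round(combined_d, 3))  # 0.324
```

Real meta-analyses add complications (random-effects models, moderator tests), but the weighted average is the heart of "statistically combining the results of many studies."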

The researchers also used statistics to examine the different contingencies of the studies - the contexts and characteristics associated with this difference getting smaller or larger. In some cases the difference would come out smaller than the overall difference: in these situations women and men were closer to smiling at similar rates. Many of these effects were very small or even close to zero, which means that in the real world these contexts would likely be associated with very few observable differences between women and men:
*when people are not aware of being observed
*when they are in a group of four or more people (so the focus is not on one person)
*when they are not interacting with the people around them
*when they are very familiar with each other
*if they are comfortable because there is low pressure to impress
*when they are talking to a younger person or an older person
*when they are paired with a woman
*when they are interacting with somebody of the opposite sex
*when they share equal power with the other person
*if they are asked to play a role that requires caretaking, like taking care of a baby
*if they are forced to argue against the other person
*if people are from England
*if people are African-American
*if people are middle aged or senior citizens

In other cases this gender difference would come out larger than the overall difference: in these situations women were even more likely to smile than men. These range from moderate to almost high effects, which means that in the real world these contexts would likely be associated with actual observable differences between women and men:
*when people are alone (and presumably self-conscious about being observed)
*when people are alone but asked to imagine another person being with them
*when they are paired with a man
*if they are asked to persuade somebody
*if they have to reveal personal information about themselves
*if they are made to feel embarrassed
*if people are Canadian
*if people are teenagers (a time of gender intensification)

Looking at the results, LaFrance et al. note that, "...the extent of sex differences in smiling is highly contingent on social groups and social factors" (p. 326). In simpler language, men tend to smile less than women, but when this happens and how obvious it is depend on the characteristics of the situation. For example, there are personal and cultural factors like age, race, and nationality. There is also the question of what is required in the situation: do they have to persuade, argue, or be in charge of the care of another being? Who are they interacting with - do they share the same age, sex, or level of power in the situation?

Notably for Kristen Stewart, the results also demonstrate that people are more likely to show this gender difference when they know that they are being watched, when they imagine that they are being watched, and when they feel like they need to make a good impression (or instead are facing embarrassment). So another reason that fans may be critical is that, by being an actor and a public figure, she is constantly in these contexts, yet she does not do what most women would do in those situations: she does not smile. On the other hand, if Robert Pattinson reacts the same way on the red carpet he is actually doing what we expect men to do in those situations, so once again he escapes criticism. And that really bites.

Further Reading:

A pdf of the LaFrance et al. (2003) article can be accessed on Dr. Elizabeth (Betsy) Paluck's website.

Kristen Stewart may wish to work on her smile - not for the fans - but for how smiling, even fake smiling, might help her deal with stress. Read the Association for Psychological Science (APS) coverage of research done by Kraft and Pressman (2012).

Kristen Stewart is not alone. Read Emily Matchar's article, "Memoirs of an Un-Smiling Woman," from The Atlantic.


Sunday, September 27, 2015

On the borderline of rejection

Nobody likes to feel left out; it hurts and you might wonder if you did something wrong or if you will always be rejected. On the other hand, imagine being included and feeling a sense of impending abandonment. In this week's meme Overly Attached Girlfriend is exhibiting that problem as she spends time with her partner:


Gutz, Renneberg, Roepke, and Niedeggen (2015) investigated how three populations respond to being included and then rejected. Along with healthy controls, or average people for comparison, their experiment included people in treatment for Social Anxiety Disorder (SAD) and a separate group in treatment for Borderline Personality Disorder (BPD).

Individuals with Social Anxiety Disorder are terrified that they are going to come across as awkward losers when they interact with others. This fear comes from a distorted perception that other people are more negative and judgmental than they really are, which leads to avoiding social situations and having very few social connections. When these situations cannot be avoided, people with SAD are hardest on themselves after the fact: they go over what happened in their minds and imagine how their behaviors left bad impressions.

People with Borderline Personality Disorder have an intense fear of being rejected and also view others as more negative and judgmental than they really are. This is complicated for people with BPD because they lack a solid sense of self and tend to derive some identity through the attention given to them in relationships. At the start of relationships they treat people as if they can do no wrong, but as they grow closer this bond becomes a risk - they might lose their source of self! Based on a distorted perception of reality and motivated by fear they turn against the people closest to them: accusing and blaming; projecting their own faults; exploding with over-the-top emotional reactions; or making threats of self-harm. These behaviors are scary and hurtful to receive from somebody who used to treat you so well, so eventually many of these relationships end. This further reinforces a fear of rejection.

Gutz et al. were interested in how the three groups would react to being included and being excluded. To provoke these situations they used a rigged video game called Cyberball: in this game you can "toss" a ball to two other players and they can choose to "toss" the ball to you. The participants are tested individually and believe that they are playing online against two other people. However, they are actually just playing with a computer program, and nothing that occurs in the game is accidental. In the first trial the program sends the ball to the participant and to each of the two pretend players equally often (33% of the time each); this should give participants a feeling of being included. In the second trial the program sends the ball to the participant only 16% of the time; this should give participants a feeling of being excluded or rejected.
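The rigged tossing scheme can be sketched as a short simulation. This is a hypothetical illustration of the probabilities Gutz et al. describe (33% in the inclusion trial, 16% in the exclusion trial), not the actual Cyberball software:

```python
import random

def rigged_toss(trial, rng=random.random):
    """Return who receives the ball on one toss of a simulated Cyberball game.

    In the "inclusion" trial the participant receives the ball about 33% of
    the time (an equal three-way share); in the "exclusion" trial only about
    16% of the time. The probabilities follow the Gutz et al. description,
    but this function is a hypothetical sketch, not the real Cyberball code.
    """
    p_participant = 0.33 if trial == "inclusion" else 0.16
    return "participant" if rng() < p_participant else "pretend player"

# Quick check: over many simulated tosses in the exclusion trial, the
# participant's share of receptions should be close to 16%.
random.seed(1)
tosses = [rigged_toss("exclusion") for _ in range(10_000)]
share = tosses.count("participant") / len(tosses)
print(f"participant received the ball on {share:.1%} of tosses")
```

Because the "other players" are just a probability setting, every participant in a condition has the same objective experience - which is what makes the BPD group's overestimates of exclusion so telling.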

In addition to participants' ratings of their fear, expectation, and experience of rejection, the authors collected biological data in the form of brain activity during both trials. This was measured by electrical changes on the scalp known as an "event related brain potential" or ERP. The ERP of interest was P3b: this change occurs when a person has to reevaluate a situation - when something unexpected happens (and should not be confused with the dehydrated peanut butter PB2). In prior studies using Cyberball, average people exhibited this electrical change when they received the ball in the exclusion trial because this is an unexpected event.

The results of the present experiment confirmed differences between the healthy controls and the two clinical populations: overall the healthy controls rated themselves as lower in anxiety about being rejected and on expectations of rejection. In the exclusion trial, all three groups felt left out but the two clinical groups felt more threatened by this than did the healthy controls. This is evidence that rejection is felt more strongly by people diagnosed with SAD and BPD.

Differences between the clinical groups also occurred: Borderline participants rated themselves as higher in rejection expectancy than did the participants with Social Anxiety Disorder. Likewise, while there was little difference between healthy controls and participants with SAD, the participants in treatment for BPD always reported a higher experience of being left out, even in the inclusion trial. In fact, whereas the other two groups were accurate, at all times participants with BPD underestimated how many times the ball was tossed to them: they overestimated how much they were left out.

P3b activity confirmed these self-reports: only the participants with Borderline Personality Disorder demonstrated a P3b event, in fact a large P3b change, during the inclusion trial. Because this is associated with unexpected events, this means that these participants were expecting to be left out even when they were being included. Further evidence can be found in a statistical analysis: self-reports of rejection expectancy explain the largest difference in the P3b changes between the three groups.

Gutz et al. explain the significance of these findings:
...this negative perception bias might cause situation-inappropriate overreactions in daily life, and consequently prompt expectation-confirming rejection behavior by others. A recent study indicated that it is the negative perception of others that triggers elevated negative affect and quarrelsome behavior in patients with BPD (p. 428).

If Overly Attached Girlfriend struggles with BPD she may assume that any relationship puts her at risk for abandonment. An evening hanging out with her partner would not be interpreted as being accepted, but instead as a situation of likely rejection. The meme is funny because we can see the distortion in her thinking. In real life it would not be any fun: instead it might start a fight or even lead to a break-up.

Further Reading:


The Gutz et al. (2015) article can be accessed through your local college library.

If you or somebody you care about are dealing with anxiety or personality disorders, you can find helpful resources on the National Alliance on Mental Illness website.

Dr. Marsha Linehan, who created a therapy for Borderline Personality Disorder, made news by announcing that she, too, struggles with this psychological problem. Read the New York Times article about her "coming out."

Dr. Kipling Williams, one of the originators of the Cyberball game, has an updated version available on his website. This site also includes a gif that shows an image that is similar to what is seen when participating in Cyberball experiments.

BONUS: None of us would need to fear rejection if Ryan Gosling were our partner (or lab partner). Enjoy the statistics humor!