Sunday, October 25, 2015

Revenge of the preteen nerds

Many people get an education because they want good careers that earn good money. Being studious may not make you cool but it should set you up to meet that goal; students who party instead of studying may not be so successful:


This quotation is actually from Bill Gates, even if Actual Advice Mallard seems to be stating it here. Did Bill Gates get it right? Are today's nerdy kids likely to be well-employed and financially stable as adults?

Spengler, Brunner, Damian, Lüdtke, Martin, and Roberts (2015) examined data from a large-scale, longitudinal study done in Luxembourg, a small country in Europe. In 1968 the MAGRIP study collected data on Luxembourgish youngsters around age 12. This included measures of the children's IQs, their parents' levels of wealth and education (SES), ratings from teachers on how studious the children appeared to be, and self-reports about feelings of inferiority, being a responsible student, defying their parents' rules, and talking back to their parents.

In 2008, the second wave of MAGRIP followed up with 745 of these same individuals when they were in their 40s. At that time the participants were asked about their educational attainment (years of education after elementary school), their current or most recent jobs, and their yearly incomes. In their analysis, Spengler et al. wanted to know what traits at age 12 were associated with greater educational attainment, occupational success (as measured by prestige and social class), and higher incomes in middle age.

In line with past research, the participants who had higher IQs and came from higher SES families when they were 12 attained more education by middle age. However, even when those two variables were accounted for, three other attributes predicted educational attainment: feeling that you are not as good as others; studiousness; and talking back to parents and not always following the rules. Two of these are not surprising: self-reported feelings of inferiority at age 12 were related to lower levels of education by one's 40s, while teachers' higher ratings of studiousness at age 12 predicted higher levels of education by middle age. Interestingly, reporting at age 12 that you talked back to and did not always obey your parents was also related to higher educational attainment in adulthood.

Likewise, more occupational success in middle age was predicted by having a higher IQ and coming from a higher SES family at age 12. Even when those two variables were held constant, seeing yourself as a responsible student at age 12 predicted occupational success in your 40s. Higher teacher ratings of studiousness during childhood also predicted occupational success; however, this relationship played out through the amount of education that the participants completed. Spengler et al. clarified that being seen as studious by your teachers in childhood suggests that you have traits that might lead you to pursue more years of education at more challenging levels. In fact, educational attainment was the best predictor of occupational success.
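For readers who like to see the statistics, this "played out through" pattern is what researchers call mediation: the effect of studiousness on later success travels by way of education. Below is a minimal sketch of that logic in Python, using simulated data - the variable names and numbers are illustrative only and are not Spengler et al.'s actual model.

```python
# Illustrative sketch of mediation: studiousness -> education -> occupational prestige.
# Simulated data only; not the MAGRIP data or Spengler et al.'s actual analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 745  # same size as the follow-up sample, purely for flavor
studiousness = rng.normal(size=n)
education = 0.5 * studiousness + rng.normal(size=n)                     # studiousness feeds education
prestige = 0.6 * education + 0.05 * studiousness + rng.normal(size=n)   # prestige mostly via education

# Step 1: studiousness alone predicts prestige (the "total effect").
total = sm.OLS(prestige, sm.add_constant(studiousness)).fit()

# Step 2: add education; the studiousness coefficient shrinks toward zero,
# which is the signature of an effect that runs through educational attainment.
X = sm.add_constant(np.column_stack([studiousness, education]))
mediated = sm.OLS(prestige, X).fit()

print(total.params[1], mediated.params[1])  # direct coefficient drops once education is included
```

If the coefficient on studiousness shrinks toward zero once education enters the model, the data are consistent with education carrying the effect, which is what the authors report.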

The analysis revealed an unexpected finding related to personal income. The participants who at age 12 admitted that they talked back to their parents and did not always follow their rules were more likely to have higher incomes during their 40s! This was true even when childhood IQ, family SES, and lifetime educational attainment were taken into account. Spengler et al. caution us that this finding needs to be replicated in future research - so this is not a green light for preteens around the globe to sass back at their parents. They also offer two possible explanations: these individuals may be more willing to argue for higher wages, or they may have broken rules as adults to earn that higher income.

I would add that it is also possible that they were raised by good, Authoritative parents. These parents express love to their children and raise them with a fair and consistent amount of discipline; they are also willing to discuss household rules with their children, conversations which may start off with "talking back." Authoritative parents want their kids to be able to reason about their behavior choices instead of simply showing obedience to authority. So it would stand to reason that their children may not always follow even their parents' rules to the letter. Authoritative parenting also prepares children for white collar jobs that require independent decision making and place less emphasis on following orders; these jobs may also offer higher levels of pay.

Another possibility that I can imagine is that some of these individuals are Gifted. In general, higher IQs are related to higher incomes, and Gifted children have IQs that are at least two standard deviations above what we would expect from an average child. Parents of Gifted children often joke about them being "little lawyers," because they can be argumentative even with adults in authority. So Spengler et al.'s surprising finding may simply reflect a variable that is related to unusually high intelligence.

The authors admit that the predictive power of pre-teen Luxembourgish children's behavior in 1968 may not generalize: educational attainment, occupational success, and income level may be predicted by different traits today, and these traits may vary from culture to culture. On the other hand, the Spengler et al. analysis demonstrated that personal factors like being a responsible student, coming across to teachers as studious, and how you relate to people in positions of authority can predict achievement beyond their relationships with general intelligence and social class. In that case, being a studious (and perhaps argumentative) nerd may be enough: you don't have to be a genius or have wealthy parents like Bill Gates to become an adult with a good job and a good income.

Further Reading:

The Spengler et al. (2015) article can be accessed through your local college library.

Clinical Psychologist Kelly Flanagan blogged for The Huffington Post about why he thinks "...Every Kid Should Talk Back to Their Parents."

Serious students may have a good chance of landing a job found in U.S. News and World Report's list of "The 100 Best Jobs" of 2015.

BONUS: Watch an amusing promotional video titled "Is it true what they say about... Luxembourg?" made by the Luxembourg National Tourism Board. Spoiler Alert: no volcanoes; yes happy dogs.




Sunday, October 18, 2015

Phoning it in on exams

Even in a crowded lecture hall your professors know when you are using your smartphone during class.


Some instructors ignore this, some punish this, but most of them do not like it.


Although students might think that class rules limiting or banning the use of mobile phones are cruel, in truth they come from a good place: most faculty believe that your phones will hurt your grades by stealing your attention. A basic idea from cognitive psychology is that you have to pay attention to something if you want to commit it to memory or be able to use the idea effectively.


A study done by Bjornsen and Archer (2015) further supports this argument. These professors spent two semesters in their psychology classes surveying a total of 218 college students about their in-class cell phone use. At the end of each class the students answered a questionnaire about the number of times that they looked at their phones for: social networking (texting, email, social network apps like Facebook and Instagram); getting information (Googling, checking the online syllabus, checking a website related to class); organizing their lives (personal calendar); or playing a game. The students were assured that their answers would not influence how the professor graded their work, so the assumption is that these self-reports were honest.

The authors compared the amount of each type of mobile phone use to test scores in these classes. They found that both social media use and gaming during class were associated with lower test scores, with playing games being much worse than using social media. Because social media use was more common than playing games, Bjornsen and Archer looked at these data in a more detailed way. They divided the students who used social media during class into high (5x per class), medium (2.4x per class), and low (1x per class) groups and then looked at average test scores for each group. Across five exams the high in-class social media users scored an average of 74% and the low social media users scored an average of 76%. These scores are close, but they could be the difference between getting a C or a C+ in a class.
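If you are curious how such a comparison is put together, here is roughly how the grouping could be done with pandas. The data and cut-offs below are invented for illustration; only the reported averages above (74% vs. 76%) come from the article.

```python
# Sketch of grouping students by in-class social media use and comparing mean exam scores.
# Invented data; the real cut-offs and averages are the ones Bjornsen and Archer report.
import pandas as pd

df = pd.DataFrame({
    "checks_per_class": [5.1, 4.8, 2.5, 2.3, 1.0, 0.9],
    "exam_avg":         [73,  75,  75,  76,  77,  75],
})

# Bin students into low / medium / high users, then compare average exam scores per group.
df["use_group"] = pd.cut(df["checks_per_class"],
                         bins=[0, 1.5, 3.5, float("inf")],
                         labels=["low", "medium", "high"])
print(df.groupby("use_group")["exam_avg"].mean())
```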

However, one of these effects changed when Bjornsen and Archer controlled for overall GPAs (grade point averages). As you might guess, the students with the worst GPAs were more likely to play games in class, so the relationship between game playing and low test scores was likely influenced by this third variable: it disappeared once GPA was factored in. On the other hand, even when overall GPA was included as a factor, the relationship between in-class social media use and lower exam scores remained significant.
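The "third variable" logic can be seen in a quick simulation: if low GPA drives both in-class gaming and low test scores, gaming looks harmful on its own, but its apparent effect fades once GPA is entered as a covariate. This is a toy illustration, not Bjornsen and Archer's actual model.

```python
# Toy illustration of a third variable: GPA drives both in-class gaming and test scores,
# so gaming looks harmful until GPA is controlled for. Not Bjornsen and Archer's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 218  # same n as the surveyed students, purely for flavor
gpa = rng.normal(3.0, 0.5, size=n)
gaming = -1.0 * gpa + rng.normal(size=n)          # weaker students game more in this simulation
test_score = 20 * gpa + rng.normal(0, 5, size=n)  # scores depend on GPA, not on gaming itself

# Without GPA in the model, gaming appears to predict lower test scores.
raw = sm.OLS(test_score, sm.add_constant(gaming)).fit()

# With GPA included as a covariate, the gaming coefficient collapses toward zero.
controlled = sm.OLS(test_score, sm.add_constant(np.column_stack([gaming, gpa]))).fit()

print(raw.params[1], controlled.params[1])
```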

This implies that even students who have higher GPAs still score lower on exams when they use social media during those classes! From past research on college students we know that more than 90% admit to using cellphones during class and that 77% do not believe that this would cause problems with their learning. Bjornsen and Archer's results suggest that students, especially students who do well in school, may not be aware that their test scores might be a bit higher if they reduced or stopped using social media during class time.

Before you worry that this study will drive professors to ban cell phones, the authors suggest that such a ban is not sensible given that today's university students are used to having their smartphones as constant companions. They cite past research about this population: college students spend 5-9 hours a day on their phones, which includes 95 minutes of texting; 95 minutes on social networking apps; 49 minutes emailing; and 34 minutes surfing the Web. Based on this, Bjornsen and Archer suggest some integration of phones into the classroom. For example, I will ask students to put away their phones if they are not participating or are distracting others in the class, but I don't mind if students take pictures of notes and images that I project onto the screen at the front of the room (even though there is good research behind the idea that taking notes by hand increases your memory and understanding of those notes).

 
Ultimately university students are adults who should weigh the benefits and costs of using their phones in class. For some, being connected to friends and family will be more important than scoring a few points higher on exams. For others, every point matters because their grades are crucial for getting into programs like Nursing, keeping scholarships, or building good GPAs for graduate school applications. If you aren't sure which group you fall into...ask Siri?

Further Reading:

The Bjornsen and Archer (2015) article can be accessed through your local college library.

There's an app for that! In 2014 college students Mitch Gardner and Rob Richardson created Pocket Points, an app that rewards students for not checking their phones during class. Find out if your university uses this system - if so you can get discounts at stores and restaurants. As if scoring 2% higher on average exam scores was not enough!

Cell or mobile phone addiction is addressed by Web MD. Find out the symptoms and suggestions for managing your smartphone time.

Sunday, October 11, 2015

When I'm 64

As a serious college student I remember feeling old before my time compared to some of my classmates who would blow off studying to socialize. Now as a middle-aged professor I don't "feel" as old as many of my peers (even though my daughter reminds me all of the time that I am "very old"). Apparently my experience is shared by Yoko Ono:


Chronological age, or how many years old you are, is just one way to think about aging. You can also consider: Biological age - how healthy you are; Social age - the habits you have and the roles that you take on; and Psychological age - how well you reason and think.

A recent study highlights how people's perceptions of their Biological and Social ages can influence their Subjective ages - or how old they feel. Stephan, Demulier, and Terracciano (2012) asked more than 1,000 French adults ages 18-91 to rate their physical health, to complete a version of the Big Five personality inventory, and to report their subjective ages.

The results support the idea that how old we feel is based on more than how old we are. These effects varied depending on the ages of the participants. The authors did not state the ranges of the age groups, but we would usually assume that their young adults were ages 18-39, middle-aged adults were 40-59, and older adults were 60-91.

A strong relationship between chronological age, health, and subjective age emerged. Middle aged and older adult participants who rated themselves as being in good health were more likely to say that they felt younger than they actually were. Stephan et al. clarify that on average, these middle aged adults felt two years younger and these older adults felt four years younger. These results support the importance of considering a person's Biological age.

When the authors controlled for health and demographic factors a relationship between chronological age, certain Big Five traits, and subjective age appeared. Because our personality traits often relate to our behaviors and activities, these results support the importance of assessing a person's Social age. For example, young adults who were high in Conscientiousness felt older than they really were. Conscientiousness implies being responsible and organized. Motivated and reliable young adults might feel like they are older than their peers because these are not characteristics that Western culture often associates with that age.

Middle aged and older adults who were high in Openness to Experience, and older adults who were high in Extraversion felt younger than they were. Openness to Experience has to do with engaging in diverse interests and being open to new ideas; Extraversion is associated with being out-going and dynamic. Both are traits that Western culture does not associate with middle and older age, so people with these traits are likely to feel younger than they really are as they age.

These results explain the experience that Yoko Ono and I share. When I was a young adult, my conscientious behavior did not match the stereotype about my age: so I felt older. Now as a middle aged person who has broad interests and loves creativity, my self-perception again runs contrary to the stereotype about my age: so I feel younger. A few decades from now, I predict that I will continue to feel younger because I am relatively high in Extraversion.

This research also raises a question about Western stereotypes about age. What does it mean that we view young adults as irresponsible, middle-agers and seniors as stuck in their ways, and senior citizens as being unsociable? If we can imagine a time that these negative assumptions are no longer part of our culture, it would have implications for subjective age. Instead of feeling older or younger than our chronological age we would simply be that old and recognize that at all ages individuals can differ on Big Five traits.

Further Reading:

A pre-publication version of the Stephan et al. (2012) article can be read here thanks to the National Institutes of Health. The Psychology and Aging article can be accessed through your local college library.

A blog post from the AARP about research done by Rippon and Steptoe (2015): "Feeling Old vs Being Old." More support for Biological age!

Know an Extraverted senior? The social networking site Meet Up has groups around the world for outgoing older people who want to socialize!

Sunday, October 4, 2015

In defense of Kristen Stewart

There are many memes mocking Kristen Stewart for not smiling. This week's meme is one of them:

 
However, if you type her name into a Google Images search, and then do a second search for her Twilight co-star Robert Pattinson you will see that the two actors are both pictured smiling and not smiling. When I did this (albeit unscientific) search and compared the first 25 images for both actors, I found that both of them were shown smiling 11 times and not smiling 14 times. So why the shade for Kristen while Robert's image remains sparkling?

Part of it is due to gender expectations in our culture: women smile more than men and are punished more for not smiling. The most recent review of this phenomenon occurred in 2003 when LaFrance, Hecht, and Paluck published a meta-analysis of 162 studies. A meta-analysis allows researchers to statistically combine the results of many studies to determine if a difference exists and how big (or meaningful) that difference may be. The overall results from LaFrance et al. confirm that across these studies there is a small to moderate, or noticeable in the real world, effect of men smiling less. Fans are used to seeing women smiling so they notice and react poorly when Kristen Stewart bucks that gender expectation. Likewise, fans are used to men not smiling so the same facial expressions from Robert Pattinson go unnoticed as if they were invisible.
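For the statistically curious, the core of a meta-analysis is a weighted average of effect sizes (such as Cohen's d), with larger, more precise studies counting more. Here is a bare-bones sketch of that calculation, using made-up numbers rather than LaFrance et al.'s 162 studies.

```python
# Bare-bones fixed-effect meta-analysis: pool standardized mean differences (Cohen's d),
# weighting each study by the inverse of its sampling variance.
# Made-up numbers; not LaFrance et al.'s actual data.
import numpy as np

d = np.array([0.25, 0.40, 0.10, 0.55, 0.30])    # effect sizes from five hypothetical studies
var = np.array([0.02, 0.05, 0.01, 0.08, 0.03])  # their sampling variances

w = 1.0 / var                                   # bigger, more precise studies get more weight
pooled_d = np.sum(w * d) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled d = {pooled_d:.2f} (SE = {pooled_se:.2f})")
# A pooled d around 0.2-0.4 is conventionally read as a small-to-moderate difference.
```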

The researchers also used statistics to examine the different contingencies of the studies to see what is associated with this difference getting smaller or larger. In some cases it would come out smaller than the overall difference: in these situations women and men were closer to smiling at similar rates. Many of these effects were very small or even close to zero, which means that in the real world these contexts would likely be associated with very few observable differences between women and men:
*when people are not aware of being observed
*when they are in a group of four or more people (so the focus is not on one person)
*when they are not interacting with the people around them
*when they are very familiar with each other
*if they are comfortable because there is low pressure to impress
*when they are talking to a younger person or an older person
*when they are paired with a woman
*when they are interacting with somebody of the opposite sex
*when they share equal power with the other person
*if they are asked to play a role that requires caretaking, like taking care of a baby
*if they are forced to argue against the other person
*if people are from England
*if people are African-American
*if people are middle aged or senior citizens

In other cases this gender difference would come out larger than the overall difference: in these situations women were even more likely to smile than men. These range from moderate to large effects, which means that in the real world these contexts would likely be associated with actual observable differences between women and men:
*when people are alone (and presumably self-conscious about being observed)
*when people are alone but asked to imagine another person being with them
*when they are paired with a man
*if they are asked to persuade somebody
*if they have to reveal personal information about themselves
*if they are made to feel embarrassed
*if people are Canadian
*if people are teenagers (a time of gender intensification)

Looking at the results, LaFrance et al. note that "...the extent of sex differences in smiling is highly contingent on social groups and social factors" (p. 326). In simpler language, men tend to smile less than women, but when this happens and how obvious it is depends on the characteristics of the situation. There are personal factors like age, race, and culture. There is also the question of what the situation requires: do they have to persuade, argue, or be in charge of the care of another being? And who are they interacting with - do they share the same age, sex, or level of power in the situation?

Notably for Kristen Stewart, the results also demonstrate that people are more likely to show this gender difference when they know that they are being watched, when they imagine that they are being watched, and when they feel like they need to make a good impression (or instead are facing embarrassment). So another reason that fans may be critical is that, by being an actor and a public figure, she is constantly in these contexts yet she does not do what most women would do in those situations: she does not smile. On the other hand, if Robert Pattinson reacts the same way on the red carpet he is actually doing what we expect men to do in those situations, so once again he escapes criticism. And that really bites.

Further Reading:

A pdf of the LaFrance et al. (2003) article can be accessed on Dr. Elizabeth (Betsy) Paluck's website.

 Kristen Stewart may wish to work on her smile - not for the fans - but for how smiling, even fake smiling, might help her deal with stress. Read the Association for Psychological Science (APS) coverage of research done by Kraft and Pressman (2012).

Kristen Stewart is not alone. Read Emily Matchar's article, "Memoirs of an Un-Smiling Woman," from The Atlantic.


Sunday, September 27, 2015

On the borderline of rejection

Nobody likes to feel left out; it hurts and you might wonder if you did something wrong or if you will always be rejected. On the other hand, imagine being included and feeling a sense of impending abandonment. In this week's meme Overly Attached Girlfriend is exhibiting that problem as she spends time with her partner:


Gutz, Renneberg, Roepke, and Niedeggen (2015) investigated how three populations respond to being included and then rejected. Along with healthy controls, or average people for comparison, their experiment included people in treatment for Social Anxiety Disorder (SAD) and a separate group in treatment for Borderline Personality Disorder (BPD).

Individuals with Social Anxiety Disorder are terrified that they are going to come across as awkward losers when they interact with others. This fear comes from a distorted perception that other people are more negative and judgmental than they really are, which leads to avoiding social situations and having very few social connections. When these situations cannot be avoided, people with SAD are hardest on themselves after the fact: they go over what happened in their minds and imagine how their behaviors left bad impressions.

People with Borderline Personality Disorder have an intense fear of being rejected and also view others as more negative and judgmental than they really are. This is complicated for people with BPD because they lack a solid sense of self and tend to derive some identity through the attention given to them in relationships. At the start of relationships they treat people as if they can do no wrong, but as they grow closer this bond becomes a risk - they might lose their source of self! Based on a distorted perception of reality and motivated by fear they turn against the people closest to them: accusing and blaming; projecting their own faults; exploding with over-the-top emotional reactions; or making threats of self-harm. These behaviors are scary and hurtful to receive from somebody who used to treat you so well, so eventually many of these relationships end. This further reinforces a fear of rejection.

Gutz et al. were interested in how the three groups would react to being included and being excluded. To provoke these situations they used a rigged video game called Cyberball: in this game you can "toss" a ball to two other players and they can choose to "toss" the ball to you. The participants are tested individually and believe that they are playing online with two other people. However, they are actually just playing with a computer program, and what occurs in the game is not left to chance. In the first trial the program sends the ball to the participant and the two pretend players equally often (about 33% of the time each); this should give participants a feeling of being included. In the second trial the program sends the ball to the participant only 16% of the time, far less often than to the pretend players; this should give participants a feeling of being excluded or rejected.
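For the programming-minded, here is a rough sketch of how a rigged toss schedule like this could be coded. The roughly 33% and 16% rates come from the description above; everything else (the function name, the number of tosses) is invented for illustration.

```python
# Rough sketch of a rigged Cyberball-style toss schedule. The inclusion (~1/3) and
# exclusion (~16%) rates come from the study description; the rest is invented.
import random

def toss_schedule(n_tosses, p_to_participant):
    """Return a list of receivers: the real participant or one of two fake players."""
    receivers = []
    for _ in range(n_tosses):
        if random.random() < p_to_participant:
            receivers.append("participant")
        else:
            receivers.append(random.choice(["player_A", "player_B"]))
    return receivers

inclusion = toss_schedule(30, 1 / 3)   # participant gets the ball about a third of the time
exclusion = toss_schedule(30, 0.16)    # participant is mostly left out

print(inclusion.count("participant"), exclusion.count("participant"))
```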

In addition to participants' ratings of their fear, expectation, and experience of rejection, the authors collected biological data in the form of brain activity during both trials. This was measured through electrical changes on the scalp known as an "event-related brain potential" or ERP. The ERP of interest was the P3b: this change occurs when a person has to reevaluate a situation - when something unexpected happens (and should not be confused with the dehydrated peanut butter PB2). In prior studies using Cyberball, average people exhibited this electrical change when they received the ball in the exclusion trial because this is an unexpected event.
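If you are wondering how a P3b is pulled out of noisy brain activity: researchers record many short EEG segments time-locked to the event of interest (here, receiving the ball) and average them, so that random noise cancels out while the event-related wave remains. Here is a schematic of that averaging step with synthetic data - not the authors' actual recording pipeline.

```python
# Schematic of how an ERP such as the P3b is extracted: average many EEG epochs
# time-locked to the same event so noise cancels and the event-related wave remains.
# Synthetic data only; not the Gutz et al. recording pipeline.
import numpy as np

rng = np.random.default_rng(42)
n_epochs, n_samples = 100, 300          # 100 ball-receipt events, 300 time points each
t = np.linspace(0, 0.6, n_samples)      # 600 ms after the event

true_p3b = 5 * np.exp(-((t - 0.35) ** 2) / 0.002)                  # a positive bump near 350 ms
epochs = true_p3b + rng.normal(0, 10, size=(n_epochs, n_samples))  # each epoch is buried in noise

erp = epochs.mean(axis=0)               # averaging across epochs reveals the P3b-like peak
print(f"peak at {t[erp.argmax()]*1000:.0f} ms, amplitude {erp.max():.1f} (arbitrary units)")
```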

The results of the present experiment confirmed differences between the healthy controls and the two clinical populations: overall the healthy controls rated themselves as lower in anxiety about being rejected and on expectations of rejection. In the exclusion trial, all three groups felt left out but the two clinical groups felt more threatened by this than did the healthy controls. This is evidence that rejection is felt more strongly by people diagnosed with SAD and BPD.

Differences between the clinical groups also occurred: Borderline participants rated themselves as higher in rejection expectancy than did the participants with Social Anxiety Disorder. Likewise, while there was little difference between healthy controls and participants with SAD, the participants in treatment for BPD always reported a higher experience of being left out, even in the inclusion trial. In fact, whereas the other two groups were accurate, at all times participants with BPD underestimated how many times the ball was tossed to them: they overestimated how much they were left out.

P3b activity confirmed these self-reports: only the participants with Borderline Personality Disorder demonstrated a P3b event, in fact a large P3b change, during the inclusion trial. Because this is associated with unexpected events, this means that these participants were expecting to be left out even when they were being included. Further evidence comes from a statistical analysis: self-reports of rejection expectancy accounted for the largest share of the differences in P3b changes among the three groups.

Gutz et al. explain the significance of these findings:
...this negative perception bias might cause situation-inappropriate overreactions in daily life, and consequently prompt expectation-confirming rejection behavior by others. A recent study indicated that it is the negative perception of others that triggers elevated negative affect and quarrelsome behavior in patients with BPD (p. 428).

If Overly Attached Girlfriend struggles with BPD she may assume that any relationship puts her at risk for abandonment. An evening hanging out with her partner would not be interpreted as being accepted, but instead as a situation of likely rejection. The meme is funny because we can see the distortion in her thinking. In real life it would not be any fun: instead it might start a fight or even lead to a break-up.

Further Reading:


The Gutz et al. (2015) article can be accessed through your local college library.

If you or somebody you care about are dealing with anxiety or personality disorders, you can find helpful resources on the National Alliance on Mental Illness website.

Dr. Marsha Linehan, who created a therapy for Borderline Personality Disorder, made news by announcing that she, too, struggles with this psychological problem. Read the New York Times article about her "coming out."

Dr. Kipling Williams, one of the originators of the Cyberball game, has an updated version available on his website. This site also includes a gif that shows an image that is similar to what is seen when participating in Cyberball experiments.

BONUS: None of us would need to fear rejection if Ryan Gosling were our partner (or lab partner). Enjoy the statistics humor!


Sunday, September 20, 2015

I can't be trusted around marshmallows

A pet peeve of mine is when news articles make it sound like parents can completely control their children's behaviors by simply saying the right thing. One gift from our children is the lesson that we cannot control everything -  even if we do everything "right." This confounding behavior comes in part from children's low self-control. Even if we tell them not to do something it is very hard for them to stop themselves from doing it anyway.


However, children differ in their abilities to delay gratification - that is, to fight temptation. Classic research by Mischel and colleagues in the 1960s and 70s outlined a protocol that is often referred to as "The Marshmallow Test." In these studies preschool children were tested individually by a familiar adult experimenter. The child chose a preferred treat (marshmallows, animal cookies, pretzels, and other treats were among the choices) and then the experimenter said that he or she could eat that one treat now, OR wait until the experimenter returned in 15 minutes to get two. The children differed in how long they were able to wait: some only lasted a few moments and gobbled down the single treat, while others distracted themselves or literally sat on their hands until the 15 minutes were up to claim the double treat. Follow-up studies demonstrated that the children who waited the longest grew up to have better grades in school and show other markers of success. This work is still cited today to show the importance of self-control and is sometimes compared to the modern concept of grit.

Through the years some have raised questions about The Marshmallow Test. For example, Duckworth, Tsukayama, and Kirby (2013) asked if it really tests for self-control or if there is a third variable, another quality, that drives both the longer waiting time and the higher grades. Based on past research, it could be that children who are higher in intelligence prefer to wait for a larger reward and tend to get higher grades than children who are lower in intelligence. Likewise, children's behavior in The Marshmallow Test and their achievement in school may reflect how much they are tempted by reward: some children are more tempted by treats while others feel little temptation. It may be that children who have low temptation do not have to work very hard to control their impulses - which allows them to wait and to do better in school.

To clarify this issue Duckworth et al. conducted two studies. Study 1 included 56 local 5th graders (average age 10) who participated in a modified version of The Marshmallow Test: the waiting time was extended to 30 minutes and the students completed a survey about what they liked to eat before the delay of gratification test began. On a separate occasion the children participated in two tests of intelligence and answered questions related to temptation. The questions addressed: how excited they get when a reward is offered; how actively they seek out the reward; and the extent to which getting in trouble affects them. Teachers rated each child on self-control (as it relates to schoolwork) and reported end-of-year grades.

The results demonstrated that the children who waited longer were also rated as having more self-control. Although grades were marginally related to waiting time, intelligence and temptation by reward were not related to waiting time in The Marshmallow Test. However, Duckworth et al. note that the children did not differ very much from each other - especially on the variable of intelligence.

Because the first study involved a small sample lacking in diversity, Study 2 analyzed a data set from the NICHD Study of Early Child Care and Youth Development that covered a broader sample of 966 children. These participants had completed The Marshmallow Test at age four (in this version, the experimenter would return after a maximum of seven minutes) and were tested using two age-appropriate measures of intelligence. At that time, their parents and preschool teachers rated them on two measures of self-control: their abilities to concentrate and their abilities to control their behaviors. In addition, their mothers rated them on their temptation by reward: how excited they became when offered a reward.

When these children were in eighth and ninth grade, the students' grades were included in the data set. In ninth grade the participants also indicated how often they engaged in "risky behaviors" as a measure of impulsiveness.

The results demonstrated that the children who waited longer were higher in intelligence and higher in self control at age four: they received higher ratings on their abilities to concentrate and to control their behaviors. To a lesser extent, children who waited longer were also less tempted by rewards. The authors then used sophisticated statistical methods to separate out the influences of self-control, intelligence, and temptation by reward.

In line with past research, the children who waited longer at age four were more likely to become adolescents with higher grades and fewer reported risky behaviors. Analyses revealed that self-control beat intelligence as a predictor of those higher grades. However, unlike high self-control, being less tempted by reward did not predict grades or behaviors in adolescence.

Taken together, these studies support the long-assumed notion that The Marshmallow Test is a good test of self-control. Self-control was the most reliable predictor of the higher grades associated with longer waiting times in The Marshmallow Test; intelligence and temptation by reward were less effective or failed as predictors.

We can only assume that Mischel would be pleased with this confirmation that his famous protocol is a valid test of self-control. Considering that he had to wait more than four decades for this news, I would say that he has passed the "ultimate" marshmallow test (too bad he doesn't like the taste of marshmallows)!

Further Reading:

A pdf of the Duckworth et al. (2013) article can be read online courtesy of the University of Pennsylvania.

For an update on Duckworth's work, read "Measuring Students' Self-Control: A 'Marshmallow Test' for the Digital Age", an article by Ingfei Chen for Mind/Shift. Move over marshmallows! Can today's students delay gratification when it comes to video games?

Watch Walter Mischel explain delay of gratification to Stephen Colbert on the comedy news show, The Colbert Report. Very entertaining!

BONUS: If kids totally lose their cool waiting for an extra marshmallow, imagine their reactions to enormous portions of their favorite foods! Watch what happens in this BuzzFeed video:





Sunday, September 13, 2015

Getting the gist of sex ed.

In the United States we have higher rates of teen pregnancy than in many other Western countries. We also know that our teenagers are at high risk for STIs, sexually transmitted infections. To combat these problems, we often require teens to participate in sexual education programs. However, the type of program that is most effective is still debated. This week's meme offers a humorous suggestion:


Because even the most mature adult parents can be worn down by incessant crying and endless repetition, the joke implies that if teenagers experienced these things it would be unforgettable and they would abstain from sex. The joke also implies that teaching statistics like, "six week old babies spend 30% of their awake time crying," is not necessary; instead teenagers will remember a take home message like, "babies cry a lot."

This emphasis on gist, or take home message, over specific quantitative facts is applied more formally to sexual education by Reyna and Mills (2014). These authors investigated whether concepts from Fuzzy Trace Theory could make a pre-existing sex ed. curriculum more effective. Past research on Fuzzy Trace Theory suggests that better memory of ideas and better decision-making come from understanding the gist, the take home message, as opposed to weighing the pros and cons based on detailed facts or figures that are more forgettable. Because these gist-based thoughts are related to one's emotions and values, they allow for very rapid decision-making.

Their experiment followed more than 700 adolescents, ages 14-19, during a 16 session intervention period and for one year after its completion. The participants were randomly assigned to three types of interventions: an existing sexual education program called Reducing the Risk (RTR); a version of Reducing the Risk that had been modified to add ideas from Fuzzy Trace Theory (RTR+); a Control group focusing on communication skills unrelated to sexuality.

Reducing the Risk (RTR) teaches teens how to recognize situations of sexual risk, how to resist pressure to have sex, and how to reduce the risk of pregnancy by using contraception and the risk of infection by using condoms. Teens enrolled in this program should come to realize that they are personally at risk and that they possess the ability to avoid or reduce sexually related risk. The sessions usually include a factual presentation by an adult leader followed by activities such as guided role playing. For example, the leader might present detailed factual data like, "60% of teenagers report that they used condoms during their most recent sexual experience," followed by guided role play of how to reason with a partner who does not want to use a condom. This program is considered to be effective at delaying first sexual experiences and increasing the use of birth control and infection protection. However, Reyna and Mills note that there is little evidence to show whether these effects are lasting.

Because Fuzzy Trace Theory predicts that gist thinking should have a lasting effect on decision making, the authors modified RTR with ideas from this theory to create RTR+. This version is identical to RTR with two additions that emphasize gist. First, at the end of every lesson students are presented with one line summaries of the take home ideas of risk from the lesson. For example, "You should use a condom every time you have sex." Second, the participants in RTR+ receive a checklist of possible values, such as, "I will use condoms every time I have sex," that they are asked to rate themselves on after every lesson. Because gist related decision making is closely influenced by our values, asking students to repeatedly clarify their values should also provoke attention to generalized risks related to sex.

All participants' sexual experience, and beliefs about sex and disease prevention were measured during the intervention, and at three months, six months, and 12 months after the intervention ended. The results demonstrated that the teenagers who had been randomly assigned to RTR and RTR+ showed overall lower risk in their behavior and beliefs than the teenagers who had been assigned to the Control condition.

When the results from RTR and RTR+ were compared, overall RTR+ was considered to be more successful. RTR+, which included Fuzzy Trace Theory's emphasis on gist, was related to:
*a lower rate of students who started having sex during that 12 month period.
*the lowest increase in the number of sexual partners during that 12 month period.
*the smallest increase in positive attitude toward sex (thinking that having sex at this age is a good idea).
*the smallest increase in beliefs that parents and peers think sex was okay for them to experience at this age.
*the highest belief that they were at risk for generalized (take home message) risks associated with sex. This is a measure of gist thinking - so it is not surprising that the teens in RTR+ demonstrated more of this.
*the highest knowledge of sex related risks lasting up to six months after the intervention.
*the highest recognition of warning signals of sexual risk.

On the other hand, RTR, the original sexual education program, was related to:
*more favorable attitudes toward condom use during that 12 month period.
*more agreement with gist-based summaries of sexual risk as measured three months after the intervention. This is another measure of gist thinking - so it IS surprising that the teens in RTR demonstrated more of this.

The results were further influenced when the participants' races were taken into account. Reyna and Mills compared results from the three most prominent groups represented in their sample: African American; Hispanic; and White. Curiously, the authors decided to add data from Asian participants into the category White because the responses from those groups were similar. So data reported in relation to White participants should be understood as White and Asian. However, an improvement to this study would have been to further diversify the sample so that Asian adolescents were properly represented.

How did race influence the results? For example, RTR+:
*initially improved African American participants' attitudes toward condoms, but that effect disappeared after the intervention was over.
*increased White participants' beliefs that they could successfully use condoms.
*was related to higher sexual risk knowledge for Hispanic and White teens.
*increased African American and White participants' belief that they were at risk for generalized (take home message) risks associated with sex. This is a measure of gist thinking - so it is not surprising that some teens in RTR+ demonstrated more of this. However, it is surprising that Hispanic participants did not demonstrate more of this in RTR+.

Race also influenced the results of the other intervention. For example, RTR:
*increased Hispanic and White participants' belief that parents and peers think condoms should be used when having sex.
*increased Hispanic and White participants' belief that they were at risk for generalized (take home message) risks associated with sex. This is a measure of gist thinking - so it IS surprising that some of the teens in RTR demonstrated more of this.

Taken together, the results suggest that including an emphasis on the gist of sex related risks could be a positive addition to sexuality education if we want to encourage teenagers to delay sexual experience and to use condoms to prevent infection. This statement is based on results that reached statistical significance, meaning that the differences between teens assigned to RTR and RTR+ were not simply due to chance.

Some of these differences seem substantial. For example, Reyna and Mills note that being enrolled in RTR+ was 84% more effective at delaying the start of sexual activity when compared to the Control condition. At the same time, some of the differences were very small even though they reached statistical significance. For example, teens assigned to RTR+ had fewer sexual partners than teens assigned to the regular RTR program. This result seems less impressive when you read that the teens in RTR+ reported an average of 2.15 partners while the teens in RTR reported an average of 2.21. In terms of a practical difference that would be of interest to parents and educators, this is almost nothing: both groups had about two sexual partners during that year.
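The gap between statistical and practical significance is easy to demonstrate with a quick calculation: with large enough groups, a difference as small as 2.15 versus 2.21 partners can clear the p < .05 bar while the standardized effect size stays tiny. The group sizes and standard deviation below are invented for illustration; only the two means come from the article.

```python
# Quick illustration of statistical vs. practical significance: with very large groups,
# a mean difference of 2.21 vs 2.15 partners can be "significant" while the effect size is tiny.
# Group sizes and the standard deviation are invented; only the two means come from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 20000                                  # deliberately huge groups
rtr = rng.normal(2.21, 1.0, size=n)        # simulated regular RTR group
rtr_plus = rng.normal(2.15, 1.0, size=n)   # simulated RTR+ group

t, p = stats.ttest_ind(rtr, rtr_plus)
cohens_d = (rtr.mean() - rtr_plus.mean()) / np.sqrt((rtr.var(ddof=1) + rtr_plus.var(ddof=1)) / 2)

print(f"p = {p:.4f}, Cohen's d = {cohens_d:.2f}")
# With groups this large, p comes out far below .05 even though d is only about 0.06 -
# a difference nobody would notice in everyday life.
```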

Another issue is that interventions need to be tailored to the characteristics of the audience to be effective. The influence of race in the present study is especially important to note because teenagers from different racial groups differ in: how old they are when they start having sex; their risk of pregnancy (or getting somebody pregnant); and their risk of contracting a sexually transmitted infection (STI). Reyna and Mills also acknowledge that further research needs to focus on how and why concepts from Fuzzy Trace Theory may impact the sexual behaviors and beliefs of adolescents from diverse backgrounds.

Further Reading:

The Reyna and Mills (2014) article can be accessed through your local college library.

Watch a lecture by Dr. Valerie Reyna (one of the authors of this week's article) on "Risky Decision Making in Adolescence." This talk, given at Cornell University, covers, "...developmental differences in the way adolescents make decisions and reviews her research regarding why adolescents perceive risks and benefits and yet take more risks."

If you are interested in decreasing adolescent pregnancy and STIs, The National Campaign to Prevent Teen and Unplanned Pregnancy website includes statistics (state and national) and an excellent comparison of effective programs aimed at decreasing these problems. 

BONUS: The Reyna and Mills (2014) article includes a particularly cringe-worthy example used in RTR (and RTR+) as a warning sign that, "...unsafe sex may be imminent...": "being alone with a significant other, lights low and soft music playing..." (p. 1631). As goofy as that might sound to a modern teenager, I suppose that we do have evidence of its truth from the classic 1955 Disney film, "Lady and the Tramp."