Why Our Brains Constantly Create New Threats
When something becomes rare, we sometimes see it in more places than ever. It’s a quirk that can impair our judgment, but may be one we can control.
Why do many problems in life seem to stubbornly stick around, no matter how hard people work to fix them? It turns out that a quirk in the way human brains process information means that when something becomes rare, we sometimes see it in more places than ever.
Think of a “neighborhood watch” made up of volunteers who call the police when they see anything suspicious. Imagine a new volunteer who joins the watch to help lower crime in the area. When they first start volunteering, they raise the alarm when they see signs of serious crimes, like assault or burglary.
Let’s assume these efforts help and, over time, assaults and burglaries become rarer in the neighborhood. What would the volunteer do next? One possibility is that they would relax and stop calling the police. After all, the serious crimes they used to worry about are a thing of the past.
But you may share the intuition my research group had – that many volunteers in this situation wouldn’t relax just because crime went down. Instead, they’d start calling things “suspicious” that they would never have cared about back when crime was high, like jaywalking or loitering at night.
You can probably think of many similar situations in which problems never seem to go away, because people keep changing how they define them. This is sometimes called “concept creep,” or “moving the goalposts,” and it can be a frustrating experience. How can you know if you’re making progress solving a problem, when you keep redefining what it means to solve it? My colleagues and I wanted to understand when and why this kind of behavior happens, and whether it can be prevented.
To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task – to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.
As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.
This kind of inconsistency isn’t limited to judgments about threat. In another experiment, we asked people to make an even simpler decision: whether colored dots on a screen were blue or purple.
As blue dots became rare, people started calling slightly purple dots blue. They even did this when we told them blue dots were going to become rare, or offered them cash prizes to stay consistent over time. These results suggest that this behavior isn’t entirely under conscious control – otherwise, people would have been able to stay consistent and earn the cash prize.
After looking at the results of our experiments on facial threat and color judgments, our research group wondered if maybe this was just a funny property of the visual system. Would this kind of concept change also happen with non-visual judgments?
To test this, we ran a final experiment in which we asked volunteers to read about different scientific studies, and decide which were ethical and which were unethical. We were skeptical that we would find the same inconsistencies in these kinds of judgments that we did with colors and threat.
Why? Because moral judgments, we suspected, would be more consistent across time than other kinds of judgments. After all, if you think violence is wrong today, you should still think it is wrong tomorrow, regardless of how much or how little violence you see that day.
But surprisingly, we found the same pattern. As we showed people fewer and fewer unethical studies over time, they started calling a wider range of studies unethical. In other words, simply because they were reading about fewer unethical studies, they became harsher judges of what counted as ethical.
Why can’t people help but expand what they call threatening when threats become rare? Research from cognitive psychology and neuroscience suggests that this kind of behavior is a consequence of the basic way that our brains process information – we are constantly comparing what is in front of us to its recent context.
Instead of carefully deciding how threatening a face is compared to all other faces, the brain can just store how threatening it is compared to other faces it has seen recently, or compare it to some average of recently seen faces, or the most and least threatening faces it has seen. This kind of comparison could lead directly to the pattern my research group saw in our experiments, because when threatening faces are rare, new faces would be judged relative to mostly harmless faces. In a sea of mild faces, even slightly threatening faces might seem scary.
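To see how a relative rule like this produces the effect, here is a toy simulation in Python. It is only an illustrative sketch, not the model or analysis used in our studies: the threat scores, the 20-face memory window and the judge-against-the-recent-average rule are all made-up assumptions chosen to make the mechanism easy to see.

import random
from collections import deque

# Toy illustration (assumed values, not the study's model): a "relative" judge
# calls a face threatening whenever it looks scarier than the average of the
# last few faces it has seen.
def flagged_mild_rate(threat_prevalence, n_faces=500, window=20, seed=1):
    rng = random.Random(seed)
    recent = deque(maxlen=window)           # short memory of recently seen faces
    mild_seen = mild_flagged = 0
    for _ in range(n_faces):
        if rng.random() < threat_prevalence:
            score = rng.uniform(0.7, 1.0)    # genuinely intimidating face
        else:
            score = rng.uniform(0.0, 0.5)    # mild, mostly harmless face
            mild_seen += 1
        baseline = sum(recent) / len(recent) if recent else 0.5
        if score > baseline and score <= 0.5:
            mild_flagged += 1                # a harmless face judged "threatening"
        recent.append(score)
    return mild_flagged / mild_seen

# Identical mild faces, different context: when real threats are rare,
# the recent average drops and far more mild faces get flagged.
print("threats common:", round(flagged_mild_rate(0.50), 2))
print("threats rare:  ", round(flagged_mild_rate(0.05), 2))

In this sketch the judge sees exactly the same kind of mild faces in both runs, yet flags far more of them once real threats are scarce. Swapping the moving baseline for a fixed cutoff keeps the judge consistent no matter how rare threats become – the computational version of writing your criteria down in advance, the strategy suggested at the end of this article.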
It turns out that for your brain, relative comparisons often use less energy than absolute measurements. To get a sense for why this is, just think about how it’s easier to remember which of your cousins is the tallest than exactly how tall each cousin is. Human brains have likely evolved to use relative comparisons in many situations, because these comparisons often provide enough information to safely navigate our environments and make decisions, all while expending as little effort as possible.
Sometimes, relative judgments work just fine. If you are looking for a fancy restaurant, what you count as “fancy” in Paris, Texas, should be different than in Paris, France.
But a neighborhood watcher who makes relative judgments will keep expanding their concept of “crime” to include milder and milder transgressions, long after serious crimes have become rare. As a result, they may never fully appreciate their success in helping to reduce the problem they are worried about. From medical diagnoses to financial investments, modern humans have to make many complicated judgments where being consistent matters.
How can people make more consistent decisions when necessary? My research group is currently doing follow-up research in the lab to develop more effective interventions to help counter the strange consequences of relative judgment.
One potential strategy: When you’re making decisions where consistency is important, define your categories as clearly as you can. So if you do join a neighborhood watch, think about writing down a list of what kinds of transgressions to worry about when you start. Otherwise, before you know it, you may find yourself calling the cops on dogs being walked without leashes.
David Levari is a postdoctoral researcher in psychology at Harvard University.
This article was originally published on The Conversation. Read the original article.