Be aware of your decision frame

How it occurs

    1. Question framing

      The phrasing of questions during user interviews or usability tests can significantly influence participants’ responses.

    2. Presentation of research findings

      The frame in which we choose to present research findings from user interviews or usability tests can easily influence how the research is interpreted.

    3. Design feedback

      Another potential entry point for framing during the design process is design feedback. Overly specific feedback can frame the problem in a way that limits the design solution, or influence it in a way that causes other important considerations to be overlooked.

How to avoid

    1. Think through the context

      Jumping to solutions when we should still be gathering information is a constant challenge for team members, especially designers. While it may be tempting to dive into solutions right away, taking a little more time to think through the context will significantly impact how we interpret the data. It’s only once we spend ample time with the gathered data that we can begin to synthesize useful insights from it and identify patterns that aren’t obvious on the surface.

    2. Gather more context

      Instead of making a decision based on however much data you happen to have, wait until you have enough to make an informed one. Sometimes research uncovers what we still need to learn more about. You’ll know you have enough when you have a clear understanding of the problem and sufficient information to begin working on a solution.

    3. Switch your view

      Another way to diagnose whether your opinion is being influenced by framing is to switch up your point of view. You can do this by reversing a data point from a success rate to a failure rate, or by taking the opposite approach to articulating a problem, as sketched below. The point is to perceive the data from another angle so that specific information isn’t unduly emphasized or excluded.
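      As a rough illustration of the first technique, the same usability result can be stated in either frame (the task, numbers, and reframe function below are all hypothetical):

          # Hypothetical example: re-express a single data point in both frames
          # so it can be read from another angle.
          def reframe(completed: int, attempted: int) -> dict:
              success_rate = completed / attempted
              return {
                  "success frame": f"{success_rate:.0%} of participants completed the task",
                  "failure frame": f"{1 - success_rate:.0%} of participants did not complete the task",
              }

          print(reframe(completed=18, attempted=24))
          # success frame: '75% of participants completed the task'
          # failure frame: '25% of participants did not complete the task'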

Case study

The Framing of Decisions and the Psychology of Choice

Microscopic view of a deadly disease.

In 1981, Amos Tversky and Daniel Kahneman explored how different phrasing affected participants’ responses to a choice in a hypothetical life-and-death situation. In the study, participants were asked to choose between two treatments for 600 people affected by a deadly disease. Treatment A was predicted to result in 400 deaths, whereas Treatment B had a one-third chance that no one would die but a two-thirds chance that everyone would die. This choice was presented to participants either with positive framing (i.e. how many people would live) or with negative framing (i.e. how many people would die). Treatment A was chosen by 72% of participants when it was presented with positive framing (“saves 200 lives”), dropping to 22% when the same choice was presented with negative framing (“400 people will die”).
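A quick back-of-the-envelope check of the figures above (a sketch, not part of the original study materials) shows why the choice comes down to framing alone: in expectation, the two treatments are identical.

    # Sketch based on the figures above: both treatments have the same expected
    # outcome, so only the framing of the description differs.
    population = 600

    # Treatment A: 200 people saved with certainty (i.e. 400 deaths).
    expected_saved_a = 200

    # Treatment B: one-third chance everyone is saved, two-thirds chance no one is.
    expected_saved_b = (1 / 3) * population + (2 / 3) * 0

    print(expected_saved_a, expected_saved_b)  # 200 200.0, the same expected number of survivors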

The participants’ choices in this study highlight our tendency to make decisions based on whether the options are presented with positive or negative connotations, e.g. as a loss or as a gain. In psychology, this is known as the framing effect, and it affects all aspects of the design process, from interpreting research findings to selecting design alternatives.

Related

Interviewer bias

A bias caused by mistakes made by the interviewer, which may include influencing the respondent in some way, asking questions in the wrong order, or using slightly different phrasing (or tone of voice) than other interviewers.

The Hawthorne effect

A type of reactivity in which individuals modify an aspect of their behavior in response to their awareness of being observed.

Social desirability bias

A type of response bias that is the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others.

Consider what you don’t see

How it occurs

    1. Design research

      One critical way that survivorship bias can creep into the design process is through the participants we include in (or leave out of) design research. A lack of participant diversity inevitably leads to a lack of data diversity, and considering only one perspective reduces the reliability of the data.

    2. Design feedback

      Design feedback is another crucial part of the design process, and another place where survivorship bias can take hold. The best design is the one that considers a diversity of perspectives, especially with regard to the people who will be using it. Focusing only on the positive feedback from our peers, for example, is likely to result in solutions that aren’t resilient enough. If the team we receive feedback from lacks diversity, so too will the input we receive during the design process.

How to avoid

    1. Factor in failure

      While survivorship bias is common within the design process, there are ways to counter it. When we focus too much on success stories, positive metrics, or ‘happy paths’ in our designs, we lose sight of how the design responds when things fail. Considering only positive feedback is likely to lead to a one-sided design approach. It’s best to factor in failure: consider where things can go wrong, make edge cases central to the design process, and seek out diverse perspectives during design reviews. In other words, we can make our designs more resilient by considering the unhappy path just as thoroughly as the happy one. In the process of designing for less-than-ideal scenarios, we address the fundamental features needed by everyone.

    2. Recognize the limits of quantitative data

      We must remember that quantitative data reflects only the actions currently available to people and therefore has the potential to limit our thinking. As Erika Hall points out in Just Enough Research, “By asking why we can see the opportunity for something better beyond the bounds of the current best”. We should consider what the quantitative data isn’t telling us in order to make more informed design decisions.

Case study

Abraham Wald’s Work on Aircraft Survivability

Damaged WW2 bomber.

During World War II, the Statistical Research Group at Columbia University was asked by the U.S. military to examine the damage done to bombers that had returned from missions. The objective was to determine where armor could be added to increase the bombers’ protection from flak and bullets. Returning planes were carefully examined, and the damage was compared across all of them. The consensus was that additional armor should be added to the areas that showed the most common patterns of damage: the wings, the tailfin, and the middle of the plane.

Luckily, a statistician on the project named Abraham Wald countered this conclusion by pointing out that the planes being examined were only those that had survived, so the calculations were missing a critical set of data: the planes that didn’t make it back. As a result, Wald recommended that armor be added to the areas that showed the least damage, in order to reinforce the most vulnerable parts of the planes.
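To make the statistical point concrete, here is a toy simulation (all numbers hypothetical, not drawn from the historical data): hits land evenly across the aircraft, but hits to the engine are usually fatal, so engine damage looks rare on the planes that return to be examined.

    import random

    # Toy simulation with hypothetical numbers: hits land uniformly across four
    # areas, but engine hits are usually fatal, so the damage we can observe on
    # returning planes is conditioned on survival.
    random.seed(0)
    AREAS = ["wings", "tailfin", "fuselage", "engine"]
    FATALITY = {"wings": 0.10, "tailfin": 0.10, "fuselage": 0.15, "engine": 0.80}

    observed_damage = {area: 0 for area in AREAS}
    for _ in range(10_000):
        hit = random.choice(AREAS)             # where a plane is actually hit
        if random.random() > FATALITY[hit]:    # the plane survives and is examined
            observed_damage[hit] += 1

    print(observed_damage)  # engine damage is rare among survivors, not rare overall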

The case of the World War II bombers highlights our tendency to concentrate on the people or things that made it past a selection process and to overlook those that did not, typically because of their lack of visibility. This can lead to false conclusions in several different ways. The tendency is known as survivorship bias, and it can show up at critical points in the design process.

Related

Sampling bias

A bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others.

Availability heuristic

A mental shortcut that relies on immediate examples that come to a given person’s mind when evaluating a specific topic, concept, method, or decision.

Don’t build solutions in search of a problem

How it occurs

    1. User testing

      One place we can anticipate confirmation bias showing up in the design process is during user testing: for example, when we focus on test findings that validate the desired outcome.

    2. Design feedback

      As with survivorship bias, the best design is the one that considers a diversity of perspectives, especially with regard to the people who will be using it. This is also an important consideration with confirmation bias, where we may block out feedback, or become more critical of it, when it doesn’t validate or support the design direction we’d prefer.

How to avoid

    1. Multi-faceted user research

      You’ve probably heard the saying “pay attention to what users do, not what they say”. User actions and feedback are seldom aligned, and this can introduce inaccurate data into the design process if we rely too heavily on user feedback alone. User feedback is great for understanding how users think, but not for understanding how they act. One way to combat confirmation bias is to take a multi-faceted approach to user research: combining user interviews, usability testing, and quantitative analysis of people’s actions helps avoid the bias that comes with relying on any single method.

    2. Red team, blue team

      Another effective approach to combating confirmation bias is to designate a separate team (the red team) to pick apart a design and find its flaws. In his book Designing for Cognitive Bias, David Dylan Thomas points out the effectiveness of an exercise called ‘Red Team, Blue Team’, in which the red team seeks to uncover “every little unseen flaw, every overlooked potential for harm, every more elegant solution that the blue team missed because they were so in love with their initial idea”. This can be an efficient way both to quickly surface the bias embedded in the team’s design output and to avoid the pitfalls of falling in love with the wrong idea.

    3. Watch for overly optimistic interpretation of data

      A critical eye is necessary during design research. We must be diligent not to interpret data the wrong way, especially if the interpretation favors the desired outcome. When we watch for an overly optimistic interpretation of research findings or design feedback, we can avoid confirming a preexisting bias towards a specific option or approach.

Case study

Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence

Lethal injection bed.

People who hold strong opinions on complex social issues are likely to examine relevant empirical evidence in a biased manner, accepting confirming evidence at face value while critically scrutinizing evidence that disconfirms their opinions. This was the conclusion of a study on capital punishment conducted in 1979 at Stanford University. In the study, each participant read a comparison of murder rates in U.S. states with and without the death penalty, and then a comparison of murder rates in a state before and after the introduction of the death penalty. The participants were asked whether their opinions had changed. Next, they read a more detailed account of each comparison’s procedure and had to rate whether the research was well conducted and convincing. One comparison was presented as supporting the deterrent effect of the death penalty and the other as undermining it; for other participants, the conclusions were swapped. In fact, both comparisons were fictional.

The participants, whether supporters or opponents of capital punishment, reported shifting their attitudes slightly in the direction of the first comparison they read. After reading the more detailed descriptions of the two comparisons, almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations.

We tend to search for, interpret, favor, and recall information in a way that confirms or supports our prior beliefs or values. This tendency is known as confirmation bias, and when we aren’t careful it can show up in the design process as well.

Related

False consensus effect

A pervasive cognitive bias that causes people to see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances.

Observer-expectancy effect

A form of reactivity in which a researcher’s cognitive bias causes them to subconsciously influence the participants of an experiment.

Semmelweis reflex

A metaphor for the reflex-like tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs, or paradigms.

Know your blind spots

How it occurs

The implicit biases that govern our decision-making and judgments are always present. They are part of our design as humans, enabling us to efficiently process information and get through the day without getting overwhelmed. They are useful mental shortcuts that can become a problem when we’re unaware of them and we base our decisions and judgments on the assumptions and perspectives we believe to be true.

How to avoid

    1. Acknowledge your bias

      The moment we acknowledge that we are operating under unconscious bias, we are then empowered to change it. We must first seek to understand the biases that affect our judgment and decision-making ability. Once we recognize the fact that our own biases can influence us, we can begin the process of identifying them and determine the best approach for overcoming them.

    2. Seek perspective

      Feedback helps to identify our unconscious bias and how it’s affecting our judgment and decision-making ability. Our work becomes more inclusive once we embrace a plurality of perspectives. We must seek a variety of perspectives to broaden how we think of our work and the value it provides people.

    3. Cultivate diversity

      Diverse teams bring diversity in life experience and increase a team’s ability to counter the unconscious bias present in homogeneous groups. We must cultivate diversity within our teams to bring a diversity of thinking to our work.

Case study

Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot

Definition of bias selected diagram.

A study published in 2006 by Emily Pronin and Matthew Kugler explored how people make judgments about themselves and others. The study began with the researchers explaining cognitive bias to the participants, who were then asked how cognitive bias might affect their judgment of the other participants. The researchers found that participants rated themselves as less susceptible to bias than the others in the experiment. When asked to explain the judgments of others, participants relied on what they could observe; in contrast, when asked to explain their own judgments, they looked inward and searched their thoughts and feelings for biased motives. Because biases operate unconsciously, this introspection was not informative, yet participants mistakenly interpreted it as evidence that they were immune to bias.

We believe that we are rational, that our actions and judgments are accurate, and that they are not influenced by unconscious bias. However, the study described above highlights our tendency not to see our own biases and to see ourselves as less susceptible to bias than others. In psychology, this is known as the bias blind spot, and its potential to undermine decision-making has a profound impact on design teams.

Related

Egocentric bias

The tendency to rely too heavily on one’s perspective and/or have a higher opinion of oneself than reality.

False-consensus effect

The tendency to assume that our personal qualities, characteristics, beliefs, and actions are relatively widespread through the general population.

Reason over recall

How it occurs

Mental shortcuts such as the availability heuristic most commonly occur when we need to make quick decisions and therefore rely on information that is easily recalled. This falls within the category of System 1 thinking, or the mental events that occur automatically and require little or no effort. When we let this bias drive our design decisions, it can easily lead to setting goals based on what’s easy to measure (versus what’s valuable to measure) or blindly mimicking competitors as a result of ‘competitor research’.

How to avoid

    1. Invoke System 2 thinking

      When we make decisions based on what’s easy to recall, we fall into the trap of letting limited information guide us rather than reasoning through the situation. One effective approach to avoiding the pitfalls of this bias is to intentionally invoke System 2 thinking, which requires deliberate processing and contemplation. The enhanced monitoring of System 2 thinking works to override the impulses of System 1 and give us the room to slow down, identify our biases, and more carefully consider the impact of our decisions.

    2. Valuable metrics only

      Relying on data that quickly comes to mind can easily affect how success is defined within a product or service. When project goals are set based on what’s easy to measure versus what’s valuable to measure, the measure becomes a target and we game the system to hit that target (e.g. clicks, DAU, MAU). This ultimately leads to losing track of the needs and goals of the people who use the product or service. The alternative is taking the time to understand what people’s needs and goals actually are and then defining the appropriate metrics that correspond to them.

Case study

Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment

Flooded city.

A study conducted by Amos Tversky and Daniel Kahneman in 1982 asked a sample of 245 University of British Columbia undergraduates to evaluate the probability of several catastrophic events in 1983. The events were presented in two versions: one that included only the basic outcome and another that included a more detailed scenario leading to the same outcome. For example, half of the participants evaluated the probability of a massive flood somewhere in North America in which more than 1000 people drown. The other half evaluated the probability of an earthquake in California, causing a flood in which more than 1000 people drown. A 9-point scale was used for the estimations: less than .01%, .1%, .5%, 1%, 2%, 5%, 10%, 25%, and 50% or more.

What Tversky and Kahneman found was that the estimates for the conjunction (earthquake and flood) were significantly higher than the estimates for the flood alone (p < .01, by a Mann-Whitney test). Even though the chance of a flood in California is smaller than that of a flood anywhere in North America, participants estimated that a flood provoked by a California earthquake was more likely. The researchers concluded that because an earthquake causing a flood in California is more specific and easier to imagine (versus a flood in an ambiguous area like all of North America), people judge it to be more probable. The same pattern was observed in other problems.
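The arithmetic the participants’ estimates violated is the conjunction rule: a joint event can never be more probable than either of its components. A small sketch with hypothetical probabilities (none of these numbers come from the study):

    # Hypothetical probabilities illustrating the conjunction rule: P(A and B)
    # can never exceed P(A), no matter how vivid the combined scenario is.
    p_flood = 0.05               # hypothetical: a large flood somewhere in North America
    p_quake_given_flood = 0.5    # hypothetical: share of those floods caused by a California quake

    p_quake_and_flood = p_flood * p_quake_given_flood
    assert p_quake_and_flood <= p_flood   # always holds, whatever the numbers

    print(p_flood, p_quake_and_flood)     # 0.05 0.025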

The study highlights our tendency to believe that the easier something is to recall, the more frequently it must happen. This tendency, known as the availability heuristic, is a common bias we fall into because it lets our brains draw conclusions with little mental effort or strain, based on whatever evidence is immediately available to us. It can also heavily influence our decision-making and prevent us from seeing the bigger picture.

Related

Survivorship bias

The logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility.

Confirmation bias

The tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values.

You are not the user

How it occurs

People often overestimate the degree to which other people will agree, think, and behave the way they do. This applies to the designers, developers, and UX researchers tasked with creating digital interfaces as well: we overgeneralize when we assume that others will perceive and understand an interface the same way we do. The false-consensus effect tends to surface in the design process when there is a lack of data on how actual users respond to our designs.

How to avoid

    1. Identify your assumptions

      The first step is to identify the assumptions you or your team are making about the intended users. Whether it’s assumptions related to their needs, pain points, or how they accomplish tasks, we must begin by acknowledging the things we are assuming. Once assumptions have been identified, they should be prioritized by the amount of risk they carry and investigated via tests.

    2. Test with real users

      The next step is to conduct user interviews and usability tests with the intended audience. User interviews are critical in understanding the decision-making process, needs and frustrations, opportunities, and how steps are taken to complete tasks. Usability tests are also quite effective for understanding how actual users respond to your designs by watching them use these designs and challenging your assumptions.

Case study

False Consensus Effect: An Egocentric Bias in Social Perception and Attribution Processes

Space rocket launching.

A 1976 study conducted at Stanford University presented participants with hypothetical situations and asked them to estimate what percentage of people would make each of two choices. For example, one situation asked participants to determine what percentage of their peers would vote for a referendum allocating large sums of money to a revived space program aimed at manned and unmanned exploration of the moon and the planets nearest Earth. Some additional context was given: supporters would argue that the program would provide jobs, spur technology, and promote national pride and unity, while opponents would argue that it would increase taxes or drain money from important domestic priorities without achieving its goals.

Once the participants provided their estimates, they were asked to disclose what they would do in the situation and to fill out two questionnaires about the personality traits of those who would make each of the two choices. The researchers concluded that participants not only expected most of their peers to make the same choices they did, but also assumed that those who opted for the opposite choice had more extreme personality traits.

The study highlights our tendency to assume that our personal qualities, characteristics, beliefs, and actions are relatively widespread while alternate points of view are rare, deviant, and more extreme. This bias, known in psychology as the false-consensus effect, can skew our thinking and negatively influence design decisions.

Related

Availability bias

A mental shortcut that relies on immediate examples that come to a given person’s mind when evaluating a specific topic, concept, method or decision.

Negativity bias

The notion that, even when of equal intensity, things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful/traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things.

Loss aversion

The tendency to prefer avoiding losses to acquiring equivalent gains.