Ethical A/B Testing – How To Use Behavioral Economics for Good

Note: This post was authored by Suzi Tripp, Senior Director of Innovative Solutions at Brooks Bell, and Kristen Berman, Co-Founder of Irrational Labs.

The ethical debate has two elements. First, is it right to experiment on your users at all? And second, what is a company's intent behind any new feature, product, or marketing campaign?

Let’s discuss the first: experimentation.

A company’s mental model sometimes goes something like: “It is not our role to make decisions for the user. It’s our role to design a product that empowers the user to do what’s right for them.” This mental model is the heart of all debates around limiting choices, defaults, and other behavioral economics tactics.

This mental model seems strong because it feels good. It feels like we're on the righteous side of the argument, fighting for the user and against manipulative design.

But the problem with this mental model is that it's impossible. One of the major lessons of the past half century of psychological research is that it's impossible to design a system in which the user has full agency over their decisions. The system (i.e., what your buttons say, what your copy implies, the form design, the feature design) has an outsized impact on our decisions. And thus, by being the ones who design the system (however we choose to do it), we are inherently designing our users' decisions.

Once we understand the power that companies have over our decisions, it becomes unethical for them NOT to experiment on their users. If Facebook doesn't experiment on the newsfeed, that implies we think the engineer who first designed the newsfeed algorithm got it 100% right. It implies we are OK giving him or her the power to design our decisions without data on how the algorithm is actually influencing us. We should applaud Facebook for this experiment and its attempt to understand how its system is influencing our emotions.

The second ethical question is more difficult. It is intent.

At the center of this ethical debate is the imbalance of power between a company and its customers. The anger over this Facebook experiment reflects society's intuition that Facebook's intent is not pure. It appears to the public that the company is de-prioritizing its users' welfare. The experiment reminded us how little control we have over our environment (and thus our decisions), and how much control Facebook has over us.

For-profit companies have a mandate to their shareholders and board members to drive profit. Driving profit may also drive customer value, but it could just as easily drive high prices and negative customer value (think tobacco companies and gambling).

How should a company design its systems to ensure it's acting in the best interest of its users?

Measure short-term and long-term value.
For example, if Zynga succeeded in getting me to play games for five hours a day, they may have provided me short-term value (otherwise I wouldn't keep playing), but they may not be delivering long-term value. Humans act differently when making decisions for their short-term, today's self than for their long-term, tomorrow's self. Because of this, Zynga would need to measure the long-term impact of heavy gameplay and whether it corresponds to positive well-being for its users. A company that only measures short-term value may harm its customers' welfare. We are what we measure.
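To make "we are what we measure" concrete, here is a minimal sketch of what reading out an experiment on both horizons could look like. The metric names (sessions_week_1, wellbeing_score_90d), the 90-day horizon, and the decision rule are all hypothetical assumptions for illustration; they are not anything Zynga or the authors describe.

```python
# Hypothetical sketch: score an A/B test on a short-term engagement metric
# AND a long-term well-being proxy, and refuse to call it a win when the
# two disagree. Metric names and the decision rule are illustrative only.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Iterable


@dataclass
class UserOutcome:
    variant: str                 # "control" or "treatment"
    sessions_week_1: float       # short-term value proxy: early engagement
    wellbeing_score_90d: float   # long-term value proxy: e.g. a day-90 survey


def lift(users: Iterable[UserOutcome], metric: Callable[[UserOutcome], float]) -> float:
    """Mean difference (treatment minus control) on one metric."""
    users = list(users)
    treatment = [metric(u) for u in users if u.variant == "treatment"]
    control = [metric(u) for u in users if u.variant == "control"]
    return mean(treatment) - mean(control)


def verdict(users: Iterable[UserOutcome]) -> str:
    users = list(users)
    short_term = lift(users, lambda u: u.sessions_week_1)
    long_term = lift(users, lambda u: u.wellbeing_score_90d)
    if short_term > 0 and long_term < 0:
        return "Engagement up, well-being down: don't ship on the short-term metric alone."
    if short_term > 0:
        return "Short-term and long-term value aligned: candidate to ship."
    return "No short-term lift: keep iterating."


if __name__ == "__main__":
    sample = [
        UserOutcome("control", 3.0, 7.1),
        UserOutcome("control", 2.5, 6.8),
        UserOutcome("treatment", 5.0, 6.2),
        UserOutcome("treatment", 4.5, 6.0),
    ]
    print(verdict(sample))
```

The only design point the sketch is meant to illustrate is that the long-term proxy is a first-class input to the ship decision, not a dashboard afterthought.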

Create transparency.
In October 2017, BehavioralScientist.org published its recommendations for researchers along with an ethics checklist. In it, they cited the online dating company OkCupid for being open about how it uses its users' data for research; OkCupid regularly publishes summaries of the insights it has gained on its blog (as opposed to Tinder, which runs experiments but doesn't publicize the insights). Transparency often works because it assures people that the other party, knowing its behavior is being monitored, is unlikely to misbehave. It also helps people understand what's actually happening.

Give users a way to fight back.
Customers often have no recourse against a company when they feel they were wronged. This amplifies the unbalanced power dynamic. To mitigate it, a company can give customers a way to publicize their grievances. This not only empowers the user but also signals that the company has a long-term mindset about the relationship, because it puts its reputation at risk. James Andreoni and John Miller ran a version of the repeated prisoner's dilemma and found that people are much more trusting when they expect their interactions to continue over a longer period. Players with consistent partners cooperated 63% of the time, compared with 35% when partners switched after each round. Be the long-term partner your customers can communicate with and trust.

At Irrational Labs, the mantra is to use “irrationality for good.” What does this mean? They use their in-depth understanding of human decision-making (which is often predictably irrational) to help improve people’s lives. They do this through experiments designed to uncover successful paths to wiser financial decision-making and healthier lifestyles. Here are two principles that Irrational Labs uses to ensure they are always using their powers for good.

  1. Focus on a key behavior that the majority of people desire for their long-term well-being.
    For example, Irrational Labs partnered with a large insurance company to provide recommendations on cafeteria design. The goal was to increase healthy food consumption by making the options more appealing and easier to purchase. This did not reduce unhealthy food options but instead made the fight a little more fair.
  2. Ensure the business goals are aligned with customer goals.
    Irrational Labs partnered with a large company to redesign its healthcare cost calculator for seniors. The business goal was to get people to click on the recommended actions at the end of the calculator. The customer goal was to increase the likelihood of taking an action that could help reduce future healthcare costs. This clear alignment helped ensure the customer’s well-being was put first.

We encourage you, as a choice architect, to prioritize experimentation and to always put guardrails in place that drive real increases in people’s welfare.