Note: This post was authored by Suzi Tripp, Senior Director of Innovative Solutions at Brooks Bell, and Kristen Berman, Co-Founder of Irrational Labs.
The ethical debate begins, and it has two elements. First, is it right to experiment on your users? And second, what is a company’s intent behind any new feature, product, or marketing campaign?
Let’s discuss the first: experimentation.
A company’s mental model sometimes goes something like: “It is not our role to make decisions for the user. It’s our role to design a product that empowers the user to do what’s right for them.” This mental model is the heart of all debates around limiting choices, defaults, and other behavioral economics tactics.
This mental model seems strong because it feels good. It feels like we’re on the righteous side of the argument, fighting for the user and against manipulative design.
But the problem with this mental model is that it’s impossible. One of the major lessons of the past half century of psychological research is that it’s impossible to design a system in which the user has full agency over their decisions. The system (i.e., what your buttons say, what your copy implies, the form design, the feature design) has an outsized impact on our decisions. And thus, by being the one who designs the system (however we choose to do it), we are inherently designing our users’ decisions.
When we understand the power that companies have over our decisions, it becomes unethical when they do NOT experiment on their users. If Facebook doesn’t experiment on the newsfeed, it implies we think the engineer who first designed the newsfeed algorithm got it 100% right. It implies we are OK with giving him or her the power to design our decisions without any data on how the system is actually influencing us. We should applaud Facebook for this experiment and its attempt to understand how its system influences our emotions.
The second ethical question is more difficult. It is intent.
At the center of this ethical debate is the imbalance of power between a company and its customers. The anger over this Facebook experiment reflects society’s intuition that Facebook’s intent is not pure: it appears to the public that Facebook is de-prioritizing its users’ welfare. The experiment reminded us how little control we have over our environment (and thus our decisions), and how much control Facebook has over us.
For-profit companies have a mandate from their shareholders and boards to drive profit. Driving profit may also drive customer value, but it can just as easily drive high prices and negative customer value (think tobacco companies and gambling).
How should a company design its systems to ensure it’s acting in the best interest of its users?
Measure short-term and long-term value.
For example, if Zynga succeeded in getting me to play games for five hours a day, it may have provided me short-term value (otherwise I wouldn’t keep playing). But it may not be delivering long-term value. Humans make different decisions for their short-term (today’s) self than for their long-term (tomorrow’s) self. Because of this, Zynga would need to measure the long-term impact of heavy gameplay and whether it corresponds to positive well-being for its users. If a company only measures short-term value, it may harm its customers’ welfare. We are what we measure.
Be transparent.

In October 2017, BehavioralScientist.org published its recommendations for researchers along with an ethics checklist. In it, the authors cited the online dating company OkCupid for being open about how it employs users’ data for research and for regularly publishing, on its blog, summaries of the insights it has gained (as opposed to Tinder, which runs experiments but doesn’t publicize the insights). Transparency often works because it assures people that the other party is unlikely to misbehave, since its behavior is being monitored. It also helps people understand what’s actually happening.
Give users a way to fight back.
Customers often have no recourse against a company when they feel they have been wronged. This amplifies the unbalanced power dynamic. To mitigate it, a company can give customers a way to publicize their grievances. This not only empowers the user but also signals that the company has a long-term mindset about the relationship, because it puts its reputation at risk. James Andreoni and John Miller ran a version of the prisoner’s dilemma and found that people are much more trusting when they expect their interactions to extend over a longer period of time. Players with consistent partners cooperated 63% of the time, compared to 35% when partners switched after each round. Be the long-term partner your customers can communicate with and trust.
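The mechanism behind that finding can be sketched in a toy simulation. This is an illustration of why repeated interaction sustains cooperation, not a reproduction of Andreoni and Miller’s actual design: two tit-for-tat players who keep a shared history reinforce each other’s cooperation, while re-matching with history-less strangers each round (modeled here as cooperating only half the time) erodes it.

```python
import random

def tit_for_tat(partner_history):
    """Cooperate on the first move, then copy the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def cooperation_rate(rounds, fixed_partner, seed=0):
    """Fraction of moves that are cooperative over `rounds` rounds.

    fixed_partner=True: two tit-for-tat players keep a shared history,
    so cooperation, once started, is self-reinforcing.
    fixed_partner=False: histories reset every round (a new stranger),
    and the stranger is modeled as cooperating only half the time.
    """
    rng = random.Random(seed)
    coop = 0
    hist_a, hist_b = [], []  # moves each player has observed from the other
    for _ in range(rounds):
        if not fixed_partner:
            hist_a, hist_b = [], []  # no shared history with a stranger
        a = tit_for_tat(hist_a)
        b = tit_for_tat(hist_b) if fixed_partner else rng.choice("CD")
        coop += (a == "C") + (b == "C")
        hist_a.append(b)
        hist_b.append(a)
    return coop / (2 * rounds)

print(cooperation_rate(100, fixed_partner=True))   # 1.0: cooperation every round
print(cooperation_rate(100, fixed_partner=False))  # noticeably lower with strangers
```

The gap between the two rates is the point: the strategy is identical in both conditions, and only the expectation of a continuing relationship changes the outcome.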
At Irrational Labs, the mantra is to use “irrationality for good.” What does this mean? The team uses its in-depth understanding of human decision-making (which is often predictably irrational) to help improve people’s lives. It does this through experiments designed to uncover successful paths to wiser financial decision-making and healthier lifestyles. Here are two principles Irrational Labs uses to ensure it is always using its powers for good.
- Focus on a key behavior that the majority of people desire for their long-term well-being.
For example, Irrational Labs partnered with a large insurance company to provide recommendations on cafeteria design. The goal was to increase healthy food consumption by making the options more appealing and easier to purchase. This did not reduce unhealthy food options but instead made the fight a little more fair.
- Ensure the business goals are aligned with customer goals.
Irrational Labs partnered with a large company to redesign its healthcare cost calculator for seniors. The business goal was to get people to click on the recommended actions at the end of the calculator. The customer goal was to increase the likelihood of taking an action that could help reduce future healthcare costs. This clear alignment helped ensure the customer’s well-being was put first.
We encourage you, as a choice architect, to prioritize experimentation and to put guardrails in place that ensure you’re driving increases in people’s welfare.
Now that we’ve introduced you to Irrational Labs, shared that even experts are surprised, demonstrated ways to incorporate behavioral economics alongside your data, and reviewed the ethics around using behavioral economics for good – we’ll dive into some of our favorite behavioral science principles. That’s coming next month, so stay tuned!
About the authors:
Kristen Berman, Co-founder of Irrational Labs, Author, Advisor & Public Speaker
Kristen helps companies and nonprofits understand and leverage behavioral economics to increase the health, wealth and happiness of their users. She also led the behavioral economics group at Google, a group that touches over 26 teams across the company, and hosts one of the top behavioral change conferences globally, StartupOnomics. She co-authored a series of workbooks called Hacking Human Nature for Good: A practical guide to changing behavior, with Dan Ariely. These workbooks are being used at companies like Google, Intuit, Netflix, Fidelity, and Lending Club for business strategy and design work. Before designing, testing and scaling products that use behavioral economics, Kristen was a senior product manager at Intuit and at the camera startup Lytro. Kristen is an advisor for Loop Commerce, the Code For America Accelerator and the Genr8tor Incubator, and has spoken at Google, Facebook, Fidelity, Equifax, Stanford, the Bay Area Computer Human Interaction seminar and more.
Suzi Tripp, Sr. Director of Innovative Solutions
At Brooks Bell, Suzi sets the course of action for impact-driving programs while working to ensure the team is utilizing and advancing our test ideation methodology to incorporate quantitative data, qualitative data, and behavioral economics principles. She has over 14 years of experience in the industry; her areas of expertise include strategy, digital, communications, and client service. Suzi has a BS in Business Management with a concentration in marketing from North Carolina State University.