How To Apply Behavioral Economics To Your A/B Testing Strategy

At Brooks Bell, we’re obsessed with behavioral economics (BE). We’ve integrated the principles and experiments explored by Ariely and Cialdini into our methodology and everyday conversations with our clients.

We’ve seen the way this knowledge can positively influence experimentation programs and create better user experiences. In a recent post, I shared my must-reads for those of you interested in taking your ideation to the next level with BE. But don’t just take it from me…

We’re excited to announce that we’ve partnered with Irrational Labs, founded by Dan Ariely and Kristen Berman, to teach you about BE and help you build it into your toolbox, culture and process. Each month in “Oh BEhave,” we’ll bring you new content designed to give you knowledge that will take your experiment ideation to the next level.

To tell you more about Irrational Labs, we’ll turn to Kristen Berman.

Kristen founded Irrational Labs with Dan Ariely in 2013. She helps companies and non-profit organizations understand and leverage BE to increase the health, wealth and happiness of their users. She has spoken at Google, Facebook, Fidelity, Equifax, Stanford and more – and we're looking forward to partnering with her to bring "Oh BEhave" to you.

Let’s get started:

Suzi: What is Irrational Labs, and what inspired you to create it?

Kristen: I was a product manager at Intuit, leading QuickBooks Online customer discovery and first-use experience. After talking to five or six customers, we consistently felt like they were misleading us. They would suggest changes to the product, and our team would implement them – but nothing would happen to our conversion numbers. Their suggestions did not improve the experience.

After these disappointing attempts, I got hungry for a more scientific approach to product development. Why don't people actually complete a sign-up process? How could we motivate someone to do a task they normally put off – like accounting? Dan Ariely had the answer. At the time, Dan was a consultant for Intuit. I heard him speak and immediately knew that the secret sauce missing from product development was the rigor of BE. Dan and I started working together formally a couple of years later, under the same premise: how can we use the science of behavior change to help companies design features that make their customers better off?

Behavioral science is a shortcut to product development. It gives us a language, tool kit and experimental mindset to understand why people do the things they do.

Suzi: At Brooks Bell, we help our clients drive results and customer insights through optimization. We do some personalization, advanced segmentation and multivariate tests – but the foundation of our programs is A/B experimentation. What types of experiments do you run at Irrational Labs?

Kristen: At Irrational Labs we dabble in a variety of domains, but we mainly focus on working with tech companies running A/B experiments. For example, we are currently running an experiment to figure out how to design incentives that optimally motivate employees. Our partner, GrowBot, allows managers to send employee bonuses in the form of compliments, money or other rewards. We're testing different incentives to measure which most effectively drives behavior.

Another experiment in the works is around choice architecture. We vary the number and framing of the choices displayed to figure out the optimal matrix. By understanding how customers make decisions, we can design the environment to increase their well-being.

Suzi: Those experiments sound familiar! There’s a lot of crossover with the types of experiments we’re running, as well. Are academic experiments different from conversion-driven experiments?

Kristen: Yes and no. The basic components of an academic experiment are two-fold: Participants are selected from a random sample of users, and there is a “control” group.

Most companies nail the first one. A/B experiments typically have a randomly selected sample. However, conversion-focused experiments frequently trip up on the second element – a control.

For an experiment to be trusted, the conditions must differ by only one variable. It's not okay to change multiple things at once! For example, you can't change both the button AND the image on your landing page; otherwise, it's unclear what actually drove the results. It's imperative to be rigorous about your control to ensure the test's validity. While many conversion-driven teams say they understand this concept, we frequently find that people cut corners and bundle many changes into one test.
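
To make those two requirements concrete – random assignment plus a variant that differs from control in only one element – here's a minimal sketch in Python. The experiment name, variant contents and helper function are hypothetical, not taken from any particular testing tool; most A/B platforms handle this bucketing for you.

```python
import hashlib

# Hypothetical experiment config: the variant changes exactly ONE element
# relative to the control (the button text); the hero image stays identical.
EXPERIMENT = {
    "name": "landing_page_cta",
    "variants": {
        "control":   {"button_text": "Sign Up",          "hero_image": "hero_a.jpg"},
        "variant_b": {"button_text": "Start Free Trial", "hero_image": "hero_a.jpg"},
    },
}

def assign_variant(user_id: str, experiment: dict) -> str:
    """Assign a user to a variant: evenly spread across buckets, yet stable per user."""
    digest = hashlib.sha256(f"{experiment['name']}:{user_id}".encode()).hexdigest()
    names = sorted(experiment["variants"])
    return names[int(digest, 16) % len(names)]

if __name__ == "__main__":
    for uid in ["user-1", "user-2", "user-3"]:
        variant = assign_variant(uid, EXPERIMENT)
        print(uid, variant, EXPERIMENT["variants"][variant])
```

Hashing the user ID keeps the split random across users but consistent for any individual visitor across visits, and holding every field except the button text equal to control is what makes a lift (or drop) interpretable.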

Suzi: It can be tempting to change multiple things in one condition/variation when pressure for a positive result is high. But you're right – you sacrifice a clean test with reliable learnings for the hope of a positive result. If it works, you don't know why. And if it doesn't work, you don't know why and could make the same mistake again. I've certainly been in this scenario, and I think our readers can relate as well. That's a great reminder.

What advice would you give to optimization leads who want to bring BE into their organization?

Kristen: Read more. Test more. The first step to understanding social science is to appreciate the complexities of humans. There is no “one size fits all” solution for any problem. Over the years, research has helped us develop better intuitions on what will happen, but context is king. Every environment is different. Because of that, we must remember to test our key assumptions instead of assuming that they will work out.

Suzi: Yes, test everything! We've seen surprising results many times. I love that experimentation can prevent the negative impacts that would occur if changes were rolled directly into production. We should do a post on surprising tests!

I'm also a big fan of bringing the principles into the conversation. At Brooks Bell, we're using The Irrational Game, created by Dan Ariely. Each card represents an experiment, and participants try to predict the outcome. It's a great way for groups to learn the principles in a fun and interactive way.

On that note… What’s it like to work with Dan?

Kristen: Amazing! Dan is an incredible thinker, has world-class creativity and, of course, is the kindest and most generous person I have ever met. Dan challenges people to think about long-term outcomes vs. short-term ones. He works tirelessly (literally, people have no idea how he keeps going and going…) to bring behavioral science into the world. My favorite Dan quality? The humor. His quick wit reflects his knowledge and understanding of the human condition AND makes people laugh. When we start meetings, it is always with a joke. Over the last seven years, my admiration and love for him has only grown.

Next up, Kristen and Suzi will share some surprising experiment results, reinforcing the importance of experimenting before rolling out. From there, we’ll explore the ways quantitative and qualitative data can be used alongside BE to drive value – adding some new tools to your ideation toolbox!

Have a question for Kristen and the Irrational Labs team? Let me know! Email your questions to us, and we might explore them in a future post!