A/B Testing

Optimizing the Blackbaud Checkout experience

B2B2C Web Design, FinTech

Case Study

Project Overview
The Problem:
Complete Cover, a feature that lets Blackbaud customers offset processing fees, was released in 2022. It was a novel concept, and for Blackbaud to continue offering the feature, its take rate needed to increase by 0.5%.

The Solution:
Assemble a team to continually run A/B tests in order to optimize the user experience and increase the take rate.

The Results: 
Testing is ongoing, but as of February 2023 the take rate has increased by 0.9%.
My Contributions
My Role:
  • Led the project from a UX perspective
  • Contributed test ideas
  • Designed test UI
  • Designed around complicated technical constraints
  • Guided and educated team members to ensure dark patterns were not used to reach the take rate goal
Product screenshots
"Blackbaud Checkout" is Blackbaud's checkout workflow representing $6.3M in annual processing. Checkout allows a user to select their preferred payment method and then securely processes their payment. It is displayed in a modal to keep the burden of PCI compliance on Blackbaud instead of Blackbaud's customers.

Complete Cover is a new feature and is now the first step in the Checkout process. A/B testing as a way to validate designs is also new to Blackbaud.

This project was highly visible to leadership, as its results would influence whether or not the Complete Cover model was sustainable for the business long term.

The team: 
1 Product Manager, 1 UX Designer (me), 1 Software Engineer, and 1 Technical Writer

Background

Getting started

Complete Cover definition

Complete Cover is a processing fee offset feature provided by Blackbaud. If a customer creates a donation form, for example, and enables Complete Cover on that form, Blackbaud will pay the processing fees for all transactions made through that form. In other words, a customer who enables Complete Cover receives free processing for that form.

In turn, Blackbaud asks the donor to contribute an additional dollar amount to Blackbaud in order to sustain the Complete Cover feature.
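As a hypothetical illustration of the economics (the fee rate and ask rate below are made-up example values, not Blackbaud's actual figures), a minimal sketch in Python:

```python
# Hypothetical illustration of the Complete Cover ask.
# PROCESSING_FEE_RATE and ASK_RATE are made-up example values,
# not Blackbaud's actual numbers.

PROCESSING_FEE_RATE = 0.03  # example: 3% processing fee
ASK_RATE = 0.04             # example: 4% Complete Cover ask

def checkout_totals(donation: float) -> dict[str, float]:
    """Compare outcomes with and without Complete Cover enabled."""
    fee = donation * PROCESSING_FEE_RATE
    ask = donation * ASK_RATE
    return {
        "org_receives_without_cover": donation - fee,   # org absorbs the fee
        "org_receives_with_cover": donation,            # Blackbaud pays the fee
        "donor_total_if_ask_accepted": donation + ask,  # donor opts into the ask
    }

print(checkout_totals(100.00))
# {'org_receives_without_cover': 97.0, 'org_receives_with_cover': 100.0,
#  'donor_total_if_ask_accepted': 104.0}
```

The take rate is the share of donors who accept that ask.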

My team's A/B testing focused on this ask.

Original Complete Cover ask

Complete Cover workflow

When Complete Cover was first introduced to Checkout the only payment method offered was Pay by Card, so the Complete Cover ask was injected onto the "Payment method" screen.

We conducted a few tests using this workflow, but once additional payment methods could be offered with Complete Cover, I determined that it would be best to separate the Complete Cover ask from the "Payment method" selection screen.

This separation would:

Note: Separating the ask from the payment method screen was technically difficult and took time to implement.

Original Complete Cover ask workflow
Revised Complete Cover ask workflow

A/B Testing

A/B testing is a research method where different versions of the same screen are compared against one another to determine which performs better.

We tested variations against our initial control until one emerged as a "winner". That winning variation then became the new control, and we tested against it, and so on.

Defining a "winner"

An experiment was deemed successful if the metrics we measured showed a statistically significant increase without a statistically significant decrease in overall conversions or in revenue delivered to customers.
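In practice, comparing take rates between two variations boils down to a standard two-proportion z-test. Here is a minimal sketch with made-up numbers; our tooling performed these calculations for us:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in take rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: 12% vs. 13% take rate with 10,000 donors per variation.
p = two_proportion_z_test(1200, 10_000, 1300, 10_000)
print(f"p-value: {p:.4f}")  # ~0.03, below 0.05 -> statistically significant
```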

Metrics

Metrics measured with each test:

Optimizely & Backlog building

We used a tool called Optimizely to execute our A/B tests. Its implementation was technical and was handled by the engineer on our team.
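For context, tools like Optimizely typically work by deterministically bucketing each visitor into a variation so they always see the same experience. A rough sketch of that idea (not Optimizely's actual API) might look like:

```python
import hashlib

def assign_variation(user_id: str, experiment_id: str,
                     variations: tuple[str, ...] = ("control", "variation_b")) -> str:
    """Deterministically bucket a user so they always see the same variation."""
    # Hash the experiment/user pair so assignment is stable across visits.
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

print(assign_variation("donor-123", "complete-cover-copy-test"))  # stable per user
```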

While Optimizely was being implemented, we created a backlog of initial tests.

As a group, we examined the problem space, identified the primary metric we expected each test to affect, brainstormed A/B test ideas, wrote a hypothesis for each idea, and weighed each test's expected impact against the effort it would take our engineer to build it. After ranking those hypotheses, we compiled our initial backlog of tests.

Initial testing

Constraints

This project had complicated constraints. The two most notable were that the Complete Cover ask had to remain within the Checkout workflow (i.e., it could not be moved onto the donation form) and that the Checkout backend is quite rigid, meaning even small changes can be difficult to make.

Additional noteworthy constraints were that we had to use the word "Blackbaud" at least once and could not use the words "tip", "contribution", "additional amount", or "donation".

Initial tests

All of the tests in the initial backlog were based on assumptions about what we thought would improve the take rate. Unfortunately, some of the tests we expected to have the biggest impact were not technically possible to build at the time due to the rigid nature of the Checkout backend. Eventually, with more time and some creative UX design work, we were able to build these tests.

List of initial tests:
Ranked from highest to lowest impact and effort

Test examples

Donation summary test
Selection mechanism test
Copy change test

Initial test results

None of the initial tests we ran produced a change in the metrics we measured.

As a group, we reviewed our testing process and noted some ways we could improve.

Insights

Because A/B testing was new to the company, we were leading the way in determining how to run tests, how to communicate with stakeholders, and how to document our findings.

Testing insights

We quickly learned that, to be confident a change had an impact on our metrics, we needed to make only one change per test and let each test run long enough to reach statistical significance.
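"Long enough" can be estimated up front with a standard sample-size calculation. A sketch with hypothetical numbers (our actual baselines and traffic are not shown here):

```python
from statistics import NormalDist

def sample_size_per_variation(baseline: float, lift: float,
                              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate donors needed per variation to detect `lift` over `baseline`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / lift ** 2
    return int(n) + 1

# Example: detecting a 0.5% absolute lift over a hypothetical 12% baseline take rate.
print(sample_size_per_variation(0.12, 0.005))  # roughly 67,500 donors per variation
```

Small lifts over small baselines require a lot of traffic, which is why ending a test early is so tempting and so misleading.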

Additionally, we learned that donors making different-sized donations behave quite differently from one another. For example, someone making a $10 donation behaves differently than someone making a $1,000 donation.

Documentation and communication insights

As time went on and more customers began to adopt Complete Cover, more people started noticing that we were making changes to the Checkout experience.

We learned that we needed to better document our tests, their results, and when they were turned on and off. We also needed to communicate with a wider audience, like Support, to keep everyone in the loop and up to date on which versions of Checkout customers might be seeing.

To remedy our communication problem, we repurposed a slide deck I had presented to stakeholders about our A/B testing efforts into a communication tool. The deck included a slide for each test documenting the hypothesis, an image of the control variation, an image of the change made (Variation B), the dates the test ran, and the results. It also included slides defining our metrics, our goals, and A/B testing itself. The deck was saved in a shared location so anyone could access it at any time.

Continued testing & what's next

Continued testing

Over time, as we kept testing, our team got better at interpreting test results and determining what to test next. By refining our testing approach, we became more targeted and strategic, which resulted in tests that increased the take rate.

For example, we determined that it would be beneficial to figure out which tip options should be presented for the user to select from. We ran multiple tests with different percentage options and, using the best performer from those tests, shifted our focus to determining which option to select by default.

From these tests we learned valuable information about donor behavior at different donation amounts.

What's next

Testing the Complete Cover ask is ongoing and the team will continue to iterate and improve the design.

As of February 2023, the A/B testing effort has expanded to include the entire Blackbaud Checkout workflow with a goal of increasing the overall conversion rate by 4%.

Reflection

This project was challenging due to its constraints, and at first it was daunting because our tests were not producing the desired results. However, we knew we were learning even when a test did not improve the take rate. With more time and practice, we got better at identifying where to focus our testing efforts, and we ultimately exceeded our goal. The relative immaturity of A/B testing at Blackbaud made the work especially interesting, requiring constant teamwork, collaboration, and communication between UX, PM, and Engineering. Overall, I found this project to be an enjoyable and rewarding experience.

Want to work together?

If you like what you see and want to work together, get in touch!