Optimizing the Blackbaud Checkout experience
B2B2C Web Design, FinTech
Complete Cover is a processing-fee offset feature provided by Blackbaud. If a customer creates a donation form, for example, and enables Complete Cover on that form, Blackbaud pays the processing fees for every transaction made through that form; in other words, the customer receives free processing for that form.
In turn, Blackbaud asks the donor to contribute an additional dollar amount to help sustain the Complete Cover feature.
My team's A/B testing focused on this ask.
When Complete Cover was first introduced to Checkout, the only payment method offered was Pay by Card, so the Complete Cover ask was injected onto the "Payment method" screen.
We conducted a few tests using this workflow, but once additional payment methods could be offered with Complete Cover, I determined that it would be best to separate the Complete Cover ask from the "Payment method" selection screen.
This separation would:
Note: Separating the ask from the payment method screen was technically difficult and took time to implement.
A/B testing is a research method where different versions of the same screen are compared against one another to determine which performs better.
We tested variations against our initial control until one emerged as a winner. That winning variation then became the new control for subsequent tests, and so on.
An experiment was considered successful if the metrics we measured showed a statistically significant increase without a statistically significant decrease in overall conversions or revenue delivered to customers.
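To make that criterion concrete, below is a minimal sketch of the kind of two-proportion z-test that sits behind a significance check like this. It is illustrative only: the function, the 5% threshold, and the conversion counts are hypothetical stand-ins, not our actual analysis code or data.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided, normal CDF via erf
    return z, p_value

# Hypothetical numbers: control vs. Variation B take rate on the Complete Cover ask
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
if p < 0.05:
    print(f"Statistically significant (z={z:.2f}, p={p:.3f})")
else:
    print(f"Not significant yet -- keep the test running (z={z:.2f}, p={p:.3f})")
```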
Metrics measured with each test:
We used a tool called Optimizely to execute A/B tests. Its implementation was technical and was handled by the engineer on our team.
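As a rough illustration of what a tool like Optimizely handles for you, the sketch below shows generic deterministic bucketing: hashing a user id so the same donor always sees the same variation. This is a simplified stand-in for the general idea, not Optimizely's actual API or implementation.

```python
import hashlib

def assign_variation(user_id: str, experiment_key: str, control_share: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'variation_b'.

    Hashing the user id together with the experiment key means the same donor
    sees the same version of Checkout every time for the life of the test.
    """
    digest = hashlib.md5(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000  # roughly uniform value in [0, 1)
    return "control" if bucket < control_share else "variation_b"

# Hypothetical usage: decide which Complete Cover ask a donor sees
print(assign_variation("donor-123", "complete_cover_ask"))
```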
While Optimizely was being implemented, we created a backlog of initial tests.
As a group, we looked at the problem space and the primary metric we thought each test would affect, brainstormed A/B test ideas, wrote hypotheses for each idea, and weighed the level of effort it would take our engineer to build each test against that test's expected impact. After ranking those hypotheses, we compiled our initial backlog of tests.
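As a toy sketch of that prioritization step, the snippet below ranks backlog entries by estimated impact and effort; the test names and scores are hypothetical stand-ins, not our real backlog.

```python
# Hypothetical backlog entries: (test idea, estimated impact 1-5, estimated effort 1-5)
backlog = [
    ("Reword the Complete Cover ask", 3, 1),
    ("Separate the ask onto its own screen", 5, 5),
    ("Change the default percentage option", 4, 2),
]

# Rank from highest estimated impact and effort to lowest
ranked = sorted(backlog, key=lambda entry: (entry[1], entry[2]), reverse=True)
for name, impact, effort in ranked:
    print(f"{name}: impact {impact}, effort {effort}")
```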
This project had complicated constraints. The two most notable were that we had to keep the Complete Cover ask within the Checkout workflow (i.e., it could not be moved onto the donation form) and that the Checkout backend is quite rigid, meaning even small changes can be difficult to make.
Additional noteworthy constraints were that we had to use the word "Blackbaud" at least once and could not use the words "tip", "contribution", "additional amount", or "donation".
All of the tests in the initial backlog were based on assumptions about what we thought would improve the take rate. Unfortunately, some of the tests we expected to have the biggest impact were not technically possible to build at the time due to the rigid nature of the Checkout backend. Eventually, with more time and some creative UX design work, we were able to build these tests.
List of initial tests:
Ranked in order from highest impact and effort to lowest
None of the initial tests we ran produced a change in the metrics we measured.
As a group, we reviewed our testing process and noted some ways we could improve.
Because A/B testing was new to the company, we were leading the way in determining how to run tests, communicate with stakeholders, document our findings, and more.
We learned quickly that, to be confident a change had an impact on our metrics, we needed to make only one change per test and let tests run long enough to reach statistical significance.
Additionally, we learned that donors making different-sized donations behave quite differently from one another. For example, someone making a $10 donation behaves differently than someone making a $1,000 donation.
As time went on and more customers began to adopt Complete Cover, more people started noticing that we were making changes to the Checkout experience.
We learned that we needed to better document our tests, their results, and when they were turned on and off. We also needed to communicate with a wider audience, like Support, to keep everyone in the loop and up to date on which versions of Checkout customers might be seeing.
To remedy our communication problem, we took a slide deck I had presented to stakeholders about our A/B testing efforts and repurposed it as a communication tool. The deck included a slide for each test documenting the hypothesis, an image of the control, an image of the change made (Variation B), the dates the test ran, and the results. We also included slides defining our metrics, our goals, and A/B testing itself. The deck was saved in a shared location so anyone could access it at any time.
Over time, as we kept testing, our team got better at interpreting results and determining what to test next. By refining our approach we became more targeted and strategic, which resulted in tests that increased the take rate.
For example, we determined it would be beneficial to figure out which tip options should be presented for the user to select from. We ran multiple tests with different percentage options; then, using the best performer from those tests, we focused on determining which option to select by default.
From these tests we learned valuable information about donor behavior at different donation amounts.
Testing the Complete Cover ask is ongoing and the team will continue to iterate and improve the design.
As of February 2023, the A/B testing effort has expanded to include the entire Blackbaud Checkout workflow with a goal of increasing the overall conversion rate by 4%.
This project was challenging due to its constraints and the relative immaturity of A/B testing at Blackbaud, and it felt daunting at first because our tests were not producing the desired results. However, we were learning even when our tests did not improve the take rate, and with more time and practice we got better at identifying where to focus our testing efforts and were able to exceed our goal. The work required constant teamwork, collaboration, and communication between UX, PM, and Engineering. Overall, I found this project to be an enjoyable and rewarding experience.
If you like what you see and want to work together, get in touch!