Part 2: Application to Programmatic Yield Management, Limitations, & Conclusion
In part one of this series, we gave an overview of the A/B test and the benefits of A/B testing. Today we will cover some example uses of the framework, how to use it for yield management, and the limitations of the A/B test.
Here are some examples of A/B test uses:
- Basic Metrics: Unique page views, Page views, Users, Mobile, Medium/Source, Location.
- Engagement Metrics: Average time on page, Pages/Session, New vs returning, Referral traffic.
- Conversion Metrics: Funnel and reg path, Lead generation, Sales.
- Spotting problems on your site: Using A/B testing you can detect problems with visitor flow, layout, text, visual elements, page load times, etc.
- Increasing the marketing potential of your site: With A/B testing you can track target metrics, gather feedback, identify your highest-yielding pages, etc.
As you can see, it’s no coincidence that Google and Facebook offer A/B testing natively in their tool sets.
How to use the A/B test to measure yield changes
By applying the A/B methodology in adtech, we can ensure that groups A and B are similar in every way, so that factors like those below affect both groups equally and cannot bias the results of a test:
- Audience and seasonality (e.g. audience composition, beginning vs. end of quarter, weekends vs. weekdays, etc.).
- Change in inventory structure (e.g. creation of new formats or ad units).
- Change in demand (e.g. new active campaigns or buyers).
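One common way to get an unbiased split like the one described above is deterministic hash-based bucketing. The sketch below is a minimal illustration, not Adomik's actual implementation; the function name and split ratio are hypothetical. Hashing a stable identifier (a user or request ID) spreads audience mix, seasonality, and demand changes evenly across both groups:

```python
import hashlib

def assign_group(request_id: str, split: float = 0.5) -> str:
    """Deterministically assign an ad request (or user) to group A or B.

    The same identifier always lands in the same group, and the hash
    spreads traffic evenly, so neither group is systematically biased
    by audience, seasonality, or demand changes.
    """
    digest = hashlib.md5(request_id.encode("utf-8")).hexdigest()
    # Map the first 8 hex digits to a number in [0, 1] and compare
    # against the desired split ratio.
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if bucket < split else "B"
```

Because assignment is a pure function of the identifier, the split is reproducible across systems and reporting periods without any shared state.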
Now, how could this framework apply to changes in a publisher’s ad stack? Here are a few examples:
- When you change Open Auction floor prices. Here you can compare what the unchanged setup (your benchmark) earns versus the setup with the new floor price (hopefully you made more with the change!).
- When you add new partners to your header bidding setup. Here you can set up the split to measure the stack without the new partner vs. with the new partner, to see whether they deliver holistic uplift to your revenue. Hint: if they aren’t, get rid of them and test others until you find someone that lifts revenue. It’s also worth noting that this framework gives the publisher the power back when negotiating with monetization partners like SSPs: you always have the data to understand which partners help and which don’t, and you can keep them working harder with your split test data.
- The impact of activating Google First Look. Again, it’s a simple measurement to watch the overall impact on revenue from activating First Look. (As with all questions about the setup, reach out to Adomik for help and best practices for each of these test scenarios.)
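For any of the scenarios above, the measurement itself boils down to comparing revenue per thousand requests across the two groups. The sketch below is a simplified illustration (the function name and the `(group, revenue)` event format are assumptions, not a real log schema):

```python
from collections import defaultdict

def measure_uplift(events):
    """Compute revenue per thousand requests (RPM) for each group and
    the relative uplift of the variant (B) over the benchmark (A).

    `events` is an iterable of (group, revenue) tuples, one per ad
    request -- a simplified stand-in for real auction logs.
    """
    revenue = defaultdict(float)
    requests = defaultdict(int)
    for group, rev in events:
        revenue[group] += rev
        requests[group] += 1
    rpm = {g: 1000.0 * revenue[g] / requests[g] for g in revenue}
    uplift = (rpm["B"] - rpm["A"]) / rpm["A"]
    return rpm, uplift
```

A positive uplift means the change (new floor price, new partner, First Look, etc.) earned more than the benchmark; a negative one means it should probably be rolled back.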
The limitations of the A/B test
While the A/B test framework is an incredibly powerful and flexible tool, it does have its limitations, and they should be understood before installing the framework across the stack (which we highly recommend!).
- For content and creative testing, tools already exist (Optimizely, Google Analytics, etc.) and are often shipped with most content platforms. However, in programmatic tools such as SSPs or other monetization partners, the A/B test framework is not available natively. This is why Adomik advocates that publishers build their own, so they can drive programmatic yield decisions with their own data.
- The A/B test needs to be performed on the right sample size to make sure you account for any reaction to change in the system. Without a sufficient volume of data, the test may miss that reaction entirely. Moreover, scaling from a small sample to full traffic might itself induce changes, which should be factored into your assessment.
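The sample-size point above can be made concrete with the standard two-sample normal approximation. The sketch below is a rough back-of-the-envelope helper, not a substitute for proper experiment design; the function name and defaults are assumptions:

```python
import math
from statistics import NormalDist

def required_sample_size(baseline_mean: float, std_dev: float,
                         relative_lift: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough per-group sample size (e.g. ad requests) needed to detect
    a given relative lift in average revenue per request, using the
    standard two-sample normal approximation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    delta = baseline_mean * relative_lift          # absolute effect size
    n = 2 * ((z_alpha + z_beta) * std_dev / delta) ** 2
    return math.ceil(n)
```

The key intuition: the smaller the lift you want to detect (or the noisier your revenue per request), the more data each group needs before a measured difference can be trusted.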
- You need to be sure you have the right technologies in place to conduct a reliable split test. It’s critical that once you install your A/B test framework, it properly communicates with your reporting tools and with those responsible for conducting the test. Without accurate data flowing from the test to your measurement tools, you will have data reliability issues.
In this ever-changing world of digital publishing, it’s table-stakes to have a measurement framework in place - without one, you are driving blind. We strongly believe that publishers should never make a big change without measuring impact - otherwise, what’s the point of the change?
We believe, based on our experience and the A/B test’s broad acceptance among modern publishers, that the A/B test is a great tool when used and measured properly with the right technologies. Further, the more publishers use the methodology, the closer we get to establishing it as a standard for dialog with partners and for illustrating the effectiveness of changes to the stack.
At Adomik, we practice what we preach... we have created an A/B test framework in use with all of our clients to measure the impact of all changes in the stack and to illustrate the impact of our automated price optimization.
If you want to know more about how to build your own A/B test measurement framework, please get in touch; we are happy to help.