When carrying out these tests, there are three key things to think about.
Stick to the scientific method. It works.
Paid Media gives us the flexibility to test a huge range of things, including ad creative, targeting options and audiences. With so many options, it can be tempting to dive right in and start testing several things at once. Do this, however, and you may forget to clearly define what you are going to test, how it will be tested, and how results will be measured – leading to sub-optimal outcomes.
Instead, use the following method:
- Observation & Question
- Hypothesis & Prediction
- Experiment & Analysis
You may not need to delve into all these areas – particularly for smaller tests – but by sticking to this process, you avoid the situation where you test several variants, find that one works better, and have no idea why.
Plus, when it comes time to present the findings, you'll have a list of confirmed or rejected hypotheses to report on, as well as ideas for future tests.
Understand your measurement framework
This is critical, as there are plenty of pitfalls when it comes to testing and measurement – as anyone who has run A/A tests (running identical ads against each other and comparing performance) will attest to!
Some of these can be avoided by clever segmentation and data cleaning, but there are other issues you can encounter when running a larger scale test that covers multiple channels and touch points.
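To make the A/A idea concrete, a sanity check like this can be sketched in a few lines of R. The click and conversion figures below are invented purely for illustration:

```r
# Hypothetical A/A test: two identical ads served to similar audiences.
# All figures are made up for illustration only.
clicks      <- c(ad_a = 4800, ad_b = 5200)
conversions <- c(ad_a = 210,  ad_b = 248)

# Two-sample test of proportions. Because the ads are identical, a
# "significant" difference here points to a measurement or segmentation
# problem, not a genuine performance difference.
result <- prop.test(conversions, clicks)
result$p.value
```

If A/A checks like this regularly return significant results, that's a sign the measurement setup needs attention before any real A/B test can be trusted.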
As an example, in the video I spoke about how we split an audience into those who had previously visited the site via Google Ads and those who had not. Here, we had to consider:
- Someone visited the site four weeks ago via organic search and now arrives via Google Ads. Should they be treated the same as someone whose first ever visit came through Google Ads? Do we count them as the same type of user, or filter them out?
- How do we approach users who visited and converted within a single visit?
- Is there a similar device split between these two groups, or are we seeing lots of new users through mobile devices? How do we approach this?
- What about last non-direct click attribution – how does that play into our segmentation?
Whilst this can seem like a rabbit hole, by asking these questions at the start, you’re more likely to design a test that gives an accurate result. You’re also less likely to reach the end of the test and find you’ve not accounted for a critical variable, rendering the results less accurate or, worse, unusable.
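One way to make decisions like these explicit is to encode them as flags on a session-level export before the analysis starts. A rough sketch in R – the data frame and column names here are hypothetical, not from any particular platform:

```r
# Hypothetical session-level export; user_id, channel and days_ago
# are assumed column names for illustration.
sessions <- data.frame(
  user_id  = c(1, 1, 2, 3),
  channel  = c("organic", "google_ads", "google_ads", "organic"),
  days_ago = c(28, 0, 0, 40)
)

# Flag each user: did they arrive via Google Ads today, and did they
# have an earlier organic visit? Users with both flags can then be
# filtered out or analysed as their own segment.
flags <- aggregate(
  cbind(
    via_google_ads = channel == "google_ads" & days_ago == 0,
    prior_organic  = channel == "organic" & days_ago > 0
  ) ~ user_id,
  data = sessions,
  FUN  = any
)
```

Writing the rules down as code also means they can be reviewed and reused, rather than living in someone's head.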
Ensure you have the right tools to hand
As I mentioned in the talk, having the right tools to hand is incredibly important for generating accurate results and insights.
As an agency, we want to spend time thinking and designing tests that drive value for clients – not spend hours wrangling multiple spreadsheets and CSV files into a usable state!
For example, R, combined with packages like googleAnalyticsR and adwordsR, is particularly useful for gathering the data from each platform. The anti-sampling feature within googleAnalyticsR, for instance, has proven an invaluable timesaver, opening up the option to work with large, detailed data sets that would otherwise have required manually downloading and stitching together reports.
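As a sketch, an unsampled pull with googleAnalyticsR looks something like the following. The view ID, date range and field choices are placeholders, and the call needs your own Google Analytics credentials, so it won't run as-is:

```r
library(googleAnalyticsR)

ga_auth()  # interactive OAuth – requires your own Google account

# anti_sample = TRUE splits the request into smaller batches so that
# Google Analytics returns unsampled data – data that would otherwise
# need manual report downloads stitched together.
ga_data <- google_analytics(
  viewId      = 123456789,  # placeholder view ID
  date_range  = c("2019-01-01", "2019-03-31"),
  metrics     = c("sessions", "transactions"),
  dimensions  = c("date", "channelGrouping"),
  anti_sample = TRUE
)
```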
Alongside gathering and cleaning data, R also enables us to carry out detailed analysis on test data. Packages like CausalImpact, created by Google, give us a simple framework to run causal inference analysis on our data sets and output the data in both tables and charts.
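A minimal CausalImpact run, following the package's documented usage pattern on simulated data – the covariate, uplift size and pre/post periods here are all invented:

```r
library(CausalImpact)
set.seed(1)

# Simulate a covariate (e.g. a control market) and a response series
# that receives a +10 uplift from day 71 onwards – the "intervention".
x <- 100 + arima.sim(model = list(ar = 0.999), n = 100)
y <- 1.2 * x + rnorm(100)
y[71:100] <- y[71:100] + 10

# Days 1-70 are pre-intervention, 71-100 post-intervention.
impact <- CausalImpact(cbind(y, x),
                       pre.period  = c(1, 70),
                       post.period = c(71, 100))

summary(impact)  # tabular estimate of the causal effect
plot(impact)     # observed vs counterfactual, pointwise and cumulative
```

In a real test, the covariates would be series the intervention couldn't have affected, such as activity in a holdout region.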
Detailing the uses of R within the world of paid media is a blog post or two on its own, but having the skillset to use R (or an alternative) will dramatically step up your testing game, allowing you to design and run bigger tests with bigger data sets, without the headaches that often come with them!