It’s been over two years since Performance Max (PMax) launched, and during that time advertisers have been trying to find ways of controlling the campaign type. Google didn’t give us much at first, but we now have a few levers to pull such as campaign-level negatives and asset group reporting.
With Smart Shopping making a swift exit shortly after PMax came onto the scene, conversations surrounding how we can manipulate PMax to replicate what Smart Shopping left behind are becoming more frequent.
But should we be manipulating PMax campaigns at all? Considering PMax was designed to be relatively ‘hands-off’, is there a risk that over-optimisation could come to the detriment of performance – and where do we draw that line?
What do we already know?
In theory, the key to PMax is that we give a little, and in return the campaign delivers a lot. With access to all of Google’s inventory, we just need to provide creative assets, audience signals and optional data feeds in order to ‘steer’ the algorithm.
But what more can we be doing? Is there value in splitting campaigns out to have greater control over where we show and, crucially for retailers, what products we show?
Testing the granular approach
We devised a series of tests on a B2B eCommerce brand to see if certain PMax ‘workarounds’ would improve overall performance by granting us greater control. For example, could we create a ‘Smart Shopping 2.0’ by removing image and text assets from an existing PMax campaign, allowing us to run across the shopping network exclusively? And could we use granular product sets with 5-10 Stock Keeping Units (SKUs) in each to increase visibility for certain items?
What did we find out?
The recurring conclusion from each test was that a granular approach lends itself well to short-term efficiencies, but it’s not a scalable solution.
For example, favouring the shopping network by manipulating assets did result in improved conversion rates and a cheaper Cost Per Acquisition (CPA), but we also saw significantly less traffic due to no longer appearing on the Google Display Network (GDN). Average Order Value (AOV) also took a huge hit (-20% vs. the previous period), which had a knock-on effect on revenue, suggesting that PMax requires access to the full inventory in order to attract higher value prospects.
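To see why an AOV drop can wipe out a CPA gain, it helps to write out the revenue equation. The figures below are purely hypothetical (the article doesn’t publish the underlying numbers); only the -20% AOV shift mirrors what we observed:

```python
# Illustrative sketch only: all figures are hypothetical, not our actual test data.
def revenue(sessions: int, conversion_rate: float, aov: float) -> float:
    """Revenue = sessions x conversion rate x average order value."""
    return sessions * conversion_rate * aov

# Hypothetical baseline: full-inventory PMax campaign.
baseline = revenue(sessions=10_000, conversion_rate=0.02, aov=100.0)

# Shopping-only variant: conversion rate improves, but traffic falls
# (no GDN reach) and AOV drops 20%, as in our test.
shopping_only = revenue(sessions=6_000, conversion_rate=0.025, aov=80.0)

print(baseline)       # 20000.0
print(shopping_only)  # 12000.0
```

Even with a 25% better conversion rate, the lost traffic and lower basket size leave the shopping-only variant well behind on revenue.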
Similarly, creating product groups with small sets of products naturally meant those SKUs experienced a greater number of impressions compared to when they were sharing a product set with hundreds of other products. Great news in the short term, but a poor conversion rate (83% lower than our PMax average) resulted in an inflated CPA and inefficient Return on Ad Spend (ROAS), so we couldn’t sustain this approach in the long term.
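The CPA inflation follows directly from the arithmetic: CPA is cost per click divided by conversion rate, so an 83% lower conversion rate multiplies CPA by roughly 5.9×. The cost-per-click figure below is a hypothetical stand-in; only the 83% drop comes from our test:

```python
# Illustrative sketch only: the CPC value is hypothetical, not our test data.
def cpa(cost_per_click: float, conversion_rate: float) -> float:
    """CPA = cost per click / conversion rate."""
    return cost_per_click / conversion_rate

base_cvr = 0.03                       # hypothetical account-average conversion rate
granular_cvr = base_cvr * (1 - 0.83)  # 83% lower, as observed in our test

print(round(cpa(0.50, base_cvr), 2))      # 16.67
print(round(cpa(0.50, granular_cvr), 2))  # 98.04
```

Whatever the starting CPC, the multiplier is the same: CPA scales with 1 / conversion rate, so small-SKU groups that win impressions but convert poorly pay for it directly in acquisition cost.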
Looping this back to how Google initially pitched PMax, the premise was to power the campaign type with a goal and then let AI do the heavy lifting with regard to bidding, audiences, creatives etc. Considering that manipulating some of these factors has compromised the scalability of our campaigns, perhaps Google was right from the start?
Whilst it’s tempting to seek out loopholes which allow us to manipulate PMax campaigns, ultimately this is not how the campaigns were designed to work. Each of our tests showed us that PMax works best when it has a lot of data points to fuel it; anything which involves splitting out those data points can hinder scalability in the long run.
But this isn’t to say that we hand the reins over to Google entirely. There are still a lot of levers we can pull in order to guide the algorithm in a way which achieves scale as well as efficiency:
- Audience signals
- Feed quality
- Product consolidation
- Bidding excellence
- Creative and copy testing
With the ‘black box’ nature of PMax campaigns, there are even fewer ways in which we can differentiate from the competition. We can improve our inputs as mentioned above, or better still, we can layer those inputs with more advanced measurement to make the most of these campaigns.
Tracking margin and profit data will facilitate more advanced segmentation of campaigns, optimising based on Profit on Ad Spend (POAS) rather than ROAS to maximise efficiency. A controlled hold-out test will allow us to measure the incremental impact PMax activity is having on a wider scale, to support strategic budget allocation.
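The difference between the two metrics is simple but consequential: ROAS divides revenue by spend, while POAS divides gross profit by spend. A minimal sketch with hypothetical numbers shows how a healthy-looking ROAS can hide an unprofitable product set:

```python
# Illustrative sketch only: figures are hypothetical.
def roas(revenue: float, ad_spend: float) -> float:
    """Return on Ad Spend: revenue generated per unit of spend."""
    return revenue / ad_spend

def poas(revenue: float, margin: float, ad_spend: float) -> float:
    """Profit on Ad Spend: gross profit generated per unit of spend."""
    return (revenue * margin) / ad_spend

# Two hypothetical product sets with identical ROAS but different margins.
print(roas(4000, 1000))        # 4.0
print(poas(4000, 0.20, 1000))  # 0.8 -> spend exceeds the profit it earns
print(poas(4000, 0.45, 1000))  # 1.8 -> genuinely profitable
```

Both sets clear a 4:1 ROAS target, but only the higher-margin set is actually making money after cost of goods – which is exactly the segmentation that margin tracking makes possible.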
Considering how much control we used to have a few years back (anyone else remember having to tweak bids for every individual keyword?), automation has come an incredibly long way in a short space of time. With the advent of AI in marketing, it’s unlikely the big platforms will hand the reins back to us any time soon. So we need to develop efficient ways of working alongside AI, rather than working against it.
If you need support with getting the most out of your Performance Max campaigns, get in touch with our team.