If someone has tried to define search best practice for you before, they’ve probably given a different answer depending on the day of the week you asked. There are two reasons for that.
Firstly, the concept of best practice is constantly changing. We’ve moved from a world where we would utilise single keyword ad groups (SKAGs) with manual Cost Per Click (CPC) bids and a multitude of bid adjustments, giving us dictator-like control over how we bid against our competitors and letting us game the system as much as we possibly could, to a brave new world of consolidated campaigns, smart bidding and Performance Max. Bye bye control, right?
Secondly, there’s no one-size-fits-all solution. Account structures will (and should) vary based on any given advertiser’s industry, challenges, budget, recent performance, and more. Ultimately, it is up to the marketer to decide what is going to work best for them, but there are new truths which should be recognised in the age of automation and AI.
Out with the old
Just a few short years ago, we used to love control. And we needed to.
The system wasn’t set up to allow the platforms to operate autonomously. We, as advertisers, would enter each auction on our own terms, with an exact bid. The typical process involved keyword research: finding the most relevant terms and checking them in Keyword Planner to see which had any volume worth talking about. We would also take into account any specific terms the client wanted to appear for. This part of the process remains consistent with current practice.
We’d then proceed to create every single version of how a potential customer might search for that term, negating each keyword in the opposing match type ad groups. That meant creating tens, hundreds, or maybe even thousands of variants and putting them into individual ad groups split by match type (R.I.P. broad match modifier), each with its own individual ad. The perfect SKAG.
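To make the scale of that build concrete, here is a minimal sketch of the old SKAG logic (the keyword list and ad group naming are illustrative, not from any real account):

```python
from itertools import product

def build_skags(keywords):
    """Sketch of the old SKAG build: one ad group per keyword per match type,
    with the same keyword negated in the looser match type ad groups."""
    match_types = ["exact", "phrase", "broad"]  # broad match modifier, R.I.P.
    ad_groups = []
    for kw, mt in product(keywords, match_types):
        # Negate the tighter match types so each ad group only catches
        # the traffic its own match type is meant to capture.
        tighter = match_types[:match_types.index(mt)]
        ad_groups.append({
            "name": f"{kw} | {mt}",
            "keyword": (kw, mt),
            "negatives": [(kw, t) for t in tighter],
        })
    return ad_groups

# Even a modest keyword list multiplies into ad groups to maintain by hand.
groups = build_skags(["running shoes", "buy running shoes", "cheap running shoes"])
print(len(groups))  # 3 keywords x 3 match types = 9 ad groups, each with its own ad
```

Multiply that by hundreds of keyword variants and the maintenance burden of the approach becomes obvious.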
The logic of that approach made sense. The advertiser would be matching the query to the keyword as closely as possible, which meant the best possible experience for the user, whilst giving the advertiser as high a Quality Score as possible and helping their overall bid (because click-through rate would be higher).
This approach, with manual bidding, worked. Advertisers could even try to maximise clicks if they wanted to (or use Conversion Optimiser if they felt brave enough). Campaigns would do what they were told and, overlaid with optimisation techniques, we could get relatively predictable performance.
The problem here was scale. How were we meant to grow good performance when we’d been so prescriptive about the user journey? We’d predetermined exactly how the potential customer was going to search, without thinking about the other routes people who weren’t ready to purchase immediately might take.
However, adapting to growth works both ways. The desire can be there, but you need to be operating on an ad platform that facilitates not just growth, but good growth: growth to an audience that is relevant, not just numbers for the sake of it.
In with the new
The main game changer for search came with the evolution of smart bidding. It marked a departure from manual control, letting Google’s algorithm show your ad to a user based on who they are as a potential customer, not just who they are as a searcher.
It was initially treated with caution, and rightly so: some of the early results we saw were… mixed. We quickly came to understand that what makes these bid strategies thrive is data, or more specifically, the volume of data. In the (very) early days of Conversion Optimiser, there was a clear stipulation of at least 15 conversions in 30 days just to be able to opt into the bid strategy. There are still recommendations around minimum data thresholds, but we now know the strategies work much better the more you can fuel them.
Fuelling those bid strategies, however, relies on a consolidated campaign structure, not a segmented one: as much data flowing through as few decision points as possible. This meant moving away from SKAGs and towards consolidation. It also means embracing match types which previously saw little use. Our hand has been somewhat forced here by Google’s loosening of match types, but it has opened our eyes to matching the right person at the right term with the right ad (which is also now automated).
We’ve seen the results of taking this approach. For a telecommunications brand relying on a segmented campaign structure, smart bidding wasn’t working efficiently; moving to a consolidated approach grew orders by 65% and reduced Cost per Acquisition (CPA) by 41% in the process.
This didn’t mean relinquishing control to the black box of the Google algorithm; parameters around performance were still set, and the AI simply worked within them.
How AI is working with your paid search
AI within paid search probably isn’t what you think it is. It’s not letting ChatGPT run your campaigns or having one single Performance Max campaign do everything for you. Google’s algorithm is AI, and that encapsulates everything from ad formats, to bid strategies, to campaign types.
The simplest example of this is responsive search ads becoming the default ad type when setting up a search campaign. Before, you would write a headline and a description and away it went. Now, letting Google mix and match between 15 headlines and four description lines fuels better performance. We still have the choice to pin certain assets in place, but each pin restricts how much learning you are letting the algorithm do.
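To see why each pin restricts learning, it helps to count the combinations the algorithm can test. The sketch below uses the 15-headline and four-description limits mentioned above, plus the fact that a served responsive search ad shows up to three headlines and two descriptions; the function itself is a simplified illustration (it ignores description pinning and per-position nuances):

```python
from math import perm

def rsa_combinations(headlines=15, descriptions=4, pinned_headlines=0):
    """Rough count of distinct ad permutations the algorithm can explore.
    Each headline pinned to a fixed slot removes that slot, and that asset,
    from the shuffle. Simplified: description pinning is ignored."""
    free_slots = 3 - pinned_headlines          # headline slots left to fill
    free_headlines = headlines - pinned_headlines
    return perm(free_headlines, free_slots) * perm(descriptions, 2)

print(rsa_combinations())                     # 15*14*13 * 4*3 = 32760
print(rsa_combinations(pinned_headlines=1))   # 14*13 * 12 = 2184
print(rsa_combinations(pinned_headlines=3))   # fully pinned: only 12 left
```

Under these assumptions, a single pinned headline cuts the search space by more than 90%, which is exactly the learning the pin is trading away.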
With these ads, we then want to make sure they are getting in front of the right person. To let Google do this best, it needs to be equipped with a bid strategy where you have defined what success looks like, and with the right keywords. Those keywords no longer need to be segmented in the way they once were: by utilising broad match, as long as you have parameters around your bid strategies, you’ll be able to go after the right users who are searching in ways humans cannot predict. Google estimates there are 500 million brand new queries every day; no keyword plan can know them all, but it can supply the core terms for the machine to learn from.
For advertisers, there are still plenty of exclusions that can be utilised for peace of mind if needed. Across both Google and Bing there are partner networks on which ads can be shown, and it is commonplace to exclude these where there is demonstrably poor performance. Exclusions now need to be treated with more caution, though: rather than being part of a pre-campaign launch ticklist, they should only be applied when necessary, so as not to throttle learning.
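The "demonstrably poor, not pre-launch ticklist" rule can be sketched as a simple decision check. The spend threshold and CPA multiple below are illustrative assumptions, not Google guidance, and `should_exclude` is a hypothetical helper:

```python
def should_exclude(cost, conversions, target_cpa, min_cost_threshold=500.0):
    """Sketch of a 'demonstrably poor performance' test for a partner placement.
    Thresholds are illustrative: require enough spend to judge at all,
    then compare the placement's CPA against the account target."""
    if cost < min_cost_threshold:
        return False  # not enough data yet: excluding now would throttle learning
    if conversions == 0:
        return True   # meaningful spend with nothing back
    return cost / conversions > 2 * target_cpa  # CPA far above target

print(should_exclude(cost=120.0, conversions=0, target_cpa=40.0))  # False: too early to judge
print(should_exclude(cost=800.0, conversions=4, target_cpa=40.0))  # True: CPA of 200 vs 40 target
```

The point of the sketch is the ordering: the "not enough data" branch comes first, so exclusion is the last resort rather than the default.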
Testing the best approach for you
Exclusions are just one of the things an advertiser should look to test. None of what has been described above is a one-size-fits-all solution. If campaigns are operating on smaller budgets, it will be harder to get smart bidding to work as efficiently and effectively as it does for competitors spending more.
The same applies to a consolidated campaign structure. Not every advertiser will be able to consolidate, for business or compliance reasons. That doesn’t mean the alternative is to revert to a granular model, but rather to find a solution that balances modern best practice with performance.
Test while you can, and learn while you can, because we are moving into a world where an advertiser will be less of a technician and more of an analyst as the platforms become even smarter.