You’ll have heard by now that Google has released its Search Quality Rating Guidelines to the public. You’ll be pleased to know that we’ve read it cover to cover in all its 160-page glory.

If you fancy pouring a cuppa and reading it yourself, you can download it here…but I’m sure you’d much prefer to jump straight to the highlights.

Manual Raters, huh?!

Before we start on those highlights, let’s quickly address this whole manual raters thing.

When most people think of Google’s search engine, they imagine a really clever computer algorithm that’s connected to all of the world’s information. Basically, the magic behind the search results is a huge farm of servers crunching data.

But we’re not at the mercy of robot overlords just yet; even a computing powerhouse like Google needs real humans to check its work. It therefore hires manual raters to perform searches and assess the quality of the results. Although these ratings are not a ranking factor, they are a good indication of the quality signals that are built into Google’s algorithms.

The release of these guidelines is interesting because we now have an ‘official’ understanding of the processes and criteria these raters work with. Until now, we’ve made do with back-channel gossip, unofficial leaks and conjecture.

E.A.T. – Expertise, Authoritativeness, Trustworthiness

This is the first takeaway. We’ve known for a long time that providing useful content is key, but what we learn in the guidelines is that the manual raters are instructed to go digging into websites’ reputations to see how authoritative they are.

What we can do about it

Placing content on third-party websites for links is SEO 101, but here we get a much clearer indication of the other gauges of quality that Google is looking at. We’ve known for a long time that the quality and relevance of links matter far more than quantity, but this is an important reminder that it’s just as important to be seen on, or mentioned by, respected publications.

Instead of frantically pushing for links, we need to focus on creating purely useful content for the readers of the publication we want to be mentioned by. This immediately changes the conversation with an editor. We don’t have to start with “can we do this for a link?” – we can lead with “can we do this for your readers?” instead. And that’s a much nicer conversation to have.

A Focus On Mobile

The guidelines also explicitly focus on mobile users’ experiences. Websites that fail to deliver a mobile-friendly experience will be rated as ‘Fails to Meet’. This is the same rating spam sites receive!

The mobile-specific issues that raters are instructed to look out for include:

  • Cumbersome data entry
  • Sites that don’t function well on small screen sizes
  • Poor or difficult user experience (navigation, menus, horizontal scrolling, images that don’t resize)
  • Slow or inconsistent page load times

What we can do about it

This is simple. Get your website’s mobile experience in good health, or else. There are no excuses for not having a mobile-friendly site. It’s 2015 and everyone’s been talking about “mobile first” for about five years now!

User Intent

Google is keenly aware of the user’s intent when searching. The guidelines go into detail about the intent behind the queries users make. “Know”, “Do” and “Device Action” are just three of the query types they mention.

  • Know: knowledge-based queries. “Who did…”, “When was…” are how these queries can start. They imply the user wants to know something.
  • Do: “Buy…”, “Where can I…” are how these queries can start. They imply the user wants to do something.
  • Device Action: “Remind me to…”, “Call…” are how these queries can start. They imply the user wants to initiate a function on their device, such as starting a voice call.

Google also knows that the right results for these queries depend on the context of the search. For example:

“Pizza” is a query that requires totally different results based on when and where the user is. Relevant results for a user searching on a mobile phone whilst on a town’s high street at 9pm are likely to be very different to the results a user would see if they were searching on a desktop in their office at 10am.

What we can do about it

As SEOs, we’re mostly concerned with the first two types of query, the “Know” and the “Do”. For each query type, we need to first understand our users’ searches and the intent behind them, and then create content that addresses their needs.

Needs Met

Following on from the relevance of the result, the raters’ guidelines now introduce a system that assesses how well the user’s needs were met. In other words, did the result answer their question, or let them do whatever they needed to do?

What we can do about it

Obviously this impacts content, which should be putting the user’s needs first. This may affect a lot of lead-generation campaigns, where the content that provides the value is often gated behind a form. The solution is a shift in mindset, moving from “how many leads did I generate?” to “how many people did I help?”. This is easier said than done for most B2B businesses that need leads to survive, and in reality the answer will be a blended approach that balances providing answers and capturing lead information.

The More Things Change The More They Stay The Same

In summary, the raters’ guidelines show that the manual review process is designed to improve the experience people have when using Google’s search product: high-quality results that are aware of device and intent, and that meet users’ needs.

And we believe that, when you look at the motivation behind any of Google’s organic search developments, you’ll find the user’s experience at the core of it.

Do you agree? Tell us what you think on @rocketmill