SEO in a Two Algorithm World: Pubcon Keynote by Rand Fishkin
Rand dedicates this presentation to Dana Lookadoo, who will always be with us.
This author’s TL;DR take: On top of traditional SEO ranking inputs (keyword targeting, content quality and uniqueness, crawl/bot friendliness, snippet optimization, UX/multi-device optimization), SEOs need to optimize for searcher outputs: CTR, long clicks, content gap fulfillment, amplification and loyalty, and task completion success.
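To make those searcher outputs concrete, here is a minimal sketch (not from Rand’s deck) of how two of them, CTR and long clicks, could be measured from a click log. The event fields and the 30-second long-click threshold are illustrative assumptions, not anything Rand or Google has specified.

```python
# Minimal sketch: measuring two "searcher output" metrics, CTR and long-click
# rate, from a hypothetical click log. Field names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SerpEvent:
    query: str
    clicked: bool          # did the searcher click our result?
    dwell_seconds: float   # time on our page before returning to the SERP

def searcher_output_metrics(events, long_click_threshold=30.0):
    """Return (CTR, long-click rate) for a list of SERP impressions."""
    impressions = len(events)
    clicks = [e for e in events if e.clicked]
    long_clicks = [e for e in clicks if e.dwell_seconds >= long_click_threshold]
    ctr = len(clicks) / impressions if impressions else 0.0
    long_click_rate = len(long_clicks) / len(clicks) if clicks else 0.0
    return ctr, long_click_rate

log = [
    SerpEvent("seo tools", clicked=True, dwell_seconds=95),
    SerpEvent("seo tools", clicked=True, dwell_seconds=4),
    SerpEvent("seo tools", clicked=False, dwell_seconds=0),
]
print(searcher_output_metrics(log))  # (0.666..., 0.5)
```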
Here’s where you can get the presentation: http://bit.ly/twoalgo
Onsite SEO in 2015: An Elegant Weapon for a More Civilized Marketer from Rand Fishkin
Remember when we only had one job? We had to make perfectly optimized pages, the search quality team would get them ranked, and links were a major signal. By 2007, link spam was ubiquitous. Every SEO is obsessed with tower defense games because we love to optimize. Even in 2012, it felt like Google was making liars out of the white hat SEO world (Wil Reynolds). Rand says that statement isn’t true anymore: authentic, great content is rewarded by Google better than ever before. Google has erased old-school practices by combatting link spam, and it has leveraged fear and uncertainty of penalization to keep sites in line. Disavows have become so fraught that many of us are killing links that provide value to our sites because we’re so afraid of penalties.
Google has also become good at figuring out intent. They look at language and not just keywords.
They predict diverse results.
They’ve figured out when we want freshness.
They segment navigational from informational queries. They connect entities to topics and keywords. Brands have become a form of entity. Bill Slawski has noted that Google mentions brands in a number of its patent filings.
Google is much more in line with its public statements. Its policies now mostly match the best way to do search marketing today.
During these advances, Google’s search quality team underwent a revolution. Early on, Google rejected machine learning in its organic ranking algorithm, saying that machine learning didn’t let them own, control and understand the factors in the algorithm. But more recently, Amit Singhal’s comments suggest some of that has changed. In 2012, Google published a paper on how they use machine learning to predict ad click-through rate. Google engineers call it their SmartASS system (apparently that’s ACTUALLY the name of the system!). By 2013, Matt Cutts at Pubcon was talking publicly about how Google could be using machine learning (ML) in organic search.
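That paper describes systems far beyond what fits in a recap post, but the core idea, treating click-through rate as a supervised prediction problem, can be sketched at toy scale. The features and data below are invented for illustration and are not Google’s actual SmartASS system.

```python
# Toy illustration of click-through-rate prediction as supervised learning:
# learn P(click) from historical (features, clicked) pairs. Not Google's system.
from sklearn.linear_model import LogisticRegression

# Invented features per ad impression: [ad position (1 = top), keyword match score 0-1]
X_train = [[1, 0.9], [1, 0.4], [2, 0.8], [3, 0.7], [4, 0.2], [5, 0.1]]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = the ad was clicked

model = LogisticRegression()
model.fit(X_train, y_train)

# Predicted click probability for a new ad in position 2 with match score 0.6
print(model.predict_proba([[2, 0.6]])[0][1])
```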
As ML takes over more of Google’s algorithm, the underpinnings of the rankings change. Google is public about how they use ML in image recognition and classification: they take factors they could use to classify images plus training data (things they tell the machine are a cat, a dog, a monkey, etc.), and a learning process gets them to a best-match algorithm. Then they can apply that pattern to live data everywhere.
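Here is a hedged sketch of that train-then-apply pattern. Real image classification at Google uses deep neural networks over raw pixels; the hand-made “features” and labels below are stand-ins that only illustrate the workflow Rand describes.

```python
# Sketch of the pattern: labeled training data -> learned classifier -> apply
# to new, unlabeled data. The feature vectors here are invented stand-ins.
from sklearn.tree import DecisionTreeClassifier

# Invented features: [ear pointiness, snout length, body size]
training_features = [[0.9, 0.2, 0.3], [0.8, 0.3, 0.2],   # cats
                     [0.3, 0.8, 0.6], [0.2, 0.9, 0.7],   # dogs
                     [0.5, 0.4, 0.4]]                     # monkey
training_labels = ["cat", "cat", "dog", "dog", "monkey"]

classifier = DecisionTreeClassifier()
classifier.fit(training_features, training_labels)        # the learning process

# Apply the learned pattern to live data the model has never seen
print(classifier.predict([[0.85, 0.25, 0.25]]))           # -> ['cat']
```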
Jeff Dean’s slides on Deep Learning are a must-read for SEOs. Rand says they’re essential reading and not too challenging to consume. Jeff Dean is a Google Fellow and someone Googlers like to joke about a lot: “The speed of light in a vacuum used to be about 35 miles per hour. Until Jeff Dean spent a weekend optimizing the physics.”
Bounce, clicks, dwell time – all of these are signals in the machine learning process, and the algorithm tries to emulate the good SERP experiences. We’re talking about an algorithm that builds algorithms. Googlers don’t feed in ranking factors; the machine determines those itself. The training data is good search results.
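As a thought experiment only (nobody outside Google knows its real training setup), you could imagine engagement signals being fed into a model whose labels come from known-good search results, with the learned model then scoring new pages:

```python
# Hypothetical sketch: a model learns which engagement patterns correspond to
# results labeled "good," then scores new candidate pages. Features, labels,
# and scale are invented; Google's real training data and factors are unknown.
from sklearn.ensemble import GradientBoostingClassifier

# Invented engagement features per result: [CTR, avg dwell seconds, bounce rate]
X_train = [[0.30, 120, 0.20], [0.25, 95, 0.35], [0.05, 10, 0.90],
           [0.02, 5, 0.95], [0.18, 60, 0.50], [0.01, 8, 0.92]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = appeared in a "good" SERP in the training set

model = GradientBoostingClassifier(n_estimators=20)
model.fit(X_train, y_train)

# Rank candidate results for a query by the model's predicted quality
candidates = {"page-a": [0.22, 80, 0.40], "page-b": [0.03, 12, 0.88]}
scores = {url: model.predict_proba([feats])[0][1] for url, feats in candidates.items()}
print(sorted(scores, key=scores.get, reverse=True))  # page-a first
```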
What does deep learning mean for SEO?
Googlers won’t know why something ranks or whether a variable is even in the algorithm. Between you and Rand, doesn’t that sound like a lot of the things Googlers say now?