Deconstructing Google

Posted by gfiorelli1

Adso, if I knew the answers to everything, I would be teaching theology in Paris.
(William of Baskerville – The Name of the Rose)

I am not a mathematician, therefore I cannot give you formulas to play with; I am not what could strictly be defined as a technical SEO, therefore I cannot give you insights into technical methodologies for fighting spam. I have an old marketing-school background mixed with humanistic studies. So my approach to the quality-of-SERPs issue, such a hot topic these last few weeks, will be more philosophical and theoretical than high-tech and statistical.

The Socratic method will guide me here, through a series of questions that I ask and answer myself. Are they the Answers? They are not, but I think they paint a probable future.

Is Web Popularity the same as Web Quality?

Mostly not.

Let's be clear: even if spam did not exist, people would still not usually link to the most valuable things on the Internet. People link to cats playing the piano, talking dogs, some kitsch website and, oh yes, sometimes to viral content crafted by some agency. Or to Brands.

But when it comes to niches, web popularity becomes a more blurred concept, where popularity gets mixed with authority. For instance, it is more probable that we as SEOs (yes, SEO is a niche) will link to SEOmoz, citing, blaming, or commending its posts, than to some unknown SEO newbie.

Sometimes the miracle happens, and an authority discovers a great piece of content hidden on the web, and so it becomes popular. Remember: authority.

Fortunately, there is SEO, and popularity can be obtained with creative link building; but "black hat" techniques make the "popularity" factor a very risky one to base the SERPs on.

And the risks of popularity are even greater now that tweets and shares are officially counted as ranking factors.

Shouldn't SERPs present popular content?

Yes, but…

If people did not find in the SERPs what everybody is talking about, well, search engines would last about as long as a breath. But should popularity be based just on links, or should it be based mostly on trusted links?

Ask yourself: would you choose a restaurant suggested by bazillions of people and a single link on Yelp, or one promoted on hundreds of affiliate sites?

Trust, authority…again. And that is something we as SEOs always preach in our Belief Prayer for a Well Optimized Website. And Google preaches the same. But if that is so, why is it still possible to see so many websites that are artificially popular because they own millions of links from thousands of unrelated, non-authoritative sites?

Maybe the reason is that something is failing in Google's trust and authority checks, and Google knows it.

So…is it possible to balance popularity and quality?

Yes.

Personally, I am not one of those who expect the search engines to show only astonishing websites. OK, maybe my tastes are a little bit freaky, but I don't want search engines to become some sort of Wikipedia.

But, at the same time, I do not want SERPs polluted by clearly spammy or insignificant sites. What I want is to see and explore genuine websites, and I believe that Google could use tools and concepts that already exist, making them better.

If link popularity (like other factors) has proved too difficult to control, now that billions of websites and searches exist and its formula is quite easy to game, then another factor (or factors) should be given more weight in rankings.

If these factors exist, what are they?

Authority and Trust.

And we all know they are the factors we really care about, because we already rely on them in our lives. It is simple common sense. We buy that car because we trust that brand; we see that movie because we trust what a friend of ours says; we believe what a scientist says about climate because he is an authority in climatology. Therefore it is logical that search engines, too, should base rankings mostly on those two factors: Authority and Trust.
They are already counted in the Google algorithm, as Rand explained in 2009, and TrustRank is an old dude.

This graphic, from another 2009 post here on SEOmoz, explains what TrustRank is better than I possibly could.

The Concept of TrustRank
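To make the idea concrete, here is a minimal sketch of how trust can be propagated from a hand-picked seed set through the link graph, essentially a biased PageRank. The site names, damping value, and iteration count are hypothetical illustrations of the published TrustRank concept, not Google's actual implementation.

```python
# A minimal sketch of the TrustRank idea, not Google's actual algorithm.
# Trust starts at a small hand-picked seed set of reputable sites and is
# propagated (and diluted) along outgoing links, like a biased PageRank.

def trust_rank(links, seeds, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    seeds: set of pages judged trustworthy by human reviewers."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    # The seed distribution replaces PageRank's uniform teleport vector.
    seed_weight = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_weight)

    for _ in range(iterations):
        incoming = {p: 0.0 for p in pages}
        for page, targets in links.items():
            if targets:  # split the page's trust among the pages it links to
                share = trust[page] / len(targets)
                for target in targets:
                    incoming[target] += share
        # Damping keeps trust anchored to the seed set instead of drifting freely.
        trust = {p: (1 - damping) * seed_weight[p] + damping * incoming[p]
                 for p in pages}
    return trust

# Toy example with hypothetical sites: the spam page gets no links from the
# trusted neighborhood, so no trust reaches it.
links = {
    "dmoz.org": ["seomoz.org", "nytimes.com"],
    "nytimes.com": ["seomoz.org"],
    "seomoz.org": ["nytimes.com"],
    "spam-farm.example": ["spam-farm.example"],
}
print(trust_rank(links, seeds={"dmoz.org"}))
```

Notice that everything hinges on the seed set: the pages human reviewers bless at the start decide where all the trust in the graph ultimately comes from.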
Someone, applying Occam's razor, could now say: "Make TrustRank the most important factor and we will see the end of spam."
But that would not be so true at this moment.

Are trusted seeds really to be trusted?

Theory says yes, practice says no (ok, I am a little bit paranoid, but – hell! – I am Italian).

The J.C. Penney case is just one that came to light because the New York Times pointed its finger at it. Otherwise, we would probably still be seeing its site ranking quite well, like many other trusted brand sites. But J.C. Penney is not the only website that, consciously or not, makes use of illicit SEO tactics. And, on the other hand, it is a clear example of how much Google has to improve the trust factor in its algorithm.

What happened to BMW some years ago seems not to have taught Google that much.

And we know well how easy it can be to obtain links from .edu sites and even .gov ones.

No, trusted seeds can be gamed… if Google forgets to vet them first.

WTF can be done (exclaimed the SEO in despair)?

In reality, a lot.

And a lot of things seem to be moving toward a new, big algorithm change. Let's look at the signals Google has sent, especially over the last two months:

December 1, 2010. Danny Sullivan publishes the famous article What Social Signals Do Google & Bing Really Count?. In the post Google, apart from saying that it uses (re)tweets as a signal in its organic and news rankings, also affirms: "Yes, we do compute and use author quality. We don't know who anyone is in real life :-)". Isn't this like saying that users, too, are now counted as trusted seeds?

December 1, 2010. Another article by Danny Sullivan, which did not receive the attention it deserved, maybe because it was published on the same date as the previous one: Google: Now Likely Using Online Merchant Reviews As Ranking Signal. In that post Danny cites this declaration from the Official Google Blog: "In the last few days we developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience. The algorithm we incorporated into our search rankings represents an initial solution to this issue, and Google users are now getting a better experience as a result." Danny adds that customers' reviews are probably used as a new factor in the algorithm (though not sentiment analysis). Again, user signals used as confirmation of the trustworthiness of a website.

Between December 2010 and the end of January, the SEOsphere saw an increasing number of posts complaining about the ever-worsening quality of Google SERPs. Somewhat as a reaction, we started to see an increasing number of ex Search Quality Googlers answering on Quora and Hacker News, usually predicting some big change in the algorithm. During this period Matt Cutts said that all the engineers who had been moved to other Google projects would return full time to the Search Quality Department… does that mean more people working on the algorithm, or more manual reviews?

January 21, 2011. Matt Cutts publishes a post on the Official Google Blog (the most official of the many blogs Google has): Google search and search engine spam. It is the famous announcement of Google's campaign against content farms. In the post, Matt Cutts affirms: "we can and should do better." Again, a move that seems to show how Google is going to favor trusted authority sites. In the same post he explains how the May Day Update and the later "brandization" of SERPs were meant as earlier steps in this direction.

January 31, 2011. The always clever Bill Slawski publishes a post that gives hints on how Google may rank social networks, presenting three 2007 patents that were published a few weeks earlier. Probably some of the signals described in the first patent are the ones Google is actually using to bestow authority on influencers.

February 1, 2011. At Future Search, Google accuses Bing of copying its search results, detecting them thanks to the Bing toolbar. Ironically, another ex Search Quality Team Googler reveals on Quora that Google uses the same technique with its own toolbar. Again, users' data.

February 12, 2011. The J.C. Penney case comes to light thanks to an investigation by the New York Times. Google intervenes, but this delayed intervention shows one thing: that Google does have a serious problem on the trust side of its algorithm.

February 15, 2011. Matt Cutts presents a video in which he explains how webspam is handled at Google (a piece of advice?) and actively promotes the new spam-blocking Chrome plugin launched on Valentine's Day. Another way to collect useful signals from users about what is relevant or not on the web.

What conclusions can be drawn?

  1. That Google seems to have understood that it has to go back to its origins and to the base of its core business: the quality of SERPs;
  2. That Google has probably understood that the old classic link-ranking factors can be so easily gamed that other factors, such as Trust and Domain Authority, should be given priority;
  3. That social media is influencing the way people search so strongly that social signals must be considered important ranking factors, and that Trust and Authority must be translated to the social reality;
  4. That user-generated content and user interaction with websites are more active than ever before, and therefore that user factors must be considered relevant, at least as a litmus test, even though they have to be very carefully crafted into the algo, as elements like reviews can be easily gamed.

And that the frantic series of news about search is just at its beginning.

Post Scriptum: I wrote this post between the 13th and 14th of February, totally unaware that Rand Fishkin was writing a post that touches on the same subject. Anyway, I hope mine gives another perspective on the search quality issue and on the predictions that can be made on the basis of the latest events in search.

Update – March 3, 2011

In my last lines I was saying that we were still at the beginning of a long series of events and changes that could change – a lot – the SERPs we knew.

In fact we have had: the penalization of Forbes for selling links, the Farmer Update, Google Social expanded into Universal Search, and today, March 3rd, Google has announced that it will retouch the Farmer Update so as to stop penalizing legitimate sites…

Let's see if Google – citing "Il Gattopardo" – is changing everything so that nothing changes.

