The History of Google Search Algorithm Updates


To keep control of the search ecosystem and prevent manipulation of its results, Google frequently updates its search algorithm, affecting both website rankings and the user search experience. These updates often cause ranking volatility, making website visibility in search results, and SEO in general, an increasingly difficult task for webmasters.

What is a Search Algorithm?

In search engine terms, a search algorithm is the (largely secret) method a search engine uses to assess the relevance of web pages and rank them in its results. In the early days of search engines, algorithms were simple and easy to manipulate. Opportunistic website owners figured out how to game search results by repeating keywords and other techniques, later collectively labelled black hat SEO, a term covering all practices that create fake relevance and popularity.

Google has applied many updates since 1999 in order to deliver the best possible search results to its users. Nowadays, most webmasters strive to keep up with all algorithm changes, which now exceed 500 per year, while others watch their websites get penalised for applying black hat tactics.

[Image: Google in 1998]

From the early, rather unsuccessful updates targeting on-page and off-page factors to the first major update in 2003 (codename: Florida) that put SEO on the map, Google made it clear that algorithm updates aim to help the search engine deliver the best possible experience to searchers.

[Image: Google algorithm updates in 2003]

In the series of algorithm updates that followed, Google introduced the Everflux update, which brought daily crawling to surface the freshest and most relevant content across the web; the controversial Sandbox update; the Jagger update in 2005, which targeted low-quality links; and Universal Search, which integrated many of the result types we see today in SERPs, such as news, images, maps and videos.

[Image: Google algorithm updates in 2005]

To combat spam indexing and PageRank sculpting, Google introduced the rel="nofollow" link value in 2005, rel="canonical" in 2009 (a solution for on-page duplicate content issues), the “Panda” update in 2011 and, finally, the “Penguin” update in 2012. The nofollow value lets webmasters indicate which hyperlinks search engine bots should not follow or pass ranking credit through, while “Panda” aimed to lower the rankings of sites with low-quality content. PageRank sculpting led numerous websites to be penalised. Other “black hat” techniques disapproved of by Google include keyword stuffing, invisible text, content scraping, cloaked pages and article spinning.
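For illustration, this is roughly how these two hints appear in a page's markup (a minimal sketch; the URLs are placeholders):

    <!-- In the <head>: declare the preferred URL when several URLs serve the same content -->
    <link rel="canonical" href="https://example.com/original-article/">

    <!-- In the body: ask search engine bots not to follow this link or pass ranking credit -->
    <a href="https://example.com/untrusted-page/" rel="nofollow">user-submitted link</a>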

[Image: Google algorithm updates in 2011-2012]

The “Penguin” update targeted both content and link spam, such as automated link building software, hidden links, link farms, Sybil attacks, page hijacking, domaining and cookie stuffing. Most sites relying on these tactics plummeted in the SERPs.

With the latest “Hummingbird” update, the search algorithm is shifting towards the semantic web, aiming to provide more intelligent and personalised results to users. More importantly, the NSA/CIA revelations in 2013 gave Google cover to move all searches to secure connections, which has in turn deprived SEOs of vital keyword data, with losses reaching 80% in late 2013.

[Image: Google's latest update]

Many SEO factors, such as fresh, high-value content, author rank and natural link building, will continue to be essential to ranking in the ever-changing SERPs in the years to come. Although algorithm updates have caused trouble for most webmasters, it can be argued that these changes have transformed organic results from plain website listings into a smart and reliable “answer finder”.