Escaping the Google Panda Filter

By Christina

Mar 10, 2015

Google Panda was designed to seek out and 'destroy' websites with a high volume of thin or low-quality content (among other things).  Google introduced the algorithm in an effort to improve the overall quality of its search results.  Unfortunately, the reality is that well-established websites with good content get caught in the filter and can see a huge drop in search rankings.

The websites hit by Panda vary, but some are trusted, authoritative brands with large audiences that have produced quality, popular content for years and really should not be caught by the Panda filter.

However, certain activities these companies may be involved in, which were perfectly fine prior to Panda, are no longer best practice.

The good news is that a website can eventually recover from this filter, but it is not an easy process: it takes significant changes and considerable time.  If a website's traffic has declined substantially for several months or longer, this has serious repercussions for the business, so the sooner you spot the problem, the quicker it can be sorted out.

Google Panda hit and recovery

Although I could go on forever about fixes that can be made to a website, here I will keep it quick and share some of the most common causes of Panda problems.

Taking a pre-emptive approach against search engine algorithmic filtering is now a fundamental part of search engine optimisation (SEO).  Nearly every website has some degree of technical, design and editorial issues that are potential risk factors.

Here are some of the things that cause good sites to get caught up in Google Panda; making the necessary changes will help a good site escape the filter:

Templates with limited text - many content sites have certain non-article templates that by design have relatively little text.  While not usually an issue on its own, this can become a problem when combined with other Panda risk factors.

Slideshows - slideshows and photo galleries are frequently used by website owners for the visual experience they provide to visitors and to help potential customers see at a glance what is on offer.  However, the search engines often see this type of content as thin, and slideshows are a common source of duplicate page problems.

Many designers are now moving to setups in which the full content is accessible on the main page and that is the only URL that can be indexed (there is a brief sketch of this at the end of the list).

Mobile - with mobile search continuing to grow in popularity, some websites are being altered to use shorter, more succinct articles that are easier to digest on mobile devices.  Unfortunately, this creates thin content issues, because the search engines want to see substantial articles with data, proof of findings and references to other high-quality content.  Striking a balance can prove tricky.

Viewability - too many promotions or advertisements above the fold, especially if they interrupt the visitor, can also be problematic, in part because they can lead to poor engagement signals.

Duplicate content or page elements - having a significant number of pages with minimal text, copied text, or duplicate title tags and headings is a common problem.

These duplicates can be caused by publishing the same content on more than one URL, which can happen because of CMS or migration issues, links throughout the website, external links into the website, or tracking codes appended to URLs.

Syndicated content - publishing large amounts of syndicated content, especially when it appears on more than one website, is problematic.  Websites need a high volume of unique, quality content.

Overlapping content - overlapping content means publishing the same or similar idea across a blog post, service page and gallery, where they share the same subject and repeat portions of the same text, headlines and title tags.  Although often better from a user's point of view, this can cause problems because it can look like thin content from the search engines' point of view.

Internal search - internal search results can be a source of an excessive number of thin, overlapping and partially duplicate pages (keeping these out of the index is also covered in the sketch after this list).

Incorrect, empty or missing pages - these can also contribute to triggering the Panda filter.  It is not unusual for a website, particularly after a redesign, to have pages that no longer exist or that should never have been indexed, which leads to soft 404 errors and low-value pages.
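To make a few of the fixes hinted at above a little more concrete, here is a minimal sketch of how a site might handle three of them: pointing slideshow sub-pages at a single canonical gallery URL, keeping internal search results out of the index, and returning a genuine 404 for pages that no longer exist.  The framework (Flask), the routes and the example.com URLs are purely illustrative assumptions, not a prescription - every CMS and site will implement this differently.

    # Minimal Flask sketch - the routes, URLs and example.com domain are
    # hypothetical; the framework is just one way to illustrate the idea.
    from flask import Flask, abort, make_response

    app = Flask(__name__)

    ARTICLES = {"panda-recovery": "Full article text lives here..."}

    @app.route("/gallery/<slug>/photo/<int:n>")
    def gallery_photo(slug, n):
        # Each individual photo page points search engines back at the single
        # gallery URL, so only one version of the slideshow gets indexed.
        resp = make_response(f"Photo {n} of gallery {slug}")
        resp.headers["Link"] = (
            f'<https://www.example.com/gallery/{slug}/>; rel="canonical"'
        )
        return resp

    @app.route("/search")
    def internal_search():
        # Internal search results are served to visitors but kept out of the index.
        resp = make_response("Search results page")
        resp.headers["X-Robots-Tag"] = "noindex, follow"
        return resp

    @app.route("/articles/<slug>")
    def article(slug):
        # Pages that no longer exist return a real 404 status code rather than
        # a friendly "not found" message served with a 200 (a soft 404).
        if slug not in ARTICLES:
            abort(404)
        return ARTICLES[slug]

The same signals can just as easily be added in your page templates instead - a link rel="canonical" tag or a meta robots "noindex" tag in the page head - if you cannot modify HTTP headers on your server.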

Recovering from the Panda Filter

Recovering from Panda typically requires an audit of the website to first find out what problems exist, then putting fixes in place: template and technical fixes combined with the elimination, consolidation and de-indexation of low-value pages through strategic use of 404s, permanent redirects, rel=canonical and meta robots "noindex" tags, as well as rewriting content, among other things.
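As an illustration of the consolidation side of this, the short sketch below (using the same hypothetical Flask setup and made-up URLs as before) permanently redirects a couple of thin, overlapping blog posts to the single, stronger guide that replaces them:

    # Hypothetical mapping of old, thin URLs to the page that replaces them.
    from flask import Flask, redirect

    app = Flask(__name__)

    CONSOLIDATED = {
        "/blog/panda-tips-part-1": "/guides/google-panda-recovery",
        "/blog/panda-tips-part-2": "/guides/google-panda-recovery",
    }

    @app.route("/blog/<path:old_path>")
    def legacy_blog(old_path):
        target = CONSOLIDATED.get(f"/blog/{old_path}")
        if target:
            # A 301 tells search engines the move is permanent, so the old
            # pages drop out of the index and the new page inherits the links.
            return redirect(target, code=301)
        # Anything not mapped genuinely is gone.
        return "Post not found", 404

Which technique to reach for - a 301 redirect, a canonical tag, a noindex or a straight 404 - depends on whether a replacement page exists and whether the old URL still attracts visitors or links.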

Unfortunately, the path to recovery is not an easy one.  The most likely causes can be identified, but it is a case of trial and error to fix enough of the offending issues to break a site out of the filter.

Recovery is not instantaneous: once you have cleaned up your website, you have to wait for Google to roll out an update.  These updates are happening more frequently, but the bigger refreshes only happen periodically.

You will require time and patience.


