Google’s Phantom Update in May - SEO Traffic down 33%
After digging into each of the four sites since May 9th, I have some information I'd like to share about what this phantom update was targeting. Since Penguin 1.0 heavily targeted unnatural links, I wanted to know if this update followed the same pattern, or if other web spam factors were involved (and being targeted). Now, my analysis covers four sites, not hundreds (yet), but there were some interesting findings that stood out.
Below, I’ll cover five findings based on analyzing websites hit by Google’s Phantom Update on May 8th. And as Penguin 2.0 officially rolls out, keep an eye on my blog. I’ll be covering Penguin 2.0 findings in future blog posts, just like I did with Penguin 1.0.

Google Phantom Update Findings:
- Link Source vs. Destination
One of the websites I analyzed was upstream, unnatural-links-wise. It definitely isn’t a spammy website, a directory, or anything like that, but the site was linking out to many other websites using followed links (when a number of those links should have been nofollowed). Also, the site can be considered an authority in its space, but it was violating Google’s guidelines with its external linking.

I’ve analyzed over 170 sites hit by Penguin since April 24, 2012, and this site didn’t fit the typical Penguin profile exactly. There were additional factors, some of which I’ll cover below. But being an upstream source of unnatural links was, in my opinion, a big factor in why this site got hit. So, if this was a pre-Penguin 2.0 rollout, I’m wondering how many other authority sites will get hit when the full rollout occurs. I’m sure there are many site owners who believe they can’t get hit, since they think they’re in great shape as an authority. I can tell you this site got hit hard.
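If you want to sanity-check your own site for this issue, here’s a minimal sketch of how you might flag followed external links on a page using Python’s standard library. To be clear, this is just an illustration (the domain names and HTML snippet are made up), not a tool I used in the analysis:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OutboundLinkAuditor(HTMLParser):
    """Collects external <a> links that are missing rel="nofollow"."""

    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.followed_external = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        host = urlparse(attrs.get("href", "")).netloc
        # Skip internal and relative links.
        if not host or host.endswith(self.own_domain):
            return
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" not in rel:
            self.followed_external.append(attrs["href"])

# Hypothetical page markup for illustration only.
html = '''
<a href="/about">About</a>
<a href="http://example.org/widget" rel="nofollow">sponsor</a>
<a href="http://partner.com/page">partner</a>
'''
auditor = OutboundLinkAuditor("mysite.com")
auditor.feed(html)
print(auditor.followed_external)  # prints ['http://partner.com/page']
```

In practice you’d run something like this across your crawled pages and then decide, link by link, which followed external links actually warrant a nofollow.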
- Cross-Linking (Network-like)
Two of the sites that were hit were cross-linking heavily to each other (as sister websites). And to make matters worse, they were heavily using exact-match anchor text. Checking the link profiles of both sites, the sister sites accounted for a significant amount of each other’s links, and again, many of those links used exact-match anchor text. It’s worth noting that I’ve helped other companies (before this update) with a similar situation. If you own a bunch of domains and you are cross-linking the heck out of them using exact-match anchor text, you should absolutely revisit your strategy. This phantom update confirms my point.
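To see how lopsided a setup like that looks in the numbers, here’s a rough sketch of computing the exact-match anchor share and the share of links coming from sister domains. The backlink sample, keyword, and domain names below are all hypothetical, but the arithmetic is what you’d apply to an export from your link-analysis tool of choice:

```python
from collections import Counter

# Hypothetical backlink export: (anchor_text, source_domain) pairs.
backlinks = [
    ("best blue widgets", "sister-site-a.com"),
    ("best blue widgets", "sister-site-a.com"),
    ("best blue widgets", "sister-site-b.com"),
    ("Acme Widgets", "news-site.com"),
    ("https://acmewidgets.com", "forum.example.com"),
]

target_keyword = "best blue widgets"  # the exact-match phrase being checked
sister_domains = {"sister-site-a.com", "sister-site-b.com"}

# Share of links using the exact-match anchor.
anchors = Counter(anchor.lower() for anchor, _ in backlinks)
exact_share = anchors[target_keyword] / len(backlinks)

# Share of links coming from your own sister domains.
sister_share = sum(1 for _, d in backlinks if d in sister_domains) / len(backlinks)

print(f"exact-match anchors: {exact_share:.0%}")       # prints 60%
print(f"links from sister sites: {sister_share:.0%}")  # prints 60%
```

When both of those percentages are high at the same time, as they were for the two sister sites I analyzed, the profile looks far more like a network than a natural link graph.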
- Risky Link Profiles (historically as well as current)
This was more traditional Penguin 1.0: each of the four sites had risky links. Now, one site in particular had a relatively strong link profile, has been around for a long time, and had built up a lot of links over time. But there were pockets of serious link problems. Spammy directories, comment spam, and spun articles were driving many unnatural links to the site. Again, this wasn’t overwhelming percentage-wise. I’ve analyzed sites hit by Penguin 1.0 that had 80-90% spammy links, and that wasn’t the case with the site mentioned above.

Two of the sites I analyzed had more spammy links. Their situation looked more Penguin 1.0-like, and they got hit hard. There were many spammy directories linking to the sites using exact-match anchor text, comment spam was a big problem, etc. And drilling into their historic link profiles, there were many more spammy links that had already been deleted. So their link profiles carried “unnatural link baggage”. And I already mentioned the cross-linking situation earlier (with two sites). So yes, links seemed to have a lot to do with this phantom update (at least based on what I’ve seen).
- Scraping Content
To make matters more complex, two of the sites were also scraping some content to help flesh out pages on the site. This wasn’t a huge percentage of content across either site, but it was definitely a big enough problem that it stood out during my analysis. The other two sites didn’t seem to have this problem at all. Scraping-wise, one site was providing excerpts from destination webpages and then linking to those pages if users wanted more information (this was happening across many pages). The other site had included larger portions of text from the destination page without linking to it (more traditional scraping).
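If you suspect pages on your site lean too heavily on copied text, a quick-and-dirty way to gauge the overlap is to compare the two passages with Python’s difflib. This is purely an illustration with made-up example strings; it’s not how Google detects scraping, just a simple way to eyeball similarity during an audit:

```python
import difflib

def content_overlap(source_text, candidate_text):
    """Rough similarity ratio between two passages of text (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, source_text, candidate_text).ratio()

# Hypothetical example: a "page" that reuses most of a source passage.
original = ("Our phantom update analysis found that risky link profiles "
            "and scraped content were common factors across affected sites.")
scraped = ("Our phantom update analysis found that risky link profiles "
           "and scraped content were common factors.")

ratio = content_overlap(original, scraped)
print(f"overlap: {ratio:.2f}")
```

A ratio near 1.0 means the candidate text is essentially a copy; short excerpts with a link back to the source will score much lower than wholesale reuse, which mirrors the difference between the two sites described above.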