I had a thought earlier today, funnily enough whilst interviewing copywriters: how important is the quality of writing as a ranking factor following the Panda update?
Obviously the Panda update has had a negative impact on websites with low-quality content – but, looking at it the other way, how has it affected high-quality content? Forgetting completely about links for now and assuming all else is equal, does better-written content now have more of a positive impact on search rankings?
Well, I thought I’d run a few tests with Google’s reading-level search filter to compare how content classified as basic, intermediate or advanced ranks.
How is the reading-level split between Google SERPs?
I wanted to get an idea of how reading levels are spread across the content that ranks on the first page of Google. Does content have to be well written, or is the fact that it’s unique sufficient? Here’s an example query, with the reading levels highlighted in red – please note, not all listings are classified with reading levels.
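Google doesn’t document how its reading-level classifier works, but a rough proxy for sorting pages into basic/intermediate/advanced buckets is a readability formula such as Flesch reading ease. Here’s a minimal sketch – the crude syllable counter and the bucket thresholds are my own assumptions, not Google’s:

```python
import re

def flesch_reading_ease(text):
    """Rough Flesch reading-ease score - a stand-in for Google's
    unpublished reading-level classifier, not a replica of it."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))

    def syllables(word):
        # Crude vowel-group count, minimum of one per word
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (total_syllables / len(words)))

def bucket(score):
    # Illustrative cut-offs only - Google's thresholds aren't public
    if score >= 70:
        return "basic"
    if score >= 50:
        return "intermediate"
    return "advanced"
```

Run over a crawl of ranking URLs, something like this would let you chart the basic/intermediate/advanced split for yourself rather than eyeballing the SERP filter.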
So what can we read into this (no pun intended)? Well, looking at these results, it’s interesting to see that no advanced content ranks at all for this query, despite indexed content being spread fairly evenly across the three categories. In fact, I didn’t find a single “advanced” piece of content in Google’s top 100 results at all!
But what about individual sites?
Ralph Fiennes said earlier in the week that Twitter is to blame for dumbing down the English language and making people use shorter words:
Ironically this was written in the Daily Mail, whose overall website reading level has an advanced score of less than 1%!
So is reading level a factor?
Based on the results I’ve seen so far, I would have to say no – and if it is, it’s not rewarding advanced content; if anything, the opposite is happening. These results don’t fit well with what the Panda update was intended to achieve, and I’m actually quite surprised by that. This does make the large assumption that all link profiles are equal (which of course is never true), but even so, I’d expect Google to vary the results between reading levels to a wider extent.
Perhaps it’s something they are (or will be) looking at though – we haven’t even got to Panda 3.0 yet!
So what factors is the Panda update looking at?
Daniel Bianchini and I presented at a4uexpo a couple of weeks ago on this subject, and there’s a wide range of factors we believe to be involved. Dan goes into more detail in his Panda algorithm update post, but to summarise a couple of key points, here are what I think are the main things that need to be addressed.
The biggest success stories so far have come from sites that have really cleaned up their act and removed low-quality content – sites that had thousands of pages and have now trimmed right back, so that only their top-quality content is indexed and visible to Google. Others have split content across several domains or subdomains.
User intent and bounce rate
Do Google monitor bounce rate? In my opinion, yes, of course they do. Maybe not via Google Analytics directly, but if you’ve clicked a search listing in Google and bounced straight back out to visit the next site, that’s not a good sign of quality.
They’re likely to test things like this all the time. For example, what’s the user intent behind the query “Apple”? Are you looking for the fruit or an iPod? Only bounce rate can really tell Google the intent behind this – and it’s probably the main reason why the fruit industry is in ruins after the success of Apple, BlackBerry and Orange!
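To make the bounce-rate idea concrete, here’s a toy sketch of how a search engine could estimate “pogo-sticking” from click logs. The log format, the 10-second threshold and the example data are all invented for illustration – nothing here reflects Google’s actual signals:

```python
from collections import defaultdict

# Hypothetical click-log records: (query, url, dwell_seconds, returned_to_serp)
clicks = [
    ("apple", "fruit-site.example", 4, True),
    ("apple", "fruit-site.example", 6, True),
    ("apple", "apple.com", 120, False),
    ("apple", "apple.com", 90, False),
]

def short_click_rate(records, threshold=10):
    """Share of clicks per URL where the user bounced back to the
    results page within `threshold` seconds (a pogo-sticking proxy)."""
    totals = defaultdict(int)
    shorts = defaultdict(int)
    for _query, url, dwell, returned in records:
        totals[url] += 1
        if returned and dwell < threshold:
            shorts[url] += 1
    return {url: shorts[url] / totals[url] for url in totals}

rates = short_click_rate(clicks)
```

In this made-up sample, every click on the fruit site bounces straight back to the results page while the Apple clicks stick – exactly the kind of pattern that would tell a search engine what “apple” really means to most searchers.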
But what is high-quality content?
Based on the results of this, I’m not sure it is writing standard – or at least, it’s not a huge factor. I’m sure Google will be aware of sites with spelling mistakes, poor grammar or badly written content, but whether they’re being penalised in any way, I’m not so sure.
I’ve recently heard stories of SEOs buying old newspapers from the 1960s off eBay, scanning them and uploading each article as a unique webpage. The theory is that it’s professionally written content, and because it was published well before the internet era, it’s also likely to be unique once indexed – so you should be able to rank highly for those terms. I have to admit, I was very impressed by the idea; even if you’re just doing it to collect affiliate ad revenue via Google AdSense, it sounds like it should be a very effective strategy. But it’s forgetting one key thing…
Personally, I think the biggest losers from the Panda update are sites with low volumes of links to internal content. If you’ve been hit by Panda, try asking these questions:
- How much long-tail traffic do you get vs head terms? How has that changed?
- How many links do your internal pages have vs your homepage?
- How many pages do you have indexed vs pages which are generating search traffic?
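As a worked example, the checks above could be scripted against an export of your own crawl and analytics data. Everything below – field names, thresholds and sample pages – is hypothetical; the point is simply to separate pages worth keeping from thin pages that may need improving, merging or removing:

```python
# Hypothetical per-page export: URL, inbound deep links, monthly search visits.
pages = [
    {"url": "/", "links": 250, "search_visits": 3000},
    {"url": "/guide", "links": 40, "search_visits": 900},
    {"url": "/tag/widgets-page-17", "links": 0, "search_visits": 0},
    {"url": "/archive/2006-06", "links": 1, "search_visits": 2},
]

def panda_triage(pages, min_links=2, min_visits=5):
    """Split indexed pages into 'keep' and 'review' buckets based on
    deep links and search traffic (a rough heuristic, not Google's)."""
    keep, review = [], []
    for p in pages:
        if (p["url"] == "/"
                or p["links"] >= min_links
                or p["search_visits"] >= min_visits):
            keep.append(p["url"])
        else:
            review.append(p["url"])  # candidates to improve, merge or remove
    return keep, review

keep, review = panda_triage(pages)
```

With the sample data, the homepage and the well-linked guide survive, while the tag page and old archive page fall into the review bucket – the kind of thin, unlinked content Panda appears to punish.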
If you can analyse the detail of these, you can get a clearer idea of which content you need to tidy up, remove or build links to. From most of the experiences I’ve seen so far, it’s not a quick process – recovery tends to be built back up in gradual steps. But if you have content that can naturally attract direct, deep links, you’ve got a much better chance of coming out the other side with a large spike in traffic – just in the right direction this time!
It would be great to hear about anyone else’s experiences with this – especially if you’ve seen rankings change across different standards of writing.