Recently I’ve written about Klout score optimisation. Since then, I and others who outed ourselves as actively using Klout have been attacked by self-proclaimed SEO stars and other people who seemingly “hate Klout”. Can you hate a metric? Obviously people get very emotional when it comes to Klout.
Klout measures the social media influence of people. While it fails at determining your real life influence, it’s quite accurate for measuring how active and influential you are on social media, including Facebook, LinkedIn, Twitter and Google+.
That’s why some people hate Klout: they are only influential within a small closed group, while they have never shared enough with the general public on social media to get appreciation from the masses.
What did I say when people ridiculed me for using Klout to determine people’s influence? I said that I am quite sure that Google internally has a similar system for finding out who exerts influence on the social web and who does not. It wasn’t a very daring prediction; it was just an extrapolation based on the steps Google has taken in the past. Google has already been focusing on authorship, real names and the social graph for a while.
Now Bill Slawski has written an article on the reputation systems Google uses, might use or will use in the future. There are three mentioned in the post. The most interesting one is Agent Rank. Not only does the name sound familiar and somewhat self-explanatory, it’s also a patent Google has filed. It most probably is, or will be, used for Google +1 votes.
The Agent Rank patent does not describe in detail how such an Agent Rank might rank people, but the other papers mentioned in the post suggest a few ways to determine trust on a collaborative social site. At Wikipedia, for example:
“Users gain reputation when they make edits that are preserved by subsequent authors, and lose reputation when their work is partially or wholly undone.”
What does this mean? Opportunism and mainstream opinions pay off.
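The Wikipedia rule quoted above can be pictured as a simple update loop: an edit that survives later revisions earns the author reputation, an edit that gets undone costs them some. The sketch below is purely illustrative, not Wikipedia’s actual algorithm; the function name, the scoring range and the example numbers are all my own assumptions.

```python
# Toy sketch of an edit-survival reputation system, loosely modelled on the
# Wikipedia scheme quoted above. All names and numbers are illustrative.

def update_reputation(reputation, author, fraction_preserved, weight=1.0):
    """Reward the author when an edit survives later revisions,
    penalise them when it is partially or wholly undone.

    fraction_preserved: share of the edit still present after
    subsequent authors have revised the page (0.0 to 1.0).
    """
    # Map survival to a score in [-1, 1]: fully preserved -> +1, fully undone -> -1
    delta = weight * (2 * fraction_preserved - 1)
    reputation[author] = reputation.get(author, 0.0) + delta
    return reputation

rep = {}
update_reputation(rep, "alice", 1.0)   # edit fully preserved -> +1.0
update_reputation(rep, "alice", 0.25)  # mostly reverted      -> -0.5
update_reputation(rep, "bob",   0.0)   # wholly undone        -> -1.0
print(rep)  # {'alice': 0.5, 'bob': -1.0}
```

Notice how the incentive works: the safest way to gain reputation under such a scheme is to make edits nobody wants to revert, which is exactly why opportunism and mainstream opinions pay off.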
This is similar to early Web 2.0 sites such as Digg or Hacker News, where a few dominant users gained reputation by submitting content from sites everybody likes and agrees on. Back then it was TechCrunch, for example. I saw this phenomenon on my first-generation social sites.
The more “one size fits all” and “smallest common denominator” a page was, the more likely it was to succeed.
With it, the submitter succeeded as well. Thus people were always in a race to submit TechCrunch articles. The ones who submitted the most TechCrunch articles were the most reputable.
I’ve seen a similar phenomenon on the newer social sites that couldn’t be gamed as easily: people submitting every story by Search Engine Land, or whatever the main authority in your field is. Most automated accounts do it. They gain authority by simply feeding the RSS feeds of popular sites into Twitter, even though nobody clicks the links.
I can tell, because I see the stats for my blog posts that get retweeted by these bots, the bit.ly stats for the URLs and the reputation metrics of the bot accounts themselves. Many of them were able to game Topsy’s algorithm. Topsy is not as granular as Klout: it has just three tiers of user, not influential, influential and highly influential.
Google will measure trust with more complexity than Klout
I guess, but elements from all the measurement systems above will be among the likely factors. We’re still in a very early phase of this, but it’s already clear that Google no longer wants to trust websites but authors instead.
The insistence on real names and the many incentives to verify your identity on Google services all point in the same direction: Google will focus more on people than on websites in the future. Thus an author publishing on a completely new website will be able to push it, or at least the articles they transfer their reputation to, quickly to the top.
Also, the more reputable documents and pieces of content you support, the more reputable you get. As on Klout, it will most likely be sheer activity that makes you more trustworthy. Google’s Agent Rank won’t be able to compute your reputation based on just one or two articles and a few votes.
The more you participate and the more content you create, the more you will become an authority. In the long run, Google will have to focus more and more on real authority, not just the sheer number of votes. In the beginning, though, the search giant has no choice: it has to reward sheer activity, as it doesn’t yet have enough users, votes and other social signals.
You may have noticed that I use terms such as authority, influence, reputation and trust quite interchangeably in this article. As yet, there is no clear standard on the web for measuring the worth or value of one’s contributions. We already know that no current model is perfect, but the importance of measuring people rather than websites is growing.
You don’t have to be a prophet to extrapolate the most likely ranking signals for people Google will have to use. Consider these:
Activity – as noted above, you can’t measure something where there is nothing; without activity there is nothing to measure. On the other hand, there will be limits to this. We know that having a million followers on Twitter does not necessarily mean you are more important. Also, sending 50 automated messages a day may be too much.
Altruism – nobody, on social media or in real life, likes constant self-promotion. Many marketers still get that wrong. You get what you give. Science has shown that only through altruism can a species survive as a whole. Algorithms can’t rely on egoists to offer the best advice, as there would be no popularity at all: everybody would just promote their own works. So an algorithm has to count shares from people who have no direct connection to the author.
Authority – one of the reasons why Google succeeded in becoming the biggest search engine in the world is its reliance on authority. The more experts consider something a good resource, the better. It worked for a while with websites, as the original PageRank formula reflected the reputation model of the traditional scientific community: the more a document got cited, the better. We know it’s no longer enough; it hasn’t been ever since Google became the leading search engine, because PageRank could be gamed quite easily with paid links. Still, you can’t bribe thousands of people as easily as you can pay for a few links.
Expertise – authority can’t be measured without measuring expertise as well. You can get very popular despite being dead wrong. So an Agent Rank will have to measure whether a given author gets supported by thousands or even millions of people who have no clue or whether they get supported by a few experts who are really knowledgeable in a given area of expertise.
Impartiality – just consider a Google +1 user who constantly votes up Fox News. Can Google count on this person to be an expert on news? Well, most probably the algorithm will consider such a user just an expert on US conservative views. In contrast, consider a user who gives +1 to all kinds of resources, including CNN, BBC, Al Jazeera etc. Will this user be more of an impartial expert?
Popularity – you may be right, but as long as you don’t tell the world, or convince no more than a bunch of bookworms who do nothing else than deal with the issue all day, it won’t be a sign of influence. You have to be able to appeal to the masses. Google already favours Wikipedia in its results not because it’s always the best result, but because most people can grasp it. Whether you search for SEO, film or God, Wikipedia will show up on top. You will surely agree that there are bigger authorities or better results for all three examples.
Quality – the aforementioned mass appeal can be easily gamed, though. We have seen content farms embrace the shallow-but-popular approach over the years until Google had to curb it. Quality will have to be measured as well. How can quality be determined? This is very difficult; I could write a whole post about it alone. Google’s ongoing high-quality update of 2011, aka Panda, has been about exactly that. The quality of the texts authors publish and vote for will itself have to be determined by a complex mix of signals.
Reputation – someone can have mass appeal, be considered an expert by other experts, even be considered an authority. The reputation of this person can still be a nightmare. Just think about people like Jason Calacanis, Derek Powazek or Steve Rubel who declared SEO dead or rubbish. They are not even famous – they are infamous. People know them because they shout louder than others. So their reputation is awful no matter how much they can game other simpler social media metrics.
Topicality – as you can see above, these “experts”, who indeed have enormous mass appeal, gained great success from their anti-SEO rants when measured by sheer reach and attention. Yet most of their other contributions haven’t been about SEO at all. So an Agent Rank will have to determine whether you are an expert on SEO, gardening or homeopathy. For example, Klout assumes I’m an expert on homeopathy because I’ve been involved in many online arguments with people who never tried it but attempt to convince me that it cannot work.
Trust – trust is not influence, and not reputation either. Trust is about telling the truth, being reliable and not tricking people in order to gain something. How on earth do you measure that? You can be trustworthy without being influential or having a reputation. You don’t need to be an expert or have mass appeal to be trustworthy either. It’s a very important and easy-to-grasp concept, yet hard to measure. Nonetheless you need it to survive, and Google will have to measure it as well. Can a person be trusted not to favour their own clients, colleagues or advertisers? Most people will have a bias, and the less bias, the better suited someone is to point to a good resource or author. So Google will have to measure the trust other people ascribe to you.
Velocity – news that spreads fast is in many cases more important than news that spreads slowly. Of course this signal is not enough on its own. Is the royal wedding or Osama Bin Laden’s death really the most important news? It depends on many other factors. Still, the speed with which articles by a particular author or social media user spread is one metric that has to be included among the above. Some ideas need a decade or a century to spread; they aren’t less important, they just need more time. But in many cases there’s a reason viral ideas spread like wildfire, and Google will have to measure velocity, as it already does with breaking news.
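One crude way to picture how signals like these might be combined is a normalised weighted sum over per-signal scores. This is purely a hypothetical sketch: the signal names come from the list above, but the weights, the scoring scale and the function are my own invention, not anything Google or Klout has published.

```python
# Hypothetical combination of the person-level signals listed above.
# The signal names come from the article; the weights and scores are invented.

SIGNAL_WEIGHTS = {
    "activity":     0.10,
    "altruism":     0.10,
    "authority":    0.15,
    "expertise":    0.15,
    "impartiality": 0.05,
    "popularity":   0.10,
    "quality":      0.15,
    "reputation":   0.10,
    "topicality":   0.05,
    "trust":        0.10,
    "velocity":     0.05,
}

def agent_score(signals):
    """Weighted sum of per-signal scores, each assumed normalised to [0, 1].
    Missing signals count as 0 - without activity there is nothing to measure."""
    total_weight = sum(SIGNAL_WEIGHTS.values())
    return sum(w * signals.get(name, 0.0)
               for name, w in SIGNAL_WEIGHTS.items()) / total_weight

# An active, altruistic expert who is trusted but not yet popular:
author = {"activity": 0.8, "altruism": 0.6, "expertise": 0.9, "trust": 0.7}
print(round(agent_score(author), 3))
```

Even this toy version shows why the concepts have to be measured separately: an author can max out activity and popularity while scoring zero on trust and impartiality, and the aggregate alone would hide that.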
It’s a huge task to measure these abstract concepts, but at the end of the day they determine how important a person, a source or a document is.
Some old school SEOs who are envious of the social media influence of more active users frantically try to outpace the competition by making their employees vote them up on social sites or by bragging that they work for big brands and only accept the highest quality.
Telling people is not enough these days; you have to show or rather offer this quality while sharing your know-how free of charge, otherwise others will do it. Most people will look at the measurable social proof and not the clandestine contracts you have with a large corporation. Google will likewise care more for what other people say about you than what you say yourself or make your employees tell the world.
Already there are tendencies such as selling employee attention to the highest bidder, as Walmart does with its more than one million underpaid workers. Google will have to determine quickly whether there are suspicious voting patterns within a particular group of people.
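A first-pass way to flag such group voting patterns is to measure how much two users’ vote histories overlap: employees ordered to vote up the boss will share nearly identical vote sets. The sketch below is hypothetical, not anything Google has disclosed; the threshold and the example data are invented.

```python
# Hypothetical vote-ring detector: flag pairs of users whose upvote
# histories overlap suspiciously. Threshold and data are invented.

from itertools import combinations

def jaccard(a, b):
    """Overlap of two sets of voted-on items (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suspicious_pairs(votes, threshold=0.8):
    """Return user pairs whose vote sets overlap more than `threshold`."""
    return [(u, v) for u, v in combinations(sorted(votes), 2)
            if jaccard(votes[u], votes[v]) > threshold]

votes = {
    "emp1": {"boss_post_1", "boss_post_2", "boss_post_3"},
    "emp2": {"boss_post_1", "boss_post_2", "boss_post_3"},
    "organic": {"boss_post_1", "news_a", "news_b"},
}
print(suspicious_pairs(votes))  # [('emp1', 'emp2')]
```

A real system would of course need far more than pairwise overlap, but the principle stands: coordinated votes leave a statistical footprint that organic appreciation does not.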
Still I’m quite optimistic, overall; authors will be judged by what they give to the world, not what they sell to a chosen few. That’s a great way to find out what’s important. I believe that 99% of the people know better than just the top 1%.
Let’s just hope that Google doesn’t mistake mob mentality for democracy.