In the two decades before the Internet bubble, you didn't really hear the word "algorithm" much unless you were a computer programmer, an applied math major or, perhaps, a contestant in a tech spelling bee - if such a thing exists. Fast forward to today: if there's "an app for that," there's probably an algorithm for it too. These days, it seems that every aspect of our lives is presided over by algorithms. They predict which books we'll want to buy on Amazon, suggest who we might want to befriend on Facebook and perhaps even pick out a potential soul mate.
The latest algorithms are ones you may not be familiar with, but in the last few years they have jumped on the social media measurement bandwagon. A few big players - Klout, Kred and PeerIndex, to name a few - claim to be able to measure a person's social influence in tidy numerical form. All three use complex, proprietary algorithms to compute some kind of influence score. This is easier said than done. Klout, for example, faced criticism for giving U.S. President Barack Obama a lower score than teen pop star Justin Bieber - effectively labeling the president as less influential. This was only reversed in August 2012, when Klout altered its algorithm to factor in Wikipedia page relevance (and therefore take more real-world data into account).
For me, however, these new measures of Web popularity raise a few questions. Are there too many things in our lives that we're trying to boil down into an algorithm? What can an algorithm really tell us, and where does it fall short? And what are the ramifications when it fails?
The Algorithmic Flaw
Using the social media measurement sites as an example, it's clear that they all share a major flaw: their algorithms look at a user's "influence" in a vacuum, and the sites offer little in the way of measuring what those people are doing offline. In one way or another, all of the sites in question reward participants for becoming more engaged and connecting more social media networks. Klout, for example, asks users to connect each active social networking account to the service, and factors in interactions on Facebook, Twitter, Google+, LinkedIn, Foursquare and other social media sites, along with other publicly available online data (such as a Wikipedia page). Of course, the exact algorithms are proprietary, and therefore mostly under wraps. But that's part of the problem too. After all, if there are shortcomings in an algorithm's scoring calculations, is the average user aware of them?
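To make the general shape of such a score concrete, here's a purely hypothetical sketch - the real Klout, Kred and PeerIndex formulas are proprietary, so the weights, scaling and network list below are invented for illustration. It shows how a weighted sum over connected networks inherently rewards connecting more accounts: unconnected networks simply contribute zero.

```python
# Hypothetical influence score: NOT any real service's algorithm.
# Invented weights over connected networks, squashed into 1-100.

NETWORK_WEIGHTS = {
    "twitter": 0.40,
    "facebook": 0.30,
    "linkedin": 0.15,
    "google+": 0.15,
}

def influence_score(engagement):
    """Map per-network engagement counts to a 1-100 score.

    `engagement` maps a network name to a count of interactions
    (retweets, likes, shares). Networks the user never connected
    are simply absent, so linking more accounts can only raise
    the score.
    """
    raw = sum(NETWORK_WEIGHTS.get(network, 0) * count
              for network, count in engagement.items())
    # Arbitrary scaling for the sketch: clamp into the 1-100 range.
    return min(100, max(1, round(raw / 10)))

print(influence_score({"twitter": 800}))                   # 32
print(influence_score({"twitter": 800, "facebook": 900}))  # 59
```

Note what the toy model can't see: everything here comes from online interaction counts, so a person's offline influence - the flaw described above - never enters the calculation at all.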
In one of my earliest experiences with Klout, a few weeks after I tweeted a joke about my local CVS pharmacy, the site created a category and declared me "influential" about CVS, based on nothing more than a few retweets of my joke. Clearly, that gave me far more credit than I deserved on the topic!
There are all kinds of other problems with using algorithms to calculate things, especially when the data they rely on can be manipulated. For example, I asked Andrew Grill, the CEO of Kred, about Kred's ability to detect purchased Twitter followers or fake accounts, which many high-profile people have been accused of buying in recent months. (Learn more about this in The Economics of Fake Twitter Followers.)
"We couldn’t have that measurement in the algorithm," Grill said. "There would be no way to detect a false positive, like a legitimate surge of followers, say from a TV appearance."
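Grill's dilemma is easy to sketch. Below is a hypothetical, deliberately naive surge detector - not Kred's method - that flags any day whose follower gain dwarfs the running average. A bulk purchase of followers and a legitimate post-TV-appearance surge produce exactly the same data signature, so the detector can't tell them apart.

```python
# Hypothetical sketch of the false-positive problem Grill describes.
# A naive detector flags any day whose follower gain exceeds
# `threshold` times the average of the preceding days.

def flag_surges(daily_gains, threshold=5.0):
    """Return indices of days with suspiciously large follower gains."""
    flagged = []
    for i in range(1, len(daily_gains)):
        baseline = sum(daily_gains[:i]) / i
        if baseline > 0 and daily_gains[i] > threshold * baseline:
            flagged.append(i)
    return flagged

bought_followers = [40, 38, 42, 41, 5000]  # bulk purchase on day 4
tv_appearance    = [40, 38, 42, 41, 5000]  # identical signature!

print(flag_surges(bought_followers))  # [4]
print(flag_surges(tv_appearance))     # [4]
```

The numbers alone can't distinguish fraud from fame - which is precisely why, as Grill says, such a measurement can't simply be dropped into the algorithm.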
Such a dilemma is a prime example of where algorithms fail: they can crunch the data, but they aren't so good at interpreting what it means.
"The problem with social media monitoring tools is that the computers can see if a name is used, but they can't tell the context or whether the mention is a positive or negative impression," said Mike Byrnes of Byrnes Consulting, a firm that provides business planning and marketing strategy services.
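Byrnes' point can be shown with a toy example - a hypothetical sketch, not any real monitoring tool. A mention counter based on simple keyword matching treats praise, complaints and idle questions identically:

```python
# Hypothetical sketch: keyword matching sees the name, not the context.

def count_mentions(posts, brand):
    """Count posts mentioning the brand, with no sense of sentiment."""
    return sum(1 for post in posts if brand.lower() in post.lower())

posts = [
    "BrandX makes the best coffee in town!",
    "Never buying from BrandX again. Terrible service.",
    "Is BrandX open on Sundays?",
]

# All three count the same, even though only one is positive.
print(count_mentions(posts, "BrandX"))  # 3
```

Telling those three apart requires sentiment analysis - exactly the kind of interpretation that simple counting can't do.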
"As brands want to sell more products and services in the future, they will look for social 'influencers' to help them do it," Byrnes said. "My guess is that a lot of effort will be put into rating each person and brand using social media to highlight the best online referral target markets."
What this means is that these relatively new social algorithms are a lot more than an ego war or a popularity contest. Increasingly, real money is trading hands as a result of these algorithms, whether it's through marketing that people perform online, or through the algorithms' purveyors themselves (Klout, PeerIndex and Kred all give incentives from their sponsors for gains in user influence).
And if users don't know how their scores are being calculated, they're definitely at a disadvantage.
"Users should always know how their score is calculated, we post how we calculate our score right on our website," Grill told me.
Transparency Vs. Tricking the System
That seems like a start, but one of the problems with transparency in an algorithm is that it can be gamed. Just think of the black hat SEO crowd, who turned to tricks like keyword stuffing and cloaking as soon as it became known that keywords factored into search rankings. So here's the bind: when companies conceal how their algorithms work, they put users at a disadvantage. But when algorithms become too transparent, they can be gamed into near-uselessness, which puts users - or at least the honest ones - at a disadvantage too.
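The gaming problem can be sketched in a few lines. Suppose - hypothetically, as no real search engine works this way anymore - that a ranking rule were publicly known to be "score a page by how many times it contains the keyword." Anyone could then beat an honest page with garbage:

```python
# Hypothetical sketch: once a ranking rule is public, it can be gamed.

def keyword_score(page_text, keyword):
    """Publicly known rule: score = number of keyword occurrences."""
    return page_text.lower().count(keyword.lower())

honest_page = "Our guide to algorithms covers sorting and searching."
stuffed_page = "algorithms " * 50 + "buy cheap widgets here"

# The stuffed page wins despite having nothing useful to say.
print(keyword_score(honest_page, "algorithms"))   # 1
print(keyword_score(stuffed_page, "algorithms"))  # 50
```

The moment the rule is published, optimizing for the rule replaces optimizing for the thing the rule was meant to measure - which is exactly why score providers keep parts of their formulas secret.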
On the latter point, a spokesperson from Klout did tell me that "to maintain the integrity of the score, we don’t disclose the entire algorithm or how we develop it…"
This seems reasonable, but I think at least an explanation of the basis of the algorithm would be warranted on these sites, especially as these companies continue to share our information through their APIs.
We all know that algorithms are often very reductive; that's just their nature. I think the real problem is that we - and the companies that build these algorithms - have a hard time owning up to the fact that there are significant limits on what they can tell us about the big, wide, complicated world we live in.
As these sites develop and improve, so will their algorithms. And although not everyone needs a computer science degree, we will increasingly need to understand the extent to which algorithms can - and can't - help us in our lives.
I, for one, wonder what it would be like if dating sites encouraged users to contact those who were determined to be the worst match. After all, some things in life are totally unpredictable. Or at least we're free to think so until a better algorithm proves otherwise.