London Mayoral Elections: Detecting BBC Bias through Google?

I don’t usually foray into politics on this blog, but you can nonetheless dig up some interesting stats about political coverage using Google.

The BBC’s charter instructs it to maintain a fair balance in its political news reporting – giving a proportional share of editorial space and coverage to all interested parties. Famously, the BNP’s performance during the late 2000s led to appearances on Question Time for Nick Griffin as part of this remit of even-handedness.

But lingering suspicions about the corporation’s bias remain. Rare is the week that passes without some politician or other averring that the BBC is biased against his or her side of a debate. But thanks to Google, it’s possible to do some high-level stats to test the notion of balance. A good example is the election for London Mayor* – now just days away.

Current polls for the London Mayoral elections are quite revealing. The major parties – Conservatives, Labour and the Lib Dems – are naturally standing candidates, as well as the Green Party, UKIP, the BNP and a number of independents and smaller parties.

Firstly, bear in mind that on current polling figures, UKIP is expected to poll around 3% of the vote – almost exactly the same as the Greens.

So you’d expect that the BBC would be giving coverage to these parties more or less equally, right? I did some snuffling around using Google and found the following facts.
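For anyone who wants to reproduce the snuffling: the numbers in question are just Google’s estimated result counts for site-restricted searches. I won’t swear these are the exact strings I ran – treat them as illustrative of the approach:

    site:bbc.co.uk "Ken Livingstone" mayor
    site:bbc.co.uk "Brian Paddick" mayor
    site:bbc.co.uk "Jenny Jones" mayor

Bear in mind that Google’s result-count estimates fluctuate from day to day, so these are rough indicators rather than a census.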

Of the main candidates, the results are more evenly split, but they still indicate some imbalances.

As the incumbent, it is difficult to separate election-related stories for Boris Johnson from stories that involve him in his role as mayor – Google treats “mayor” and “mayoral” as equivalents. It is likely that the number of mentions he has received in the specific context of the election is actually fairly similar to that of Livingstone, but filtering signal from noise isn’t easily done.
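There is a partial workaround: wrapping a term in double quotes tells Google to match it exactly rather than folding in variants, so a query along these lines (again, illustrative rather than exactly what I ran) at least stops “mayor” being counted as “mayoral”:

    site:bbc.co.uk "Boris Johnson" "mayoral"

It doesn’t separate election stories from day-job stories, but it trims some of the noise.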

Despite this, some interesting facts leap out.

  • Brian Paddick receives 66% of the number of mentions that Ken Livingstone does, despite only 8% of polled Londoners declaring their intention to vote for the Lib Dem, compared to 41% for Livingstone.
  • Jenny Jones is given almost as much coverage as Paddick, despite only 3% of voters saying they will vote Green.
  • Siobhan Benita receives fewer mentions than Jones, despite being at least equal to her in the polls.
  • The BNP received more coverage than UKIP, despite polling at only 1% in contrast to 3% for UKIP.
  • UKIP receive the least coverage of the ‘major’ parties (and by some margin), despite polling more highly than the Greens, the independents or the BNP.

Of course, this is all just fun and games, but I think it’s possible to construct a view of how the BBC is covering the London Mayoral elections, and it’s not one that the BBC should be proud of. The Liberal Democrats receive far more coverage than their likely share of the vote would suggest they should – and the paucity of coverage given to UKIP is pretty damning. Even that is nothing compared to the favour shown to the Greens, who receive almost as much coverage as the Liberal Democrats, despite their even smaller share of the vote.

On this evidence, it must be allowed that the BBC disproportionately favours the Greens and the Liberal Democrats and almost ignores UKIP altogether, despite the backdrop of falling interest in green issues and increasing concern over the future of the EU.

*As I’m not a Londoner, and won’t be voting, none of this matters to me except in the abstract matter of how the election is being covered.

How Bad is the Google Penguin Update?

Wow. As I averred the other day, I’m aware of some awful SERPs for particular keywords (a few people have now picked up on the fact that Viagra.com doesn’t rank for ‘viagra’, which is borderline batshit insane).

As you may or may not know, I’m currently working at Trusted Dealers, with an interest in the used car market. It’s a tough market, because there are a bunch of brands who, as you’d expect, are pretty much grandfathered in at the top of the rankings for pretty much everything: Autotrader, Exchange and Mart, and Motors.co.uk, followed by a raft of big national names in motor retail – including manufacturers, franchised dealers and some very good affiliates.

Penguin didn’t leave much of a ripple in the market (which is part of the reason I have half an inkling that it was targeted at particular verticals) but even so, here’s a site that’s suddenly ranking on page 2 in the UK for “second hand cars”, which is a reasonably competitive term in this vertical.

Seriously, Google? A peek at some of the referring domains hardly suggests someone putting in the hard work at the coal face of link building either.

There are a few other sites that have crept into the first couple of pages that really have no business being there – plastered in AdSense or linking to this kind of shit, which is some kind of awful low-rent affiliate program, I guess.

So while the industry as a whole is relatively untouched, there’s definitely a chill around the nether regions.

So what to make of it? It’s war on the SEO industry – or at least the linkbuilding part of it. In a way, that’s the logical conclusion of what Google has been moving towards over the last few years: reward brands. Reward social signals. Punish content farms. Go after spun content and made-for-AdSense sites.

Dave Naylor probably had it right a few days ago: Google have decided to start dropping the hammer, and are prepared to countenance compromising the quality of the SERPs in the short term in the name of what they perceive to be a greater good.

So what does this tell us about the future? Is not link building the new link building? Will you have to be a brand? Should you work on legit social signals? Probably yes, but if that’s news to you, you’ve been asleep too long even before this update.

Google “Webspam” Update

At places as diverse as Traffic Planet, Webmaster World and Search Engine Land there’s a hell of a lot of chatter about Google’s most recent update – almost all of it despairing cries of woe from webmasters who’ve seen their sites trashed overnight in the SERPs.

As ever, there’s a lot of rune-reading going on, and the guys who are doing well out of the update will be quietly chortling to themselves rather than transmitting their woes on forums.

None of this is new – you can search back in time to find similar threads that followed pretty much every significant Google update ever.

That being said, I still follow some rankings in some verticals in which I used to work (no current commercial interest to me) and I can see what they’re saying – with some major slaps being handed out to what are/were legitimate sites. On the other hand, sites that were de-indexed/banned a fortnight ago are suddenly sitting pretty at the number 2 slot for their money term.

I’ve followed ‘viagra’ results for a long time as a marker for what Google is rewarding (this being a notoriously rich hunting ground for short-term spammers). It has to be said that the UK SERP for ‘viagra’, for example, suddenly looks terrible – with any number of horrible, dated, non-authority sites making up much of the top 10. That suggests a big follow-on from the link/blog network clean-up of a couple of weeks ago and the Webmaster Tools warnings that came shortly afterwards.

Google misstep? Perhaps. They closely monitor user interaction with the SERPs, and if results have got ‘worse’ for users, the changes can pretty quickly be rolled back. Anyone who’s followed SEO for long enough will know that this happens.

My feeling (given the nature of the SERPs I’ve been looking at and Google’s own statement that this will affect 3% of all searches) is that this is a vertical-specific slap – and probably targeted at affiliates in those markets.

More thoughts tomorrow when the dust has started to settle.

Sergey Brin on the Internet: He Has a Point, You Know

Google founder Sergey Brin has some interesting thoughts to share in his interview with the Guardian today. His primary concern is the future of what he calls ‘the open internet’.

By this, he means the internet as it was originally conceived and built: a series of independent websites which provided information and were accessible to all. It is this model, as he notes, that allowed for Google to build a searchable index of content.

He sees the threat to that model coming from a variety of places. On the one hand you have companies such as Apple and Facebook, whose “walled gardens” host information either in applications or behind logins through which web crawlers such as Google’s can’t go and therefore can’t index. On the other hand, governments are increasingly trying to use the internet as a means to track their citizens and the things that they are doing. He cites the usual suspects like Iran and China, but given recent regulation and proposed legislation he could just as well have added the US and UK to that list.

In these cases he has a point – particularly in the case of government surveillance.

As for the “open internet” he espouses? That is a little thornier. Anyone has the ability to deny Google (or any search engine) access to their site with the addition of a tiny text file and a couple of lines of instructions. This is part of the long-standing convention of the internet and its relationship with privacy: you may wish to publish things and share them only with a small number of people, for a huge number of reasons.
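For the uninitiated, that tiny text file is robots.txt, and the couple of lines really are this trivial. Placed at the root of a site, this asks every compliant crawler to stay out entirely:

    User-agent: *
    Disallow: /

Or, to shut out Google alone while leaving other crawlers free to index:

    User-agent: Googlebot
    Disallow: /

Compliance is voluntary – it’s a convention, not an access control – but the major search engines, Google included, respect it.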

Secondly, some people would prefer you to pay for their content. And why not? Few people would write and produce a novel and give it away for nothing (as ever, there are outliers) – most would expect to get paid. Rupert Murdoch may or may not be foolish for placing The Times behind a paywall, but it is his right as a publisher to demand payment for what his organisation has produced.

Finally, there is an element of privacy. A platform such as Facebook does offer you control over your privacy to some degree. Facebook employees and their software can mine the shit out of everything I’ve ever done, for sure, but as far as the lay public is concerned, I’ve got my privacy settings maxed out so that only people I choose can see what I’m doing, who I’m married to, where I live and so on. Again, for many reasons I may not want that information out there on Google, where it can’t be controlled.

This is before we even get to the huge parts of the internet which operate below the horizon and off the official grid altogether, such as Silk Road (you’ll need Tor).

So the “open internet” has never been all-encompassing. It may have been part of the founding principles of the internet, but in truth – like all ideals – it was unenforceable and its erosion inevitable.

Of course, Brin has a dog in this fight, which creeps out in some of his comments: “There’s a lot to be lost. For example, all the information in apps – that data is not crawlable by web crawlers. You can’t search it.”

By this, he means that Google can’t search it and therefore can’t monetise it.

Many apps are in fact highly specialised search engines in themselves. The Booking.com app is, as I have opined before, an existential threat to Google’s search model in the hotel space. Having used it several times, the idea of going back to Google to search for “hotels in leeds” suddenly seems alien to me. The same is true for movie reviews: I wouldn’t dream of searching Google for movie reviews now I have the IMDb app. The same is true for tide times, booking a train ticket or any number of mundane tasks for which Google was once the default port of call.

For all the strategic weakness his comments reveal about Google, Brin also has a point. The internet was supposed to free information from jealous gatekeepers – the final step in the transmission of information from the elites to the masses that began with the printing press. In many ways it still is. The only question is: will you use Google to find that information?

The answer is not as clear now as it was 5 years ago.

New Google+ Pages Design

I’m no fan of Google+, but the new look they’ve just rolled out for pages is infinitely superior to the old one. In fact, it’s quite attractive – this is what Weird Island’s Google+ page now looks like.

The obvious thing that leaps out is the Facebook-like logo/strap treatment. Actually, I think it’s better than the Facebook layout in that regard, as Facebook’s banner images are just too damn big and unwieldy and your actual profile image too small and pokey – so hats off to the G+ designers for this.

It’s interesting to note how quickly the look has evolved. Following the ground-up reskinning of Google’s products last year, I feared that Google was making a misstep in aiming for the corporate vibe. This redesign of Google+ suggests that they’re actually thinking harder about how to beat Facebook than I gave them credit for.

Google Penetrating the Social Graph?

I’ll look into this more fully when I get the chance, but I thought I should share this observation as soon as I noticed it. In Google News right now, it appears that Google have inserted the most popular social media buttons into the results – check out this top story about Rick Santorum on the BBC.

It occurs to me that Google’s biggest problem (and the one they are trying to address with Google+) is that their classic metric is the link.

It also occurs to me that the link is old skool – dating back to a time when people’s main engagement with the internet was through reading, writing and linking to websites. Your grandad’s internet. Web 1.0.

Now in a world where a share on Facebook is more immediate and impactful, that whole link scene looks decidedly dated and, as any fule kno, spammed all to hell and back.

Google’s problem now is how to access sharing data, which is the modern analogue to Web 1.0’s linking paradigm. Both Facebook and Twitter are walled gardens so far as Google is concerned – Google backed out of mining Twitter’s firehose, and Facebook has thrown its lot in with Bing for search – so their options are limited. But what if Google just started putting sharing buttons into its own properties, like the news result screenshotted above, or even the SERPs themselves? Suddenly they’re getting data on social activity to fold into their algorithms. Farfetched? Well, they’re trialling it right now – +1 buttons are all over your SERPs, and it looks like Google are getting ready to countenance sticking in other buttons too (side issue: a tiny admission that Google+ is failing?).
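For reference, the +1 button Google is pushing at publishers is a two-line embed – this is the snippet as Google documents it at the time of writing, though the markup may well change:

    <script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
    <g:plusone size="medium"></g:plusone>

Every render and every click of that button is a data point flowing back to Google, which is rather the point.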

Add that to the engagement data they get from Analytics, plus their existing link analysis algorithms and they’ve got a pretty comprehensive thang going on.

But herein lies the danger, both for Google and for the web in general. Google’s biggest problem is still the purpose-built SEO link. You can still buy your way to the top of the rankings by judiciously buying links and setting up networks, as any SEO knows. And so the link landscape is polluted all to hell and back. In a way, it tends not to matter so much, because what does the man in the street care about who hyperlinks to whom? He gets his SERPs and they seem reasonable – who cares if massivebrandedretailer.com is buying links from an SEO agency? Boo hoo.

But now picture a world in which social activity becomes a critical piece of ranking data. What price a ‘like’ on Facebook or a retweet? Google monetised links and hence ruined the notion of links. The same logic follows for social activity. How hard will it be to game social media? A trip to Fiverr.com or Mechanical Turk or any number of forums and you can probably buy ‘likes’ for next to nowt.

From there, it’s just a hop, skip and a jump to fake profiles and polluted timelines. Let’s hope it doesn’t come to that.


Facebook’s Acquisition of Instagram Confirms Dotcom Bubble Never Ended

Facebook’s $1 billion acquisition of Instagram has excited the internets. Again. I opined on this last year and I cleave to it now: the dotcom bubble still exists. As that august body of record in the UK, The Metro, reminded me this morning, this valuation makes Instagram worth more than the New York Times (I won’t link to the Metro on principle, so you’ll have to make do).

A further note: Instagram has no income stream from either advertising or subscription fees and employs a total of 12 or 13 people, depending on which source you choose to believe.

In a world where financial reality is yet to truly bite, despite 3 years of colossal warnings written in huge pink neon letters in the supplementary pages of the UK, US, Japanese and European budgets, the only industry apparently stupider than the finance industry is the tech industry.

This bubble is about 5 or 6 years overdue to pop. Catch you on the flipside when, hopefully, sanity will have prevailed.