I resent AMP

Firstly, the bog-standard explanation of AMP:

  • What’s it stand for?
    It stands for Accelerated Mobile Pages,
  • And what’s that?
    It is basically another web ‘standard’
  • …comprising?
    A limited version of HTML that results in really lean pages that load faster (basically for mobile) and – perhaps more importantly – don’t contain as much crap as normal web pages.
  • And I should care because?
    Google are pushing the technology as a way for the “open web” of normal websites to produce content that mimics the publisher platforms Facebook and Apple have built exclusively within their own ecosystems.
  • So what does that mean to a regular schmo?
    You’ve probably seen results like the accompanying screenshot at the top of this post if you’ve used Google on mobile in the last year or so.
  • I sense you are ambivalent at best about this?
    Yup. And now I’d like to tell you why.

It’s Yet Another Web “Standard” That is Anything But

I’ve said it before and I’ll say it again: there are no such things as web standards. Not by the properly-accepted definition of the term anyway.

For historic and practical reasons, browsers are extremely forgiving about what they will render. Done “properly”, HTML would have been strict in the way XML is (the road XHTML later tried to go down) and would simply have broken when written incorrectly. Missing a closing tag? “This page cannot be displayed.” Of course, this doesn’t happen, because in the nascent days of the web every other page was hand-coded or written in something like the Geocities editor, so if you wanted people to use your browser it had to interpret the resulting code-soup as best it could, otherwise no one using it would be able to read much of anything on the internet at all.
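
To see the difference concretely, here’s a tiny sketch in Python with a made-up snippet of bad markup: the strict XML parser refuses the missing closing tag outright, while the forgiving HTML parser shrugs and hands you the text anyway – which is exactly the behaviour browsers baked in.

    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    # A hypothetical page with a <p> that is never closed.
    broken = "<html><body><p>Hello, world</body></html>"

    # Strict parsing: XML simply refuses to deal with the mismatched tags.
    try:
        ET.fromstring(broken)
    except ET.ParseError as err:
        print("XML parser gave up:", err)

    # Forgiving parsing: the HTML parser soldiers on and still yields the text.
    class TextCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    collector = TextCollector()
    collector.feed(broken)
    print("HTML parser recovered:", "".join(collector.chunks).strip())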

Secondly, most alleged web standards are weak. You only have to look at the 5 year box model wars (which are probably still being fought somewhere) to realise that the “standards” are open to different interpretations.

All this holds true for other alleged standards such as Schema, which I have also pontificated boringly about before.

So partly, my response is simply a yawn of boredom. I’ve been building web pages for getting on for two decades now and every year someone wants you to adopt some new standard for some tedious reason that ultimately is about making them money ahead of you.

In this case the justification is to create fast, lean mobile pages. But you know what: we already are. And if you’re not? Google penalises you in the mobile SERPs anyway.

Practical Maintenance

We’re finally in a place where any halfway decent designer can make a nice responsive page design that renders well on mobile, tablet and desktop, thanks to the various CSS techniques available and the general level of agreement that now exists between browser vendors.

“Write once: display anywhere” is a nice little mantra. But with the advent of AMP, developers are being asked to create two versions of the same page, as set out in the guidelines. Once more, Jo Muggins is handed another opportunity to unwittingly send her site to SEO oblivion.


Probably by the time you’re reading this (disclaimer: I haven’t been exactly quick off the mark to talk about this) there’ll be two billion duplicate pages floating around the internet, and someone, somewhere, will forget to update the AMP version and break whatever traffic they’re getting from it.
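
For what it’s worth, the pairing the guidelines ask for is simple enough: the normal page points at its AMP twin with rel="amphtml", and the AMP page points back with rel="canonical". Here’s a rough sketch – hypothetical URLs and a naive regex rather than a proper parser – of the sort of sanity check that would catch the forgotten-AMP-version problem before Google does:

    import re

    def link_href(html, rel):
        """Pull the href out of the first <link rel="..."> tag found, if any."""
        pattern = r'<link[^>]*rel=["\']%s["\'][^>]*href=["\']([^"\']+)["\']' % rel
        match = re.search(pattern, html)
        return match.group(1) if match else None

    def check_amp_pairing(page_url, page_html, amp_url, amp_html):
        """Confirm the two versions of a page still point at each other."""
        problems = []
        if link_href(page_html, "amphtml") != amp_url:
            problems.append("normal page does not declare rel=amphtml -> " + amp_url)
        if link_href(amp_html, "canonical") != page_url:
            problems.append("AMP page does not declare rel=canonical -> " + page_url)
        return problems

    # Hypothetical pages on a hypothetical site.
    normal_html = '<head><link rel="amphtml" href="https://example.com/story/amp"></head>'
    amp_html = '<head><link rel="canonical" href="https://example.com/story"></head>'

    print(check_amp_pairing("https://example.com/story", normal_html,
                            "https://example.com/story/amp", amp_html))  # [] means the pair is intact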

A Further Drift Away from the Point of the Internet

This is more of a moral-philosophical point about the evolving nature of the internet. Facebook are increasingly acting as a publishing platform rather than a social media site. It’s obvious why: you allow any Tom, Dick or Harry to go linking around to any old website on the internet, and you’re leaking visitors. You want them to stay on Facebook where you can monetise them. So naturally, FB have introduced a bunch of stuff that is effectively native publishing. Write your sordid article for your bottom-feeding website, but then republish it on Facebook according to their standards – and for no other purpose at heart than to make Facebook money (of which they generously allow you to keep a cut). This is the “walled garden” of the internet – and the backdrop to the almighty clash of the internet titans as to who controls your eyeballs and so, ultimately, advertising spend.

AMP is similarly pitched by Google – but mainly to keep people on Google and thus away from Facebook. If a story appears only on Facebook, then Google might not even be able to access the damn thing, which spells trouble for them.

Hence the urgent desire to get people to publish a version for Google as well. Once again, your run-of-the-mill web manager is having their job role expanded to fulfil the whims of the colossal tech behemoths that run the show. Google push AMP pages to the top of the SERPs, so as per usual the people with the resources necessary to implement it get the benefit and the bit-players sink even further down the food chain. Google will certainly seek to dangle AMP as a mobile ranking factor so SEO guys can hawk it to their clients. But good luck to you trying to get your newly-minted AMP content anywhere past the big publisher sites.

And will it even last? Well, the track record on these things isn’t great. Take the recently announced death of Google Authorship, which was created for similar reasons and generated a similar amount of ultimately pointless work: a thousand and one articles are still out there telling webmasters how to leverage it to their advantage, even though Google killed it last week. Even the mooted problem that AMP is there to fix – fast, responsive sites for mobile – will probably be swept away by the arrival of 5G within a couple of years.

Despite all of these forebodings and misgivings, I will, of course, be joining the serried ranks to implement AMP, because that is our lives now: doing as Google commands.


Aaaarrrrggghhhhlgorithms

It’s already a well-worn trope that algorithms are good for some things (processing huge amounts of data) and bad at others (anything to do with human interaction) and yet Instagram has now joined Facebook, Twitter and Uncle Tom Cobleigh in rolling out an algorithm that purports to display the ‘most important’ things in your feed.

In short: stop it.


In long: It began – as many things do – with Google. Google were the first people to do a good job of automating the process of crawling and ranking websites in response to queries.

Prior to that, things like Yahoo, DMOZ, Best of the Web etc used human eyeballs to judge the quality of sites and pop them into categories. And guess what? Humans are both imperfect and corruptible and trying to put entire websites into one category is often impossible (hence my long-running belief that Schema is a backwards step). I can still just about recall the days when getting a site onto DMOZ was phase 1 of an SEO campaign, and meant trying to find someone who either accepted anything that was put in front of them, or someone who would accept anything that was put in front of them alongside a brown envelope with some cash in it.

So Google’s programmers wrote an algorithm. It followed every link to see where it led, and added that place to its index. Then it calculated the importance of pages based on the number of links, and the rest you know (i.e. their swift rise to total dominance made the internet accessible to all, and also hopelessly corrupted its very nature, turning everything into a commercial shitstorm and an entire economy based on the whims of this algorithm).
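
In spirit – and grossly simplified – that’s all it was: follow links to build an index, then score pages by who links to them. A toy sketch with a made-up four-page “web” (real PageRank iterates and damps, but the link-counting instinct is the same):

    # A toy "web": each page lists the pages it links to (entirely made up).
    web = {
        "homepage": ["news", "shop"],
        "news":     ["homepage", "story"],
        "shop":     ["homepage"],
        "story":    ["homepage", "shop"],
    }

    # "Crawling": follow every link from a seed page and add what we find to an index.
    index, queue = set(), ["homepage"]
    while queue:
        page = queue.pop()
        if page in index:
            continue
        index.add(page)
        queue.extend(web.get(page, []))

    # Crude importance: count inbound links to each indexed page.
    inbound = {page: 0 for page in index}
    for page in index:
        for target in web.get(page, []):
            inbound[target] += 1

    print(sorted(inbound.items(), key=lambda kv: kv[1], reverse=True))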

So. Algorithms have a place. It would be an act of absolute folly to try to replicate what Google does with humans. Google still pay humans to do a bunch of testing, but RankBrain is the first tolling of the bell for those guys.

Google search engineers, who spend their days crafting the algorithms that underpin the search software, were asked to eyeball some pages and guess which they thought Google’s search engine technology would rank on top. While the humans guessed correctly 70 percent of the time, RankBrain had an 80 percent success rate.

As most of the commercial web hands over usage data to Google through Analytics, and drives traffic to its pages via AdWords or other Google properties, mass data tools can be used to supplant human imperfections. If a site has a high bounce rate for a particular query, Google might fairly surmise that the site is not actually suited to that query and start to drop it down the rankings. Finding a replacement for ‘linkjuice’ has probably been Google’s top priority for years now, and each turn of the ratchet brings the end game closer.
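
Nobody outside Mountain View knows how – or even whether – a signal like bounce rate is actually weighted, but the shape of the idea is simple enough. A purely illustrative sketch with invented numbers and a made-up penalty weight:

    # Purely illustrative: the scores, bounce rates and weight are all invented.
    results = [
        {"url": "site-a.example", "link_score": 0.92, "bounce_rate": 0.85},
        {"url": "site-b.example", "link_score": 0.88, "bounce_rate": 0.30},
        {"url": "site-c.example", "link_score": 0.80, "bounce_rate": 0.40},
    ]

    BOUNCE_PENALTY = 0.2  # made-up weight

    for result in results:
        result["adjusted"] = result["link_score"] - BOUNCE_PENALTY * result["bounce_rate"]

    # The page that ranked on links alone slips once behaviour data is folded in.
    for result in sorted(results, key=lambda r: r["adjusted"], reverse=True):
        print(result["url"], round(result["adjusted"], 3))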

Naturally, Google having set this tone means that every company wants to have an algorithm, BECAUSE ALGORITHMS. But it’s not always clear who these algorithms are meant to serve, or to what end outside the very specific needs of Google.

Twitter and Instagram are two big brands that have recently rolled out algorithms to their products that actually serve to defeat their own very nature.

An example: I follow a bunch of people on Twitter – from friends with a handful of followers to big accounts who tweet (seemingly 24/7 – get a life, guys!) about the search industry. Trad Twitter just showed everything in chronological order, which is perfect for the medium. How is any algorithm going to determine relevancy? No one outside Twitter’s engineers knows. While I’m not a programmer any more, I do know that all they’ll be doing is chugging data. The algorithm will pick tweets for me from the people I follow based on one or two things (crudely sketched after this list):

  1. Tweets which have had lots of engagement (I wouldn’t know about that: my stats are lousy)
  2. Tweets from people I most commonly interact with (which is about three people).
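
Crudely, something like this – entirely made-up tweets, weights and interaction counts, but numbers of this sort are all “relevancy” can ever mean to a machine:

    # Entirely made-up tweets, weights and interaction counts.
    tweets = [
        {"author": "@BigSearchAccount", "likes": 4200, "retweets": 900},
        {"author": "@TheWarNerd",       "likes": 40,   "retweets": 6},
        {"author": "@OldFriend",        "likes": 2,    "retweets": 0},
    ]

    # How often I have interacted with each author (replies, likes, clicks).
    my_interactions = {"@OldFriend": 55, "@BigSearchAccount": 3, "@TheWarNerd": 0}

    def feed_score(tweet):
        engagement = tweet["likes"] + 2 * tweet["retweets"]        # thing 1: global engagement
        affinity = my_interactions.get(tweet["author"], 0) * 10    # thing 2: my interaction history
        return engagement + affinity

    for tweet in sorted(tweets, key=feed_score, reverse=True):
        print(tweet["author"], feed_score(tweet))

Run that and note which account sinks to the bottom of the pile.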

And why are they doing this?

Ostensibly to ‘improve’ Twitter so it behaves more like Facebook and thus can attract the idiots who populate that horrid corner of the internet. In reality, we all know that they’re ramping it up so they have a further means to shove advertising in. At first it will be indirect (“this tweet from Celebrity X was amazingly popular”) but then as Twitter’s finances continue to get worse they’ll just use it to sell another slot to advertisers until eventually they give up and sell to Yahoo! for them to hammer the final nails into its coffin.

And so it will come to pass with Instagram, Snapchat and whatever-the-fuck the “next big social media site” is. (hint: not Google+)

And why is this a bad idea?

There are some people I follow who I never engage with and who don’t have big follower counts, but whose content sparks trains of thought that otherwise might not cross my mind. Someone like @TheWarNerd tweets infrequently and has a relatively low follower count (under 10,000, in the scheme of things) – but I wouldn’t want to miss a single tweet.

You know how this plays out without me having to type it. Shorn of metrics to analyse the way I follow him, Twitter will probably conclude that he is ‘less important’ and shove his tweets down the line. Instagram will do exactly the same thing and it will suck for exactly the same reasons.

Why does this keep happening?

I have a bit of a theory about why big tech companies start to “improve” their products until such a point that they become an unusable mess and die. It’s because of the typical life cycle of a platform and the peril of having a team of immensely talented idiots around the place.

  1. Great idea
  2. Great PR begets exposure
  3. Exposure begets big investment from someone
  4. Big investment means recruiting a whole bunch of Really Smart Guys to help get things scaled up
  5. A whole bunch of Really Smart Guys begets boredom when the scaling has been done and the name of the game is administration
  6. Boredom begets dwindling PR exposure
  7. Dwindling PR exposure begets the need to announce things
  8. The need to announce things begets asking the Really Smart Guys to think of ways to ‘improve’ things.
  9. Really Smart Guys’ improvements begets disenchantment because Really Smart Guys don’t know shit about how humans work
  10. Disenchantment begets falling user numbers
  11. Falling user numbers mean Microsoft or Yahoo! buys them – partly to get the Really Smart Guys and partly because they can’t think of their own ideas
  12. Someone realises it’s all been a huge waste of money
  13. It shuts

I don’t even know why I’m writing this, or where it’s going – other than it’s been almost a year since I wrote anything on my blog and one simply must keep up with these things so no one sees behind your carefully constructed facade to find the jaded 40-something web manager within.

Anyway, the point still stands: take yourself and your stupid algorithms and get in the bin.

 

Google’s Eccentric Choice of Review Partners

Google have proselytised about the value of reviews for many a long year (in internet terms, at least). Their own reviews system – like many of the company’s second-tier offerings – has never really gained traction. Doubtless this is partly because it is yoked to Google Plus / Google Profiles, but also because lots of other players in the space simply have better systems. As an indication of the state it’s in, one only has to check out the Google ‘reviews’ for Guantanamo Bay or Broadmoor Hospital.

[Screenshot: Google reviews for Broadmoor Hospital]

While these might be hilarious, they don’t really suggest that Google is promoting or policing its product properly.

But what should be of more concern to Google’s product managers in this space is the drawing-in of review scores from other providers (hint: go and look for a new job!).

Typically, the big one box/card for a company will highlight Google Reviews, but also draw from other sources around the web. This mirrors Google’s recent, Hummingbird-led forays into injecting information directly into the results rather than encouraging clicks to source data. This is very much a topic for another day.

In the meantime, however, just look at how seriously Google are taking things. A search for “Chiswick Honda” gives these results:

[Screenshot: the Google card for “Chiswick Honda”, showing the review sources]

Reviews from… webcompanyinfo.com? websitepic.com? Both of these sites are just the kind of thin-content SEO shills that offer some crappy “SEO data” for webmasters. Clicking the links from that one box doesn’t even take you to reviews! Just this kind of crap.

[Screenshot: the thin “SEO data” page the review link actually leads to]

It isn’t hard for Google to determine who offers real reviews from real people – be it Reevoo, Bazaarvoice, Feefo et al. – so why are they giving airtime to operators like this?

The answer lies, as ever, in monetisation and a power-play Google is engaged in against review sites. As the day draws short, I will leave that for another post, but if you take one thing away from this, let it be this: Google’s concern for ‘quality’ is often only skin-deep where that ‘quality’ poses even a minor threat to its own model.

More anon.

Tabs, “Hidden Content” and Google

Tabs are a handy, universally understood visual metaphor that designers have used for many years to make manageable, usable pages. There has always been a small degree of confusion about whether Google treats tabbed content as ‘hidden’ content and whether they would penalise sites using tabs.

Following a post by John Mueller, it seems that Google have come down against tabs. They believe that tabs are a way for people to show one thing to users and another to Google.

To an extent, that’s true: it would be easy to make a short, punchy “selling” article that is seen by visitors, while hiding a whole bunch of keyword-heavy text behind a tab. Whether that’s good or bad practice is something of a religious question.

Personally, I’ve always felt – and still do – that tabs are a good way to visually organise things on a page. Here’s how I use them on my hobby site:

[Screenshot: a tabbed page layout on my hobby site]

Now I don’t see anything inherently wrong with this. I can make a super-useful page, packed with content but organised in such a way as to be navigable without being four miles long.

But I think Google and I disagree on this. Recent uses of the site: command have revealed that the main ‘hub’ pages for any topic on the site have been downgraded in Google. Searching site:weirdisland.co.uk “yorkshire ripper” did not place the relevant page at the top, despite a reasonably solid internal link structure. Instead, the target page sat below pretty much every other page on the topic.

This made me sniff around the page to see what the problem could be. The main suspect? A ‘timeline’ tab. This tab included data from all the related articles – dates and locations, all Schemafied and presented in a nice fashion. I couldn’t see any real fault with that, but looking again in light of what Google have been saying, this tab actually had more information and a higher word count than the main article itself.

[Screenshot: the ‘timeline’ tab on the page]
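
For the curious, the check itself is trivial. Here’s roughly the sort of comparison I mean – assuming, hypothetically, that the tabbed content lives in elements with a “tab-pane” class, and leaning on BeautifulSoup to do the counting:

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    # Hypothetical markup: a visible article plus a "timeline" tab pane.
    html = """
    <article id="main">A few hundred words of visible article text ...</article>
    <div class="tab-pane" id="timeline">
      1975: first attack ... 1977: ... 1980: ... 1981: arrest and trial ...
    </div>
    """

    soup = BeautifulSoup(html, "html.parser")

    def word_count(element):
        return len(element.get_text(" ", strip=True).split())

    main_words = word_count(soup.find(id="main"))
    tab_words = sum(word_count(pane) for pane in soup.find_all(class_="tab-pane"))

    print("main article:", main_words, "words")
    print("tabbed content:", tab_words, "words")
    if tab_words > main_words:
        print("more words hidden behind tabs than shown in the visible article")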

In my eyes, I had done a pretty nice job of balancing visual presentation and information, but I suspect this is exactly the kind of thing that triggers Google to downgrade a page.

As such, I’ve separated these timelines into standalone pages like this.

I feel ambivalent about this. I’ve been bullied into changing my site to fulfil an algorithmic diktat from Google that implies my design was an attempt to trick their bots. Part of me thinks I should stand my ground and not change a thing.

However, since part of the remit I’ve given myself with that site is to use it as a testbed for exactly this sort of thing, I’ve caved in to see what happens from a Google perspective.

I will, of course, let you know what happens.

Keyword Data back in Analytics

One of trad SEO’s biggest gripes of the last couple of years has been the obscuring of keyword data in Analytics. Of course, much of that data has actually been available in Webmasters Tools for quite a while now:

[Screenshot: search query data in Google Webmaster Tools]

Until today (so far as I’ve seen – it’s probably been rolled out all over the place in stages) the nearest equivalent data in Analytics was found under the Acquisition > Keywords > Organic screen.

But now? That’s gone, and the data from GWT is showing up in Analytics:

[Screenshot: the GWT query data appearing in Analytics]

This is a nice move, as it puts a little context back into the job rather than leaving you with educated guesswork based on landing page URLs. It still means a bit of legwork if you want to do detailed analysis, but for most SEO purposes it is a long-overdue change. The only critical issue is that, assuming it follows the pattern used in Webmasters Tools, the data will only be available for the last 90 days and won’t include the most recent two days – which will obviously place some limitations on analysis.

Schema: Addenda

Schema was (you may recall) backed by Microsoft, Yahoo (in the days when it still had its own search tech) and Google. As I’ve described, I’m not seeing anything in Google’s treatment of the site beyond the adoption of aggregate ratings in the SERPs. However, in Bing (and thus Yahoo!) the site has seen a positive bounce in traffic and (one assumes) ranking.
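
For reference, the aggregate-rating markup in question boils down to a few lines of JSON-LD in the page head. A generic schema.org example – not the actual markup from my site – sketched in Python:

    import json

    # Generic schema.org AggregateRating example, not my site's actual markup.
    markup = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "An example article",
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.2",
            "ratingCount": "87",
        },
    }

    # This JSON would sit in a <script type="application/ld+json"> block on the page.
    print(json.dumps(markup, indent=2))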

Sadly, 114% of all searches in the UK are done on Google, so don’t expect any sudden transformation.

Random thoughts about ‘brand’ in the online space

In one of his regular broadcasts, Matt Cutts mused on the problems of ‘real world’ companies competing in the online space in response to a question posed by one of his viewers. It’s a problem as old as the commercial web – and something of a philosophical conundrum for Google and marketers alike: if a big, household name exists in a market, should it ‘naturally’ get traffic from Google, even if its online presence is poorly built, optimised and/or marketed?

Most obviously, this is reflected in the nature of what search marketers like to call ‘brand signal’. If a company receives hundreds of thousands of searches for its brand name then surely that site should do well for the products it sells, almost regardless of how well the site is built from a technical perspective?

I’ve seen this in effect myself in a previous role. The company we were doing work for had around 2 million searches for their brand name every month. The site itself was appallingly built – duplicate content issues, endlessly spiralling redirects, broken internal links, and page after page of empty pagination and search filters. Despite that, we could rank the site for hugely competitive two- and three-word head phrases with a relatively small amount of spadework. The conclusion? Branding works in Google, given sufficient volume.

Tangled up in this is the whole, quasi-religious debate about exact match domains: if someone searches for ‘cheap car insurance’, are they looking for cheap car insurance, a company called Cheap Car Insurance, or cheapcarinsurance.com? The vagaries inherent in this line of thought lie behind a lot of zig-zagging debate and attendant strategising.

At the opposite end of the spectrum, you also have companies (such as my own) which don’t actually have any ‘real world’ presence but are online-only brands. One such brand that I keep an eye on is motors.co.uk – a used car listings site, owned in the past by the Daily Mail Group and currently under the auspices of Manheim – mainly of note to me as a bellwether for the industry, as it has always been a huge online brand.

I am not privy to what promotional work Motors have been engaged in, but I do know that they spent millions in the past on radio, local advertising, offline promotion and print work – as well as untold sums on content production and website development. For the past couple of weeks, however, they have ceased to rank for their own brand name.

[Screenshot: Google results for a ‘motors.co.uk’ search]

In SEO terms, this is a colossal slap. I don’t know what it is that Motors have ‘done’ in Google’s eyes to deserve this – most likely some ancient linkbuilding campaign has come back to bite them (a recurring problem I have alluded to here before in relation to our own site). Doubtless there is a huge disavowal exercise going on behind the scenes now to recover what they’ve lost. Hopefully for them they will extricate themselves from that particular hole.

This illustrates the flipside of the ‘big company with poor web presence’ paradigm, namely: ‘a company with nothing but a web presence’.

Assuming that this evident penalty has struck motors.co.uk across the board in SEO terms, I can only assume they’re suffering a big drop in traffic and thus revenue. Luckily, they are backed by Manheim, so they will have the resources to weather the storm – but many companies aren’t so lucky. Such are the complications of Google’s algorithm (and the competing internal imperatives of Google as a business in and of itself) these days that I’m no longer sure anyone can really claim to understand the market – regardless of the noise, fog and general Sturm und Drang of the SEO community. In the same market – and I will not name names – I know for a fact that some of the big players are spending £5-10,000 a month on aggressive link buying and haven’t (as yet) seen any penalisation.

The truth is that being wholly reliant on ‘natural’ search traffic is a dangerous place to be. If your only focus is SEO, I would strongly advise siphoning some of your revenue into other channels as a bulwark against potential penalisation – either by building up a ‘fighting fund’ for a rainy day, or by spending on social/offline channels to build up “brand traffic” as best you can.

None of these options is cheap.

Any way you look at it, the days when you could truly view the web as ‘a level playing field’ seem laughably distant today.