Content Strategy is the New SEO

Happy New Year! I have been taking a nice long holiday break and am now ready to get back into blogging. I haven’t been idle over the break. I’ve mostly been writing for other projects, such as my InformIT page and a video lecture series I am getting close to releasing.

In the course of the research for these two projects, I made a startling discovery: The Google Panda algorithm is a radical attempt to equate content quality with SEO, as much as an automated system can do so. I knew that Google said this about content quality when it released Panda, and I even wrote about it on this blog. But I didn’t understand the inner workings of how Google makes this happen. Plus, I didn’t really believe that you could develop an algorithm that truly favored content quality until I started researching the way Panda is built.

Savvy readers will notice I used the present tense in the previous paragraph–that was intentional. Google has developed Panda to be continually improved. Panda is already on version 2.6 since its initial release in March of 2011, and Google rolled out 30 new improvements in December alone. The technique is called machine learning–a method borrowed from artificial intelligence for training a system to improve continually. It’s similar to the method that IBM used to hone Watson’s Jeopardy!-playing skills.

Automation is necessary because of the sheer volume of content on the web. But the real key lies in the inputs Google gave to the algorithm–and the way it analyzed those inputs–before honing it through machine learning. The inputs were derived from perhaps hundreds of quality testers, who ranked thousands of pages for content quality. Google took this data and crunched the numbers to derive some rules. Then the machine-learning program honed the rules, and continues to hone them over time, getting ever more accurate. The end result is an algorithm that places a premium on content quality over the simplistic checklists and other tools common to traditional SEO.
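To make the approach concrete, here is a minimal sketch of the general technique (supervised machine learning on human ratings) in Python with scikit-learn. It is purely illustrative: the features, data, and model choice are my own assumptions, not Google’s actual implementation.

```python
# Illustrative sketch only: train a simple classifier on hypothetical
# human quality ratings, in the spirit of the approach described above.
# Feature names and numbers are invented for the example.
from sklearn.ensemble import GradientBoostingClassifier

# Each row: features a rater-scored page might be reduced to
# [word_count, pct_duplicate_content, ads_per_page, reading_ease]
pages = [
    [1200, 0.02, 1, 65.0],   # long, original, few ads, readable
    [300,  0.80, 9, 40.0],   # thin, mostly duplicated, ad-heavy
    [900,  0.05, 2, 58.0],
    [150,  0.95, 12, 35.0],
]
labels = [1, 0, 1, 0]  # 1 = raters judged "high quality", 0 = "low quality"

model = GradientBoostingClassifier()
model.fit(pages, labels)

# Once trained, the model can score pages the raters never saw.
new_page = [[450, 0.60, 7, 42.0]]
print(model.predict(new_page))  # likely [0], i.e. judged low quality
```

The point is that nobody writes the quality rules by hand; the model infers them from the raters’ judgments and keeps improving as more rated examples arrive.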

SEOMoz’s Rand Fishkin does the best job of explaining what this means for SEOs in the future.

Note: Content quality is as much a function of the whole collection of pages and other assets as it is of any particular page. It won’t do to focus all your attention on the top-level portal and let the lower-level pages go. The quality of your whole corporate site stands or falls as a collection. Though some pages can perform better than others, poor-quality pages, duplicates, and other hallmarks of poor content strategy pull the whole collection down. The antidote to poor content quality is good content strategy.

6 Ways Google Killed SEO And What to Do About It

If I seem absent from this site, it is only because most of my work is published now at Biznology. In that blog, I am following a long thread about how to optimize digital experiences for Google after SEO. SEO as we know it is dead. But attracting an audience through Google is not optional. So how do we do it in the post-SEO age? That is the point of my monthly posts at Biznology.

Occasionally, I find myself with fresh insights that don’t quite fit into the flow of that blog. So I will write them here. This is one such post. It came about when I was in residence for a week at the IBM Design Lab in New York. In the course of my discussions with key collaborators there, I came to realize that the Biznology thread is a bit too narrow. There I have mostly focused on how Panda and Penguin have killed SEO. But those algorithm adjustments are only two of six seismic changes out of Mountain View that alter the algorithm in ways one cannot reverse engineer. I’d like to highlight all six in this post.

First, a bit of terminology. By “SEO” I mean the attempt to reverse engineer Google’s algorithm and build pages that will tend to rank well on that basis. Traditionally, this has been about learning the rules Google used to rank one page higher than another, all things considered, and trying to follow those rules. SEOs chased the algorithm by keeping up with how the rules changed–new rules were added, existing rules were given different weight, and so on.

Well, in the last several years, Google has added other factors and filters that are not rules-based at all. It was never a good idea to chase the algorithm when it was rules-based. Now that it is ever less rules-based, chasing the algorithm is a fool’s errand. But as I say, ranking well in Google for the words your target audience cares about is not optional. So how do you do it post-SEO?

1. Performance metrics

It was three years ago when I first heard from Avinash Kaushik how Google rewards pages that have high click-through rates (CTR) and low bounce rates on its search engine results pages (SERPs). (Fortunately, we were able to include this in our book.) What does this mean? Well, if your page performs well by conventionally accepted metrics, it will rank better over time. This makes sense because high CTR and low bounce rates are indicators of relevant content. Google is effective to the extent that it serves relevant results to its users. Its results will tend to get more relevant over time if they promote content that performs well in these key metrics.
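For concreteness, here is how those two metrics are typically computed. This is just a minimal sketch with invented numbers; Google does not publish exactly how it folds these signals into ranking.

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR: share of times a listing was clicked when it appeared on a SERP."""
    return clicks / impressions if impressions else 0.0

def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Bounce rate: share of visits that left after viewing only one page."""
    return single_page_sessions / total_sessions if total_sessions else 0.0

# Example with made-up numbers: 420 clicks on 12,000 impressions,
# and 180 single-page visits out of 400 total visits.
print(f"CTR: {click_through_rate(420, 12000):.1%}")       # -> CTR: 3.5%
print(f"Bounce rate: {bounce_rate(180, 400):.1%}")        # -> Bounce rate: 45.0%
```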

Note that performance is not rules-based. With this filter, Google is saying it really doesn’t care how you perform well. It is enough merely to do so. And there is no way to fake performance. The only way to maintain good ranking with this filter is to provide relevant experiences to your users, and relevance is too highly varied to capture in a discrete set of rules.

2. Quality signals

I have written extensively about how Panda affects results, so I won’t belabor the point here. The basic principle is that Panda uses crowdsourcing and machine learning to make ever more accurate assessments of the quality of the content in its results. It then rewards high-quality content and punishes low-quality content.

Again, this can’t be done with rules. There is no one standard of high-quality content that satisfies the varied contexts of web publishing. But there are certain signals or hallmarks of quality content that Google can learn through its quality testers and look for. Once it sees those hallmarks, it rewards the pages that manifest them.

3. Semantic smarts

After IBM Watson won at Jeopardy!, I wrote in this blog how I thought future search engines would use semantic smarts to rank pages, rather than the dumb syntactic pattern matching they used at the time. SEO used to be about getting the exact phrases in the right places on pages, and then varying the language on the page with different phraseology so as not to look like you were trying to game the system.

Semantic search renders all that advice garbage. It’s not about exactly matching the syntax (the actual strings of letters and spaces) of the keywords your target audience cares about. It’s about matching the semantics of what they type in search. Synonyms can have entirely different syntax. There’s no use trying to reverse engineer a semantic algorithm. You just have to write naturally in a way that is relevant to your target audience and forget about building pages solely with exact-match keywords. Of course, it’s still important to have the key phrases in title tags and meta descriptions. But beyond that, it’s not so simple.
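Here is a toy illustration of the difference, in Python. The tiny synonym table is invented for the example, and real semantic search uses far richer language models, but the point stands: a relevant page need not contain the literal query string.

```python
# Toy contrast between exact-match ("syntactic") thinking and
# synonym-aware ("semantic") matching. Purely illustrative.
SYNONYMS = {  # invented, tiny synonym table for the example
    "cheap": {"cheap", "inexpensive", "affordable", "budget"},
    "laptop": {"laptop", "notebook"},
}

def exact_match(query: str, page_text: str) -> bool:
    # Old-school test: does the literal keyword phrase appear on the page?
    return query.lower() in page_text.lower()

def semantic_match(query: str, page_text: str) -> bool:
    # Looser test: every query term, or one of its synonyms, appears on the page.
    words = page_text.lower().split()
    return all(
        any(alt in words for alt in SYNONYMS.get(term, {term}))
        for term in query.lower().split()
    )

page = "Our most affordable notebook ships this week"
print(exact_match("cheap laptop", page))     # False: literal phrase absent
print(semantic_match("cheap laptop", page))  # True: synonyms cover both terms
```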

4. Overoptimization signals

Google has been wise to SEOs for some time. Like the cat-and-mouse game between virus writers and antivirus vendors, it has always been a dance of keeping one step ahead of Google and building pages that Google does not penalize for overt uses of SEO. There was a point at which Google caught up and overtook the SEOs on this issue. It was some time in 2011, when Google recognized patterns in the way SEOs responded to algorithm changes. Using machine learning again, Google discovered a way to see patterns in SEO behavior and thwart it before it became widespread. The upshot is: If you find a way to game the system and it becomes even remotely widespread, Google will discover it and write countermeasures into its algorithm. These countermeasures are typically not rules-based anymore.

For example, a couple of years ago, I was on a call with a group, and a cocky guy who had recently read a blog post by a prominent SEO was touting the use of ALT attributes to pump up keyword density in pages. By the time I studied this in greater detail, it had become more of a negative signal for Google than a positive one. In other words, Google was punishing sites for using ALT attributes to pump up keyword density, much as it had done with hidden text a decade before. The difference since 2011 is that Google actually built a machine-learning program to detect algorithm hacks and punish sites that use them programmatically.
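To illustrate the kind of pattern that is easy to detect programmatically, here is a toy check for keyword-stuffed ALT attributes. The heuristic and the threshold interpretation are mine, not Google’s; it simply shows why this tactic leaves an obvious footprint.

```python
import re

def alt_keyword_density(html: str, keyword: str) -> float:
    """Share of ALT-attribute words that are the target keyword (toy heuristic)."""
    alt_texts = re.findall(r'alt="([^"]*)"', html, flags=re.IGNORECASE)
    words = " ".join(alt_texts).lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

# A page stuffing its image ALT text with the same keyword over and over.
stuffed = '<img alt="widgets"><img alt="widgets widgets"><img alt="best widgets">'
print(f"{alt_keyword_density(stuffed, 'widgets'):.0%}")  # -> 80%, an obvious red flag
```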

5. Link building signals

When Google first started, it was like every other search engine in most ways. It ranked pages by systems of rules based on the syntax of the text on pages. The reason Google dominated the search game was not because it did this part of search better than the others. It was because it also looked at links. Links are signals that tell Google the context of the page it is trying to rank. It rewards pages that are contextually relevant to the search query, all things considered, based on the quantity and authority of links into the page.

Clever SEOs discovered how to game this system by buying or swapping links. Eventually, it got so bad that links were practically meaningless for Google, and it was back to the dumb pattern matching based on syntax rules. That was, until Penguin, when Google found a way to detect and foil apparent link-building activities in the algorithm. This is a form of overoptimization using links. Penguin thwarts apparent link building as surely as Panda thwarts keyword stuffing in ALT attributes, using the same machine-learning techniques.

6. Copyright infringement

Google will continue to add ways of thwarting those who game the system as soon as their tactics become known to Google. One common way to game the system is to copy the content from a top-ranking page, build a better optimized page with the same content and publish it as original. Google recently announced that it has developed a way of detecting copyright infringement and severely punishing sites that appear to engage in these activities. Again, there is no single rule that can help a machine to know which of two pages is the original and which is the copy. Rather, there are certain hallmarks to infringement that Google looks for. Woe unto those sites that manifest these signals.

How to rank well post SEO

I am writing a new book about this topic, so, again, I won’t belabor the point. Simply put, there is no better approach to ranking well in Google than to build honest and transparent websites that attract, excite and compel your target audience to engage with them. This might go against the grain of your approach to marketing if it is typically based on hyperbole. But in the age of algorithms that use performance metrics, machine learning and semantic smarts, it is the only effective way to do digital marketing.

How Search and Social are Interdependent

When I say that search and social are interdependent, I don’t just mean that any effective digital strategy ensures that you do both well. Of course, they are both strong drivers of relevant traffic to your sites. But I also mean that they are interrelated. That is, you can’t do search effectively without an effective social strategy, and you can’t do social effectively without an effective search strategy. Since this is a controversial position, allow me to sum up now and explain after the jump.

In brief, social content is findable: It is built to demonstrate relevance and to provide context for the audience, where search engines are a proxy for the audience. When you jump into the middle of a conversation, it can seem confusing; you understand the full conversation by searching for content related to it. Building findable content begins with keyword research, which is a form of audience analysis. Building social campaigns also begins with keyword research, which cues you into the conversations your audience engages in. Building shareable content depends on understanding those conversations.

Findable content is shareable: It is parsed into modular chunks that can be easily shared from within pages. Pages are just carriers of shareable content that demonstrate relevance and provide context for the audience. Pages full of relevant, shareable content become link bait, which then ranks better over time in search engines because the audience is effectively voting for the content by sharing it and otherwise building links into it from external sources.

It all fits together. But I admit the summary view might be a bit dense for those who have not read our book Audience, Relevance and Search: Targeting Web Audiences with Relevant Content. If you need further explanation, please read the example after the jump.

You’re doing it wrong: How not to build a socially aware microsite

I recently engaged in a conference call that took at least a year off my life. I was asked to join the call because the site designer wouldn’t listen to the designated search consultant. So we were both on the call, attempting to play bad cop/worse cop with this obstinate and arrogant designer.

The designer presented a page that was simply a set of tiles, four wide and four deep. Some tiles were videos. Some tiles were screen captures of tweets. Some tiles were thumbnails of infographics. There was no original content on the page at all. Everything was curated from some other social source. There was nothing on the page that indicated the context of these items, either to each other or to the larger set of conversations. The page was entirely devoid of text of any kind to help build that context. And none of the items was shareable, either in the sense that you could click a share button near them or in the sense that anyone would want to share them (in the unlikely event they stayed on the page long enough to try to share them).

The first question I asked him was what the purpose of the page was supposed to be. He said the point of the experience was to foster social conversations around the products owned by the stakeholders sponsoring the work. I tried to explain to him that the page as designed would not accomplish the goal because:

  1. The page would not be findable in search: Without any discernible message, there is no way search engines would ever rank the page anywhere near the first page of results (beyond which, pages are functionally invisible). Ranking on the first page in Google gives you a shot at the credibility you need to gain the trust of the audience. For anonymous experiences, it is the only way to have a chance. From there, perhaps you could develop trust and loyalty over time, if you include shareable content and highlight the work of a few relevant experts.
  2. The page did not contain shareable assets: It’s a simple thing to add share buttons to the assets on the page, but nobody shares anything if they don’t know the context of the item. In social settings, such as Twitter, the context is partly determined by the Twitter handle of the person sharing the piece. This signal is only as strong as the credibility of the one sharing the item. Not only was the page itself anonymous, but it would not be given credibility simply because it was on the web. In short, shareability is not just about the message, it’s about the messenger. There was nothing on the page itself that gave the user a sense of the credibility of the messenger.

When I say that the designer was obstinate, I mean that he would not listen to all the evidence we provided that this design was DOA. We provided data point after data point of designs like his that had failed, and how they evolved to be more effective through agile iterations. Invariably, these evolutions involved adding some static text to the upper left portion of the white space, which explained what the page was about and why the audience should pay attention to it, in plain language.

The experiences were optimized when the other pieces of content for which the pages acted as carriers were ever more tightly relevant to the pain points or top tasks of the target audience for the defined context. We presented dozens of examples of pages that get tens of thousands of search referrals and hundreds of downloads, shares and conversions per month after similar evolutions. The designer would not be convinced.

When I say that the designer was arrogant, I will give you his own words: “You brought me in to create a next-generation experience…. Your experiences are tired and boring.” The gist was that his design was web 3.0, whereas our UX best practice is so web 2.0, and including static text on the page is so web 1.0. Meanwhile, the collective web-effectiveness wisdom on the call, among the two search SMEs and the product owner, was 45 years. He was fresh out of college with a degree in web design. One wonders what they teach aspiring web designers in college if this is what they learn.

As an aside, see my blog post about the sorry state of web development skills. One of the contributing factors seems to be that colleges are not teaching what works in actual cases, but what looks really cool and what might work. I have worked with many designers right out of college who had similar attitudes to this designer, but perhaps not to this degree. Those who succeeded quickly learned to be more pragmatic and less dogmatic. When they do, they learn to design their pages to be findable and the content on the page to be shareable. As long as they do that, they can do all the cool stuff they want to make the site look good.

To his credit, the designer eventually agreed to go back to the drawing board, after several rounds of bad cop/worse cop. I told the product owner after the call that I was pleased to get some blog fodder for my time and trouble, because “blogging is better than therapy or alcoholism.”

What is Relevance, Again?

Since before I started this blog with my co-author Frank Donatone, I’ve been engaging in a long and fruitful virtual debate with a group of people I lovingly refer to as the search haters. My latest blog post about this can be found on Biznology: “Five Critical Roles that Need SEO Skills.” Not that the search haters are organized or have their own user group. But there is a long line of folks who are willing to trash the practice of SEO on the basis of two claims:

  1. SEO has sometimes been practiced by unscrupulous agencies to try to gain unfair advantage for their clients, thus this is what most SEO amounts to
  2. Search results are sometimes wildly irrelevant to search queries, thus search is not all that helpful in providing relevant content to audiences

I write this in the hope that I might influence a few search haters into a more sympathetic understanding of SEO. As the above Biznology post indicated, I spend the majority of my time training folks on SEO. Much of that time is spent countering myths 1 and 2 above. If I can preempt some of this training by influencing a few people now, I just might be able to get down to business with new hires in digital marketing sooner.

A Smashing Debate

Since I wrote the above blog post, several of my colleagues have alerted me to a couple of long and detailed blog posts on Smashingmag.com. The first is called “The Inconvenient Truth about SEO.” In it, author and apparent search hater Paul Boag makes some good points about the way SEO is sometimes practiced. But he also makes some logical and factual errors. Most of these were well countered in a follow-on post called “What The Heck Is SEO? A Rebuttal.”

The most important is the counter to point 1 above. Authors Bill Slawski and Will Critchlow rightly say that this is a straw man. Most SEO is in fact practiced by people who only want the search traffic commensurate with the value of their content, using legitimate means of attaining it. SEO spam is like junk mail or email spam: Even though it is not representative of all SEO, we remember SEO spam (aka black hat SEO) because it is so annoying, so our tendency is to overgeneralize from black hat SEO to all SEO. The authors also did a good job curating the results of a poll of SEOs in describing what it is SEOs actually do.

I highly recommend that you read both posts, especially the accounts of what SEOs actually do in the rebuttal. As an SEO, I do all of those things and then some. The picture that emerges is that SEOs are really just digital strategists who will do whatever is needed to ensure that clients get ROI for their web development efforts. Since most people search for information “often or always,” being available in search results for the queries your target audience cares about is job 1. So, as I describe in Biznology and elsewhere, the role of an SEO is helping everyone else on the team understand how their work affects search results, i.e., training.

Still, the rebuttal is incomplete. I won’t take Boag’s post apart in detail. But I do want to point out a fallacy in the hopes that it will illuminate why myth 2 above is a commonly held belief. Here is what Boag says:

Your objective should be to make it easier for people who are interested in what you have to offer to find you, and see the great content that you offer. Relevant content isn’t “great content”. Someone searches for a pizza on Google, and they don’t want prose from Hemingway or Fitzgerald on the history and origin of pizza — they most likely want lunch. An SEO adds value to what you create by making sure that it is presented within the framework of the Web in a way which makes it more likely that it will reach the people that you want it seen by, when they are looking for it.

What is Relevance, Again?

First of all, I completely agree with everything in the above quote, except one statement: “Relevant content isn’t ‘great content’.” The way I read it, he is saying that content need not be great in order to be relevant. Considering that I say content quality is a proxy for relevance, that statement is a problem for me.

Let’s revisit our definition of relevance. Content is more or less relevant to the audience to the extent that:

  1. It maximizes the audience’s ability to achieve their information goals
  2. It minimizes the effort required by the audience to achieve those goals

We unpack these two conditions in probably more detail than most of the readers of our book need. But if you are interested in the complete picture, see Audience, Relevance and Search. For most of you, it suffices to say that content is optimally relevant if it helps the audience get the information they need in the shortest possible time. (Note that it sometimes takes longer to grasp overly condensed text. So I don’t say, “in the smallest possible space”.)

There is a reading of Boag in which his quote agrees with our definition. If by placing quotes around “great content” he means to connote “literary masterpieces,” then fine. A small percentage of your audience on the web is looking for highly crafted, poetic prose. An even smaller percentage is looking for long-winded stories told from a fictional voice. Highly relevant content on the web is typically brief, to the point, and abundantly clear. (Note that this does not make it boring. It is the antithesis of boring to the audience in that it answers their most pressing questions.)

Part of my insistence on spending entirely too much space in the book explaining how web content is fundamentally unlike print content is to emphasize this point. On the web, readers are in charge of the story. It’s their story. The writer must try to understand the reader well enough to figure out what they need to complete their story, and to provide it in the easiest and quickest way. Turns of phrase and other poetic language tend to reduce relevance on the web by introducing ambiguity in a fundamentally literal medium. Worse still, internal company jargon and other brain-dead colloquial language (e.g. “leverage,” “paradigm shift,”  “next generation,” etc.) defeats relevance.

If this is what Boag means, then I agree completely with his quote. But, if this is what he means, why then does he take the side of the search hater? We published our book in 2010. I’ve spoken about it at high-end conferences a dozen times. The whole industry has rallied behind the vision outlined in the book (whether they were aware of it or not). The search engines have followed suit with algorithm changes like Panda that reward relevant content as we define it and punish black hat SEO. Most decent SEOs practice it as we preach it (again, whether they’re aware of our book or not).

Can we please dispense with the myths so we can give SEO its rightful place in digital strategy?

Why search is so important for the executive audience

The other day, a colleague stopped by my desk and asked a question that took me aback: “Executives don’t really search that much, do they? That’s the domain of geeks, right?”

The question implies that most of my work has been misguided. I primarily work on sites built for the executive audience and I place search as the most important facet of content strategy for this audience type. I have written here and elsewhere that more than 85 percent of the executive B2B tech audience starts their journey with search and more than 70 percent of them continue to use search throughout the buy cycle. This information comes from numerous studies by Google, Tech Target and others.

If the premise of my colleague’s rhetorical question is correct, my work is a fraud. And if I’m wrong, the site performance improvements I have seen over and over again using my methods are also a fraud. Fortunately, in the soul searching that followed his question, I have reassured myself. Not only do I trust the studies, but I have done deeper research on why executives use search so extensively to make purchasing decisions. I presented the research this summer at the Social Media Strategies Summit. But it bears repeating in this context. If you’re interested, please read on.

Search is the best way to learn new things

For as long as I have practiced SEO, pundits have been proclaiming the death of search. In articles too numerous to list, the self-proclaimed experts on the web have declared that users hate to search and they only do it because navigation is so screwed up, they are forced to search. My own opinion is quite the opposite: When users are presented with new information challenges and too many options to sort through one by one, they prefer to let the search engine filter them. It is simply the most efficient way to find new information. And it is getting better and better.

There are times when we prefer other ways of getting information. I use Twitter, for example, to get the best information on my area of expertise. If you follow the leading experts in a field, you are bound to get fed more information than you can possibly consume on a topic. This is what social media are best at: Helping you geek out on a topic.

But if you try to take a systematic approach to learning a new topic, you will miss a lot of information on social media platforms. First you have to know whom to follow, and that requires a degree of domain expertise. Even once you follow the right people, you will miss a lot of information as it whizzes by like billboards on the Autobahn. This is where social media stumble, and why executives especially like search. If you crack open the executive brain with me, you’ll see why.

Executives are generalists

Profile any senior executive and you will find one characteristic they all share: They have all led numerous diverse organizations. Executives climb the corporate ladder by moving from one organization to another and demonstrating leadership effectiveness at each stop along the way. To do this, they have to quickly get up to speed on the practices of the people they lead. Some of this involves trusting their people to help them get up to speed. Much of it requires research. In the digital age, where do they do this research? Search.

The executive understanding of the practices of their people is an inch deep and a mile wide. The more people that report up to them, the wider and thinner this understanding gets.

In contrast, developers and other geekier types (the people whom executives manage) are heavy users of social media. They learn from members of their communities the (sometimes closely held) tips and tricks of the trade. When I started in the tech field, forums were the places I would go to geek out. Now I just work really hard to follow the right people and publications on Twitter. And I try not to miss anything.

We are always learning new things, and for this we use search. Executives just have a lot more need of it than developers because they move around so much. When they make purchasing decisions, they don’t do it from an expert’s perspective using social media. They do it from a generalist’s perspective using search.