A Common Thread of Intelligent Content and SXSW: Analytics
When I was invited to present the Search-First Content Strategy at Intelligent Content 2011 (#icc11) in Palm Springs, Calif., and later to serve on a panel called Not My Job: The Ultimate Content Strategy Smack Down (#notmyjob) at SXSW in Austin, Texas, I thought I would have plenty of blog fodder with which to increase my blog volume. Little did I know that presenting at conferences makes blogging harder, not easier.
Blogging demands clarity of mind, and conferences do more to disturb clarity of mind than almost any other kind of event. This is mostly a good thing. Conferences are like heavy spring rains: They muddy the waters, refreshing and revitalizing the river ecosystem. But it takes some time for the river to settle and return to its former clarity. Just so, conferences muddy the mind and revitalize one’s thinking. It takes at least a week for the rivers of my mind to clear after a conference.
Because ICC11 and SXSW came so close together, I had to experience both and let them settle into the riverbed before I could get back to blogging. The insights I gleaned from the two conferences have merged in my mind, as though they were not two events but one, with a single central theme: intelligent content strategy. By that I mean using the data we can gather in digital media to inform our content decisions across an enterprise. If this topic interests you, please read on.
Intelligent Content: Using Keyword Analytics to Predict Audience Intent
I kind of felt out of place in Palm Springs amongst the crowd of content strategists and editors interested in how to use DITA to share and reuse information across an enterprise. DITA, short for Darwin Information Typing Architecture, is an XML document type definition developed by IBM and contributed to OASIS as an open standard. It is now used throughout the industry to facilitate intelligent content sharing and reuse. Most of the vendors sponsoring the conference sell solutions that help large companies build intelligent digital content experiences for their clients based on DITA.
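For readers who haven’t seen DITA, here is a minimal sketch of a concept topic. The element names follow the OASIS DITA standard, but the ids, filenames, and content are invented for illustration; the conref attribute is the mechanism that enables the sharing and reuse the conference was about.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept id="search_first_overview">
  <title>Search-First Content Strategy</title>
  <conbody>
    <p>Gear content decisions around the search behavior of prospective clients.</p>
    <!-- conref pulls in a paragraph maintained in one shared file,
         so the definition is written once and reused across the enterprise -->
    <p conref="shared_definitions.dita#shared_defs/keyword_research_def"/>
  </conbody>
</concept>
```

Because the reused paragraph lives in a single source file, updating it there updates every topic that references it.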
Even though I represented IBM and the conference was mostly about DITA, my talk had little to do with DITA. My talk was at a higher level. It attempted to answer the question: How do you learn the language of your clients and prospects so as to better serve their information needs?
Traditionally, this question is answered by studying your existing customers. Market intelligence folks poll focus groups of existing clients or use survey research to determine customer pain points. Marketing content strategists use Web data to learn what works and what doesn’t on their sites. In the technical documentation world, DITA architects perform usability studies with existing clients. No matter where you sit in the enterprise, intelligent content implies creating content geared toward what you learn about existing customer preferences and attitudes. At least that was how all the other talks at ICC11 worked. It’s all very valuable stuff, but it’s not what I talked about.
Thing is, existing clients are familiar with your nomenclature. So if you only do research on them, you ignore all those potential clients who are not clients primarily because they have different words for your offerings than you do. If you want to reach these people, you need to learn their language and use their language to connect with them. That is where search plays such a crucial role in growing your business: connecting with people who have no clue you offer the things they need.
How do you do this? As we have been touting for the past year on this blog, you look at their search behavior and you gear your content efforts around it. Easier said than done, I know. But our book is a manifesto on how to do this. Nowhere in the book do we claim that this is easy. It is a difficult, though necessary, practice in the digital world. Not only is it difficult to do the kind of deep keyword research necessary to get beyond existing customers, it is even more of a challenge to build content experiences based on your prospective clients’ words rather than your own.
To do this in a way that standardizes your experiences across an enterprise requires a taxonomy of words and phrases your prospective customers use to find information about your product categories. One of the audience questions I fielded was about how to do this. That is, how to build a taxonomy based on client nomenclature rather than company nomenclature. I only had time in our session to give a cursory answer. And I sense I don’t have much more of your attention here. Suffice it to say I will be writing about this in the coming weeks and months. Until then, please check out my blog post on the Semantic Web.
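To make the idea concrete, here is a minimal sketch of the first step: grouping client search phrases under your product categories, ordered by search volume, so the clients’ most common wording surfaces first. The query log and category names are invented for illustration; real keyword research would draw on search data at much larger scale.

```python
from collections import Counter

# Hypothetical search-query log: (client phrase, product category) pairs
# gathered through keyword research. The data here is invented.
query_log = [
    ("cheap laptop", "notebooks"),
    ("cheap laptop", "notebooks"),
    ("budget notebook", "notebooks"),
    ("server backup software", "data-protection"),
    ("disaster recovery tool", "data-protection"),
    ("disaster recovery tool", "data-protection"),
]

def build_taxonomy(log):
    """Group client phrases under each product category, ordered by
    search volume, so the most-searched client wording comes first."""
    by_category = {}
    for query, category in log:
        by_category.setdefault(category, Counter())[query] += 1
    return {
        cat: [phrase for phrase, _count in counts.most_common()]
        for cat, counts in by_category.items()
    }

taxonomy = build_taxonomy(query_log)
# The preferred label for each category is the clients' top phrase,
# not necessarily the company's internal product name.
```

The point of the sketch is the ordering: the taxonomy labels come from what prospects actually type, with company nomenclature demoted to a synonym.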
The whole point of the search-first content strategy is to use analytics about your clients and prospects to predict their information needs. The first rule of communication is know your audience. In the digital world, the best way to know your audience is through analyzing their search queries. Keyword research is powerful predictive content analytics.
SXSW: Using Analytics to Govern Content
“Not My Job” was a surreal experience. Kristina Halvorson (@halvorson) hand-picked a panel of folks who do enterprise content strategy, including Nathan Curtis (@nathancurtis), Lisa Welchman (@lwelchman), and Rahel Bailie. Unfortunately, Rahel fell ill. So Kristina coaxed Evany Thomas (@evany) to fill in. Evany doesn’t do enterprise content strategy. She runs content strategy for Facebook, which is just getting started on developing its own content. So she offered balance to all the folks who have lived through the wars of trying to govern content standards across large organizations. Content strategy is so much easier if you can do it right from the start, as Facebook is doing, rather than trying to retrofit existing content to a strategic vision.
They booked us in a room at the Sheraton that held 130 people, a mile from the Convention Center. Despite the out-of-the-way location, 400 people showed up. So we offered those who could not find seats the opportunity to stick around for a second session, which we held immediately after the first. At least 100 people chose to stay. Both sessions were lively exchanges about how to govern content strategy across an enterprise.
The main problem with content governance is the people. How do you enable all the diverse people in a large organization to do content the right way? This is hard if you run a print organization. It’s even harder on the Web, because users expect the digital content from the same organization to be interconnected. Users also expect content to be consistently refreshed, or retired if it gets stale. With print content, you just publish and forget. On the Web, you publish and nurture. So not only do you have to help people comply with standards, you have to get them to collaborate and maintain the content they care about. In a large organization like IBM, this is especially challenging because we are all essentially competing for the limited time and attention of the same audience.
Through their collective experience, the panelists had all arrived at essentially the same answer to the question of how to do this: You build tools that make it easy for content producers to comply with standards and collaborate. Ideally, these tools have content analytics built right in. I’m not just talking about the audience analysis involving keyword research, but text analytics to help people write for the audience and Web analytics to help people build more effective digital experiences over time.
As Evany said, everybody thinks they know how best to create content because they all have Microsoft Word. The reality is, few people really know how to create good Web content. But they don’t know what they don’t know, so how do you convince them to humbly accept your help? At IBM, we started by creating a lot of Web resources like style guides and such. But either people didn’t bother to read them in the course of their busy jobs or they didn’t care. The other panelists confirmed that this attitude is common.
If the tools have the standards and analytics built in, the conflict that arises when people think they know better than the standards bearers vanishes. For some reason, people believe tools more readily than content experts. And since it takes them no extra effort to check the standards when the tips and other help are built right into the tools, they do comply.
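The idea of building the standards into the tool can be sketched very simply: an authoring environment that flags company jargon as the writer types and suggests the terms clients actually search for. The term mappings below are invented; a real system such as the text-analytics tools mentioned later would carry far richer rules.

```python
# Hypothetical mapping from internal jargon to the client wording that
# keyword research shows prospects actually use. Entries are invented.
PREFERRED_TERMS = {
    "e-business solution": "online store software",
    "leverage": "use",
}

def check_draft(text):
    """Return (flagged term, suggested replacement) pairs for phrases in
    the draft that conflict with the enterprise content standards."""
    findings = []
    lowered = text.lower()
    for jargon, preferred in PREFERRED_TERMS.items():
        if jargon in lowered:
            findings.append((jargon, preferred))
    return findings

findings = check_draft("Leverage our e-business solution today.")
# Each finding can be surfaced inline in the authoring tool, so the
# writer sees the standard at the moment of writing, not in a style guide.
```

Because the suggestion appears inside the tool at the moment of writing, the author never has to go looking for the style guide at all.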
The panelists all agreed on one point in particular: This kind of tooling is still in its infancy. At IBM, we use acrolinx IQ to do some of this. We also use Covario. And we have a suite of our own Web analytics tooling, to go along with our enterprise content management (ECM) system. But I won’t be the first to say there is a strong need in the industry for enterprise content governance tooling, which integrates all this stuff into one tool kit. The vendor that solves this problem for content strategists will have a large market for its products.
James Mathewson (@James_Mathewson) is the Global Search Strategy and Expertise Lead for IBM. He is also co-author of Audience, Relevance and Search: Targeting Web Audiences with Relevant Content. The opinions expressed here are his own, and not those of IBM.