I wrote recently about the prediction that the world’s 360 million websites will become “appified” over the next 10 years. As much as everyone thinks that apps are all about programming, design and making everything fast, the real struggle, it turns out, will be dealing with all that content.
In the early days of the web, people used to say, “content is king.” And a lot of the people who were saying that were the big publishers who thought that they would easily transfer their market share on the newsstands and in the book stores onto the web. Wrong! Homey don’t play dat!
Along with the birth of “content” came the related field of “content strategy,” which arose from the fact that repurposing editorial assets for fun and profit is not as simple as it seems. Content strategy is closely aligned with the more technical-sounding practice of user experience (UX), because the key to successful content experiences is understanding what users want and need.
No one’s career has been centered on these two fields as much as Karen McGrane, of the consultancy Bond Art + Science. When it comes to strategy for mobile content, McGrane can now say that she wrote the book on the subject. Content Strategy for Mobile, just released from A Book Apart, will provide a great introduction to anyone trying to grapple with how to get all of what’s on a website (or in the archive of a publishing company) out into the app-o-verse and beyond.
When we were all just thinking about how to get things from print onto the web, content strategy was a way to think about how and why to do that. Print was still considered the primary vehicle and the web its bolted-on sidecar. But now that we are living in a truly multi-device world, with new screen formats emerging constantly, the content itself has become the constant, with no one execution more privileged than another.
What is required now, according to McGrane, is the “separation of content from form.” Anyone familiar with the model-view-controller paradigm in programming will nod their head at the obviousness of this, but for publishers and editorial people of all kinds it is quite a new concept. One of the first publishers to do this successfully was TV Guide, way back in the late 1980s. Although it was at the time the most popular magazine in America, its leadership presciently realized that it was not in the magazine publishing business but the content publishing business.
The company split itself in two, one half publishing the magazine and the other managing the mainframe database of content that powered the magazine (and, eventually, on-screen listings). Benefiting, perhaps, from the fact that the content of the magazine was highly structured (and repetitive) to begin with, the database part of the company set out to abstract the ongoing production of the listings from their printed form. The crude green-screen interface had fields that needed to be filled in for show titles and genres (effectively turning what would have been free text into valuable metadata: data about the data) as well as three summaries of different lengths for each show. You can imagine how the workers from the magazine guild must have felt about that! But management stayed the course, and guess what? Fast forward to 2008 and the magazine company itself was sold for $1, because, in McGrane’s words, “all of the value in that company was contained in the structured content assets held in their database.”
This process of figuring out how to structure editorial material for optimal flexibility and reuse is called “content modeling,” and it is best thought of as the opposite of the WYSIWYG editing we have all become accustomed to from Microsoft Word and other desktop and web applications. Instead of allowing (or requiring) a writer or editor to change the font or size or color within a “blob” of text, each element (headline, caption, text, list, quote, byline, etc.) is stored in a content management system (CMS) as a separate field tagged with the appropriate metadata. In this way, the same content can be displayed differently in print, on a web page, in an app or even on a TV, without requiring laborious human intervention.
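To make that concrete, here is a minimal sketch of what such a content model might look like in TypeScript; the type and field names (Story, summaryShort and so on) are my own invention, not drawn from McGrane’s book or any particular CMS.

```typescript
// A hypothetical structured content model: each element is its own
// typed field rather than one WYSIWYG "blob" of styled text.
interface StoryImage {
  url: string;
  caption: string; // stored as data, not baked into the markup
  altText: string;
}

interface Story {
  headline: string;
  byline: string;
  // Summaries of several lengths, ready for whatever space a given
  // channel can offer (recall TV Guide's three summary lengths).
  summaryShort: string;  // e.g. a push notification or search result
  summaryMedium: string; // e.g. a mobile list view
  summaryLong: string;   // e.g. a web landing page
  body: string;          // text only: no fonts, sizes or colors
  images: StoryImage[];
  tags: string[];        // metadata: data about the data
  publishedAt: string;   // ISO 8601 date string
}
```

Because nothing in this model says anything about fonts, columns or screen sizes, the same story can be poured into print, web, app or TV templates.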
I like to think of this process as analogous to pickling or making kimchi. The metadata is like the salt that helps to preserve the content for reuse in the future. Paradoxically, the crystalline nature of the salt (I mean metadata!) both encapsulates the component parts and enables them to flow easily into different vessels.
The real star of McGrane’s book is NPR. Yes, NPR. How did public radio become the standard bearer for the future of content? By embracing the API model with both arms. Application Programming Interfaces (APIs) are the lifeblood of the modern web. In the simplest terms, an API is a set of conventions that programmers can use to interact with an external data source or web service. Instead of writing everything from scratch, developers can rely on an API’s predefined functions and commands as reliable shortcuts.
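In code, “interacting with an external data source” can be as simple as requesting structured data over HTTP. Here is a minimal, hypothetical sketch (reusing the Story type from the earlier sketch); the endpoint URL is made up and does not represent NPR’s actual API.

```typescript
// Fetch one story as JSON from a hypothetical content API.
async function getStory(id: string): Promise<Story> {
  const response = await fetch(`https://api.example.com/stories/${id}`);
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  return (await response.json()) as Story;
}
```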
In the case of NPR, its goals are simple: to publish its content as widely as possible. (For-profit publishers who need to monetize content have a trickier problem.) NPR’s solution was to define a primary content type that contains all of the assets related to a given story. These include headline and link copy of different lengths, summaries, photos, audio and video files, and body text (and their associated metadata). All of these elements are created, managed and dispatched from a single interface and can be accessed with the highly structured language of its API.
What this means, in practice, is that the producers at NPR assemble the content for each on-air story in a very consistent way, and then all kinds of “clients” (including NPR’s own website and apps, as well as those of third parties) pull the elements their product requires through the API. The NPR.org website displays different elements than its audio player or mobile app, and different still from the iTunes podcast of the story or the website of a local NPR affiliate. And because of how the content is structured, these elements can be called in a reliable, programmatic way. McGrane considers this “such a beautiful and easily understood example of the value of using adaptive content.” It seems so obvious, but in the world of content, very little now works this way.
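As an illustration of clients pulling only the elements they need, here is a hedged sketch: the fields query parameter and the endpoint are invented for the example (NPR’s real API has its own conventions), but the principle is the same.

```typescript
// Each client asks the same API for only the fields its product requires.
const BASE = "https://api.example.com/stories"; // hypothetical endpoint

async function getFields(id: string, fields: string[]): Promise<Partial<Story>> {
  const response = await fetch(`${BASE}/${id}?fields=${fields.join(",")}`);
  return (await response.json()) as Partial<Story>;
}

// The website wants the full presentation...
const webStory = await getFields("1234", ["headline", "summaryLong", "body", "images"]);

// ...while a podcast feed needs only a title and a short blurb.
const podcastEntry = await getFields("1234", ["headline", "summaryShort"]);
```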
One of the greatest successes of structured content is the Tumblr blog platform. Much has been made of how visual and shareable its blogs are, but it was its early adoption of the concept of post types that is responsible for the effortless user experience of its content-creation functions. The atomic unit of a blog is the post, which traditionally meant that “blob” of text, photos, quotes and lists styled by a WYSIWYG interface. Tumblr’s innovation was to make each of these different components its own post type. So you can have a standard text post with a photo and a headline, but you can also have just a photo, or just a quote, or just a link, or a video, or an audio file.
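Modeled in code, post types look like what programmers call a discriminated union: each type carries only the fields that make sense for it. This is my own sketch of the concept, not Tumblr’s actual schema.

```typescript
// Each post type is its own structure; the "type" tag is metadata
// that lets the platform handle posts programmatically.
type Post =
  | { type: "text"; title: string; body: string }
  | { type: "photo"; url: string; caption: string }
  | { type: "quote"; quote: string; source: string }
  | { type: "link"; url: string; description: string }
  | { type: "audio"; url: string; artist: string }
  | { type: "video"; url: string; caption: string };

// Because the type is explicit, a feed of "just photos" is one filter away.
function photoFeed(posts: Post[]): Post[] {
  return posts.filter((post) => post.type === "photo");
}
```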
This approach benefits both the content creators and the platform itself. For the author, the structured fields of each post type streamline the process of posting content. For the platform, it means that other users can fill their blogs with feeds of just photos, or just quotes, or just videos. Metadata is, in fact, the elixir of successful user experience. McGrane quotes a design student as suggesting that “Metadata is the new art direction,” and archivist Jason Scott as saying, “Metadata is a love note to the future.” The point is, if you make it easier for content creators to structure content, you will wind up with more structured content.
That is certainly the position of the current U.S. Government, McGrane explains in her recent story on A List Apart, “Uncle Sam Wants You (to Optimize Your Content for Mobile).” She quotes President Obama as saying, “Americans deserve a government that works for them anytime, anywhere, and on any device.” She argues that the movement of US.gov to an API model should be a wake-up call to the publishing industry.
The initiative to optimize content for mobile is part of the larger Digital Government strategy aimed at building a twenty-first-century platform to better serve the American people. This strategy outlines a sweeping vision for delivering government services more efficiently and effectively, covering everything from how government agencies can share technology and resources to how to maintain the privacy and security of sensitive government data.
But running through the entire Digital Government strategy is a consistent thread: The government needs to communicate with and deliver services for its citizens on whatever devices they use to access the web.
If it’s true for the government, it’s probably true for your company, too. Your customers are using mobile devices to access your content—you need a strategy to communicate with them where they are.
If you give people content where they want and need it, they will consume more of it. In fact, some users are (or will be) accessible only through the mobile web. These groups include the young, the less affluent and much of the developing world. Seen from this perspective, it is less surprising that governments, advocacy groups and other public entities should make mobile a priority. But content providers of all types will increasingly feel the pull as well.
The first step, McGrane says, is to get your content structured for reuse in multi-channel publishing. Once that is in place, the metadata in the content and the interaction data from its use can provide personalization and customization, the holy grail for advertisers on the web. Most of the world’s web content is now locked up in content management systems that are tied to a single web view. That view can be made “responsive” to different screen sizes, but serving each device “view” a custom content “model” requires a lot of custom development. The big opportunities in this area in the next 3-5 years will be in what McGrane refers to as “de-coupled” systems that provide structure for the content but are agnostic about how it is displayed.
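In sketch form, a de-coupled system might look like this: the content model (the hypothetical Story type from earlier) knows nothing about presentation, and each channel supplies its own renderer.

```typescript
// One structured Story, many channel-specific views.
function renderForWeb(story: Story): string {
  return `<article><h1>${story.headline}</h1>` +
    `<p class="byline">${story.byline}</p>` +
    `<p>${story.summaryLong}</p></article>`;
}

function renderForPushNotification(story: Story): string {
  // A notification has room for only the shortest summary.
  return `${story.headline}: ${story.summaryShort}`;
}
```

Adding a new channel means writing one more renderer, not restructuring the content.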
The beauty of this strategy is that once you accept the pain, cost and tedium of getting it all set up and maintained, you have a robust, future-friendly asset that can be used in ways you cannot imagine, many of which haven’t been invented yet!