Good morning everybody, how are you? Excellent, glad to hear it. So we're here today, and everybody can hear me, right? Great. We're here today to talk about the future of content. My name is Todd, I'm from Austin, Texas, and I run a small digital agency called Four Kitchens. We are a digital agency that creates websites, apps, and software for organizations that need to tell important stories. We focus on all aspects of content: content strategy, content management, and publishing across all channels. But most importantly, we build things. We do a lot of work with decoupled architectures, and while we've specialized in Drupal for the past 13 years, we work with a wide range of content management systems and architectures. So in short, we make content go. Once upon a time, there was a fridge. It had one job, to keep things cold, and it did its job well. FridgeCo, the company that built the fridge, was doing fine too, but they thought they could do better. So one day, as they so often did, the FridgeCo product team gathered to brainstorm. How could they differentiate themselves from the competition? The product manager was tired of hearing the same old ideas. Bigger capacity, more energy efficiency, more shelves. Nothing new, nothing exciting. It seemed like they'd had this meeting a hundred times before, and they had. Someone across the table was looking at his phone, bored, and watching him rudely tap at the screen, she was suddenly struck with a totally new concept. Let's connect it to the internet. Silence, almost. "A fridge connected to the internet?" asked the guy with the phone, incredulously. "Yes." "That sounds ridiculous." "No," she said. "It's the future. It will have a bright color screen. It will support apps just like your phone. We can partner with recipe websites and integrate with Google Calendar and Twitter. Twitter's big, right? Let's put Twitter on the fridge." And so it was that FridgeCo ushered the humble fridge into the information age.
Now, I have no idea if this is how it happened, but there is internet on fridges today. This seems plausible, right? Some product manager and a team getting together thinking, how can we sell more fridges? How can we make them more expensive? But imagine that it's 10 years ago. The iPhone has just been released. The App Store has just been released. And you're a software developer, as many of you probably are. You get a call from FridgeCo, and they ask you to make this vision a reality: put the internet on a fridge. How do you go about implementing something like that 10 years ago? Well, first, you have to build a software development kit. And you're going to need a set of standards for building apps for your fridge. Competitors are surely going to follow suit, but FridgeCo won't want to create some kind of open standard for internet-enabled fridges, so it all gets locked down. You have to create a proprietary platform for fridge development, and so does everybody else. Now imagine you're a grocery store or a grocery delivery company, and you want your app to appear on every fridge, because as soon as somebody runs out of milk or eggs or whatever, they want to reorder groceries directly from the fridge, which is kind of the point of an internet-enabled fridge, right? So building an app from scratch seems like a really daunting task, not to mention having to do it for 5 or 10 or 20 different fridge brands, each with their own proprietary software development kit. You're having a hard enough time just getting your iPhone app out the door, and now you have to worry about 20 different fridge app ecosystems. Not if you have the right technologies and workflows in place, though. If you've centralized all of your products, your groceries, your content, in a single consistent data model, and you've made that information available using a robust API, you can rapidly develop apps for all kinds of devices and environments.
So while it seemed like a crazy idea 10 years ago, putting the internet on a fridge is almost trivially easy today, and it's not because we got really good at building fridges. It's because we created all of the infrastructure needed to rapidly adopt new technologies and devices: fridges, streaming video, home assistants, whatever. So today, right now, we expect our digital interactions to span a variety of experiences and contexts. We are reading, watching, and listening to content. We're doing this on different devices, in different situations, and with different needs, like looking at recipes on the fridge. Before we get started today, we should define a couple of concepts. I want to talk about experiences and contexts. Experience is what happens inside the content. It's the medium of the content itself, the text, the video, the sound, and the message it delivers, the narrative, the emotional quality, the meaning. Context is everything outside of the content. It's the device you're using, its operating system, the browser. It's the physical environment that you're in when you're using it. It's your history with that brand and all of the emotional baggage you bring with it. It's the needs you're trying to meet. It's your mode, meaning are you exploring or actively trying to find something. And it's the mental model that you're using to compare this content to other content that you have experienced in the past. Context is everything surrounding and influencing the content experience, but not the actual content itself. So for example, watching a movie is an experience. The movie itself is an experience. But watching it in a theater with your kids, versus on your phone while flying to a conference, versus curled up on your sofa with your dog, these are contexts, the contexts in which you experience the content. So today I'm not going to talk much about experience.
I'm not going to talk about the quality of the content itself or its emotional tone or its narrative or its impact. I'm going to be focusing mainly on context, everything around the content itself and how we experience it. So with this in mind, I'm going to make eight predictions about the future of content and where we're heading next. Prediction number one: CMSs will be content repositories, not website managers. Back in the 1990s we were dealing with a lot of flat files, HTML files. There was no CSS, there was no JavaScript. Everything was tied to presentation. You had an HTML file that was both the content and the presentation in one document. Then it got a little unwieldy, having to upload these things using FTP programs, and oh, did I update this or that, or I don't really know markup but I still want some kind of WYSIWYG editor. So tools like FrontPage were created, which were desktop CMSs that uploaded flat files on our behalf to a server. Then along came Geocities and Tripod and Angelfire, and these were web-based versions of that, but they were still fundamentally dealing with markup, a little bit of CSS, and files. In the 2000s, web-based CMSs came along. Drupal was among the first in the year 2000, then WordPress and DotNetNuke in 2003, and Joomla in 2005. And these web-based CMSs were a single piece of software divided between a front end, the display of content, and a back end, the management of the content. First we could display content in a web browser. That was really the only way to experience content. Then came RSS and Atom feeds, which allowed us to syndicate our content to other sites and pull content in from other sites. Then came smartphones and tablets, which ushered in the mobile revolution around 2008-2009 and fundamentally changed our approach to front-end design, but did not change our approach to content management.
Meanwhile, on the back end, CMSs of course stored text, various pieces of media, and user-generated content. And as these CMSs became more popular and widely used, web administrators, webmasters at the time, started demanding more back-end functionality. So we added things like user management, user accounts, and then permissions associated with that, because you don't want all of your users being able to do everything in your CMS. Then content creators wanted layout tools. They didn't want to deal with markup and CSS and code, so let's put layout tools directly in the CMS. And then of course third-party integrations with marketing automation and CRMs and e-commerce platforms. This made our CMSs really heavy and complicated, and fundamentally CMSs were really no longer about content management. They were about website management. These were tools that were managing every aspect of our website. So they may be managing a commerce platform, fundraising campaigns, or communities. And for those of you who've been around the Drupal community long enough, how many of you remember when the homepage of drupal.org said that Drupal is "community plumbing," right? Not anymore; now it's content management. So looking ahead to this decade: recently we've seen a proliferation of devices, apps, and channels. These devices have introduced new technology like location awareness and new interaction patterns like touch gestures and voice commands. And the apps built for these devices created new expectations of user-friendliness. We have created a mindset in which everything is an app or has app-like capabilities. And we've also seen new channels for publishing content and engaging users. Media companies, for example, now have to contend with Facebook Instant Articles and Apple News and other off-site distribution channels, often as the primary means to distribute their content.
And in our scrambling to keep up, we've spun up new websites, apps, and tools to satisfy our short-term and timely needs. These new endpoints often duplicated content, code, and effort. And not only did this greatly increase the effort and cost of maintaining our digital presence, but it led to inconsistent, bare-bones, and just plain bad experiences for our audiences. So how many of you find yourself in this position right now, with all of these separate websites all operating independently? Here's where we're headed. We're already beginning to see a shift to a more agile approach to architecture: the decoupling of CMSs and the centralizing of content management. For some of you, this may be old news, but some of you may have heard of decoupling and you kind of nod your head and think, yeah, sure, I get it. It's okay to not know. I'm going to explain it very, very quickly. So for those of you who aren't entirely sure what we mean by decoupling, here's a very quick introduction. A CMS includes both the front end, the display of content, and the back end, the management and storage of content. Imagine we separate the two. Each is a separate piece of software that focuses on its specific task, to display or to manage content. The resulting architecture is decoupled. Some people refer to this as headless. Some people make a distinction between those two terms, but generally, this is what we're talking about. So when we talk about the front end, we most commonly mean your website. And when we talk about the back end, we're talking about content. So let's label these appropriately. And these two are connected by an API that pulls content from the database and turns it into markup. One misconception I want to clear up before we move on is that you don't need to rebuild your CMS to decouple. It's not necessary. Recently, we decoupled PRI.org's homepage, Public Radio International, while keeping their underlying site on Drupal 7.
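As a rough sketch (not PRI's actual code), the front end's half of a decoupled setup can be as small as a function that takes JSON from the CMS's API and turns it into markup. The payload shape and field names here are invented for illustration:

```python
import html
import json

# A hypothetical JSON payload, shaped loosely like what a decoupled CMS API
# might return for a single article. Field names are illustrative only.
API_RESPONSE = json.dumps({
    "type": "article",
    "attributes": {
        "title": "Decoupling PRI.org",
        "body": "The homepage now renders from plain HTML, CSS, and JS.",
    },
})

def render_article(payload):
    """The front end's only job: turn structured content into markup."""
    attrs = json.loads(payload)["attributes"]
    return (
        f"<article><h1>{html.escape(attrs['title'])}</h1>"
        f"<p>{html.escape(attrs['body'])}</p></article>"
    )

print(render_article(API_RESPONSE))
```

The point is that the front end never touches the database or the CMS's templating layer; swap out the CMS behind the API and this function doesn't change.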
And an interesting side benefit of decoupling is that you get to work with more widely understood code and design patterns. You don't need front-end developers who understand PHPTemplate in Drupal 7 or Twig in Drupal 8. They can work with HTML, CSS, and JavaScript. One of the project leads on the PRI.org project said that it was the first time in years that their team had enjoyed working with the code base. When your site is decoupled, you can centralize your content, meaning store all of your content in one place regardless of how many websites it may appear on. You can then attach different sites to your content repository. So perhaps you need to add a microsite for a specific topic or a short-term event or a marketing campaign, or you want to add a blog for a featured author or professor. And the same API that you created when you decoupled can be used to connect all of these sites and share content between them. This is content centralization. Prediction number two: content will be extensible and modular. A modern CMS doesn't treat a website as the primary experience. A modern CMS is multi-device, but not designed for specific devices. This means you have to think about your content first, not websites, not specific devices. You should let your CMS deliver structured content and let the device's software or app handle the rest of the work. And if your content is properly modeled, if you have reusable fields and robust content types, you can quickly support new devices and experiences. One of the things that makes Drupal really great is the ability to quickly add new fields and new content types. So let's say you want to develop support for an iPhone app. You can simply add a location field, which in turn gains you support for other devices like Android and lots of other location-aware devices. Let's say you want to get into streaming video. Well, you'll need to add some metadata fields to your video assets, but there's a side benefit to this.
Not only do you then support something like Roku, but that also extends to Apple TV and third-party sites like YouTube and Vimeo. All of that information can be reused across these different platforms and devices. And let's say you want to create an Alexa skill. Well, if you create a conversational field or a read-aloud field, that can be reused on a number of other home assistant devices, as well as enhancing the accessibility of your content. And then, of course, you have things like feeds and AMP, which are quite similar, and in turn are similar to Facebook Instant Articles and Apple News. So by adding just a few fields to your content, you can start to support a variety of experiences, devices, and platforms. So in the future, all of these front ends are first-class citizens. And this means that content must be modular so it can be assembled and delivered to someone based on their context: the device they're using, the situation they're using it in, how they're interacting with that device or situation. So here's an example. NBC has a ton of content, and it has to be delivered across many platforms: websites, apps, multiple streaming devices. So rather than building multiple solutions for multiple devices, NBC created a single content solution that's fundamentally, at its core, powered by Drupal. The same infrastructure and content that powers the Saturday Night Live website also powers the SNL apps for iPhone and Android. And their decoupled approach and their very robust content model allowed them to rapidly develop for streaming devices, smart TVs, and home assistants like Alexa. And here's how we did it. NBC wanted an Alexa skill that would respond to questions like, "Hey Alexa, when is A.P. Bio on?" There's a showtime field on their television show content type, and this is what's displayed in that field: 9/8c, which is a common abbreviation used in the television industry to mean 9pm Eastern, 8pm Central.
But when Alexa sees that, Alexa reads it as 9, 8C. That doesn't make sense when you hear it, right? So we added a new field to Drupal, and it was showtime spoken, and we simply spelled out central, so that Alexa could very easily read 9, 8, central. This seems really obvious, but some of you are probably wondering why didn't you just edit the Alexa skill to interpret the letter C as central? The issue there is that it's not sustainable. If you create a field here on the device, it now works on all spoken devices, not specifically Alexa. So if you require your app to decode your content, you have to write that logic into every single app that you deploy, whether it's Alexa or Apple's Home Assistant device or Google Home, you have to replicate that code three or four or ten times. Why not just change it once in the content model so that it's reusable for all spoken devices and devices that haven't even been invented yet? This is what we mean by the content that is modular. And when content is modular, it can be easily published to entirely different experiences, not just contexts. So here, of course, we have a text article and we want this to be displayed on our website, an event microsite, and then sent out on our RSS feed. But let's say we have a video. So that video doesn't really belong on every kind of device in every kind of context. We want to send it to our video heavy iPhone app and to Roku, which is a video streaming device. But it doesn't make sense to send the video to, say, Alexa. Instead, we would want to create a transcript of that video that can then be sent to Alexa or read aloud using assistive devices. This is what we mean by creating multiple experiences for multiple contexts as well. It's the same content. There is a story being told or a message being delivered in this but it's also being delivered through a text-based format or an audio-only format. And here's an interesting use case that we came across in the past as well. 
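Here's a minimal sketch of the "showtime spoken" idea in code. The field names and channel list are illustrative, not NBC's actual schema; the point is that the decoding logic lives in the content model once, rather than in every app:

```python
# One record from a hypothetical "television show" content type. It carries
# both a display form and a spoken form of the same fact, so every channel
# can pick the right one without per-app decoding logic.
SHOW = {
    "title": "A.P. Bio",
    "showtime": "9/8c",                # what the website prints
    "showtime_spoken": "9, 8 central", # what a voice assistant reads aloud
}

def showtime_for_channel(show, channel):
    """Voice channels get the spoken field; everything else gets the display field."""
    voice_channels = {"alexa", "google-home", "homepod"}
    if channel in voice_channels:
        return show["showtime_spoken"]
    return show["showtime"]

print(showtime_for_channel(SHOW, "web"))    # 9/8c
print(showtime_for_channel(SHOW, "alexa"))  # 9, 8 central
```

Adding a new voice device means adding one entry to the channel set, not re-implementing the "C means central" rule in another codebase.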
Modular, extensible content can also be made publicly accessible. Now, this is really useful if you're a government office and you are mandated to make certain data public. If you have already centralized your content and created a decoupled architecture, that same API that you built to make that happen can simply be exposed to the public, allowing them access to your content and fulfilling all of your obligations, legal or otherwise. Additionally, if you're an organization that has a really lively community that wants to share and remix your content, let's say you're a musician or a video artist or you work in some other kind of collaborative industry, you want that content to be available to people so they can easily mash it up or use it. And this is exactly what happened with This Week in Tech. They're a podcast and videocast network, and theirs was one of the first decoupled websites we built. They wanted to have a publicly accessible API, not because they were required by law to make their content publicly available, but because their audience consists largely of developers and designers who want to hack away at apps in their free time and learn how to use things like Apple's SDKs and ARKit. So they gave their fan base raw access to their content to use as real-world data to build applications. And as a result, their fan base went and built all of their apps for them, simply because they exposed their API publicly. So what does this look like to a content creator? On the content creation side, editors will routinely work with modular content in the form of blocks or components. In the Drupal world we don't use the word "block," because that word is already taken; instead we say things like "paragraphs," referring to the Paragraphs module. But the rest of the web refers to chunks of content as blocks. The Paragraphs page on drupal.org has a very helpful illustration. This is what modular content looks like on the back end.
On the left you see what a website visitor would see. And on the right, you see how this content is structured inside the node edit form. So you have a chunk of text, a paragraph of text, then an image with some metadata associated with it, then more text, then a video with metadata associated with it. You assemble it all together, you put it in the order that you want, you hit publish, and those bits of data can be remixed for different contexts and different devices. The new WordPress editor, Gutenberg, is built on the idea of modular content, and quite frankly I think that Gutenberg is doing more than any other tool out there to really push the idea of block-based modular content forward. So here we have their content blocks. They can be edited separately, reordered, and assembled in this very clean, very streamlined interface. There is a Drupal port of Gutenberg. Version 1.0, which is production ready, was released on Tuesday of this week. So if you're interested in checking it out, please do. Prediction number three: content creators will finally, finally get the tools they deserve. As Paragraphs and Gutenberg demonstrate, there is an explosion of interest in creating better editorial experiences, and I'm sure we've all suffered through our share of bad ones. Coupled with the need for modular content, this has led to all kinds of new ways of thinking about and assembling content. I suspect that we will also see an interest in standalone, CMS-agnostic editing interfaces: totally separate pieces of software that you can just drop on top of your CMS because an editor has said, I prefer that tool rather than this one, can you please install this editor and not that one? Think of it like the WYSIWYG battles in the early days of Drupal 8, you know, CKEditor and the rest. A very similar kind of debate is going to take place with fully standalone editorial experiences that you just drop on top of your CMS.
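To make the block idea concrete, here's a small sketch of a node as an ordered list of typed chunks, assembled differently per channel. The structure is invented, loosely modeled on Paragraphs and Gutenberg, not either tool's real data format:

```python
# A node as an ordered list of typed blocks, roughly what a Paragraphs- or
# Gutenberg-style editor produces behind the scenes.
NODE = [
    {"type": "text",  "value": "Opening paragraph."},
    {"type": "image", "src": "photo.jpg", "alt": "A photo"},
    {"type": "text",  "value": "Closing paragraph."},
    {"type": "video", "src": "clip.mp4", "transcript": "Spoken words."},
]

def to_html(blocks):
    """A website renders every block type it knows about."""
    parts = []
    for b in blocks:
        if b["type"] == "text":
            parts.append(f"<p>{b['value']}</p>")
        elif b["type"] == "image":
            parts.append(f'<img src="{b["src"]}" alt="{b["alt"]}">')
        elif b["type"] == "video":
            parts.append(f'<video src="{b["src"]}"></video>')
    return "".join(parts)

def to_plain_text(blocks):
    """A feed or voice channel keeps text and transcripts, drops the rest."""
    out = []
    for b in blocks:
        if b["type"] == "text":
            out.append(b["value"])
        elif b["type"] == "video":
            out.append(b["transcript"])
    return " ".join(out)
```

The same ordered list feeds both renderers: the website shows the video, while a voice assistant gets its transcript instead, which is exactly the article-versus-transcript routing described earlier.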
So these standalone editing interfaces could theoretically replace many of a CMS's out-of-the-box editing tools, and maybe even render packaged editing tools obsolete. Now, while this adds some complexity to the stack, it's going to be a relief for people like me who are tired of hearing that somebody chose WordPress because the editing experience is better. Meanwhile, you have hosted services like GatherContent, shown here, and Contentful, and they're going to compete to create better editorial experiences outside of traditional CMSs. GatherContent is focused primarily on content management at all levels: content workflows, content governance, being able to ask multiple stakeholders for content, giving them a place to put it all, and then an interface to allow people to sort through it and make sense of it. And also campaign management, which is something that they've been adding recently. Contentful, meanwhile, is more laser-focused, more bare-bones. It has positioned itself as an API-first CMS focused on the editorial experience and the content model. One of the common complaints that I'm sure we've all heard is the inability to effectively preview content before it's published. So here's an example from Gatsby. Gatsby has a tool called Gatsby Preview, and you can combine it with something like Contentful. So you see Contentful on the left, and then there's a Gatsby Preview generated page on the right. As the editor here is updating... let's see if that video is playing. Is it? It was? Okay, thank you. As the editor manipulates this title, it automatically updates in the preview window on the right. It's doing this without reloading the page, and while this is not a production site, it sure does look like one. So Gatsby Preview can spin up a temporary page for somebody to visit and test while they're editing content: make sure that it looks good, play around with the window size, load it up on their phone or whatever they want to do, then hit save.
So they know exactly what it's going to look like on all devices, because they're working with an actual web page. We're also seeing some really bold new ways to make CMSs more editor friendly without the need for a developer to make changes to code or to templates. This is an example of one of these many initiatives. This is DX8 from Cohesion. It's built in Drupal. And DX8 aims to reduce the amount of development needed to produce great content and user experiences in Drupal. This is just one of many. Emulsify, which is a tool that we built at Four Kitchens, enables component-driven development for Drupal theming and design. So if you're into component-based design and development, this is the tool for you in Drupal. Emulsify was released in spring 2017, just two years ago, and it currently has 60,000 installations. So the demand for these tools that enable content creators and designers is just enormous. And looking ahead, content creators are going to expect more and more from these kinds of toolkits. Emulsify, for example, now includes a living style guide that serves as both a reference for design patterns and the actual code, the markup and CSS, that powers the site. So when you make a change to your style guide, it makes a change to the site itself, the code itself. And living style guides give all of your teams a single access point to know how to style their content: the right typefaces to use, the colors, the sizes of things, how interactions happen, the button colors, all of that. Content creators are going to start demanding really robust tools like that as well. Prediction number four: CMSs are going to focus on specific verticals and use cases. We've already seen a lot of this specialization. You have marketing automation platforms, newsletter and email management tools, constituent and campaign management platforms, things like Sailthru for email and Personify for campaign management.
And all of these tend to be especially strong in a set of verticals. But what's really interesting is the rise of CMSs that are specifically targeted at media, entertainment, and publishing. So here's Thunder. Thunder is a Drupal 8 distribution. It is sponsored, maintained, and used by Hubert Burda Media, which is one of Germany's largest publishers. As you can see, they position it as a storytelling, media handling, content scheduling application for publishers. Here we have ARC Publishing from The Washington Post. When Jeff Bezos acquired The Washington Post a few years ago, he, or somebody on his team, took a look under the hood. And much like Amazon.com is not really an e-commerce site or a place to buy things (it's actually all of the servers and tools that run the internet; it's actually AWS), in the same way, I believe Jeff Bezos took a look at The Washington Post and said, it's great that you're doing content and all, but what you really are is the CMS that you built. They took the CMS that powers The Washington Post, named it ARC, and now they're selling it. And it is a formidable opponent. So if you work in the media, entertainment, and publishing space, you have probably already had to compete against ARC. And we have. We've won sometimes and we've lost sometimes, and we're actually starting to work directly with ARC on a project. But what's really interesting about ARC is that they target newsrooms. And this is what really appeals to the people who choose ARC: it's not just about content management, it's about managing a newsroom, which has its own unique kind of workflow. So if anybody here has ever worked in newspaper or magazine printing, you know all about the back-office workflows: this copy gets generated at this desk, then it's sent over to the crime desk or wherever, it gets approved there, and then it goes to layout to be set on the page. That kind of workflow still exists within newsrooms. That's the mindset.
Tools like this are built around existing expectations and workflows. Additionally, ARC and Chorus from Vox are very, very focused on content monetization, and they do some really interesting things. Some of them claim to be able to automatically throttle things like paywalls and regwalls. A paywall is the barrier that you hit when they say, hey, if you want to keep reading, you have to pay for the content. A regwall says, if you want to keep reading, you need to at least give us some personal information: create an account on the site, give us your email address, whatever. By throttling, they mean they're using machine learning to figure out, hey, this article is really popular, you should probably extend that regwall or paywall by a couple of articles while a bunch of people are piling into the site. Let them really enjoy the content, show them a couple more pieces of content that they might be interested in, and then hit them with the regwall, rather than hitting them with it right away on a viral article. And it's doing all of this automatically. So rather than being content management systems, some of these platforms think of themselves more as content monetization platforms. Chorus is one of these. This is what Chorus has to say about itself, quote: "Chorus is the only all-in-one publishing, audience, and revenue platform built for modern media companies operating at scale." Revenue platform. There are CMSs being built today that are specifically focused on that first, not content. And as you may have noticed, there's a subtrend here: all three of these CMSs were created by large media companies and made available to the public, through open source in the case of Thunder, or through proprietary licensing in the case of ARC and Chorus. Publishing companies have also become software companies, and they're trying to reduce or recoup their costs by releasing their products externally. So in the case of Thunder, they're making a really smart choice.
They're making their distribution public, because then more people will use it, more people will support it, more people will extend it. This is exactly why we use Drupal, right? ARC and Chorus, of course, want you to pay for it. There are many, many other examples from other companies. In pretty much every country, on every continent, there's a big media company, and wherever you live, chances are that they're trying to monetize their CMS. Prediction number five: machine learning will help us manage and create content. CMSs can now read, see, and hear your content. This is a paradigm shift that will happen in the background, and you won't even be aware of it. For example, Google is already introducing machine learning into many of its tools to improve search results, suggest responses, and automate tasks. How many people here have used the little autocomplete sentence thing in Gmail lately? It's pretty good. That's because it's learned how you talk. Your CMS is going to feel out of touch if it doesn't have a deep understanding of your content, if it doesn't know what your content sounds like or reads like or looks like. But luckily, as machine learning grows in popularity, it's going to become very inexpensive and very easy to install. There are lots of open libraries and APIs for machine learning right now, and soon it will be an out-of-the-box feature or an easy addition. Machine learning is going to drastically simplify media management. Before, you had to tell your CMS where all of your content belonged. You had to tag it and file it and store it. But with the help of machine learning, your CMS will add metadata to that content and perhaps even file it away for you, more quickly and more accurately than you ever could. It's going to look at an image and know what's in it. It's going to watch a video and know what happens. This is especially true of images, videos, and sound files.
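A sketch of what that auto-tagging might look like inside a CMS's media pipeline. Here detect_labels is a stub standing in for a real image-recognition service (services like Google Cloud Vision or AWS Rekognition return label lists along these lines), so the surrounding plumbing stays runnable:

```python
def detect_labels(image_path):
    """Stub for an image-recognition API; a real model would infer these
    labels from the pixels rather than returning a fixed list."""
    return ["dog", "beach", "sunset"]

def enrich_media(image_path):
    """Turn raw labels into the metadata an editor would otherwise type by hand."""
    labels = detect_labels(image_path)
    return {
        "file": image_path,
        "tags": labels,                               # searchable keywords
        "alt": "Photo showing " + ", ".join(labels),  # accessibility win
    }

print(enrich_media("vacation-photo.jpg"))
```

On upload, the CMS calls something like enrich_media and files the asset with tags and a draft alt text already attached; the editor reviews instead of typing from scratch.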
Machine learning can elevate a basic CMS to the level of an enterprise digital asset management system. So if you're paying for a DAM, there's a good chance that that subscription is going to go away. Image detection can auto-tag images, which helps editors search within image libraries but also enhances accessibility; it can auto-populate a lot of the metadata required for accessibility. Natural language processing, NLP, can create transcripts of video and audio files to help improve search results and, again, improve accessibility. Now, it's not all good news. Machine learning is a product of its training. It is by definition reinforcing the biases that have been introduced to it. So unless you use a broad dataset and constantly teach it (yes, that's right; no, that's wrong), it will by definition reinforce bias. And all you have to do is Google "machine learning bias" and you will see some terrible, truly offensive examples of machine learning getting it very, very wrong. CMSs are also going to create content using artificial intelligence. In September 2017, Digiday reported that The Washington Post had published 850 AI-generated articles in its first year of operation. This included 500 articles about the 2016 U.S. election (don't get me started on bots and the election), which generated more than half a million clicks: quote, "not a ton in the scheme of things, but most of these were stories the Post wasn't going to dedicate staff to anyway," end quote. So there's a bunch of stuff out there that newspapers aren't going to invest reporter time in, but that they can send a bot to go do. You want to know what it looks like? This AI published a lot of stories about high school football games. Here's an example, an AI-generated play-by-play of a high school football game one Friday night: "The Landon Bears shut out the visiting Whitman Vikings 34-0 on Friday. Landon opened the game with a 90-yard kickoff..." and so on, blow by blow.
This is created using artificial intelligence. The New York Times, the Associated Press, Reuters, and Yahoo Sports also use AI to write stories, and this March, the Press Association, which is a UK news agency, claimed that they can publish 30,000 local news stories per month using AI. So we've gone from 850 in a year to 30,000 per month in two years. CMSs will also interpret the emotional tone of a story and react accordingly. Now, as some of you may be aware, some publishers are already using sentiment analysis when they display ads on their website, primarily to avoid embarrassing placement. So for example, if an algorithm determines that an article is critical of Dow Chemical, their advertiser, Dow will not have an ad displayed on that page. If it's positive, they will display that ad for Dow and in fact charge them more. But here's a more unusual example. We wanted to see just how easy it would be to use machine learning to generate content. So for DrupalCon Seattle last April, we built something called Happy Quake. This is a Drupal-powered website that turns happy memories into shareable postcards. So here: "I ran into an old friend. I'm about to go visit him in Costa Rica in June, and I'm really excited about that." There are images in this warm tone, and friends getting together in nature, and going to Costa Rica. So how did we do this? First, somebody approached our booth, and we said, hey, give us some happy memory from the last couple of weeks, and they said, okay, I ran into an old friend, I'm about to go visit him in Costa Rica, etc. Two sentences, three clauses. We then identified the entities and syntax. We used an openly available natural language API from Google and sent it these two sentences, these three clauses, and it came back and said, hey, here are the entities that we've identified in the statement: it's "friend" and "Costa Rica." The salience of "friend" is really high, and that entity type is a person.
"Costa Rica" salience is there but relatively low, and it's a location. So then it suggested some images. What we did is took that phrase and these entities and sent them to the Creative Commons-licensed image library called Unsplash. Unsplash sent back a ton of images, and the editor, the person who submitted this memory, was then able to choose four images from the pool that was delivered. So here we see "old friend" was a search that was run on Unsplash, and then "Costa Rica" was run too. The results were just okay, but it worked. It's not perfect. Then we needed to determine the sentiment, neutral or positive, so we used natural language processing again, and it returned 0.4. 0.4 means it's moderately positive. So what we then did with that score was we upped the saturation of the color. For all the images that they chose, we just bumped the saturation up a little bit, because it's a happy memory. So it's brighter. Right? Then we looked at the category. What kind of category is this memory? Well, there were these predetermined categories, built using a sample set of 20,000 happy memories: bonding, affection, enjoy the moment, achievement, exercise, leisure, and nature. This ranked at 99.5% bonding. This is a memory about bonding, bonding with a friend, going to Costa Rica. We then applied an image filter based on bonding. It gives it kind of a rosy hue. So on the left you can see the before image, and on the right you can see the after image, which then gave us the final product. So here's another example: "I had a wine tasting yesterday. They were from different regions, and I had a glass of white wine. They were different sizes, and each was different." So this was submitted to Google's natural language processing. It returned some images from a third-party image library. The editor picked four images that they felt were most appropriate for this memory. It created a postcard.
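The core of that pipeline, salient entities becoming image-search queries and a sentiment score driving a saturation tweak, can be sketched in a few lines. The analysis call is a hard-coded stand-in for Google's Natural Language API, and the salience numbers and the 0.5 saturation coefficient are illustrative, not the demo's actual values.

```python
# Postcard pipeline sketch: salient entities become image-search queries,
# and a sentiment score in [-1, 1] drives a saturation boost.
# analyze() is a stub; the demo used Google's Natural Language API.

def analyze(text):
    """Stand-in returning entities with salience plus a sentiment score."""
    return {
        "entities": [("old friend", 0.71, "PERSON"),
                     ("Costa Rica", 0.29, "LOCATION")],
        "sentiment": 0.4,  # moderately positive, as in the talk
    }

def image_search_terms(analysis, min_salience=0.2):
    """Sufficiently salient entities become queries for the image library."""
    return [name for name, salience, _ in analysis["entities"]
            if salience >= min_salience]

def saturation_factor(sentiment):
    """Happier memory, brighter image; 1.0 leaves the image untouched.
    With Pillow this would feed ImageEnhance.Color(img).enhance(factor)."""
    return 1.0 + max(sentiment, 0.0) * 0.5

result = analyze("I ran into an old friend. I'm about to visit him in Costa Rica.")
print(image_search_terms(result))              # → ['old friend', 'Costa Rica']
print(saturation_factor(result["sentiment"]))  # → 1.2
```

Neutral or negative memories pass through unchanged, which matches the behavior described next.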
It changed the saturation level based on the sentiment; I think this was a neutral memory, so it didn't do much with the saturation. And then it applied, I believe, the activity category filter, so it turns kind of a bluish green. This is useful because if you have an article that's really somber, you don't want a bunch of bright, happy images to appear in it, right? You might want to tone down the color a little bit, or at least find some more appropriate imagery. The end result here is a piece of content: a happy memory on a postcard that was collaboratively created between a human and a computer. The human supplied the memory, the computer suggested images, and the human made the final choices. Then the computer measured the emotional tone and the quality of the memory and manipulated the images to match the mood. What were we trying to get at here? Like, okay, who cares? What we were trying to find was an actual problem that people run into in publishing. And that is: I just wrote an article, I need to get it on the site right away. This is timely, this is breaking news. But I have to put an image on there. Now I have to go sort through my image library, and it has to be relevant, and, you know, our guidelines say you can't publish an article without an image because then it won't get any clicks. This can just automatically populate a relevant image and adjust the tone and do post-production, maybe even cropping, based on the sentiment of the article. You can do this right now, and we built this in like two days.

Prediction 6: Reality will be augmented. So if you've seen me speak in the last couple of years, you've probably heard me talk about how big VR is going to be. Well, I got too excited about all of that. VR will have strong adoption in gaming, training, and industrial applications, but to the average person it's going to be a novelty, and we're going to use it a couple of times and say, that makes me dizzy, I don't like it, thanks anyway.
But it's still important. So for example, VR is being used in combination with cognitive behavioral therapy to treat combat PTSD and some phobias. If you're interested in that, I can tell you about how that works later on. AR, augmented reality, the blending of what you see in front of you with some stuff added to it, is already quietly infiltrating our lives, and has been, frankly, for a long time. Its potential is really huge, and that's precisely because it's so useful, yet subtle and insidious, and I mean that in like the most literal definition of the word, I don't mean evil. I mean it's insinuating itself into all the little corners of our lives. This is Google Lens. A recent CNET article that reviewed this app said it feels like a pair of smart glasses without the glasses. The camera-enabled app can already be used for object recognition, translation, shopping, and recognizing the world. And in this example here, Google Lens is highlighting a menu's most popular choices based on information it's gleaned from various restaurant rating apps. And then that links to Google Maps photos and details about those dishes. So this is a really popular dish, as reported on Foursquare, or probably Google's Places stuff. And here's a link to a photo of it. All you have to do is hold the phone up to the menu. Now, Google's been touting this example for a while. This is like, hey, wouldn't it be cool to see a space suit by the Bay Bridge? So imagine that you're on a Wikipedia-like page. This looks obviously like Wikipedia, right? But you're on a Wikipedia-like page, you're just reading about a space suit, and you're like, hmm, I don't know what that looks like. So here you are on a desktop, you click this little asset here on the bottom right, and it shows you a 3D model, and you use your cursor to kind of move it around and look at it. Oh, that's neat.
But then on a mobile device, now it knows that you're on a mobile device, and it knows that there's some AR potential here. This is animated there. So a different icon appears on the bottom right, an AR thing. So you can tap on that and then just drop it into your room, move it around, and take a look at it at life scale. Right? Now, this is the demo that they had been talking about for a few months, like, oh, you know, in the future we could do something like this. They did it. They did it. So last month, Google started including 3D models and AR experiences in search results. You can try this right now. I'll show you an example in a moment. We'll get there. AR and VR experiences are often three-dimensional, so it's no surprise that we're seeing an explosion of 3D assets. This is content. 3D assets are content. Here is Sketchfab. It's a massive marketplace for 3D assets. It's like Shutterstock, but for 3D models and 3D animations. So users upload things that they make, and they can give them away for free, or they can charge $5 or $10 or whatever to download and use, in the same way that a photographer can use Shutterstock. This also extends to 3D printing and fabrication. So here's Pinshape, which is kind of like Sketchfab and kind of like Shutterstock, except these are plans for 3D-printable fabricated objects that you can make if you have a 3D printer. So here's a fully assembled 3D-printable wrench, with a little workable, I don't know what you call it, that little thingy that makes it open and close, all built out of one piece downloaded off of this site. When I was preparing for this talk, I asked our team: tell me what you think the future of content is.
And the most interesting response I got was from one of our support developers, Chris Martin, who said he felt the new space race, this initiative of, like, let's colonize the moon, let's go to Mars, whatever, is going to result in an explosion of 3D printing technology, because you can't take everything you're going to need up there. You have to innovate, you have to use new materials, you have to bring the raw components and the machines that help you build things, so that the moment you need that wrench, you don't have to have flown up there with a wrench; you make the wrench that you need, and now you have it. So think about all the innovations that came out of the first space race, all the material science and energy technologies that we got; the same kind of thing will happen with 3D printables. And this is just content. 3D objects are just content that has to be maintained and viewed and experienced on websites and phones and other apps.

Prediction 7: Content delivery is going to be context-specific, meaning your device, your mood, your experience. iTunes is going away. iTunes the app is getting shut down; it is being replaced with multiple applications that each do separate things. There's going to be a podcast app, a TV app, a movies app, and a music app for your macOS devices. Those of you who have been using iOS, this happened a long time ago for you. So this is a great example of this shift to narrow context. When iTunes was first launched in January 2001, it was a game changer. It was the go-to music store and MP3 library tool. Then it added other things: movies, TV shows, podcasts. Of course, podcasts were invented for and named after the iPod. 18 years later, iTunes is, quote, "a relic of a different era in which people bought all of their music and movies in one place." Now it's being split into all of these different channels.
Apple's announcement, by the way, doubling down on creating a podcast app for macOS, comes shortly on the heels of Google adding playable podcast episodes to search results. So if you just search for, say, Reply All, you're going to get a little area there that has the three most recent episodes, and you can play them right then and there. It's searching within the podcast. It's transcribing the audio, getting natural language text, searching within the podcast itself, and allowing you to then play directly, well, sort of directly: you click on a thing, and then you go to, guess what, Google's podcast platform. So again, they're trying to funnel you into their context. Try this a little later: at some point today, on your laptop or desktop computer or whatever, search for "dog" and you'll see some stuff about dogs. That's neat. Do it on your phone and search for "dog," and you're going to see, oh, view it in 3D. So there's a 3D model of a dog. That's interesting. Oh, here it is. And you can kind of spin the dog around and take a look at it, and then, oh, I'm going to tap that AR button and just drop it on the floor and look at it at real scale. I did this two nights ago in the hotel room. You can look up all kinds of stuff. Lions, tigers, bears.

Finally, distribution channels, of course, will be restricted and monetized. Streaming services are going to be built around exclusive content. We've known this for a while, but boy, is it accelerating. So many content producers are launching their own streaming services rather than partnering with existing channels like Netflix and Hulu, who of course are creating their own content as well. CBS launched All Access in 2016-ish, which provides access to streaming-only content like the new Twilight Zone and Star Trek: Discovery, and I believe the upcoming Picard. Anybody? It's good. Get excited. By August 2018, two years later, it had 2.5 million subscribers, essentially because of Star Trek: Discovery.
In January this year, NBC announced that they are going to enter the streaming wars, and this means that they're probably going to end their deals with Netflix and Hulu. So if you want to watch The Office, you'd better pay NBC directly. There's money in podcasts. So don't let their DIY roots fool you: podcasts are indeed serious business. This month, the Interactive Advertising Bureau, the IAB, reported that podcast ad revenue in the United States increased 53% in 2018, totaling $480 million annually. They expect that ad revenue to grow another 42% this year, to $680 million U.S. Podcasts used to be platform agnostic, right? It was just an RSS feed with a little bit of information. You download some MP3 files and listen to them when you're ready. But they're consolidating on paid platforms and applications with exclusive content deals. Who here listens to the podcast Criminal? Anybody? So Criminal did a temporary exclusive deal with Spotify or Stitcher, I forget which of the two, where you had to go to that app to listen to the most recent season. That was just the beginning. In September 2018, iHeartMedia acquired a podcast company called Stuff Media. The public thinks that they paid about $55 million U.S. This is a podcast-producing company. In February of this year, Spotify acquired Gimlet Media, which is kind of a spinoff from This American Life, and Anchor, which is a podcast creation, distribution, and monetization platform. It's believed that those two deals totaled something like $200 million U.S. According to Spotify, Gimlet and Anchor are just the beginning of its podcast acquisition spree: in its Q4 shareholder letter, the company revealed that it was ready to spend up to $500 million U.S. on similar merger and acquisition activity throughout 2019. Podcasts are becoming first-class content citizens, and they are worth a ton. Final word.
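As an aside, that platform-agnostic starting point really was this simple: a podcast feed is just RSS with an MP3 enclosure per item, and the standard library is enough to read one. The feed below is made up for illustration.

```python
# A podcast feed is plain RSS 2.0: items with <enclosure> elements
# pointing at audio files. This made-up feed mimics the real shape.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Show</title>
  <item>
    <title>Episode 1</title>
    <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="123"/>
  </item>
</channel></rss>"""

def episodes(feed_xml):
    """Return (episode title, MP3 URL) pairs from an RSS feed string."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.find("enclosure").get("url"))
            for item in root.iter("item")]

print(episodes(FEED))  # → [('Episode 1', 'https://example.com/ep1.mp3')]
```

The consolidation described above happens in the apps and the exclusive deals; underneath, most shows still publish a feed shaped like this.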
Some things you ought to know that are happening behind the scenes that don't really fit in any other category except monetization. When you download podcasts, especially through a closed network like Stitcher, Luminary, or Spotify, and perhaps also when you're using third-party apps like Pocket Casts or maybe Apple Podcasts, ads are being dynamically injected into those MP3 files. It's taking a clip of audio and slipping it into that MP3 file as you download it, based on anything they know about you. If they know that you like these other podcasts and you have certain listening behaviors, they're going to factor that in. They know where you are, because you granted location access to that app. So they might give you an ad that's local, or something that's relevant to your country or state or province. So just know that. If those podcast ads seem oddly relevant, it's because that's not a broadcast; that's sent specifically to you at the moment you downloaded it. And finally, this one's a little creepy. We'll see what you think. So when you visit a site, you're given an ad, several ads, but let's just talk about one. You're given an ad. Did you know that that space on the site, that little rectangle of attention that you're shown, is auctioned off in real time based on everything the site and ad networks know about you? Within milliseconds, there's a live auction happening between computers saying, I'll pay $1 for that, I'll pay $1.25, $1.30, $1.35. That's happening within milliseconds, and then, boom, you see the ad. So the ad that you're seeing was the highest bidder for your attention. This is commonplace now. This isn't even advanced advertising stuff. This is what's just expected for publishers these days.

Okay. Let's summarize very quickly. CMSs will be content repositories, not website managers, focusing on content. Content will be extensible and modular. Just add some fields.
Add a content type. Content creators will finally get the tools they deserve. We're going to hear less of "I want WordPress because the editing experience is great." Instead we're going to hear, "I want this great editing experience, can it be attached to that website?" CMSs will focus on specific verticals and use cases: Thunder, Arc, Chorus. Machine learning will help us manage and create content. Reality will be augmented, though not necessarily virtual. Content delivery will be context-specific: you're searching for a podcast, it will give you a playable podcast. And distribution channels will be restricted and monetized. I don't think we have time for questions, so that'll be it for me, but if you want to grab me in the hall later, I will be around. Thank you so much for this opportunity. I really appreciate it. I look forward to the rest of the event.