Good morning everyone and welcome to your first session of DrupalCon Baltimore. I'm Josh Mulliken, this is Toby Hagler. We'll be talking about building a platform for NBA.com on Drupal 8. The theme of the latest incarnation of NBA.com was, and continues to be, nothing between the fan and the game, and you'll see that theme repeated throughout the presentation. For this morning's session we're going to talk about four key areas that really helped us meet this goal. First of all, it's important to know that NBA.com is more than just a desktop website. It's more than tablet and mobile. It needs to support other devices: PlayStation, Roku, Apple TV, smart TVs. It's all about providing a very consistent digital experience for the fan. And content is king on NBA.com. We have to integrate live scoring data, editorial content, live video, and edited-down video clips. Video, of course, is very crucial; it's everywhere on the site. Video and content need to be something that editors can bring in in a very fast-paced and efficient way. The editors have to be able to knock down the wall between the fan and the game and get the content to them as quickly as possible. And serving something like basketball, the site has to be fast. From editorial performance to front-end performance to server-side Drupal performance, everything has to get there as quickly as possible. So there are a lot of different pieces that make up NBA.com: live scores, game and team data, editorial content, video. Those are crucial things to the website, and there are some services that are just going to handle them better than Drupal can, like live scoring and video processing. Plus, that's just where that data lives — that's where some of this stuff originates, and a lot of it gets reused and syndicated through other Turner systems.
So it's important that those things continue to live where they've always lived, because other systems besides Drupal are going to need them. But the nice thing is that Drupal 8 specifically plays really nicely as part of a multi-tiered content stack. It works really well in a content ecosystem; it doesn't mean that everything has to be managed in Drupal. It can play well with others — it's a good team player. One of the themes of Drupal 8 was getting off the Drupal island: integrating with Symfony, incorporating outside components. That's just one of the things Drupal 8 does really well, so we tried to let Drupal play to its strengths. All of this is designed to seamlessly bring the game closer to the fan, regardless of where the content originates or lives, which we're going to show through the rest of this presentation. One of the initial thoughts was that we were going to do decoupled or headless Drupal. We knew we were going to rely heavily on Drupal-themed components, though, so it made sense to let Drupal take at least some of the theme work — as much of the site as it knew about anyway. In most cases, Drupal is responsible for rendering about 60 or 70 percent of any given page on the site, and in the slide it's color-coded to show where some of the content originates from. So the render stack basically looks like this: Drupal renders the content on the page that it knows about, at origin. Then the page travels through the CDN, which is responsible for merging Edge Side Include fragments on the edge and handing that on in a very efficient manner. And then live content and other external data are assembled into the page using Angular 2 apps. You also want to keep in mind that PHP is essentially single-threaded.
So the use of ESI and Angular 2 lets us assemble these things in a very effective manner. We're able to get multi-threading, essentially: since PHP is responsible for building some of these other fragments at different times, we're able to offload a lot of that out of the initial thread. Now, we started developing NBA.com on Drupal 8 in late 2015, early 2016, and none of the technologies we wanted to use were actually done. The theme of this slide is pretty much nothing between us and abject failure, because Drupal 8 wasn't even in alpha yet when we got started, PHP 7 was not quite out of the gate, Angular 2 was in alpha and had some big breaking changes after that, AWS wasn't playing great with Docker yet, and the modules for Redis and some other things weren't ready for Drupal either. So it was a very interesting world, developing while all of the technologies we were building on were also being developed. With all of these technologies in place, once we'd worked through the kinks of getting them to work together, we could focus on the editorial experience and content management aspects of building NBA.com, so that we were able to bring a finely curated game experience to the fan. It's important to note that we didn't want to limit ourselves to the traditional "page," in quotes. So much of what's on NBA.com is fluid, in that what Drupal renders initially and the final pages that fans receive are not the same thing — there are continuing enhancements to that page along the rendering stack. The site's not made up of a traditional set of node pages and views and terms. Literally anything should be able to be placed anywhere it makes sense on the site, given the context of what the fan is looking at at that given time, regardless of where it comes from.
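A page skeleton under this render stack might look roughly like the sketch below. The element names, fragment URLs, and attribute values are invented for illustration, not NBA.com's actual markup:

```html
<!-- As the page leaves Drupal at origin (illustrative) -->
<article>
  <!-- Rendered by Drupal: the editorial content it knows about -->
  <h1>Thunder at Rockets: Game 5 Preview</h1>

  <!-- Merged by the CDN on the edge via Edge Side Includes -->
  <esi:include src="/esi/fragments/top-stories" />

  <!-- Live data assembled client-side by an Angular 2 app -->
  <nba-scoreboard game-id="example"></nba-scoreboard>
</article>
```

Drupal caches and serves the shell, the CDN splices in the ESI fragment on the edge, and the live scoreboard fills in last, in the browser.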
And that really is in keeping with the theme that nothing should stand between the fan and the content, no matter where it comes from. To do this, one of the tools in our arsenal was heavy use of the Paragraphs module. In a lot of ways this frees the editors from having to think about things like templates and layout — from being constrained to those very limiting concepts — and gives them a lot of power to add whatever piece of content makes sense at the time. So Paragraphs leads to componentized content. These components then let us render pieces of content either directly into the page or for reuse later as ESI fragments. Each piece of content can be rendered at the appropriate point in the page assembly, whether it's Drupal rendering the page, ESI tacking it on later, or JavaScript pulling fragments in. So there's very much this modular layout concept where you can mix and match content at will based on how content relates to each other. The next big concept that we had to come up with for NBA.com was our content collections. Our editors work very fast to keep up with the games, and they can't meticulously curate every collection of content they want — but they don't want to lose the power to do so. So we developed content collections, which combine the functionality of a node queue and views.
The editors are able to select any number of content items they want pinned to the top of a collection — that's the node-queue-type part — and then, for the views-type functionality, below that they're able to say "I also want to load anything that's tagged with these four taxonomy terms." But maybe there's a sponsored section of content they don't want to include, and they can exclude other content or other taxonomy terms as well. So they're able to get really fine-grained control but still have content show up as it's published, without having to manually hand-hold a node queue. And then video, of course, is very important, and this is one of those places where we had to have Drupal be a good neighbor to other systems. Turner Broadcasting is at its core a broadcast company. We handle video, we handle lots of it, and we have existing systems that handle video. So our challenge here was to make sure that Drupal can hold on to some of the metadata about a video and make the Drupal ecosystem aware that the video exists, but play nice with the video itself being handled by our outside encoding systems and stored on our high-availability CDNs — because honestly, at the scale of video we're serving, Drupal would fall over if it had to handle all of it. That leaves League Pass integration — I forgot what we were going to say about that. So there are several types of video that come into play. There are lots of video clips — "man, did you see that foul last night? That was terrible" — that's the kind of thing fans want to see, all the way up to watching live games. This was actually last night's game, in Mosaic view, where you can watch the game and keep the camera on the leader for each team, and even get the goal shot. You can watch video in a lot of different ways, all the way up to League Pass, which is a pay service that the NBA provides.
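The pin-plus-query merge that collections perform can be sketched like this. The item shape, field names, and newest-first ordering are illustrative assumptions, not NBA.com's actual schema:

```typescript
// Sketch of a "content collection": hand-pinned items on top (node-queue
// style), then a tag-driven feed (views style) with excluded terms removed.
interface ContentItem { id: number; terms: string[]; published: number; }

function buildCollection(
  pinned: ContentItem[],   // hand-picked items shown first
  pool: ContentItem[],     // all published content
  includeTerms: string[],  // "anything tagged with these terms…"
  excludeTerms: string[],  // "…but not the sponsored section"
  limit: number,
): ContentItem[] {
  const pinnedIds = new Set(pinned.map((i) => i.id));
  const feed = pool
    .filter((i) => !pinnedIds.has(i.id))                          // no duplicates of pinned items
    .filter((i) => i.terms.some((t) => includeTerms.includes(t))) // include by tag
    .filter((i) => !i.terms.some((t) => excludeTerms.includes(t)))// exclude by tag
    .sort((a, b) => b.published - a.published);                   // newest first, no hand-holding
  return [...pinned, ...feed].slice(0, limit);
}
```

New content that matches the include terms flows in automatically as it's published, while the pinned items stay put.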
I think there are some television cable entitlements involved, and all of that integrates to let the fan truly get immersed in video, in keeping with having nothing between the fan and the game. So in order to bring all of this content to the fan, one of the most important things is the editorial experience. Basketball games are fast-moving, and the face of the game can change in a split second. Just as live games can change momentum, the editorial staff has to keep pace just as fast, right? Really, nothing should stand between the editors and bringing the game to the fans. To that point, we dedicated a pretty serious chunk of manpower to the editorial experience itself, not just to the fan-facing website. We even had developers sitting in a control room during launch, so that as games were going on and things were occurring, we could react in real time and make improvements on the fly. In many ways, the editorial UX was just as key to the project as the website itself. The editorial experience is so important that we're still making continuous improvements to it, just as we keep adding more features to the website. One of the key things that we use for the editorial experience, of course, is Paragraphs for layout. I don't know how many people are actually familiar with the Paragraphs module and use it for content layout, but it's pretty powerful. Essentially, you can think of it as a container of fielded data that lets editors pick from a variety of content types that they can swap in and out on the page. Based on the context of the page you're trying to build, you can arrange things on a one-off basis — because editors really shouldn't care what an entity's type is; they just want it, and they want to be able to drop it in quickly.
So Paragraphs lets you drop content in; you can nest paragraphs if you want, you can drag and move things around, but essentially it just lets editors have that content at their fingertips. We even used a lot of paragraph fields for configuration, so each individual block on the website, for instance, can have a lot of tweaks made to it, letting editors rapidly adapt to the content's needs. A lot of people ask, why no Panels? Quite simply, it just wasn't ready in early 2016 for what we wanted to do. Beyond that, the editorial staff wanted a very simple and minimal UI. They desired speed over power, because that was the ultimate thing the editorial staff needed. So, in keeping with this fast pace, editors need media at their fingertips, and we developed a couple of really powerful tools: the content bin, which we'll see in a second, and the ability to embed media and syndicate content. In this screenshot, what you see is just a typical node edit page. We have a content bin — a drawer that slides in and out using a little bit of JavaScript. Inside that content bin, it uses views to search for different media entities and other pieces of content. Using a little bit of custom theme magic, we're able to show and hide the content bin on any node edit form, and then, using super simple HTML5 Drag and Drop API markup, we're able to make those media entities draggable out of the content bin and directly into the WYSIWYG. There are no custom WYSIWYG plug-ins — nothing really all that fancy. It just kind of worked because of the Drag and Drop API. And then we make use of the Embed module. The Embed module and the Entity Embed module together let you embed any sort of entity on the website, whether it's another node, a media file, or whatever.
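The drag-and-drop wiring described here can be sketched as below. The embed markup loosely follows Entity Embed's data attributes, and all selectors, names, and the markup format itself are illustrative assumptions:

```typescript
// Sketch of the content-bin drag-and-drop: no custom WYSIWYG plugin, just the
// HTML5 Drag and Drop API putting embed markup on the dataTransfer.
interface MediaEntity { uuid: string; type: string; }

// Build the embed token that lands in the WYSIWYG (format is illustrative,
// loosely modeled on Entity Embed's data attributes).
function buildEmbedMarkup(m: MediaEntity): string {
  return `<drupal-entity data-entity-type="${m.type}" data-entity-uuid="${m.uuid}"></drupal-entity>`;
}

// Browser-only wiring: mark each bin item draggable and attach the embed
// markup on dragstart, so a drop into the editor inserts parsable markup.
function wireContentBin(bin: HTMLElement): void {
  bin.querySelectorAll<HTMLElement>("[data-entity-uuid]").forEach((item) => {
    item.draggable = true;
    item.addEventListener("dragstart", (e: DragEvent) => {
      const m: MediaEntity = {
        uuid: item.dataset.entityUuid!,
        type: item.dataset.entityType ?? "media",
      };
      e.dataTransfer?.setData("text/html", buildEmbedMarkup(m));
    });
  });
}
```

Because the payload is structured embed markup rather than raw rendered HTML, the dropped token stays parsable downstream.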
And it keeps those things as fielded data, so that even though you're embedding things directly into the WYSIWYG, it's completely parsable — it's not just dropping a bunch of markup in there. So an editor is able to find what they want, drag it into the WYSIWYG, and go. What's more, the content bin has another tab to let you search for content upstream; it doesn't necessarily have to live in Drupal. One of the things we did was make the same content bin allow an editor to search for content in an external media management system — something that imports from, say, Getty Images. Yeah, Getty is a really good example. A lot of upstream syndication systems let you search on keywords and a few other things, and you get a paginated overlay that lets you choose which image or images to import. When you import these, they're made immediately available in Drupal. So when you select this awesome dunk and import it, it's going to be at the top of your content bin right away. The nice thing is, because of this system, the editors never actually lose their place. They're not having to go to another page and then come back and save changes; it's all happening right there in the interface. These media items get pulled in as just regular media file entities, directly in Drupal, so the next time you need one on another story, it's already in Drupal, in the content bin. All right — we also started very early on with Angular 2. Game pages in particular need a lot of data from a lot of places, and it doesn't make sense to grab live game data that's stored in an external system, pipe it through Drupal on every page load, and push it out to the user; that's going to take forever.
We went back and forth with a bunch of front-end frameworks and ended up settling on Angular 2, which we briefly regretted early on — until they got to release candidate 6, which fixed all of the problems we had been talking with the Angular team about, mostly that early on it was pretty much focused on monolithic single-page apps. But it did a really good job of pulling content from our structured data systems for live data, getting updated content from Drupal, and going out to our other media systems to grab video content. It just worked really well for us. And then there's the Redux model of data storage. One of the things you run into any time you're dealing with live data, especially with client-side assembly, is that fetching live data can be very taxing. Until a score changes, there's no need to keep requesting data over and over again just because a user has clicked through to another game, right? So the data that you pull in using Angular gets put into a local data store, and this data store travels with you throughout the site. Angular stores this data in local storage and carries it across multiple pages. It also shares the same data with other Angular apps. If you're on the homepage, for instance, it's going to load up the entire schedule of games for that day along with any scoring data that it knows about. Then when you go to a game — say you go to the Thunder game and you want to watch that video — the same data is carried over, for that game's schedule information as well as the scoring information. And if you have multiple apps on the site, or even multiple apps on the same page, they're all sharing the same bit of data, so there's no need to go and fetch it again.
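A minimal sketch of that shared store idea is below. The names, data shapes, and persistence hooks are assumptions for illustration, not the production code:

```typescript
// Redux-style shared score store: one store instance shared by every app on
// the page, with serialize/hydrate so the data survives navigation.
type Scores = Record<string, { home: number; away: number }>;
type Listener = (s: Scores) => void;

class ScoreStore {
  private scores: Scores = {};
  private listeners: Listener[] = [];

  // Any app on the page can subscribe to the one shared store.
  subscribe(fn: Listener): void { this.listeners.push(fn); }

  // Called when polling detects a change; every subscriber re-renders.
  updateScore(gameId: string, home: number, away: number): void {
    this.scores = { ...this.scores, [gameId]: { home, away } };
    this.listeners.forEach((fn) => fn(this.scores));
  }

  // New page loads hydrate from the persisted copy instead of refetching,
  // so the scoreboard renders instantly and is at most seconds stale.
  serialize(): string { return JSON.stringify(this.scores); }
  hydrate(raw: string): void { this.scores = JSON.parse(raw); }
  get(gameId: string) { return this.scores[gameId]; }
}
```

In the browser, `serialize()` would be written to localStorage on each update and `hydrate()` called on page load; that wiring is omitted so the sketch runs anywhere.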
Also, when one app says, "hey, the score's changed, let me update the score," it's immediately available to any other Angular app on the page. That's just one more way this keeps the fan closer to the live game experience and allows them to seamlessly track games. Here's an example of where the Redux model comes into play. If you look at the left-hand side of that screenshot, this is a game page for a live game, and when the user came to this page, the schedule was able to render very quickly on the new page load, because all the data to build it was already in the local data store. We didn't have to make any additional API or data calls, because the page already had the information it needed. That also means that when you have that brief flash of out-of-date data on a new render, it's a couple of seconds old, instead of however old the cached version of your page on the server is. Okay, this gets into the part I'll be speaking a little more about. Early on, we started on this before Drupal began the whole API-first initiative, and early Drupal 8 didn't have a whole lot available when it came to powerful APIs. We also decided we didn't want SQL queries getting between the fan and the game in generating our content. So we set it up so that we actually denormalize all of our data as it's updated and push it into an Elasticsearch instance, which is the backing store for our content API. This allows us to build as many different microservices as we want and need for various types of APIs. We have a standard JSON-output content API that we built, and we've actually already created a version 2 API which sits alongside it.
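The denormalize-on-save step described here can be sketched like this. The entity shapes, field names, and the index endpoint are illustrative assumptions, not NBA.com's schema:

```typescript
// On every content change, flatten entity references into one self-contained
// JSON document and index it, so the content API never joins at read time.
interface NodeRecord { id: number; title: string; tagIds: number[]; authorId: number; }
interface Term { id: number; label: string; }
interface Author { id: number; name: string; }

function denormalize(
  node: NodeRecord,
  terms: Map<number, Term>,
  authors: Map<number, Author>,
) {
  return {
    id: node.id,
    title: node.title,
    // References resolved to full values at write time, not query time.
    tags: node.tagIds.map((id) => terms.get(id)?.label ?? "unknown"),
    author: authors.get(node.authorId)?.name ?? "unknown",
  };
}

// On save, the document would then be indexed into Elasticsearch, e.g.
//   PUT /content/_doc/42  { ...denormalize(node, terms, authors) }
// (index and endpoint names hypothetical).
```

Every read then hits a flat, pre-joined document, which is what lets multiple microservices sit on the same backing store and stay fast.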
And this has also allowed other teams within our development group to spin up a Facebook Instant Articles service in Node.js and an Apple News service, all without bloating our Drupal code base. All of these things can depend on the denormalized data structure in Elasticsearch and get very fast results, and because Drupal updates the data in Elasticsearch every time there's a change, it stays up to date. All right — and then the cloud. This was actually our first foray into hosting anything in AWS and our first foray into Docker. Previously within Turner we were very siloed: we had a database group, we have server ops, we have other operations teams. We've still got a few people from those silos, but we've moved more toward a DevOps infrastructure and workflow, and going into Docker and AWS has really allowed us to become the masters of our own destiny — to respond quickly, really tune the infrastructure of the site, and develop both the infrastructure and the code to work with each other. Docker allows us to run the same Drupal infrastructure — the same version of Linux, the same version of PHP, the same version of Nginx, everything — on the local development stack as we're running in production. We don't have to worry about a developer's npm version being older or newer than the one we have in prod, and we don't even have to think about some of those version mismatches you get when you're doing local development in a more traditional sense. It also allows us to control our compute density and spin up or down as many or as few containers as we need to serve the traffic we're getting. Docker also makes continuous integration very easy, and we use Docker Compose to build our local environments, which is helpful. We don't have too much time to go into the intricacies of serving Drupal on AWS.
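A local stack like that is typically described in a docker-compose.yml. The services and image tags below are an illustrative example of pinning the same versions locally as in production, not NBA.com's actual manifest:

```yaml
# docker-compose.yml — illustrative local stack; every image tag is pinned so
# a developer's machine runs exactly what production runs.
version: "2"
services:
  web:
    image: nginx:1.10
    ports: ["80:80"]
    depends_on: [php]
  php:
    image: php:7.0-fpm
    volumes: [".:/var/www/html"]
  redis:
    image: redis:3.2
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

The same pinned images then feed CI and production, which is what eliminates the "works on my machine" version mismatches.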
That probably deserves its own complete session — we were actually a little saddened that there aren't any Drupal-and-AWS sessions this year; maybe next year. But if anybody is interested in sharing thoughts, come up to us afterwards, and if there's enough interest, maybe we can get together and put together a BoF. All right, another thing: everything about this has been about making things fast. We want to serve our content fast, and we want users getting to the game fast. Often that means you have to develop fast, too — there's a tent-pole event coming up, or something has broken and you've got to get things out fast. So, as kind of a bonus slide, we wanted to talk really quickly about the Git branching strategy we used to help us do a lot of things at once, not lose track of where we are, and not introduce things into production that we don't want. Because of the fast pace of development and the need to keep our mainline branches pure — as free of untested or unapproved changes as possible — we used what we dubbed "continuous flow," a derivation of OneFlow, as our Git branching strategy. If you're familiar with things like Git flow, we threw that out, because it just wasn't going to cut it.
We have a very widely distributed team, and things move very quickly. Tent-pole events like the All-Star Game, for instance, are a very good use case, because you have development going on — one developer may be doing a couple of bug fixes here and there and some feature enhancements, all the while also working on parallel work in a separate epic branch that may last a while — and so the ability to get things out into integration testing without interfering with the rest of development is really critical. The analogy we would use very frequently is that you have an integration mainline branch, a staging or QA mainline branch, and a production mainline branch, and these never intersect; they're running in parallel. As a developer works on a particular feature, they branch from master, do a whole lot of work, make things better, and merge that into integration first, so that it's up on the integration server and you're able to test it. Someone says, "ooh, no, no, look — that creates a regression with this other feature here." You don't have to worry about pulling that back out, because — think about it — once you dump something into the river, trying to get it back out is tough, let's face it. That's okay; you can let that go in the integration environment, because that's a transient mainline branch anyway. But once you get it working in integration and you're happy with it, then you merge the same feature branch into the QA branch. The QA branch is what the QA team is actually doing all of their testing against.
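The three-mainline flow being described can be sketched with plain git commands. The repository setup, branch names, and file contents below are illustrative:

```shell
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
main=$(git symbolic-ref --short HEAD)   # the production mainline ("master" in the talk)
git branch integration "$main"
git branch qa "$main"

# Feature branches are cut from production, never from another mainline.
git checkout -q -b feature/all-star "$main"
echo "all-star layout" > feature.txt
git add feature.txt && git commit -qm "all-star feature"

# The SAME feature branch is merged forward through each mainline in turn;
# mainlines never merge into each other or back into the feature branch.
git checkout -q integration && git merge -q --no-ff -m "int: all-star" feature/all-star
git checkout -q qa          && git merge -q --no-ff -m "qa: all-star"  feature/all-star
git checkout -q "$main"     && git merge -q --no-ff -m "prod: all-star" feature/all-star
```

Because integration is transient, a bad merge there can simply be abandoned; only the feature branch itself ever travels on to QA and production.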
There's a combination of automated and manual testing that occurs there, and everything that's there is assumed to be ready for production. Once QA has signed off on it, that feature branch is merged a third and final time into the master branch — the production branch. At that point it's ready for production, and you can absolutely trust that anything that's in master doesn't need to be re-evaluated for any further QA; it's ready to go. Now, the key thing is that you never merge a mainline branch into your feature branch, or into master, right? With Git flow, oftentimes the process is: you create a branch from develop, you make some changes, you try to merge it back in through a pull request, and you get a merge conflict — a big yellow exclamation point that just ruins your day. A lot of times the fix is real simple, right? You just merge develop back into your feature branch, resolve the merge conflicts locally, and push it back up. Well, the problem is you've now introduced a whole lot of unknown changes into your feature branch, which then go on into QA or stage. So we worked out a lot of other ways to resolve merge conflicts that keep things very unidirectional. All right — I don't know if there are a lot of other performance geeks in the room; I personally stay awake at night thinking about not only milliseconds but nanoseconds. So, some things we discovered to help make Drupal 8 faster. Number one: run PHP 7, and make sure you turn opcache on. Even though the underlying performance of PHP 7 is greatly improved, and there are some opcache-like things built into the core of PHP 7, in practice opcache still about doubles the performance of PHP 7. Another key thing: get cache — temporary and transient data — out of SQL. If it's not content and it's not config, you don't want it in there.
There's no need to have anonymous page requests essentially writing to the database, so get that out of SQL as much as possible. You also want to right-size your caching. My best example of this: say you've got a block that lists related content, and you've got maybe five different derivatives of that block for five different main content sections, with about a hundred pieces of content in each. If you just use the defaults and let your cache context be the URL, you're going to be rendering and caching five hundred separate copies of something that you really only have five different varieties of. So it's very important, as you're getting into tuning your performance, to learn how to create your own cache contexts and your own cache keys, and to know when your cache keys should be cleared as the related content changes — you're going to save yourself a lot of time in page rendering. Speaking of page rendering, just-in-time assembly — assembling content where it's appropriate — is really crucial too. Let Drupal render the things it knows about; don't let Drupal spend a whole lot of time worrying about things that live outside of Drupal. You can use Akamai or Varnish or any number of CDNs that support Edge Side Includes to pull things in for you and assemble them on the edge. That's going to be much faster than letting Drupal go fetch those things and try to merge them into the page.
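The right-sizing idea can be shown as a Drupal 8 render array fragment. The theme hook, key names, and cache tag are illustrative examples, not the production code:

```php
// Illustrative render array for the related-content block: cache one copy per
// section variant (five total), not one per URL (five hundred).
$build = [
  '#theme' => 'related_content_block',
  '#items' => $items,
  '#cache' => [
    // One cache entry per section variant.
    'keys' => ['related_content', $section_id],
    // No 'url' context, so the ~100 pages in a section share one entry.
    'contexts' => [],
    // Invalidated when content in this section changes (tag name hypothetical).
    'tags' => ['related_content:' . $section_id],
  ],
];
```

The win comes from choosing cache keys that match how many genuine varieties exist, and cache tags that clear entries exactly when the underlying content changes.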
It also eliminates a lot of cache issues when you use Edge Side Includes, because you can serve a cached page from Drupal and then conditionally add content based on user preferences, geolocation, and that sort of thing. Drupal's just serving the cached page, ESI adds things conditionally and delivers that to the user, and then of course Angular and other JavaScript magic finish the page for you — they add the polish. Horizontal database scaling is also something we've put a lot of work into. That in and of itself isn't necessarily a performance booster — you can also scale vertically — but especially if you're moving into the cloud, vertically scaling your databases gets very, very expensive. Before we started working on horizontal database scaling, I think about 75 percent of what we were spending in AWS was just on a really, really big Aurora DB instance. Unfortunately, horizontal database scaling is easier said than done in Drupal 8. We had to do a lot of work, as Toby mentioned earlier, to get all the transient data out of SQL so that our web containers — we'll talk about the splitting of containers in a minute — so that public traffic isn't trying to write to the database. That allows us to take the containers that are serving the public and just point them at a read-only database, and Amazon's Aurora DB gives us an endpoint that automatically scales across read replicas, which makes things very nice there. I guess I'll go ahead and talk about the role separation as well.
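The split between read-only public containers and writable editorial containers might be wired up in settings.php roughly like this. The environment variable, hostnames, and role names are invented for illustration:

```php
// settings.php — illustrative sketch of split-role database wiring.
// CONTAINER_ROLE is a hypothetical env var set per container role; the Aurora
// "reader" endpoint load-balances across read replicas automatically.
if (getenv('CONTAINER_ROLE') === 'web') {
  // Public-facing containers talk only to the read-only reader endpoint.
  $databases['default']['default']['host'] =
    'mycluster.cluster-ro-example.us-east-1.rds.amazonaws.com';
}
else {
  // Edit/utility containers keep the writable cluster endpoint.
  $databases['default']['default']['host'] =
    'mycluster.cluster-example.us-east-1.rds.amazonaws.com';
}
```

The hard part, as the talk notes, is everything that has to happen first so anonymous traffic genuinely never writes: transient data out of SQL, sessions out of the database.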
We also wanted to kind of firewall the different tasks that Drupal does, so that if the editors are doing something really heavy — we've got a couple dozen editors in there, fast and furious, adding new content, adding new videos — they're not taking up the compute from the fans who are trying to get to the new content for the game. And I don't know if you've ever dealt with Drupal cron tasks, but Drupal cron can get really heavy, so we actually separated out a third role: a utility container that runs all of the Drupal cron tasks. That's also where we put the API endpoints for other internal systems to push content into Drupal, and anything automated and back-end-ish that doesn't actually interact with users. We put all of that on that separate role so that, again, we're not bogging down the compute that's serving the fan with automated tasks they don't care about. Earlier, when we were talking about getting cache and other data out of the database to try to horizontally scale, one of the things that really helped was that we actually moved PHP sessions out of the database and into Redis. That was a particularly interesting lift, and we learned a lot about service decoration — that probably alone deserves another session. But once we were able to move PHP sessions out of the database — because even anonymous users can still potentially trigger a session that gets written to the database — that was the last key piece needed to put the database into read-only mode. So most of what you see on NBA.com is actually being served from a read-only database. Another interesting point: the Redis Sessions module that we did could potentially let you extend to other PHP session handling systems, using a lot of the built-in Symfony native session handler plumbing.
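The service decoration mentioned here might look roughly like this in a module's services.yml. The module name, class, and argument list are illustrative; check the core service IDs against your Drupal version:

```yaml
# mymodule.services.yml — illustrative decoration of Drupal's session handler
# with a Redis-backed implementation.
services:
  mymodule.redis_session_handler:
    class: Drupal\mymodule\Session\RedisSessionHandler
    decorates: session_handler.storage
    arguments: ['@mymodule.redis_session_handler.inner', '@redis.factory']
```

Decoration wraps the original service rather than replacing it outright, which is how the Drupal-specific session optimizations can be preserved while the storage backend is swapped.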
So you could use the stock PHP session management again, or the file system, or MongoDB, or Memcache — all of these things — based on service decoration and using the Redis Sessions module as sort of a template. I've actually been talking to Sascha — Berdir, as most people are going to know him — who maintains the Redis module, about helping maintain it as a result of some of this work we've been doing, and contributing back the Redis Sessions module as a drop-in replacement for session handling in Drupal. The other nice thing is that it keeps all of the optimizations Drupal has made over the last couple of major versions, including session migration, session deletion, preventing anonymous users from creating sessions that get saved, and so on. So it's quite literally a drop-in: you enable it and you're immediately serving sessions out of Redis. All right, here we've got a nice little graph of some performance changes that have happened over time on the site. Ignore the big green blob at the beginning — that was happening as we were importing all of our data from our legacy CMS, blowing it all away, and importing again. The first really interesting thing is after the big green blob falls off, where we go up that one little spike and then back down: there was some badly performing code that got put out and then fixed. And then you see that third little spike — that's actually when we went live and went from an average of 50 to 100 requests a minute from all of our QA testing up to, depending on the time of day, anywhere between two and 6,000 requests per second that make it through to our origin after all of our Akamai offload.
So we were actually very surprised that we only took about a 25% bump in our response time there. Things go along for a while; we weren't very happy with our performance, so we made a few tweaks. At that first fall-off, we upsized our database and started spending way too much money on Aurora. Then, at the drop just before the end of daylight saving time, we turned on OPcache (thanks to James over there) and also made our first pass at pulling out the things that Drupal doesn't easily and automatically move out of the database when you turn on a Redis or Memcache module. That got us down to about 250 milliseconds, and through various code releases, sometimes releasing something that wasn't very well optimized and then re-optimizing, we've hovered around 250 milliseconds since then. We've got another round of improvements coming soon, just a few more tweaks, that should hopefully pull us down to consistently sitting somewhere between 150 and 50 milliseconds per response, which makes me very happy.

So, as Toby mentioned, we wanted to get our cache out of SQL, and we learned that SQL is not the best place to store a cache. About halfway through our optimizations we were looking at New Relic. If you don't have it, get it, or another high-performance profiling application and service; it will save your life, because you don't know what to fix unless you know what's happening in production. There are going to be problems in production that will never occur in your other environments, just because of the scale of traffic you're getting, and it really changes the landscape.
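The OPcache change called out above is a php.ini-level setting. As a point of reference only, a production-leaning profile looks something like the following; these values are illustrative defaults, not the tuning actually used on NBA.com:

```ini
; Enable PHP's opcode cache so compiled scripts live in shared memory
; instead of being recompiled from source on every request.
opcache.enable=1
opcache.memory_consumption=256
opcache.max_accelerated_files=20000
; In production, skip file stat() checks and invalidate via deploys instead.
opcache.validate_timestamps=0
```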
But about halfway through our optimizations, we saw that most of the time we spent talking to SQL was dependency injection container queries; they were taking more time than looking up nodes, running views, loading the home page, anything. Thanks to a document we found on platform.sh's website (so thanks to you guys, if any of you are in the room), we were able to move the dependency injection container into the chained fast backend, which means 99.9 (and several more nines) percent of the time it's being served from local memory. No network requests have to be made, so the dependency injection container comes back very fast. It also uses Redis for consistency of that container: if local memory doesn't have it, it grabs it from Redis and puts it back into local memory. I'm not going to go into this in detail; we've just included a sample configuration for Redis here, so if anybody wants to see how to set it up, you can download the slides, probably tomorrow, and have something to get started with for getting all of those caching pieces and transient data out of SQL into something that works a little better for that. Yeah, and we also promise we're going to try to blog more about this kind of stuff. Yes.

All right, does anybody have any questions? There's a microphone right here in the middle of the auditorium; that'll help with getting the questions recorded for posterity.

So when a bug comes out or a new feature ships, obviously sometimes you might need to clear the cache, and with so many different layers of caching, what is your cache-clearing strategy when you're taking that many hits all the time?

Our strategy is that we clear cache as little as possible, and we try to clear only the cache that needs clearing. That can present some challenges.
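Since the sample Redis configuration referred to a moment ago lives in the slides rather than in this transcript, here is a generic sketch of the kind of settings.php wiring being described, based on the Redis contrib module's documented settings. The host, port, and bin choices are illustrative, not NBA.com's actual values:

```php
<?php
// settings.php sketch: route Drupal's cache bins to Redis instead of SQL.
$settings['redis.connection']['interface'] = 'PhpRedis';
$settings['redis.connection']['host'] = '127.0.0.1';
$settings['redis.connection']['port'] = 6379;
$settings['cache']['default'] = 'cache.backend.redis';

// Forms need consistent, persistent storage, so that bin can stay in SQL.
$settings['cache']['bins']['form'] = 'cache.backend.database';

// The dependency injection container cache is moved separately, via
// $settings['bootstrap_container_definition'], to a ChainedFastBackend
// (APCu in local memory, with Redis as the consistent backend); the
// platform.sh write-up mentioned above has the full definition.
```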
If you need to clear your code caches, or your CSS or JavaScript caches, one thing we do on each build is take a hash of all of our code and store it in a setting in the database so that we can check it. During our post-deploy we compare the previous hash and the current hash to see whether any code has changed, and if it has, we clear all the code caches. We also make sure that any time we update the PHP in a module, or update any CSS or JavaScript included by a library, we bump the version of the module or the library, and Drupal automatically handles clearing those caches for us: if you're serving the files individually it adds a little version decorator that cache-busts, and it likewise clears the container and module caches, and others. Beyond that, it's about knowing your site: use Drush or Drupal Console to clear caches, learn which caches need to be cleared when, and as you get a better picture of that, automate as much of it as you can, so that when something needs to be cleared on a build, you clear that cache on that build.

Thanks.

Hi guys, thanks for a great session. We recently had a project with a lot of developers and a lot of continuous deployment. How did you handle configuration management? We've been seeing a lot of issues where people didn't get updated configuration or overwrote things.

We use the Features module, which in Drupal 8 is much more like configuration management than in any previous iteration of Features. It really helps us split configuration management out into the modules the configuration is appropriate to. Now, at least in the version we have, I don't know if it's been fixed since then; I haven't checked for updates lately.
You do run a bit of a risk of getting duplicate configuration between modules, so you've got to keep an eye out for that and make sure you're not creating the same YAML file in multiple places, because Drupal gets confused. But that's really helped us keep things separate and reduce the collisions.

Yeah, one thing we've also talked about, a sort of emerging best practice for using Features with configuration management, is that for the course of development it's best to store all of the configuration for a particular feature in an exported Features module. Those are the same exact CMI YAML files; Features just keeps them together. But when you're getting ready to make the leap from your staging environment, where you've QA'd everything and you're happy with it all, do a single pickup of the entire CMI export and drop that into production. That's something that could easily be automated as well. It's an emerging best practice; there's a blog post about using Features and CMI and how they work together. So you can do the fine-tuning of configuration management using Features, and then, when you're happy with the entire site, it's just a matter of picking up the whole export and dropping it into production.

Thanks. Great.

Hey guys, thanks for all the good information. On setting metrics for your speed testing: a couple of years ago there was a study about Walmart, and when they optimized speed on their site they found that dropping the load time (I want to say it was two seconds; it might have been two milliseconds) increased sales and cart size by a ridiculous amount.
So I was wondering if you had any metrics you were looking at: bounce rate, user engagement, sales, anything like that.

We do. I don't have all of that data in my head right now, but we definitely looked at how it affects the end user. More than that, though, we just focus on what our worst behaviors are and try to improve those.

Okay, thanks.

Hey guys, thanks a lot. I'm curious to hear a little bit more about your ESI strategy. I have a couple of questions: what kind of cache offload from origin have you been able to achieve, and have there been any concerns that the way you've used ESI leads to vendor lock-in, having to use a certain CDN?

Let me address the vendor lock-in piece first. Changing vendors is always painful. Once you've chosen a vendor, if you're going to take advantage of the reasons you chose that vendor, then you're going to get vendor lock-in. And you don't want to do fear-driven development. That's a mistake our group has made a lot in the past, and we've suffered severe limitations because we were afraid to take advantage of the technology we had at our fingertips, because "what if we want to change?" So that's my take on that.

As far as the ESI goes, it's very similar to the way you would put an Angular app in your page. You take a block, or whatever other piece of content, and you create an endpoint or a route in Drupal that serves the HTML you want to go in that place. Then, when you're rendering a page, your template for that thing is just the markup for the ESI include that Akamai, or Varnish, or whatever service you use, will come back to Drupal and fetch.
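A stripped-down sketch of that pattern: the block's render output contains nothing but an ESI include tag pointing at a Drupal route that renders the real fragment. The fragment path here is illustrative, not one of the site's actual endpoints:

```php
<?php
// Sketch: the page's cached output carries only an ESI include tag.
// The CDN (Akamai, Varnish, ...) resolves the tag at the edge by
// fetching the fragment route from origin, so the page and the
// fragment can be cached with independent lifetimes.
$build = [
  '#type' => 'inline_template',
  '#template' => '<esi:include src="{{ src }}"/>',
  '#context' => ['src' => '/esi/fragment/top-stories'],
];
```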
One way to think about it: if you were to take a block, add a bunch of fields to it for configuration, some set of settings for that block, and then place the block on the page, you could designate a separate fragment to use for every country code, for instance. We do a lot of geotargeting, because the NBA is worldwide and we want to be able to show different content to different countries based on where you are. If you're in Canada, for instance, you might get a block at the top of the page dedicated to content about Canadian players, or something else that might be of more interest. So you can use blocks to manage which fragments get displayed, all in the UI, so you don't have to do a lot of code tweaks later as you create content for other countries; you just manipulate the block configuration. The template for that block is essentially just pulling together the ESI include with the appropriate fragment from your configuration, all conditional on the geotargeting ESI code.

Thanks, guys.

Thanks for the talk. I have two questions. First, I'm curious what made you choose Angular 2 over the other front-end frameworks.

In the end we did a lot of comparison. We were hanging right in the balance between Angular and React, and we were coming up on the deadline where we had to choose and start developing, and we just kind of went with Angular on gut. Yeah.
Early on there was, I don't know if game is the right word, but basically: find an Angular developer and make them write something in React, and find a React developer and make them write something in Angular 2. How well they were able to pick up the nuances of the new framework was going to help influence the choice. Because Angular 2 was a little easier to pick up, since it keeps some things from Angular but borrows a lot of concepts from React, that was one of the things that pushed it over the edge. I had actually forgotten we did that bake-off. Yeah. It also helps to note that we actually spent a little time talking with some of the developers of Angular 2, and you'll find that Angular 4, which is out now, has baked in a lot of the things we ran into; we found out recently that they used that feedback to help fine-tune the development of Angular 4. Thanks.

And second: do you use Preview? Preview of node content? Node content, yeah. I'm just curious whether you had any issues working with Preview alongside microservices and a front-end framework.

Yeah, we had a number of issues with Preview. That was one of the things we spent a lot of time on early, trying to get the preview of an article page working, for instance. Editors are working at a fast pace; they don't want to save yet, so they want to preview. We had a lot of trouble getting Preview to work with all of these other pieces; there were just a lot of things interfering. We could probably revisit it now and resolve a lot of those issues. But one answer to the Preview question is that we actually use a bit of Workbench Moderation to get an editorial workflow.
So an editor makes a change, it's saved as a draft, and then they can just view the draft; that way they're getting a much more live representation of the page. And just another note on Preview from our journey, not only on NBA.com but also from work we've done with Preview on the NBA team sites: find out what level of Preview your editors need. Sometimes not even what they want, but what they need. Because the more you have to preview, the more you end up with a hockey-stick graph of effort. When you start having to preview the home page with all of the articles that are saved in a draft state, it can become quite a mess, a whole other level of content management just to manage your previews. So keeping your Preview as simple as you can, while it still meets your editors' needs, is very important.

Thanks a lot.

This is going to be a kind of vague and broad question, but what were some of the big aha moments you had while developing the site, where the light bulb went off and the clouds parted and something major happened for you?

One of the biggest ones that pops into my head is cache contexts. There are a few out of the box, and they're the ones you always need, but they're very broad, and writing your own can make a giant difference, not only in the performance of your site but in when your content becomes available, because if they're too narrow, stuff doesn't get updated. That's one of the biggest ones. Another one: listen to your editorial staff. Let them dictate, because your editorial staff is a consumer of your product, right? Getting prototype code into the hands of an editor is going to let you fail early and fail fast. They're going to know: "oh, this thing is great, you can drag and drop all this stuff", or "that's really slow, it's a little bit sluggish".
"I want to shave a millisecond off of that time." So iterating with your editorial staff as quickly as possible, I think, is really key. In a previous life I actually worked in a newsroom; I came from journalism, got into web development and then Drupal development, working with newspapers and other editorial staff. Editors think completely differently than developers, so working with them one on one and getting their feedback was really key, and it changed the game for us in a lot of ways in how we built the admin interface.

One quick follow-up question: what is one incident that would show up on the NBA.com development blooper reel?

Things were surprisingly smooth, given the effort. Yeah. Obviously OPcache; some choice words were had when that was discovered. Another one, along the lines of caching, was storing user-specific cache between pages instead of using static cache where you should be using static cache. That causes a lot of issues: if you have multiple administrators logged in as the same user, you suddenly start getting a lot of cache collisions, because it's trying to share that data in the database. If you use Drupal's static cache, it can make your work a lot easier. That was another thing we found as a performance issue.

Thank you.

Hello, thanks for the great presentation. My question is about the local cache you talked about using for some of the displays: I was wondering what your strategy was for updating it, because you showed an example of the live event page. I'm thinking that when the score changes, it's supposed to change in real time. So what was your strategy for that? I'm talking about the strategy for updating live scores: how were you updating the live scores in the local cache? Were you pushing them out?
Yeah. In Angular 2 we developed services for each data source, and between the services and our Angular components, they're aware of how time-sensitive each particular piece of data is. Something like the entire schedule for the season we're not going to update every 10 seconds; we're not going to look at that feed on every page load. We may look at it every 10 to 15 minutes to pull in a new copy. But for things like live scoring for the game that's live right now, we go and check every few seconds, and it's really just in how you configure your services.

Another good strategy for live scoring: if you have a service capable of establishing a WebSocket, the browser can make a WebSocket connection and keep that connection open between you and the service. It's a little trickier if you're bouncing from page to page, but it's a really common strategy for live scoring. You keep a socket connection open, and during times of no transmission it's just open but nothing's happening, so it's a very low-bandwidth thing. Then, when the scoring service sees that a new score has come in, it broadcasts the new score to everybody who's listening, so my browser gets that ping over the open WebSocket that the live score has updated, and it updates the score in the local data store. You can also use a long-poll fallback: if, say, after 15 seconds I haven't received anything over the WebSocket, I go double-check, make a new request, and re-establish the WebSocket. So those are some of the approaches to live scoring as well.

Thanks.

We're almost five minutes over time, and we've run out of lines, so I think that's a good time to cut off the questions. Thank you very much, everybody.
And if you're interested, don't forget to check out Toby's other session at the opposite end of today, on Dungeons and Dragons in Drupal.