Welcome. Yeah. Hello and welcome. We are having a round of lightning talks, four talks one after another. We are starting with "A shorter, faster way to performance: making the cache run in less than 12 parsecs with Fastly in Drupal 8." I'm Fabian Franz and this is Doc Velco, and here we start. This is us. This talk got condensed down from 45 minutes or an hour to 15 minutes, so that's why we're gonna be rushing through a little. We cut a lot of stuff, but it's still got a lot of slides to go through. So this is us. Let's talk about Varnish real quick first, though. Varnish is an HTTP reverse proxy cache, which breaks down into three things. HTTP, we all know. We love it, we use it, et cetera. The reverse proxy part means that it sits in front of a web server. And the cache part, I hope you're all familiar with that; it's new to Drupal 8 with the internal full page cache. Varnish is like that. Why do people use it? It's fast, it's flexible. It allows you to do certain things like this. This is a really quick one: if the URL of the request starts with slash search, then lowercase it. It's a little programming language that you use to configure Varnish, called VCL, the Varnish Configuration Language. And it allows you to do a whole lot of simple things like this, or slightly more complex things like this. This one is to deal with origins and CORS and all that sort of thing. It looks at what the origin is, saves it, and then depending on whether or not it's the right origin it sets the CORS headers, et cetera, et cetera. This is just a quick example of what the power of VCL can do for you. Poul-Henning Kamp, by the way, is the original author of Varnish and he still runs the project. The way to think of Varnish: it's sort of like a printing press for books, but for websites.
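For context, a minimal sketch of what a VCL rule like the one described might look like (illustrative only, not the exact code from the slide):

```vcl
vcl 4.0;
import std;

sub vcl_recv {
    # If the request URL starts with /search, lowercase it so that
    # /Search?Q=Foo and /search?q=foo map to the same cache object.
    if (req.url ~ "^/search") {
        set req.url = std.tolower(req.url);
    }
}
```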
So the main problem with Drupal performance, and the reason why the page cache was introduced in Drupal 8, is that if you have to rebuild the page every time, you have to bootstrap, and that takes a lot of time. Was it 100-plus milliseconds? Just for a single request. So if you don't have to do that, if you don't have to write every page of every book by hand but can just print it, it's much faster. That's what Varnish does for websites. The problem is, however, if you go global. Say you have a newspaper instead of a book; you have to start shipping your newspapers all over the world. It costs a lot of money, it takes a lot of time, it delays the newspaper getting to other countries. So, you know, not ideal. Instead of an airplane, though, you could also use one of these. Who's familiar with this ship? It's the Millennium Falcon, don't you know? It did the Kessel Run in less than 12 parsecs. Exactly. The thing is, though, Lucas got a little flak for this, because a parsec is a unit of distance, not time, et cetera. And then he went on to explain on the Blu-ray, like, oh no, no, no. The thing is, Han Solo was a smuggler, and the Kessel Run was, you know, not a straight line, and you could take shortcuts. And he did it with a really dangerous shortcut, and that's why it was less than 12 parsecs. Shortcuts: very important. That's actually, you know, the problem with web performance on a global scale: we're bound by the speed of light. This ship wasn't, but we are. The speed of light through fiber around the world. So you can't really cut down latency with any sort of trick other than making the path shorter. Well, that's what we did at Fastly. We put the printing presses all around the world. So, Drupal 8 has a built-in page cache. Fun. We have caches all around the world that you can then link to Drupal. Fabian's gonna get into the details of that. We have some more features, but the key one is key-based purging.
Gonna go into that real quick. Basically, if you send a response from your origin server to the CDN, you can put a header in there with keys, surrogate keys, and the objects that we have in our cache are then tagged with those keys. You can then do a purge command based on a key: all objects, no matter what object it is, if it has that key on it, it's purged. It looks a little like this. These are the headers of two different responses. They're both the same article; I just grabbed some random values. You can see the article ID here, but they have two different templates. Now, normally they have different URLs, like one could be for mobile, one could be the regular site, so m.example.com and www.example.com. And then, if you wanted to purge them both, you would normally have to send two purge commands, et cetera, et cetera. If you just send one purge for article 1938, both are wiped from the cache, and if users then request that page, a fresh copy is grabbed from the origin. However, if you make a change to the template of the mobile site, in this case, all you have to do is purge template three. Boom. Everything related to that template is gone from all the caches globally. You're done. So, but what about Drupal? In Drupal 7 our support was pretty basic, and the Fastly module mostly worked with the Expire module to expire things. And then Dries made this blog post about making Drupal 8 fly, which is what we had really been trying to do: to really make Drupal 8 fast, where we have this much more precise cache invalidation using cache tags, which are the same concept as surrogate keys. And because cache tags are surrogate keys, the Fastly module has very little left to do, and that's why we could remove so much stuff when it was ported to Drupal 8.
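The mechanics of key-based purging can be sketched as a toy model (all URLs and key names here are made up for illustration; this is not Fastly's implementation):

```python
# Toy model of key-based ("surrogate key") purging: each cached object
# carries a set of keys, and purging one key evicts every object tagged
# with it, regardless of URL.

cache = {
    "https://www.example.com/article/1938": {"article-1938", "template-1"},
    "https://m.example.com/article/1938":   {"article-1938", "template-3"},
    "https://m.example.com/article/2001":   {"article-2001", "template-3"},
}

def purge(key):
    """Evict every cached object tagged with `key`."""
    for url in [u for u, keys in cache.items() if key in keys]:
        del cache[url]

# One purge of the article key wipes both the desktop and mobile copies:
purge("article-1938")
assert len(cache) == 1

# Purging the mobile template evicts everything rendered with it:
purge("template-3")
assert len(cache) == 0
```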
It really only has to support the translation from cache tags to surrogate keys, shortening them a little because Fastly has a header limit of 16 kB, and we are done. So this is now one of the most simple, easy modules: put in your API key, get a few quick dashboards, but in reality it's one event subscriber, and you're done. Okay, so you have very good purging support in the Fastly plugin, and it synergizes incredibly well. But there's more: supporting cache contexts. I'm gonna go a little more into the detail there. The common scenario we usually have is: we have a web server, we have Varnish in front of the web server, and the web server is happy because it doesn't get much load. With a CDN it looks a little different, because we again have a happy web server, the Varnish, and we have all those CDN nodes around the world which take the majority of the traffic. But then one day someone had a very, very bad idea. They put a product into a shopping cart, and then it looked like this: the site is slow, the web server runs away screaming, and it's really unhappy. That's not a good thing. So the solution I've been working on, together with BigPipe and the other parts of the Drupal caching system, is a prototype I've done, and the idea is: we can authenticate the user within Varnish, we can store a mapping of the session ID to the cache contexts, because we have a cache hierarchy, and then we are gonna have a happy web server, obviously. That sounds pretty complicated, but overall it's not too much, because the nice thing is that, because Drupal 8 was designed in a special way, there will be just one VCL file to rule them all for all authenticated user caching. No configuration needed: plug in and be happy, out of the box in Drupal 8, and obviously it will work with Fastly too. So the vision here is to really be able to run 80 to 90% of the read-only authenticated traffic completely from Varnish and the CDN too.
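The module's job, turning tags into a bounded Surrogate-Key header, could be sketched roughly like this (a hypothetical illustration, not the actual Fastly module code; the hashing scheme here is an assumption about how "shortening" might work):

```python
import hashlib

HEADER_LIMIT = 16 * 1024  # Fastly's Surrogate-Key header limit (~16 kB)

def tags_to_surrogate_key(cache_tags, max_bytes=HEADER_LIMIT):
    """Illustrative sketch: shorten each Drupal cache tag to a small
    hash so thousands of tags still fit into one Surrogate-Key header."""
    short = [hashlib.md5(tag.encode()).hexdigest()[:8] for tag in cache_tags]
    header = " ".join(short)
    if len(header.encode()) > max_bytes:
        raise ValueError("even the shortened key list exceeds the header limit")
    return header

header = tags_to_surrogate_key(["node:1938", "node_list", "config:system.theme"])
assert len(header.split()) == 3
```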
So how does it work in a little more detail? We have, say, a shopping cart, and it's cached per user; we have a normal block which varies per permissions; and the main page content, which obviously we can all cache. What we're doing is putting a placeholder in, and while in Drupal we replace that placeholder very late in the render process, here we are putting in an ESI placeholder. Then we put in the cache tags and max-age, when to invalidate, which we have all available as information; and we have the cache contexts, what to vary on, which is very similar to the HTTP Vary header, for example Vary: Accept. Placeholders make all of that possible, and Drupal has auto-placeholdering, which means it can detect what can't be cached, it can detect what has high-cardinality cache contexts, and it can automatically ESI those as well, and we also have those lazy builders. So: one static VCL for our cases, enable the ESI module, combine those placeholder strategies strategically, the web server is happy, and we are happy too. Here are some resources for you, and remember: with Fastly and Drupal 8, you can also make your cache run in less than 12 parsecs today. So, questions? We have one mic; someone bring it around if anyone wants it. Okay, if there are no questions, then I think we continue to the next session. Can you put it in full screen, like enter full screen mode? Oh yeah, that's fine, cool. All right, hi everyone. Today I'm going to talk to you about the largest Drupal 8 websites on Earth, and how you can do it too. I'm not the one who said that the websites I'm going to talk about are actually the largest. The one who said that, I think you have all heard about the guy: he said that Lothon, which is a Swiss media news website, is probably the largest website built with Drupal 8.
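What the edge does with an ESI placeholder can be modeled in a few lines (a toy stand-in for a real ESI processor; the /esi/cart-block URL and the fragment renderer are invented for illustration):

```python
# Toy edge-side-include resolver: the cached page body contains an ESI
# placeholder; the "edge" replaces it per request with a fragment
# rendered for the current user, so the rest of the page stays shared.
import re

CACHED_PAGE = (
    "<article>shared, cacheable article body</article>"
    '<esi:include src="/esi/cart-block" />'
)

def render_fragment(src, session):
    # Stand-in for a real origin fetch; varies on the user's session.
    if src == "/esi/cart-block":
        return f"<div>cart for {session}</div>"
    return ""

def resolve_esi(page, session):
    return re.sub(
        r'<esi:include src="([^"]+)" />',
        lambda m: render_fragment(m.group(1), session),
        page,
    )

# The same cached page yields a different assembled page per user:
assert "cart for alice" in resolve_esi(CACHED_PAGE, "alice")
assert "cart for bob" in resolve_esi(CACHED_PAGE, "bob")
```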
This is the website; you can check it online, a pretty big website in Switzerland. But I'm also going to talk about another website, Südostschweiz, built by another company, which is also a very big website in the news industry in Switzerland, and another one called Willisauer Bote, also in Switzerland. Why am I talking about those three websites? Because those three websites have a very strong particularity. What makes them so special? They have all been launched before Drupal 8 was actually released. So those major, huge websites have all been built and launched before Drupal 8 was released as a stable release, which is actually pretty cool. But also, they are all built out of the same distribution, the NP8 distribution. And they are also all hosted on Platform.sh, which is the hosting solution that I'm working with. How the hell did that happen? You have three big competitors in the same country, doing the same thing, working in the same industry, and they all launched their websites with Drupal 8, they all launched before Drupal 8 was released, and they are all using the same distribution for building their websites. The only solution that was available for them was actually to work together to solve the same issues they were all having. They all had the same issues; they all needed to fix the same problems and answer the same questions. So they decided to work together to resolve those problems. And that's how NP8 came out. NP8 is the Drupal 8 distribution that I'm going to present now. The two founders of that distribution were a company called Somedia, a very big media company, and Gassmann Media. Somedia is behind Südostschweiz, and they have a lot of websites that they are all migrating using that distribution. And Gassmann Media built the third one that I presented, Willisauer Bote. They are also all migrating all their websites to Drupal 8 with that distribution.
And who built that distribution? A company in Switzerland called MD Systems that you might have heard about. So you have those founders that put a lot of money together to build that distribution, you have MD Systems building it, and now you also have Lothon, who is using it. What is that distribution? It's a fully functional news portal distribution, fully built with Drupal 8. Some features that I'm highlighting here: publishing, community and social management, third-party integration with payment gateways. You can do multi-platform, multi-channel. It's built with best practices and full test coverage. It took one year of investment for MD Systems to actually build that distribution. They have contributed 42 modules. They have ported a lot of modules to Drupal 8 in order for that distribution to be built. They have packaged 15 custom modules and features that are inside the distribution. And they have a shit ton of scenarios for the tests, for Behat; I'm sure you've heard about Behat. It's a big investment. If you're a company like MD Systems and you decide to put all your power into switching to Drupal 8 before Drupal 8 was actually released, it's kind of a big investment. The thing with the distribution, and you're going to tell me that it's a bit weird: if you want to use it, you have to pay a buy-in fee. But, you're going to tell me, Drupal's license is GPL, so they have to give you the source code with the distribution too. That's where the fair partnership policy comes into play. If you put a lot of money into the distribution you use for all your websites, you're not going to just automatically share it for free; and when you buy in, you're also able to resell it.
So if you want to actually use that distribution, or at least test it, you can either come to our booth at Platform.sh and we'll put you in contact, or you can directly contact MD Systems and they're going to figure that out for you. What are the tools they're actually using for those big websites? The first one, of course, is the CMS: it's Drupal 8. Why have they chosen Drupal 8? For some obvious specific reasons, like the WYSIWYG and the new inline editing, which is very important for all their, how do you call that, marketing people, or the people writing the news, the reporters. But also what was just talked about with Fastly: the new caching system for Drupal 8, and the services integration that was made super easy with Drupal 8. The new caching system is super important for them, and I'm going to talk about Fastly right now. No, in two slides, okay; we're going to come back to Fastly. Also the hosting: they have chosen Platform.sh to host all of their websites. One of the big reasons was the high availability. They wanted to be able to deploy major features all the time without taking the site down. That's what we offer with Platform.sh. We have triple redundancy for all of your services, which means if you want to upsize or downsize your services, because you're expecting some big traffic, because you're on TV or something or you have breaking news, you want to be able to do that very quickly, without any downtime, upsizing the resources that are serving your website. Also, it's a 100% Git-based development workflow, so you can use multiple applications inside the same repo, and submodules, which is what they're using. And we also had very early support for Drupal 8, which was important for them too. The CDN: it's a big coincidence that they just talked about Fastly. You already know everything about it.
I still want to mention that it's a pretty cool solution because of the instant purging and propagation of updates. Not sure if you've ever used a CDN, and I'm not going to give any names, but you know that if you make a change in the configuration, it can take hours to actually see the change in some node somewhere. And if you're testing your configuration, or you've made a mistake, you don't want to wait four more hours to get that fixed, right? So instant purging is pretty interesting. And also the key-based purging that they just presented. So the reflection behind all that, and I think that's my last slide: it's a good idea to actually build a distribution so that it can serve multiple purposes and all your clients' needs. If you have multiple projects, that might be a good idea, and Drupal 8 makes it easy now. Thank you very much. That's a bit less than 15 minutes. If you have any questions, I'm more than happy to answer them. Do you use Features at all? I think they use only CMI, but since they sent me the number of features, they might have some Features. I'd say it's only CMI, but you can check. Any other questions? All right, thank you very much. I think that's the next one. That was one question I had too, if you want to answer it real quick: what's the story behind the astronaut? Our motto is now "deploy to the moon." That's the idea behind Platform.sh: we think we can deploy your application everywhere, and the next logical step would be to deploy your application to the moon. So we are already working on that. The moon cache context, that's a good idea. I think it's gonna be hard to get instant invalidation; I don't know what the round trips are all the way to the moon, but that one's kind of far. All right, I think we've been doing a blistering pace here so far. We've had at least one talk that was condensed down from 45 minutes.
So this is probably the best value per minute of any session, I would say, because we're already at over an hour of material in effective time. My name is Dan Kuebrich. I'm gonna be talking about distributed tracing for performance monitoring in Drupal-based applications. I did notice a trend in the first two talks, so I'm just gonna get this out of the way at the beginning: disappointingly, there are no references to space in this entire talk. And in fact, the outline is here, and that kind of proves it. I'm going to introduce and motivate distributed tracing: explain what it is, why I started to care about it a couple of years ago, and why you might as well. I'll talk about some different approaches to it. A lot of this will be motivated by some of the work that I've been doing for the last couple of years with AppNeta, but there are open-source tools and different types of approaches for it, and it's pretty interesting. And then I'll talk about some of the challenges that we face when we try to introspect applications this way, and some future directions it might head in. Where this journey started for me was in 2008. I was working for a website, a company called Amie Street. You probably haven't heard of them, because they were competing with the iTunes store. The goal was to sell independent music online with demand-based pricing. And this was kind of a cool idea: if people hadn't really downloaded something much, if it wasn't that popular, it would be cheaper, which would incentivize you to buy it. What this meant, it turns out, is you need all these kind of crazy features to support that, which I'll talk about in a second. Amie Street eventually became Songza, which became Google Play Music. So there's maybe some small part of this still surviving, but I think it's probably changed quite a bit since then. And I had a really good time working there, because all the people that I worked with were smarter than me and very adventurous.
And so we built a pretty cool application. It started as a PHP-based, kind of monolithic web application, a LAMP stack, like you do. Pretty soon we wanted to have search. And for search, you don't want to do full-text search in the database, so you use something Lucene-based; now you'd say Solr, but back then it was something Lucene-based that one of the engineers on the team had made. The dynamic pricing, that's an interesting one, because you kind of want to maintain state based on what pages people have seen. So let's say that a new album drops on Tuesday, like they do, and you go there and you see a price, and it says this is only gonna be 10 cents per track. And so you listen to a couple of the tracks and you hit buy, and it says, oh, you got charged a dollar a track. And, what, that's not fair. Well, what happened was that during the time you were listening, deciding whether you wanted to buy that album, the prices went up, and so, sorry, the web is stateless. Okay, so what we need is a pricing service that gives you a little ticket based on what you've seen, remembers that you've seen it, and keeps it around for a certain amount of time. You could probably do this in a relational database. Again, I had some really smart and interesting coworkers, so this was written in Erlang using Mnesia, the kind of in-memory database, and communicating over Facebook Thrift, now Apache Thrift, to talk between languages. And so pretty soon there were just more and more cool services that built up around PHP, so it was this kind of whole tree of things, and I actually have a little map here. This was also heavily influenced by the LiveJournal stack, Brad Fitz's stuff, so we were running Perlbal and memcached, and there was actually MogileFS to serve all the MP3s and stuff. So we had this kind of topology that looked like this, and it was really fun to work on.
I mean, you know, to get paid to write Erlang code, that's hard to do, at least in consumer web. And so, I told you this was like the 2008 timeframe, and it's been almost a decade, so you might wonder, like, why does he remember this so well? That's kind of creepy, and my memory is actually not that good. The answer is PTSD. So I got a promotion, what I considered an awesome promotion, to the ops team while I worked there. And what I started to realize is that actually some of these awesome ideas that we had were a little bit weird to have to maintain. We had an actual colo, and it was much cleaner than this, but that's not how it felt when there was a problem, because this is what we had. Now, we weren't flying blind. We were using Ganglia, like you might use for kind of monitoring infrastructure, you know, basic health metrics, and Nagios for alerting. With all our Thrift services, we started to realize, and this is a problem I'll talk about more in a second, that when things can go wrong in all these different processes, it's good to have a way to query their health. So we made a health endpoint, and then we made a webpage that would query all the different health endpoints, and then you could have Nagios hit that and grep for whether the text looked good or bad. And there were of course lots of logs; every process on every machine had logs. So this is kind of what the debug workflow looked like. The story version of it is: say Elliot, one of the non-technical founders of the company, runs into the room and says, guys, the website's really slow, or it's white-screening. Okay, Elliot, do you have any other clues? Well, it happens one in six times. It's probably one of the app servers. Oh, it happens one in two times. It's probably one of the databases or one of the pricing servers. So, like, that's not really good.
If that doesn't immediately tip you off, then to investigate you tail the logs of every service on every machine. We didn't really have Splunk; there weren't all these kind of cloud logging things. So you would have a bash script that would tail all these logs in parallel. Maybe you want to check the database process list, SSH in and poke around. For slow stuff in particular, this was really hard to track down, because maybe it was something like you're looping over a certain query, or a query's taking a long time, but not long enough that it's running for like 60 seconds and you catch it. And so you're actually gonna insert some logging based on where you think it is and do a new release, and it takes like a full day to figure out what's going on. And eventually you're just Googling it. And so this worked; it managed to solve the problems, but it wasn't like a super efficient use of time, and we would keep adding more metrics and stuff around common problems. Then I had some co-workers leave, and one of them called me up one day and said: I was looking at this cool academic project called X-Trace, and I think we could use it to solve some of the problems that we had at Amie Street. Okay, well, what's X-Trace? Well, the idea is that if you could just follow all those requests through the application and see what they were doing in each tier, then you could take that data, and it's more structured than logs, because you actually build a map from process to process, and if there's concurrency you'll send a unique identifier through and associate it with every little bit of data you gather.
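The core idea described here, one ID minted at the entry point and attached to every event across tiers, can be sketched like this (names and APIs are illustrative, not X-Trace's actual interface):

```python
# Minimal sketch of trace-ID propagation: attach one unique ID to a
# request at the front door, pass it to every downstream call, and tag
# each recorded event with it so events can later be grouped into a
# single cross-tier trace.
import uuid

EVENTS = []  # stand-in for a trace collector

def record(trace_id, tier, message):
    EVENTS.append({"trace_id": trace_id, "tier": tier, "msg": message})

def pricing_service(trace_id, item):
    record(trace_id, "pricing", f"quoted {item}")
    return 0.10

def handle_request(item):
    trace_id = uuid.uuid4().hex              # minted once, at the entry point
    record(trace_id, "web", f"GET /{item}")
    price = pricing_service(trace_id, item)  # ID crosses the service boundary
    record(trace_id, "web", f"rendered price {price}")
    return trace_id

tid = handle_request("album-42")
# Every event from this request shares the same ID, so the whole
# cross-tier path can be reassembled later:
assert all(e["trace_id"] == tid for e in EVENTS)
```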
And I said, well, yeah, I think if we got traces of a lot of requests and then started mining them for data, we could probably figure out what was going on, but that sounds like a lot of work. Because if you think about what you'd want, let's imagine that each one of these red dots is an instrumentation point: say we'd want to see when the request comes into our application, maybe even in Perlbal in our previous example, or nginx, as people would use today. And then when it gets handed off to the application, then maybe some stuff inside the application code, and then whenever it makes a call out to an external service. And this is gonna be a lot of work, because we're gonna have to add all these instrumentation points. So the approach we ended up taking was basically that, and I'll talk a little bit more about how it works. I wanna actually show some examples first to make this a little bit more concrete. What I wanna show is based on kind of what we've done at AppNeta, and also a TraceView Drupal module that was written by some people who use our software and wanted to take it and add more data that's specific to, say, different hooks in Drupal 7; and we actually have a Drupal 8 port of this in beta right now that starts looking at some of that stuff. I don't know if this is gonna work; if not, I took some screenshots of it, but if the internet is not super bad then we can actually just, well, it's always harder to type when someone's watching. So I thought maybe we could just walk through one of these and look at an actual trace, and just to map it to what we just saw: behind the scenes we're collecting all these different events, and they're kind of in a graph, so we attach this unique identifier at the top level and propagate it around, and we have all these events that we generated, and in this case it's pretty linear.
There isn't any fan-out or anything crazy going on, but what we do have here is a web request that's being handled by a demo Drupal 8 application that we have set up, and it's a pretty simple request. It's just coming in for a URL called /traceview, and it's from earlier this morning; I picked it out because it had a nice little structure here. We've got some kind of summary notes about it, but the interesting thing is the path of that request through the application, because this took a second and a half, and I'm not super impressed with that performance, so I wondered: what's it doing during that time? From left to right we've got the timeline of the request, and from top to bottom that's kind of where we were in the execution, so the critical path is really this thing along the bottom. If you are used to looking at front-end web performance and a waterfall chart, then we might think about it more this way. So we've spent a lot of time in the HttpKernel master request, and a fair amount of time in rendering the views themselves, so we'll dig into that. This view is a little bit more compact, so it's nicer for projecting here, and if we're just stepping through it, the request starts off being handled by the Apache web server here. It's not doing much work; it's just passing that request along, basically proxying it through to PHP, which is where we actually start to get into stuff we probably care about. We've actually also done our first distributed-system thing, which is to potentially go from process to process. And as we start to walk through this, the application code begins to, there's a spaceship outside, that's the finale.
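That process-to-process hop is typically done by carrying the trace context in a request header that the next tier extracts and continues from; a minimal sketch (the header name here is invented for illustration, not any real product's format):

```python
# Sketch of how a trace "jumps across the wire": the caller injects the
# trace context into an HTTP header, and the callee extracts it and
# keeps instrumenting under the same trace.
import uuid

def inject(headers, trace_id, span_id):
    """Caller side: serialize the trace context into an outgoing header."""
    headers["X-Trace-Context"] = f"{trace_id}:{span_id}"
    return headers

def extract(headers):
    """Callee side: recover the trace ID and the parent span ID."""
    trace_id, parent_span = headers["X-Trace-Context"].split(":")
    return trace_id, parent_span

trace_id, span_id = uuid.uuid4().hex, uuid.uuid4().hex
outgoing = inject({"Host": "search.internal"}, trace_id, span_id)

# On the receiving service, the same trace continues:
remote_trace, parent = extract(outgoing)
assert remote_trace == trace_id and parent == span_id
```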
So the first kind of interesting thing that we're doing here in our bootstrapping is SET NAMES utf8; that's actually not that interesting, and the second thing is not that interesting either. We've got these queries, very fast, like 200-microsecond queries, running against the database, and if we see where this is coming from, we're still in the bootstrap. Zooming out a little bit, we see there actually is maybe a little bit less than 100 milliseconds of bootstrapping stuff before we actually get into the master request handling and start running this; and I got the five-minute warning, so we'll keep a pretty blistering pace. A lot of this request handling latency actually happens inside rendering our views here, and in particular we can see which Twig templates are taking a long time, what views subcomponents those are pulling out, as well as what database queries get involved, and in this particular theme, pulling the settings for it. The nice thing about this second-and-a-half page load, by the way, is that hopefully we're running Varnish in front of it, or a CDN even on top of that, so that we have this kind of Russian-nesting-dolls setup and we never actually have to generate these pages; but when we do, we want them to be very performant so they scale well, and so being able to break this down is kind of interesting. Here's something else I saw that I thought was super weird: we've got some latency in Apache after PHP is done, and that's kind of weird. I think what it is in this case is that we're generating traffic from an Australian data center to request pages here, so that could actually just be downloading our kind of big, heavy page for that last 100 milliseconds or so across the internet; we can't break the speed of light.
Okay, so I'm gonna go back to the slides now. I also had a Drupal 7 request here; it's pretty similar, we're seeing bootstrapping. In this case there's something super weird, which is that we're using Memcache, but Memcache is being super slow when we talk to it, so that's not the most exciting thing in the world to see. This is Drupal 7, so we're seeing a kind of different structure as far as how the CMS is evaluating different parts of it; it's actually using Views as well. Now, this was what I would call minimally distributed, right? A LAMP stack is basically the smallest level of actually interactive application we have on the internet; there are a lot of static sites now, but I'm not counting those. When we start to get into a more complex application, and this is again getting back to my Amie Street use case, then it becomes a little bit more compelling. To give an example of that, let's say that I'm making a curl call to this particular service, and maybe that's some payment processor, and in that case, if it takes a second like this, then I just say, okay, well, I'll try to not do that too often, or that's the cost of making money. But maybe it's to an internal service tier or a search service, and so I might actually want to be able to figure out what happened during the processing of that request, and with distributed tracing I can actually kind of jump across the wire there and see what's going on. Now, while we were working on this, just because we happened to read this paper, we weren't the only ones. Google was working on it, because they're ahead of everybody on everything. Twitter ended up open-sourcing something that they use for this, called Zipkin. Etsy talked about doing something similar to this in 2014. Right now I'm working on a project with some people called OpenTracing that's kind of starting to plant the seeds for this. There are some commercial offerings, like my company's, as well as some that are for application performance monitoring, maybe not as oriented
around distributed tracing. I wanted to talk through some of the really interesting challenges related to this, but I don't want to run over, so instead the slides will be available and I'll be around afterwards in case anybody wants to talk. So thank you, I found the thank-you slide, and I appreciate it. Any questions?

Thanks for coming out. I'm going to be talking about the risks of going headless. A lot of people have done really technical sessions on this subject, so if you're interested in the technical details of going headless, this isn't the talk for you; I'm going to be talking from my own perspective. So, about me: that's what I look like, my name is Mark Ferree, and I'm the director of engineering at Chapter Three, so I manage a lot of developers and I've got my fingers in every one of our projects. I'm going to be talking about what I've seen go well, what I've seen go wrong, and how we've tried to manage that on a couple of headless projects we've worked on. If you want to talk to me, I'm MRF on pretty much everything; that's where you can find me. So, a little background, a high-level overview of what headless means. Thank you, Dries, for preparing this diagram for me. It does a really good job of explaining what is called headless or decoupled and what that relationship is to your Drupal site. If you look at this diagram, you've got your traditional Drupal site on the left-hand side, and that means Drupal going all the way back to, I don't know, Drupal 4, which is when I got involved, and it looked just like this then. Drupal 8, if you use it out of the box, is also exactly the same. What this means is that your theme layer is tightly coupled to your system: everything Drupal does assumes the theme layer is going to be there, and you're relying on that theme layer. So you need to be a Drupal expert in order to wrangle this whole system; if you want to work in the front end, you need to know a lot about what the back end's doing
and why it's doing it. I've found the best front-end developers in Drupal are also really good site builders: they know how to build Views, they understand the data structures, and they understand why Drupal is doing what it does on the theme layer, because sometimes that theme layer can be a little impenetrable; it's its own area of expertise. So one solution people have come up with, having seen this as a problem, because it's hard for Drupal if you have to be such an expert just to make it look beautiful, is to hack off the entire visual side of the site and call it headless or decoupled. In the example on the right, the front end no longer has any tight relationship to the back end; your front end is just going to make API calls, or use some other method, to pull the data it needs out of Drupal. The reason this talk is called managing the risks of headless is that all of a sudden you've made your project a lot more complex. Step one is distributed teams. What we've just said is that our Drupal team doesn't actually need to build the whole site, so we've introduced the possibility of bringing in an outside contractor, or a team of Angular devs who happen to work at the same company, to build the entire front end. They probably don't know anything about Drupal, and they might not have any desire to learn anything about Drupal. So all of a sudden, from my perspective, we've introduced a potential communication problem: we need them to clearly describe what they want, how they want it formed, how they want it structured, and they need to ask those questions in a way that makes sense to a Drupal developer, even if the data structures they're describing don't map well onto Drupal. We've inserted these difficult conversations into the equation: why can't you expose it to me this way? Well, it's hard. Why is it hard? You have to explain two thirds of Drupal in order to explain why it's hard. So right
there we just introduced a risk for the project, because clients and end users care about how it works and how it functions on the front end; they don't really care whether that layer of complexity is there or not. All they want is a fast, beautiful experience on the front end. So you should think about what your team looks like. A lot of the big success stories I've seen and heard about are where you either get ex-Drupal developers who have learned a framework and built the front end, or you have people on the same team, in the same office, who can throw a pencil across the room when they have a question; that seems to sort this out. But I've seen that when you have distributed teams, or teams from two different companies, this can get really tricky really fast. Duplicate functionality: I made this one myself, and I'm not the best at building graphs, as you can tell. There are a lot of things Drupal can do and a lot of things your front-end framework can do, and what you realize is that both systems are actually doing the same things. Drupal has a templating engine; Drupal has, in Drupal 8, a really awesome cache layer; Drupal has a lot of things that your front-end framework of choice is also going to have. So you really need to start asking yourself: why am I not using native Drupal? What's the business need, what's the technical need, what is my strong motivation for this project? And there are a lot of reasons you might want to go headless. Maybe you're serving the API out to four different consumers. A really good way to test an API you're building is to try to build the entire site off of that API, because then, if somebody's building an iPhone app off of it, if you're distributing that API for external devs to use, you know it's really solid, because it builds your entire front end, so it has everything anybody could possibly need. That's a really good reason to go headless. Maybe it's the structure of your
existing team. Maybe you don't want to hire front-end Drupal devs, maybe you don't have any front-end Drupal devs, you've just got one Drupal backend guy who doesn't know CSS from a hole in the wall. He still wants to build the backend in Drupal, and he can say, I'll get the Angular team anything they need, they just need to let me know and I'll make it happen. So Drupal still gets to be part of the equation. But if you're a Drupal shop and you're looking at a headless project, you really need to think: what benefit is this bringing my client? What are they getting from chopping off the head? We just added a new layer of abstraction. I was getting into that a little in the previous examples, but the worst kind of abstraction is unnecessary abstraction. Right here we've created a layer where we actually have to think about everything we're serving up and how we're serving it. Did we think about caching those requests? We just heard a lot about Fastly and these distributed architectures. What happens to my front end when Drupal can't serve my requests fast enough? All of a sudden Drupal is slowing down my awesome Node.js front end because the requests aren't getting served fast enough. So your server architecture just got more complicated, I hope you have a systems team, and your whole project just got a lot more complicated. You've also introduced a single point of failure. This graphic is getting a lot of use in this presentation: right here, this gap can be the size of the Grand Canyon. You introduced communication issues, and you introduced a whole spec that needs to be written, namely what needs to be exposed out of Drupal to this other application. D8 makes some assumptions there, and there are a couple of different modules you can use for D7 that do this really well, but they all make their own set of assumptions, and those may be wildly different from what your front-end team is expecting
to see. So are you prepared to actually go and build exactly what they want? Are you prepared to negotiate with them, to convince them that what Drupal sends out by default is what they actually need? You've got a new single point of failure, and if that caching layer goes down, you might stop serving content to your front end; that's another reason you want to avoid a single point of failure. There's been a lot said about headless. I wrote a blog post about this, "Decapitated Drupal"; you just need to enter that into Google and I'm number one for it. I feel like there are really good, really strong reasons why you would want to build a headless site, but the main motivation I see behind a lot of headless projects is shiny: I want to learn a Node.js framework, my team wants to learn a Node.js framework, and this is a way to have your cake and eat it too, because I can still build Drupal and I can introduce Node.js. That should be the furthest thing from your mind, and from your team's mind, when deciding why you're building a headless project. You need a real business driver, a real strong need in your organization, for why this thing needs to be lopped off, because all those risks you just introduced are gonna cause pain down the road. Dries wrote a much more measured take; I think he's a little more excited about headless projects than I am, so if you want a more balanced perspective, his blog post goes into a lot of detail, and he has a whole series of posts where he talks about his perspective on headless. I would also recommend anything written by Four Kitchens. They have a really good team, with Angular devs who work there and longtime Drupal guys as well, so they have a lot of really good resources on how to do headless right and how to avoid some of these pitfalls that you're definitely gonna run across. So that's it for me, thanks for coming out. Anybody have any questions?
Oh, all right, come grab me at the booth if you have any questions later. Yeah, and just one more thing: John Albin is actually addressing this need right now, the need to learn Drupal theming, with the theme component system we're building at the moment. That will also allow a team that doesn't know the Drupal theme layer, but knows a different theme layer, to build themes without having to deal with all that stuff that's so Drupal specific. So soon, hopefully, there will be one less reason to go headless.
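Coming back to the caching question the headless talk raised earlier (did we think about caching those requests, and what happens to the front end when Drupal can't serve requests fast enough): one common mitigation is a small TTL cache in front of the API calls on the front-end side, so repeat requests never touch Drupal within the freshness window. This is a minimal sketch under stated assumptions, not code from the talk; fetch_from_drupal is a hypothetical stand-in for the real HTTP call to a Drupal API.

```python
import time

class TTLCache:
    """Cache API responses for a while so the front end rarely blocks on the backend."""
    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch
        self.ttl = ttl_seconds
        self.entries = {}              # url -> (expires_at, response)

    def get(self, url):
        now = time.time()
        entry = self.entries.get(url)
        if entry and entry[0] > now:   # still fresh: skip the backend entirely
            return entry[1]
        response = self.fetch(url)     # slow path: actually hit Drupal
        self.entries[url] = (now + self.ttl, response)
        return response

calls = []

def fetch_from_drupal(url):
    # Hypothetical stand-in for an HTTP request to the Drupal backend
    calls.append(url)
    return {"url": url, "body": "article body"}

cache = TTLCache(fetch_from_drupal, ttl_seconds=60)
a = cache.get("/api/node/1")
b = cache.get("/api/node/1")   # served from cache; Drupal is hit only once
```

The trade-off is staleness: within the TTL the front end keeps serving even if Drupal is slow or briefly down, which also softens the single-point-of-failure concern raised above.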