Really excited to be here. My name is Josh Ward, and I will be your substitute US history teacher this morning. Thanks for having me. What we're really going to talk about is what bringing Drupal to the cloud really meant. I'm going to start and give you a little background on myself. You may be like, I recognize that guy, not from US history class, but perhaps from some other things. I have been in the Drupal community for a while. I realized today that I was 25 at my first DrupalCon. I am much older than 25 now, so I feel old. It's been about 10 years. I don't know if your Drupal user number is still a thing, because I've sort of been away, but mine's 652,104. That used to be street cred. I do have other street cred: I did party with Dries at South by Southwest about a decade ago. I've done some other talks in and around the Drupal space: Drupal and SEO, Drupal and conversions, selling Drupal, and Speed Kills, which was a lot about squeezing as much performance out of shared hosting as you possibly could. My favorite talk of all time, though, and if you've heard of me, hopefully it's from this one, was all about how if you're giving the milk away for free, then they'll buy the cow, which was a fun one. I tried to get that one picked up for DrupalCon for years and it never happened, but lots and lots of Drupal camps. And if you don't remember me from any of this stuff, you may just think I look like this guy, especially if you're from Nashville or the South in general; I get a lot of Big Show (Paul Wight) references. So that's me. In all seriousness, though, I did vote for Pedro, and he did make my dreams come true. When I was 25, a dozen or so years ago at this point, I really wanted to speak at DrupalCon. I really did. I went to a lot of camps, I respected and looked up to a lot of people in the community, and I never got the opportunity.
Then I went away from Drupal in general, did some other things, and now I'm getting back into it and actually getting to speak here. I am super excited today. This is a big moment for me, so I appreciate everybody who came to listen, and hopefully even with the late start, I won't disappoint. In mostly seriousness: I do work for a web hosting company, and we're going to talk a little bit about configuration, php.ini files and things like that. I'm not going to poll the audience to gauge where you're at in terms of your sysadmin or SysOps knowledge; I'm just going to give you some tips. The cool thing about being back in the Drupal space is realizing all the expertise that's in the community. The great thing is not having to freak out and panic and throw stuff together; there are always resources to help you along your way. If you have any questions afterwards, we have a booth, 214. I'll be hanging out. I don't have anything else to do once this is over, so I can hang out in the hallway and chat. I'm happy to connect afterwards. Bringing Drupal to the cloud can mean a bunch of things, and I'm going to try to condense those down into three. As my team likes to point out, it's never three things with me, it's always like seven or eight, but it's easier to say three things, with sub-points. Like I said, I work for a company called Nexcess. I'm going to give you a little background there so you understand the perspective we came into this with. We're heavy on the e-commerce side; we've specialized in e-commerce hosting for about a decade. E-commerce hosting means a couple of things, and the nice thing is that a lot of those e-commerce lessons translate really well over to Drupal sites, especially community sites.
Anytime you have authenticated users, things get a little trickier than when you just have static content and people are only reading a blog. That's how we've approached Drupal, and I'm going to talk about some of the e-commerce hosting lessons we've learned and how we've applied them to the Drupal space. Lesson one: if you didn't know this already, PHP is a beast. It's really resource hungry. That's true across just about every CMS, and Drupal is no different, especially when you have a lot of authenticated users, or the potential for them. You have to figure out: how do I squeeze as much speed out of PHP as I can without killing the server? You can do a lot of things on the PHP side, but you don't want to use all your server resources, because other things, like databases, also need resources. I'm going to talk specifically about PHP opcache, which caches compiled PHP files in memory so they can be delivered and executed faster. So, a few practical tuning suggestions for opcache. Number one: memory consumption. By default, opcache typically has 64 MB as its memory cap. You're going to want to increase that to at least 512 MB. That's a pretty big increase; 64 is super safe, but it's also super slow and makes opcache inefficient, especially if you have a sizable site. So at least 512, but don't just be satisfied with 512. This recommendation is not set it and forget it; it's set it, see what happens, and perhaps increase it even more if you need to. Suggestion two: increase the limit on the number of files that opcache will actually cache. The default, once opcache rounds it to one of its hard-coded primes, is just below 2,000: it's 1,979.
Opcache actually has a list of prime numbers hard-coded into it, so whatever number you give it, it picks the closest prime from that list, which is why you see 65,407: that just happens to be one of those hard-coded primes. We didn't just try them all. We feel like that's a pretty safe set-it-and-forget-it number. It's big enough to grab most of the files you're going to need, especially for larger dynamic sites. You can go bigger than that; opcache's upper limit is well over 100,000, certainly. But this seems to be a pretty good sweet spot. So you want to increase that file limit. And then the third one: like I say, it's files, so your files may change, and like any caching system, you eventually want your latest files in cache. Opcache's default revalidation interval is two seconds. You want to increase that to four, especially alongside the file limit increase. You're now looking to cache more files, so you want to give the server more time to actually go through those files and not end up in some gross loop where it never quite completes before it starts checking again. That's one issue. Second, like I said, we're squeezing performance out of PHP here and we don't want to kill the server, so the less frequently we check for file changes, the better. Four seconds is not a lot of time, and doubling from two to four saves you on the server resource side while still checking pretty frequently for file changes on the PHP side. You can definitely go higher than four, not to set it and forget it, but we feel like four is definitely safe, and you can go from there. That's it on the opcache front. Lesson two: Nginx has the power. We operate in a LAMP stack environment; we do use Apache. But Nginx has lots of advantages, so now that we run in the cloud, we run Nginx in front of Apache.
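Before moving on, the three opcache suggestions above can be pulled together into a php.ini fragment. This is a sketch of a starting point, not a universal recommendation; tune the values for your own site:

```ini
; Raise the memory cap from the 64 MB default; watch usage
; and increase further if a sizable site fills it.
opcache.memory_consumption=512

; Opcache rounds this up to a hard-coded prime (65,407 here),
; big enough for most larger dynamic sites.
opcache.max_accelerated_files=65407

; Check files for changes every 4 seconds instead of the
; 2-second default, trading a little freshness for less load.
opcache.revalidate_freq=4
```

After changing these, restart PHP-FPM (or Apache, if using mod_php) so the new limits take effect.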
We use it for a few things: as a reverse proxy, as the TLS terminator (we terminate SSL at the Nginx level, so you never have to terminate at the Apache level, which saves you a lot), and for microcaching. Using Nginx as a load balancer in front of the LAMP stack is great, and there are lots of advantages there, but specifically I want to talk about microcaching, because we think that's the biggest advantage. The other stuff is fun and checks boxes, but microcaching is where things get exciting. With microcaching you can cache static files and you can cache dynamic content, but it caches things for really short intervals, and what that does is prevent really dynamic content from being cached for more than a second at a time. A typical microcache default setting is: don't cache anything for more than one second. That keeps dynamic content from hammering the cache and killing your server resources. In our case, with authenticated users, where caching with Varnish, or even microcaching, gets really tricky, there's a whole lot of dynamic content. Typically everybody is going to have some unique content you'll have to serve them, so caching it makes a little less sense, and we don't even worry about dynamic content caching. Instead we focus mostly on the static side of the microcache, especially JavaScript and CSS files, because those tend to be more consistent and static. That lets us do a few things. One of the biggies is increasing that one second up to 30 seconds: instead of caching things for only one second, we cache them for 30, which gives us more speed advantages. It's also way simpler to configure than Varnish. So I will take a quick poll.
How many people are implementing Varnish with their Drupal? Yeah. So Varnish is hard. Yes, it can be. I'm sure once you get your builds going, maybe it's a little easier to template, but we find Varnish to be difficult, and microcaching is easier to implement; we can have a universal implementation of the microcache on the server side. I don't know how apparent this graph is, so I'll describe it. The blue line is just a basic full-page cache, so we'll ignore the blue line. The red line is your Varnish performance, and the yellow line is your Nginx microcache performance. These are tests we did in our environment, and you'll see that up to about 200 virtual users on the site, you're getting the same performance out of microcaching that you are out of Varnish. Once you get above 200, Varnish becomes more powerful, and you may need to cache some dynamic content and whatnot. But at least up to that level, which is a lot for most sites; a lot of the internet never sees that kind of traffic. Microcaching is just easier, and it does the job as well as Varnish does. So yeah: Nginx, lots of power, performance gains. And again, if a request is served at the Nginx layer, this load balancer in front, you're preventing it from going through the full LAMP stack, which saves server load and resource usage. You squeeze performance out of your environment without sacrificing resources for other things. Efficiency. Lesson three, and the last thing I'll go over today: plan for headless. We've been bare metal for a long time, and as we're coming into the cloud, headless and PWAs are becoming more and more of a thing, so we wanted to make sure we planned for those. Real quick, the difference between headless and non-headless.
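The static-asset microcaching described above can be sketched in Nginx configuration roughly like this. It's an illustrative sketch, not our exact production config; paths, ports, and zone sizes are assumptions:

```nginx
# Cache zone for the microcache (path and sizes are illustrative).
proxy_cache_path /var/cache/nginx/micro levels=1:2
                 keys_zone=microcache:10m max_size=1g inactive=5m;

server {
    listen 443 ssl;
    # TLS terminates here, so Apache never handles SSL.

    # Microcache static assets like JS and CSS for 30 seconds,
    # instead of the typical 1-second microcache interval.
    location ~* \.(js|css)$ {
        proxy_cache microcache;
        proxy_cache_valid 200 30s;
        proxy_cache_use_stale updating;
        proxy_pass http://127.0.0.1:8080;  # Apache backend
    }

    # Dynamic, often authenticated traffic goes straight to Apache.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

The `proxy_cache_use_stale updating` line lets Nginx keep serving the cached copy while one request refreshes it, which is what keeps traffic spikes off the backend.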
So the diagram on the left is your basic, traditional stack: you've got your content, your database, your PHP logic and rendering, and then your output to the user. On the other side, we still have our application, we still have PHP, but we've introduced something like React or Node.js in front, and then we output to the user. That means a couple of things. So, two quick thoughts on headless; we're up to like seven now, and I said three in the beginning. At least a couple of things to remember as you're building out your infrastructure. The first is using SCL, Software Collections. What that does is let you dynamically run multiple versions of something like Node side by side on the same server, with multiple libraries there. It's really important, and it's a built-in tool for CentOS, so if you're running CentOS on the server side, you should have it available. We also use it to manage multiple versions of PHP on the same server. You may have a situation where two clients are hosted on the same server, a shared environment on the same VM, and one hasn't upgraded from PHP 5.6 yet but everybody else is on 7.2; SCL will help you manage that. So it's important, one, to serve your existing clients, but also for testing new things in the future: SCL lets you grab the latest and greatest without breaking the entire server or having to spin up a new environment. The second thing in terms of headless is that PHP is still a thing. We've talked a lot about PHP and about caching, and you don't get away from PHP just because you go headless. It's not like PHP is all of a sudden not a beast.
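On the SCL point above, here's a sketch of the typical CentOS 7 workflow. The collection names (`rh-php72`, `rh-nodejs8`) are examples of collections that existed around this time; check what your repos actually offer:

```shell
# Enable the Software Collections repository (CentOS 7).
sudo yum install -y centos-release-scl

# Install a PHP version and a Node version as collections,
# alongside whatever the system defaults are.
sudo yum install -y rh-php72 rh-php72-php-fpm
sudo yum install -y rh-nodejs8

# Run a command with a collection's version on the PATH,
# without touching the system-wide default.
scl enable rh-php72 'php -v'
scl enable rh-nodejs8 'node --version'
```

Because `scl enable` only modifies the environment for the command it wraps, the PHP 5.6 client and the PHP 7.2 clients on the same VM never see each other's interpreter.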
These things are still really important, especially for your back-end users, which again is something I think a lot of Drupal sites and e-commerce sites have in common. You have a lot of administrators, content approvers, and obviously people purchasing things or processing orders, who are in the back end, in the control panel of the system. There's a lot of dynamic content there, but the PHP caching on the static side is still really, really important. So opcache still matters, as does the microcaching. And just because you're going headless, you want to be prepared to test stuff, because at least in our experience, the feedback we get is sort of scattershot right now; the range of things being used for headless and PWAs is broad. So be prepared for the broadness, that's the lesson here, and don't forget about PHP; it's still a thing. And rolling along, I do want to say a quick thank you to Robert Bailey. Robbie's actually in the front row here. He asked me not to point him out, so he's right here. This presentation is largely due to Robbie's help, and I really appreciate it, so I want to make sure to give him credit. Before I move on and hit the conclusion slides, I think we're a bit early, which is sweet. Didn't want to hit you with a bunch of stuff. Any questions? Yes, sir. This is really great, thank you. So, I manage hosting for sort of an in-house hosting product for an agency. A few questions, and you can just rip on them. One is, it looks like you're doing VMs, but not necessarily containers or Kubernetes, that kind of thing. Yeah, maybe. And what are you doing for client isolation? It sounds like you're running a multi-tenant environment, but what are you doing for client isolation, as far as isolating resources, isolating on the network level, anything like that?
And then also, have you experimented at all with horizontal scaling, and how does any of that play into what you're doing here? Because that's the vague piece: a whole lot of hosting companies basically have to figure out how to actually horizontally scale in that context. Yeah, so I will try to hit all three of those. The first one: are we using any sort of Kubernetes or container-based stuff, at least in what I'm talking about? The answer is no. Our basic cloud product is built on OpenStack and some other wonderful open source projects, but there are no containers in what I'd call the primary environment. So when you go and get a large package from us, there's no Kubernetes there, no container there; it's a VM in the cloud. The second question was in regards to network security. The general architecture is that we look at it as three networks. There's the internet, the public network. Then there's the private network for the region your VM is in; we're based in Michigan, so our Midwest region is its own network, and that's typically the one we use, for example when you migrate from one VM to another. And then you have a client private network, and that's really what isolates you: any service you have with us is built on that client private network, and it's private just to you. You have access to that network; nobody else on the system does. So you're isolated that way. The hardware is shared, it is a multi-tenant environment, but from a network standpoint, that's how we chop it up. The backnet in general is private to us; nobody has access to that except our support and SysOps teams, DC Ops, things like that.
As for additional siloing and whatnot, you could drop your card and I could have somebody who's an architect of that system talk to you more intelligently about it, certainly. But that's about as far as I can go there. The third question I can touch on, and I think I'm getting this right, was whether we've played around with containers in general and horizontal scaling. The answer is yes. We have a Docker Swarm implementation, and the first service we released on that container platform was Elasticsearch. Elasticsearch in our environment is a separate container: you get the endpoints and plug those into your Drupal side or your Magento side or whatever, and that's actually in a container. Since we're a hosting company, we don't give you access to the containers; you're not going to SSH into them or anything. You get a default Elasticsearch and then use it as a resource. On the PHP side, I think the release date is like the middle of May; I know it's right after Mother's Day. Anyway, we'll have PHP-FPM on the container side as well, so as you need PHP containers, you'll just hit a button and fire those up on the Docker Swarm side of things. Again, that'll all be configured, plugged in, and orchestrated with Puppet, and it'll be there as a resource as you get more traffic. That's how we tackle that. And again, I'm so surface level, I'm a sales guy; I didn't put that up there. But we've got some documentation I can send you that digs in a little deeper. Did I get all your questions? Yeah. Okay. Other questions? Sweet. That's my time. Again, thank you so much for coming. On to the housekeeping slides: contribution opportunities are tomorrow, Friday, April 12.
If you haven't already heard about those, it looks like they're in rooms 602, 606, and 6A. So definitely contribute back to the platform. And then there's a survey; I assume it's for the conference and not my talk, I didn't clarify. Two links there: drupal.org/schedule is where you can find this session, and then take the survey at the SurveyMonkey link on the slide, surveymonkey slash r slash Drupal concierge. That's it. Thanks so much. Appreciate it.