Hey, folks. My name is Patrick Jensen. I work for the American Civil Liberties Union as a web developer. And I'm joined up here today with Narayan Newton from Tag1 Consulting. And I'm also up here with Matthew Cheney from Pantheon. And today we're going to talk about how we handled a big year at our website, aclu.org, in 2017. So first let me talk to you a little bit about what the ACLU does. We're a nonprofit organization founded in 1920, almost 100 years ago. And we have almost 3 million supporters now. We're the defender of individual civil liberties and rights in the United States. And we've taken on a lot of famous cases in our history. So I have a couple examples up there. We led the fight against Japanese-American internment camps during World War II. We fought the 1996 Communications Decency Act, which was when they were trying to censor the internet for indecent content. And more recently, we had a big part in winning marriage equality in 2015. But we did start that fight way back in the 1970s. So the ACLU actually has about 40 Drupal sites that we manage. But today we're going to talk about one really important website that we have, and that's our Action website, at action.aclu.org. That's the website where our supporters can go and actually take action on our issues. They can sign petitions or send messages to their elected officials, or they can request legal aid through that website as well. In addition, that's the site that we bring in all our online donations from, and people can sign up to volunteer for our organization there as well. All this important work on that site is done through form submissions. If you're signing up to join our email list, for example, that's a form submission. If you're trying to donate to us, that's a form submission as well. So throughout this presentation, I'm going to use form submissions as the indicator for the health of that website. 
The website through the time period I'll be talking about today was built on Drupal 6. We've since moved it to Drupal 7. So let me go back to around 2013, before we were on the Pantheon hosting service. We had just embarked on a big push to increase the online engagement of our supporters. And it was working really well, aside from the fact that our infrastructure wasn't really keeping up with the increased demand. So we had our database straining to keep up with our new traffic. We were using core Drupal search instead of Solr because we didn't have a chance to set it up and configure it correctly. And when we went to our hosts with that problem, they recommended hardware upgrades, and to get those in place took weeks and weeks. In addition to all that, the maintenance of all the infrastructure was onerous. Maintaining test and development environments was a pain in the neck, and maintaining infrastructure like Varnish was a pain as well. Our CTO knew of Pantheon through coming to events like DrupalCon, and he talked to people at Pantheon and decided that we should move our hosting solution to them. So I'm gonna turn it over to Matt now to talk about how Pantheon helped us out. Thank you, Patrick. So as you might imagine, hosting websites is hard work. It has been the last eight years of my life. And it's something I think a lot of us in this room, if we've sort of built our own setups, can start to see: all the technologies and all the sort of workflow processes we have to do to make it work. Drupal is an increasingly complex CMS, and it needs a complex production and development set of environments to make it really awesome. We need to know things about Linux and containers and Nginx and databases and PHP and Solr and Git and Varnish and New Relic. These are all things that are really important to run honestly any website, let alone a really big one and a really important one. 
And we have to know really specific processes about how to keep the whole operation secure, how to make sure it's fast, how to do different workflow and different backup systems. There's a lot that goes into it. Having worked with a lot of different organizations and people, you have a lot of different needs and a lot of different robots that need to sort of work together. And it has to happen 24 hours a day, seven days a week. It's a very specific set of expertise, and something that, to do really well, requires a lot of time and dedication. But in the sort of context of the ACLU and working for civil rights and civil liberties, we might question: what does Git, or knowing Git, have to do with civil rights at all? It probably has about as much to do with civil rights as the guinea pig wearing the sunglasses. No, it's good, right? If you know Git, that's positive for the world. This guinea pig having sunglasses is also positive. But what it isn't is focused. And it isn't about the things that the ACLU wants to do most in this world. I don't think you want to be a backup and hosting operation. And every time you're setting up a dev instance, you're not spending time actually making new content and improving new features and achieving the organizational mission. Because I think for a lot of nonprofits, and even a lot of companies, you have a mission that you're pursuing. And very much we think at Pantheon, and I think overall in the Drupal ecosystem, we want to stand on the shoulders of people that are experts in their field. We want to be, and this is an awesome new word, a pyramidion, which is the top piece of a pyramid. We want to stand on all of these other levels, with all these other expertises. Because in this world, we're in the ambitious digital experiences track. We want to have, I guess, ambitious websites. 
But what we don't want to do is make ambitious backup systems or ambitious load balancers as part of the work that organizations and companies are trying to do every day on the web. That's not what they're being ambitious about. By leveraging the experience of others and the expertise of others, you can be at the top of that, and you can really focus on the stuff at the tip, the stuff you want, instead of worrying about all the stuff on the bottom. And if that sounds familiar to you, that is sort of why people use Drupal. It's why organizations like the ACLU and other folks leverage the experience of other people. We have some of the smartest people out there helping to make Drupal core really excellent, building contrib modules and external libraries that make your work as developers even easier. And maybe even more importantly, we can benefit from this community of practice inside of Drupal and the CMS space, where people have established patterns for security, for access control, for performance, for writing documentation and making things work together. And this is how we start to build really awesome stuff. No website I've ever built could I have done without the support of all these other people. And this makes a lot of sense to me. I think the same thing is true with cloud services in general and managed hosting environments. Not just Pantheon as a platform, but things like SendGrid for email delivery and other kinds of tools. As things get more complicated in this world, we need complicated experts to help with them, but that's okay, because this is the stack that we're building. And as we get more complicated CMS technology, and Drupal 8 is certainly that, we really do need to leverage these pre-built kinds of configurations: having object caching with the Redis system, having an Apache Solr search index, and having dev-test-live workflows. 
And those are the things that you can absolutely set up on your own. You all set up some of them, I'm sure. But it's not necessarily your expertise, and it's not necessarily the highest priority. And it's way easier if it just works with one click. That's what I think a lot of these managed cloud services are doing. Because you can get best-in-class security process and best-in-class performance tooling by leveraging people whose job it is to make sure that works. Like I was saying, I've worked in this for a long time, I've seen a lot of different things from a lot of different people, and that's really important information. That's true for people who work in hosting in general; you can leverage other people's situations to help figure things out. And so we built Pantheon as an architecture that tries to leverage a lot of expertise from a ton of open source projects all over the place. We built a containerized environment that looks sort of like this. Upstairs we're sort of demoing a bit of it. But the idea is that websites are made up of different pieces of things. The kind of structure I think you want for your server is the kind of structure that previously just very expensive and big sites were able to have, which is a full cluster, right? You have a load balancer that kicks to a couple of web PHP processes, which then integrate with a Solr and a Redis index, has a database, has a file system, hooks into Git for the deployments, and has New Relic for monitoring. This is the kind of cluster you have. And part of why I started Pantheon, and my co-founder started Pantheon, is that we were building out these pieces of work for a lot of big websites. And that was great, because those big websites needed it, but so did small websites. But small websites didn't have the budget or expertise to do it. 
And I think part of containerization and open source is democratizing technology and giving access to things that weren't previously possible for individuals, but can be possible if we share and collaborate. And that's really important, because not only does this tech make websites fast, this tech can also act as, let's say, a foundation to allow for the websites to grow. Because one of the problems we talked about earlier is that it might take weeks to get new hardware. Well, the internet moves really fast. Things change really fast. You need to be able to get additional resources when you need them. That's one of the things we do at Pantheon: we have a sort of smooth scaling approach where, within not weeks but literally seconds, we can horizontally scale PHP to get even more resources. So if you have additional traffic, you can start to get more resources. Now, this isn't a silver bullet. Just throwing more PHP at it isn't gonna solve all your problems. But I think it's a prerequisite. Having a technology platform like this is a prerequisite, so that the work you do to make your sites faster and more secure can happen on top of the work of other people, and you can leverage it. Because that lets you move quickly. We are living in a very interesting world, and literally we should be prepared for what's gonna happen in the future, and we really don't know what it is. Crazy stuff happens all the time, and it's good to have some foundation to get ready. Thanks, Matt. Indeed, crazy things happen all the time. So like we were saying earlier, we were having some performance problems back in 2013, and then we moved to Pantheon, and things were going really well for us. Performance was pretty solid. And then Donald Trump got elected president, and that kind of changed everything for us. In just the days after the election, we saw the type of traffic that we were gonna see at our site just skyrocket. 
So in the five days after the election in 2016, we got $7.2 million in donations. Let me put that into context for you: in the five days following the previous presidential election, we received just $25,000 in donations. From November 9th to 13th in 2016, we saw 4.2 million page views on our site, compared to 2015 numbers where we had 400,000. So that's a 10x spike from what we were used to seeing. Things were starting to show some performance issues, but nothing went really wrong until November 16th, and that's when we got our wake-up call. On November 16th, our executive director had an appearance on the Rachel Maddow show, and the traffic that resulted essentially crashed our site. We had one instant where we were able to serve like 500 form submissions per minute, and then our site really started to crash, and then we kind of leveled out at like 300 form submissions per minute. And during that time, a lot of our users were seeing site errors when they tried to visit. So this was a huge missed opportunity for us. We had thousands and thousands of people trying to get to our website, to donate to our organization, to sign up for our email lists, to send letters to their elected officials, and they couldn't do it because our site wasn't up. Thankfully, our management realized that this wasn't gonna be a one-time traffic spike, and that over the next four to eight years we were gonna see similar spikes as things happened in the political landscape. And so we called in Tag1 to do some emergency work to help us out. Yes, so we came in basically right after the Maddow interview, and our initial project was just to look at the outage and figure out what happened. We did what we normally do, which is rely on some sort of tracing or monitoring tool. In this case, Pantheon provides New Relic, so we used New Relic, although we do often use New Relic or something similar to it. 
As you can see pretty clearly from the graph, the site was exceptionally database-bound. Really, really database-bound, actually, more so than we usually see, and we see database-bound Drupal 6 sites a lot. I know a lot of people are past Drupal 6, maybe some people aren't, but Drupal 6 in general tends to build a lot of large monolithic queries, just because of how the query builder works, or the lack of a query builder, some would say. And legacy sites in general also tend to be database-bound. The reason sort of makes sense: you design a site, and that site goes out and then evolves from that point. It might get updates and new features, but those new features are now making queries against tables that were maybe not designed for those new features, or it gets traffic that it was never designed for, and a lot of the time when that happens, the first thing that fails is the database. So we brought on three engineers, myself, Fabian and Jeremy, and started going through everything. We went through all the traces in New Relic from that period, because it was the most reliable source of data. We got the slow query log from Pantheon, went through that and digested it, and got the slowest queries. And then we did the part of this that Pantheon can't really do for you, and that offloading things can't really do for you, which is we went and just looked at the queries. And there were some pretty problematic queries on the site. This is a very good example of one, and one that was actually causing significant issues until we fixed it. As you can see from the explain here, this is a three-table join, and the first table in this join has no indexes of any kind, no primary key, nothing. This is particularly bad when it's the base table, and it's particularly bad when it is the smallest table in the join, because MySQL wants it to be the base table, because it's the most selective table, because it's small. 
So this basically destroys this query. It can't come back from this; there are no indexes on it. So this, and issues like it, were kind of what we were finding, and for this one, our solution was: give it indexes. In this case, we actually had to look at the dataset and find the natural key for the dataset, basically the index that defines the table in a unique way. So we found the natural key for the table and then added some tertiary indexes to help with the joins that we found elsewhere, and immediately we were down to 76 rows. We had been at over 200,000 rows, and those rows were being sorted on disk, and now we were at 76. And you can see the result from this: the green that you see is that query, and you can see it drop off, and you can see it dropped off at an interesting time of day. That's because right after this interview, problems were continuing to happen, and so the three engineers and the ACLU team worked in concert and were literally on the server altering tables to add indexes. Not exactly how you want it to work, but it was required at the time, because we were just continually seeing issues. So at some point we got enough of these issues resolved that the site was starting to act normally, and we developed a patch set that contained all of our index changes, disabled some features, and redid some queries. It actually ended up being a fairly large patch set of index changes. We used the multi-dev support of Pantheon to push that patch set to a multi-dev, and to copy production back to a multi-dev and undo all the changes we had made, so that we could validate that the changes we were suggesting actually would work well, and would work for the foreseeable future. So we built a small load test and ran it against the multi-dev, and as you can see, the patch set of just going through and making sure that things were indexed had a huge impact on the site. 
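The "give it indexes" fix is easy to demonstrate. Here's a minimal sketch in Python using SQLite as a stand-in for MySQL (the table and column names are made up, not the ACLU's actual schema): the same query goes from a full table scan to an index search once an appropriate index exists. MySQL's EXPLAIN output looks different, but the principle is identical.

```python
import sqlite3

# Toy stand-in for an unindexed table: no primary key, no indexes at all.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE actions (uid INTEGER, webform_nid INTEGER, submitted INTEGER)")
cur.executemany(
    "INSERT INTO actions VALUES (?, ?, ?)",
    [(i % 500, i % 40, i) for i in range(5000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); join the detail text.
    return " ".join(row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM actions WHERE uid = 42 ORDER BY submitted"
before = plan(query)  # full table scan, sort done on the fly
cur.execute("CREATE INDEX actions_uid_submitted ON actions (uid, submitted)")
after = plan(query)   # index search instead of a scan
```

Finding the natural key and adding tertiary indexes for the joins, as described above, is the same move at larger scale: you're giving the planner a way to select rows without walking the whole table.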
This might not be true for some sites, but for a legacy deployment of Drupal 6, this is very likely going to be true when it's that DB-bound. And at that point, we realized that we needed to validate further, and so we built out our load test and prepared to throw as much traffic as we could at a production environment, which is where we turned to Pantheon to give us a production-alike environment for testing. Thanks, Narayan. I think one thing that's really important in web development, of course, is to have as close to a production-parity test environment as possible. You want to be able to see very clearly what will happen in a live environment when you make a change, but in basically a copy of the live environment. That's why sometimes you'll have problems where things will work or not work on your local environment but will behave differently on your production environment. And the thing is that for big sites, when we're talking about clusters with load balancers and PHP processes and Redis and Solr, there's a lot that has to get set up to make a cluster that's worthy of testing, especially when we're talking about performance issues, where all of these factors are going to come into play. And that's something that can actually be really hard if you have to provision new machines, or even spin up new EC2 images, to try to do this. Just to test or validate an idea, or some work that you've done, you have to actually go in and replicate your whole setup, and that can be really laborious. That's one of the things where having a container approach like LXC is really awesome, because you can take the exact spec of what your production infrastructure is, down to the php.ini files and all the different versions of all the different pieces, and you can just straight up wholesale replicate it. 
Because nobody's got time to build out another cluster just to check something out, but we can definitely have our robots do the work, because they're already cleaning our houses and mowing our grass and driving our cars and stuff. So why can't we just have them help out a little bit by building it out? And that's a tool that we have, where you can create on-demand environments. And if you're in a position where you can do something like this, have scripts that are able to say: if I want a new environment to test, copy all the server configuration of all the things that are running, take a copy of the database, and give it a new URL that you can test. It's a great way to actually be able to go in and do the kind of serious work that Tag1 was doing to actually look at the site. And this is a cool thing if you can take advantage of it. Cool, so the changes that Narayan and the Tag1 team put in over that first crazy weekend were really successful. We finished out the rest of 2016 without the site really crashing, and we still managed to bring in like 15 times more donations in our end-of-year fundraising than we did the previous year. But we weren't out of the woods yet. On January 27th, President Trump signed an executive order banning travelers from several Muslim-majority countries from entering the United States. And that sparked off a lot of protests at airports across the country. But within hours, the ACLU reacted and got the first injunction against the executive order. And our supporters rewarded us for that. We had huge traffic spikes that weekend. Whereas we would typically see like 44,000 page views in a day, we spiked up to almost 4 million. So that was like 85 times what we were normally used to seeing on that weekend. That other spike there on January 21st is from the inauguration. So how did our site handle all that extra traffic? Pretty well, actually. So I have the graph here of the Maddow performance in red. 
And that shows you how we kind of peaked out at 300 form submissions per minute. But after the executive order, we were able to handle about 900 form submissions per minute. The site did go down for a little bit, but we did take some steps to mitigate that. We did two really smart things to mitigate the downtime that we had. First, we set up New Relic Alerts, which notified us way ahead of time that response times were slowing and that traffic was really increasing. And that let us know that some action might need to be taken soon on the site to help it survive. The other really smart thing we did was to create a really simple static HTML page and deploy it to our CDN, Fastly. And when the site did finally succumb to the extra traffic, we flipped the switch, and whenever somebody tried to go to action.aclu.org, instead of serving the real form that they would normally see, we just served an HTML page that contained a PayPal donate widget. And the page just said something like: our site is experiencing extraordinary traffic right now; if you'd like to donate to the ACLU, please do so here. And that was a really simple solution that we kind of got around the site crash with. So there was another site outage. They had a plan for it, but there was another site outage. And this is where it gets cool, because it was a site outage at enough traffic that, to really detect the issues, we needed to throw quite a lot of traffic at the site, because the low-hanging fruit, if you will, had already been plucked by the first patch set. So basically I built a botnet, which is always fun. I recommend it in testing only. And to do that, we used a load testing tool. We used this the first time too, but we kind of extended it here. It's called Locust (locust.io), which I recommend looking into. If you use something like JMeter, this is a lot more fun to use. It's kind of a load testing framework. 
It allows you to write a Python application, using a bunch of classes and objects that Locust provides, that creates a load test when taken all together, and then presents a Flask UI for interfacing with it, and then has all of the scale-out capabilities to run your load test over a cluster of workers. It's very, very cool to use. And then I also used SaltStack, which is a configuration management tool that has some cloud integration. So we tied it into EC2, and basically I could run a single command, have it spin up any number of executors I want, and have them all run the load test and attempt to take down the ACLU website, which would sound a lot scarier if I didn't have permission, but I did have permission. And we had a production-alike environment, which was very helpful. So we started throwing a lot more traffic, and we needed to track the traffic through something that was not form submissions, because we needed to see exactly when it would fail in New Relic, because that's how we were tracking performance. So with the old patch set, the site would fail at about 616 requests per minute, and that pretty much tracks with failing at around 500 form submissions, based on how the path through the site goes. With our new load test and our new patch set, it failed at around 4,960 requests per minute, so about 5,000. But we knew that at that point, we started having issues with both the database and, actually, external providers. We were throwing enough traffic at this site that our payment gateways were starting to lag, starting to time out, starting to throw errors that didn't really make sense. 
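Locust's real API is richer than this (you subclass its user classes and mark methods as tasks, and distributed workers report back to a master), but the core fan-out idea, many concurrent workers hammering a target and tallying responses, can be sketched with nothing but the Python standard library. The target here is a fake in-process request, purely for illustration:

```python
import threading
import time
from collections import Counter

results = Counter()            # status code -> count, shared across workers
results_lock = threading.Lock()

def fake_request():
    # Stand-in for an HTTP request to the environment under test.
    time.sleep(0.001)
    return 200

def worker(n_requests):
    for _ in range(n_requests):
        status = fake_request()
        with results_lock:
            results[status] += 1

# Four concurrent "executors" of 50 requests each, like a tiny load cluster.
threads = [threading.Thread(target=worker, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

What Locust adds on top of this shape is ramp-up control, per-request timing statistics, a web UI, and the master/worker protocol for spreading the same logic over many machines.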
And also, the database itself was starting to show signs of load. Not a particularly slow query as such, but queries that were expected to be fast: well-indexed queries that don't necessarily sort many rows, maybe 10. Even those were starting to slow down way more than they should, which is a really good sign that your database is about to fall over. So we knew exactly where we could get it before it fell over, and then we needed to do something to fix it from that point. So we attacked this from two directions. First, the payment gateways. This was actually really difficult to pin down. Payment gateways are not normally a cause of failure for sites, they just aren't; we had never really experienced this before. So we had to figure out exactly why they were failing and what was happening, because I was pretty confident that the payment gateway itself actually had the bandwidth for this, but that something else was going on. So we developed a module called Curl Log, and what Curl Log does is log the responses from curl requests. It will log most any request that you want, but we pointed it, obviously, at the payment gateway curl requests and logged exactly their responses. What's interesting about this is that we were working with the ACLU. So when you're logging responses of curl requests to payment providers, you could see how that might be really terrifying. So we provided in-flight sanitization. We made sure that all the user information was stripped out of these requests, and we kept only the technical information about the request, not the information about the user, because we really, really didn't want to store that. 
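A rough sketch of that in-flight sanitization idea, in Python rather than the module's actual PHP, and with hypothetical field names: log only the technical details of the gateway response, and redact anything that describes the user before it ever touches storage.

```python
# Hypothetical field names; the real Curl Log module targeted payment gateway responses.
SENSITIVE_FIELDS = {"card_number", "cvv", "email", "name", "address"}

def sanitize(payload):
    """In-flight sanitization: keep technical fields, redact user data."""
    return {
        key: "[redacted]" if key in SENSITIVE_FIELDS else value
        for key, value in payload.items()
    }

def log_curl_response(status_code, latency_ms, payload):
    """Build a log entry containing only the technical details of the request."""
    return {
        "status": status_code,
        "latency_ms": latency_ms,
        "body": sanitize(payload),
    }

entry = log_curl_response(200, 340, {"result": "approved", "card_number": "4111-xxxx"})
```

The key design point is that redaction happens before the entry is constructed, so there is no window in which user data exists in the log at all.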
So we used that information, in concert with many, many, many attempts to get support from the payment gateways, to figure out that the issue here was the CDN in front of the payment gateway: during these load tests, and during actual site downtimes under real user traffic, we were sending so many payment gateway requests that it was triggering the anti-denial-of-service feature of their CDN. So this was going to be hard to fix, because no one is going to turn off the anti-denial-of-service feature of their CDN. That's probably why they have a CDN. So what we ended up doing was developing a module called Curl Load Balance that can load balance curl endpoints, because what we found is that the payment gateway was in the process of migrating to their CDN. So the CDN endpoint was the preferred one, because the CDN provides extra routing and makes sure the endpoint is up, but there was a legacy endpoint that did not have the CDN. And so we developed a module that put both of them in an internal-to-Drupal load balancer with a decaying ticket system. So say the window is an hour, and I set the tickets to five for that hour: if an endpoint has five failures in that hour, it loses all of its tickets and is taken out of the load balancer, and then the other one comes up. So if the legacy gateway goes down, because it doesn't have a CDN, it's taken out within five failures. If the CDN starts deciding that we're trying to denial-of-service attack them, because we kind of are, it will take that endpoint out as well, but it will always leave one, so we won't get into a situation where there isn't an endpoint. I say we; Fabian, the engineer that I mentioned earlier, is the one who wrote these, and he's a genius. And all of these are actually open source and tied to his account on Drupal.org. The other thing we did was to deal with the database problem. 
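The decaying ticket scheme described above might be sketched like this, in Python rather than the real module's PHP, with invented endpoint names: failures inside the rolling window consume tickets, an endpoint with no tickets left is benched, and the picker always falls back to something rather than returning nothing.

```python
class TicketBalancer:
    """Decaying-ticket endpoint picker: each endpoint gets N failure
    tickets per rolling window; burn them all and it is benched, but
    one endpoint is always left in rotation."""

    def __init__(self, endpoints, tickets=5, window=3600):
        self.tickets = tickets
        self.window = window
        self.failures = {ep: [] for ep in endpoints}  # failure timestamps per endpoint

    def record_failure(self, endpoint, now):
        self.failures[endpoint].append(now)

    def _recent_failures(self, endpoint, now):
        # Drop failures older than the window: this is the "decay".
        self.failures[endpoint] = [
            t for t in self.failures[endpoint] if now - t < self.window
        ]
        return len(self.failures[endpoint])

    def pick(self, now):
        healthy = [
            ep for ep in self.failures
            if self._recent_failures(ep, now) < self.tickets
        ]
        if healthy:
            return healthy[0]  # the preferred (e.g. CDN) endpoint is listed first
        # Never leave zero endpoints: fall back to the least-failing one.
        return min(self.failures, key=lambda ep: len(self.failures[ep]))
```

So a burst of rejected requests benches the CDN endpoint for the rest of the window, traffic shifts to the legacy endpoint, and once the window rolls over the preferred endpoint earns its tickets back automatically.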
This was a little bit trickier, because as I said, we're dealing with a legacy Drupal 6 deployment. The typical situation here is: okay, there's not a single bad query. It's all the good queries that are just overloading your database. So you could try to scale that out with read-only slaves, but that's difficult, especially, again, with a legacy deployment and Drupal 6. You could try to cache the queries, which is what you usually do, but that would require going through a legacy Drupal site that is being replaced by the Drupal 7 version and adding the appropriate cache calls, which is not a good use of time. So we wrote the Query Cache module, which is basically a shim: in settings.php, you declare which modules you want to attempt caching for, and it will go in, with a core hack (but I've hacked core before), and add caching to those DB queries as long as there isn't a join in the query. So it will go through and find all the simplistic queries that should be cacheable, and cache them. More than that, it does something really cool, actually, which I have no idea why Fabian decided to add, but it is cool: you can map multiple queries to a single base query. So if you have a single query that is, like, selecting everything out of a table and ordering it by something, and then you have two other queries that are doing the same thing but with a filter, you can map all three of those queries to one base query, and then it will do the filtering in PHP. What's interesting about this? Well, two things. One, that's a terrible idea. It should be obvious why that's a terrible idea. But our issue is the database, and we know that we can scale out PHP containers until the world ends. So we're gonna shift that load to PHP, because we can scale that out horizontally. And we're gonna shift the load of the database actually delivering data to Redis, because this is all in a cache. 
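A toy version of that base-query mapping, in Python with a dict standing in for Redis (names invented; the real Query Cache module does this in PHP against Drupal's cache layer): every filtered variant is answered from one cached base result, and the filtering happens in application code instead of the database.

```python
# A dict stands in for Redis; in Drupal this would be cache_get/cache_set.
cache = {}
db_hits = {"count": 0}

def run_base_query():
    # Stand-in for the one expensive, join-free query actually sent to MySQL.
    db_hits["count"] += 1
    return [
        {"nid": 1, "status": 1},
        {"nid": 2, "status": 0},
        {"nid": 3, "status": 1},
    ]

# Each derived query maps to the base query plus an in-application filter,
# the equivalent of the module's "filtering in PHP".
DERIVED_QUERIES = {
    "all_nodes": lambda row: True,
    "published": lambda row: row["status"] == 1,
    "unpublished": lambda row: row["status"] == 0,
}

def cached_query(name):
    if "base:node_list" not in cache:
        cache["base:node_list"] = run_base_query()  # one DB hit serves all variants
    return [row for row in cache["base:node_list"] if DERIVED_QUERIES[name](row)]
```

Three logical queries, one database hit: the trade the talk describes is exactly this, more CPU in the horizontally scalable tier in exchange for fewer round trips to the one component that can't scale out.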
So now, instead of actually going to the database, we're going to a key-value store, pulling things out, and then sorting and filtering in PHP, and we can spread that across containers. This is not how you'd normally solve this, but it is a legacy deployment, and this helped a lot. As you can see by this graph, this is one of the problematic queries that was immediately targeted by Query Cache, and it just dropped off. The other thing we did that was interesting was to introduce the Rate Limit module, which is kind of a performance module and kind of a security module. There were lots of forms and search forms on the site where there wasn't a whole lot you could do about performance, because search is just slow to some extent, and there were some webforms that were being targeted for abuse. And one of the interesting bits about working with a cloud provider like Pantheon, although it's not unique to Pantheon, is that sometimes you can't really get in at the load balancer level and do something. You have to do it in Drupal. So we wrote a rate limiter in Drupal, so I can very specifically go to those webforms and rate limit very specific requests, so that we're not having abused webforms and we're not having abused search. Cool, so Tag1 did a lot of great work, and they used some tools that told us that our work was going to actually improve things. But would it actually work in a real-world environment? Well, the Trump administration gave us a chance to test that. The administration repealed the net neutrality rules, and when that happened, the internet kind of lost it. People really took this to heart, and they really wanted to take some sort of action to express their frustration, and our website was kind of a conduit for that frustration. So how did we do? I have the previous graphs up here: in red, that shows you where we stood on the Maddow night, when we peaked out at 300 form submissions per minute. 
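As a sketch of rate limiting inside the application when the load balancer is out of reach (Python rather than the module's PHP, using a simple fixed-window strategy; the real Rate Limit module's exact algorithm may differ), the shape is roughly this:

```python
from collections import defaultdict

class RateLimiter:
    """Fixed-window limiter keyed by (client, path), so specific
    webforms or search paths can each get their own budget."""

    def __init__(self, limit, window):
        self.limit = limit          # allowed requests per window
        self.window = window        # window length in seconds
        self.hits = defaultdict(list)  # (client, path) -> request timestamps

    def allow(self, client, path, now):
        key = (client, path)
        # Keep only hits inside the current window.
        self.hits[key] = [t for t in self.hits[key] if now - t < self.window]
        if len(self.hits[key]) >= self.limit:
            return False  # over budget: reject before hitting search or the webform
        self.hits[key].append(now)
        return True
```

Because the key includes the path, an abused webform can be throttled aggressively while the rest of the site is untouched, which is the point the talk makes about targeting "very specific requests."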
In green we have the executive order peak, which was 900 form submissions per minute. Now I'll add the night the net neutrality rules were repealed, and you can see the improvement: we peaked at 1,900 form submissions per minute, and we did it without any site outages. So it really did work. On the night our executive director appeared on the Rachel Maddow show, he said of the ACLU, "The ACLU is ready for the Trump administration. We have to be; we're in for the fight of our lives." He was referencing the entire organization as a whole there, and our website wasn't quite ready at that time. But with the help of Tag1, the work of all the folks at the ACLU, and the Pantheon infrastructure at our backs, we have become ready. We're now ready to take any traffic spikes that come our way. So now we'll turn it over to questions. You can ask us anything; we love to talk about Drupal performance and civil liberties. So, actually, on some of that: I used to work for an occasional ally organization of the ACLU, so I had some insight into some of your predecessors. Oh, great. Selecting that previous vendor, there were concerns that came down from the board level about using Drupal, and I'm sure some of them came back up in conversations around logging and privacy issues. At the time, they couldn't use anybody like Pantheon, or anything that resembled that kind of infrastructure, because of the concerns around data storage and privacy and all the things the ACLU quite rightly ought to be very concerned with. I wonder if you can talk a little bit about how, in the selection of Pantheon, that conversation may have changed internally, but also whether there were unusual aspects you needed to negotiate through with Pantheon to make sure the relationship would be the kind you want to be proud of and talk about in public. Sure, sure.
Yeah, I'm not really familiar with the specific contract we have with Pantheon, but oftentimes we have to get specialized contracts made with our vendors to make sure they're not sharing any data that we share with them, you know? So, for example, we use an analytics platform, and we had to sign a special contract with them saying you can store this data, but you're not allowed to use it for anything else. So yes, there is a lot of extra back and forth with an organization like the ACLU to make sure that anyone who does see our data is not allowed to use it in any other way. And I can provide a little more context. I think the ACLU has a lot of specific concerns, but they're shared by other people as well. I know that working with Marco on actually figuring out the arrangement, a lot of it was reviewing our systems and making sure all the different pieces were checked. And there were some things we actually hardened, in terms of our security and encryption, to make sure that was really solid for everyone. One of the problems I've seen with board-level decisions like this is that these kinds of things can feel risky, right? If it's in your own organization's closet, you own the data, and transferring it even to someone like Pantheon, people you might trust and know, still feels risky, especially for board-level decision makers. I think a lot of it comes down to making trade-offs. Yes, it's not in your closet, so you do have to trust someone else, but you get the benefit of their expertise in doing these kinds of things. I think that was the case that was made: hey, we can get a lot of value and still be secure, and this is the path we want to walk. Can you use the mic, so people watching the recording can hear? Just to piggyback off that question, this one is from the Pantheon side.
How do you evaluate the risk of taking on a customer like the ACLU, given that it now exposes your platform, and therefore your other customers, to the risks associated with being on the same platform as the ACLU? So yeah, great question. I think the bigger and more high-profile the site, the more it ends up getting attacked, and we've hosted sites that have been attacked by state actors who have different opinions than some of our customers. Part of it, I think, is that we're all in it together. Our architecture at Pantheon is a multi-tenant platform with 200,000 sites, including many high-traffic and high-profile sites. To some extent there's a hardening that's happened over the last eight years as we've dealt with these kinds of issues over and over again. And when we roll out new infrastructure, we do it in an automated way. We don't have to trust that some new system we set up for one new customer is going to be secure and performant, because we run the same system for everyone. That being said, there are definitely risks, and there are internal conversations. But at the end of the day, we want to make this stuff work for people. We want their open source systems to be awesome. We want their organization's mission to get out. To some extent it's just: hey, what are the big things you need? Let's go figure it out. You put your best face forward and then go for it. It's a good lesson in the world. Hi, I'm from Pantheon. So I heard a lot of great stories, but I was wondering about the before and the transition. It sounds like you were on Drupal 6 for a while. What was the impetus behind making the switch? Were there things you were projecting? Were you thinking about it for a while? The switch to Drupal 7? Yeah, to Drupal 7, but also looking for a partner to help along the way. So how did you come across Tag1?
What are the ways you talk to decision makers about what you're looking for, and how do you go about finding a good partner? Both, not just Pantheon on the platform side, but agencies or people that can help you address problems as you move forward. Yeah. So the reason we moved to Drupal 7 was mostly security support. While we were on the Drupal 6 platform, we were actually paying for additional support through Tag1, because they have a program to port patches over to Drupal 6. In terms of how we came across Tag1, our CTO is an old-school Drupal developer, so he's familiar with all the key players in the Drupal field, and Tag1 has a great reputation for being kind of the performance masters, for crisis management, that sort of thing. So I think it was a combination of that focus they have and their good reputation. Getting back to the privacy and tracking aspect, but from the Drupal site-building side: say you're a kind of naive Drupal site builder making a site for a fishing club, and it unexpectedly gets involved in political stuff because of some environmental thing or whatever. Do you have any suggestions for good best practices for making Drupal sites that don't collect more information than you're prepared to manage, basically? Yeah, so we do a lot of special things at the ACLU to prevent data collection. We don't use Google Analytics, for example. As I mentioned earlier, we use a different analytics platform, where we've signed a special contract saying the provider cannot use that data for anything else. We do a couple of other things. We don't allow requests to go out to any other third parties.
So we don't use a Google CDN to host our jQuery or anything like that, because that would let Google know that people are going to the aclu.org website, and we enforce that through Content Security Policy headers. If one of our content editors tried to include a file hosted at Google or somewhere like that, it would be blocked by the Content Security Policy. We also use one other cool module called the MyTube module. When we do need to make a third-party request, say to YouTube, because we host YouTube videos on our site, we make sure that before the third-party request is made, the person viewing the site has to explicitly allow it. They have to go through a click wall, and only once they go through that click wall can the third-party request be made and the video shown on the website; it can't happen before that. I don't know if there's a good answer to this, but what about comments on a site like yours? Is that even possible? Yeah, it's not so much a problem from a data perspective as from a trolling perspective. For a long time we had unmoderated comments, and the kinds of things that would come up on our website were disturbing. Just recently, I think, we turned comment moderation back on, so we actually have somebody looking at those comments before they get posted to the website. You know, we're a free speech organization, so we want to allow people to comment on our content, but within reason, of course. That's why we implemented that moderation. Comments are kind of hard to solve, just going off of that.
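Going back to the Content Security Policy enforcement Patrick described a moment ago, here's a toy sketch of building such a header. This is not the ACLU's actual policy; the helper name and directive set are purely illustrative of the technique: default everything to 'self' so any editor-added third-party resource is blocked by the browser, with an explicit allow-list for the few hosts (like a video embed behind the click wall) you choose to permit.

```python
def build_csp(allowed_hosts=()):
    """Build a Content-Security-Policy header value that blocks all
    third-party requests except explicitly allow-listed hosts.
    Hypothetical helper; the directive set is illustrative only."""
    sources = " ".join(("'self'",) + tuple(allowed_hosts))
    directives = [
        f"default-src {sources}",    # fallback for scripts, styles, XHR, fonts...
        f"img-src {sources} data:",  # also allow inline data: images
        f"frame-src {sources}",      # e.g. a video embed after the click wall
    ]
    return "; ".join(directives)
```

The server would send this as a `Content-Security-Policy` response header; with an empty allow-list, a browser will refuse to load, say, jQuery from a Google CDN even if the markup references it.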
My other job is as the Drupal.org lead sysadmin, and we have to fight that a lot, because with comments on issues and comments in forums, there are entire spam networks that want to get into the forums and comment, basically to place advertisements. You can do interesting things with comments, like having a Drupal role for verified users and not allowing comments until you get that verified role. You can either have someone personally verify accounts, or you can have a trigger: if you do these things, then you're a committed person on the website, not just a spam bot or someone coming in to troll, and then you get the verified tag and you can comment. There are lots of other things you can do, but those are interesting tricks. Yeah, one other thing on the comment side of things: we're a nonprofit organization, so we can't endorse political candidates, and we wanted to make sure that any comments on our website don't endorse political candidates either. That was another reason we wanted to add that comment moderation step. I actually was asking a follow-up about spam comments. I think Narayan just partially answered it, but how do you deal with that? You must be a huge target. I understand you have moderated comments, but moderators can't sift through millions, right? Yeah, we don't get millions of comments; typically, on very popular blog posts, maybe in the hundreds. And yeah, it's just people power, just human hours. We're sifting through it, yeah, unfortunately. Yeah, it's a huge problem for us too. We just don't have the staff. Sorry, I'll just say, I work for Linux Journal, and we have the same privacy concerns that you do, but we haven't found a way to deal with comment spam.
I mean, it's a constant thing for us, and we actually still use Disqus, which we all have personal problems with. I don't want to use it, we don't want to use it, but we haven't figured out an alternative, so that's why. Yeah, it's a tough problem. So the other thing we do, which you might also have issues with, is that there are filters you can use. There's one we've used recently called PerimeterX that integrates with Fastly and basically runs heuristics against the request to see whether you're a spam bot or not. It does interesting things, like JavaScript that tracks how you're interacting with the page to see if it's automated, and it maintains entire IP groups that are known to be bad based on their abuse in the past. Some of that, combined with the verified role, cuts down a lot. It still doesn't cut down everything, but sometimes it helps to just remove the known bad blocks of the internet. Right, thanks. Cool, I think we're all set. Thanks for watching, guys.
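The verified-role gating trick Narayan described for comments could be sketched as follows. On Drupal.org this would be implemented with roles plus a trigger or manual review; all the names below are hypothetical, and the sketch just shows the shape of the idea: comments stay closed until an account either earns trust through activity or is vouched for by a human.

```python
class CommentGate:
    """Sketch of a 'verified user' anti-spam gate for comments.
    Hypothetical names; in Drupal this would be a role granted
    either by a trigger on user activity or by manual review."""

    def __init__(self, actions_required=3):
        self.actions_required = actions_required
        self.actions = {}       # user -> count of trusted actions taken
        self.verified = set()   # users holding the 'verified' role

    def record_action(self, user):
        # A trusted action, e.g. filing a real issue or editing a page.
        self.actions[user] = self.actions.get(user, 0) + 1
        if self.actions[user] >= self.actions_required:
            self.verified.add(user)   # trigger: auto-grant the role

    def verify_manually(self, user):
        self.verified.add(user)       # a human vouches for the account

    def can_comment(self, user):
        return user in self.verified
```

The design point is that spam bots rarely invest in the preliminary actions, so the gate filters them out while costing committed humans almost nothing.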