I'm also hoping to just pull one of those. Okay everyone, it's one o'clock, so we should probably get started. This is a core conversation, meaning it will be very conversational. David and I may argue with each other up here as well as with you. We're basically going to present some background, some ideas, and some recommendations that we came up with, and then we hope that you will come up to the mic (and yes, Jess is here) and give us your feedback and thoughts. As you saw in the keynote this morning, auto-updates are something that's on the wish list: something we might do in the future, but still really at the research stage. That's why we want to do this core conversation. It basically came out of discussions we had in Vienna, where Dries first proposed that we might want to prioritize auto-updates, and David and I and other people brainstormed about how that could possibly work for Drupal, and what it would look like in terms of the changes required. So: Drupal core auto-update architecture. I'm Peter Wolanin; this is David Strauss. I work for Byrafts, and I'm on the Drupal security team. David, as you probably know, is Pantheon's CTO and co-founder, and also on the Drupal security team. So we both spend time thinking about security, web applications, and how to deploy them. We also both have platform engineering experience. I'm with Pantheon; Peter was formerly with Acquia. So we know some of the ins and outs of what large-scale sites are doing, how they're experiencing this, and how deployments happen at scale for those types of services. So, to start out, a couple of links to some background reading. A lot of thought has certainly been put into how you do secure updates. There's The Update Framework (TUF), which proposes in detail the steps you might need to take to securely update a web application.
We're not going to reference that directly, but you might want to look at it as background material, to think about all the complications involved in securely updating web applications. And the company Paragon Initiative similarly has a guide to automatic security updates; they have a pretty opinionated sense of what they think is necessary to do this process securely. So if you're new to this, reading some of these makes it clear that it's not just a simple case of "I need to download a file and move it into place" to have anything automatically updated. There are a lot of steps to do it securely: making sure no one is interfering with the process, making sure no one is denying your ability to update in order to keep you on an insecure version, various things like that. Those are broader considerations for the long run. Also, by "background" we don't necessarily mean that we endorse the ideas presented in them; they're just an interesting contribution to the discussion. Basically, food for thought if you haven't thought about those problems before. Okay, so I think there are some critical unanswered questions in this realm of why we want automated updates. One critical question, which we'll maybe try to address a little, is: what is the problem we're actually trying to solve? Which users, or what percentage of sites, will be helped by this? Is it really the majority use case? Would most sites use any kind of automatic update, or would it be a small minority? And, following on from that, what's the return on investment? If we need to make significant changes, say to Drupal core, to make this happen, which means we need to fund or convince many core developers to spend many hours, days, and weeks, traveling around the world to sprints, is that worth it for the number of sites and users who are actually going to be helped?
I'm not sure we have the answers here, but I think we need to ask those questions, because it's not as if we could just add a new module to core and it would work; we'd need to make more fundamental changes to core. Similarly, in terms of which users and sites are going to be helped, what is the use case? Can we say this is only going to be useful for sites that don't have any kind of QA or approval process? If other sites, and maybe most sites, block it because they want to QA updates before they happen, then it's not really that useful for many people. Are we only talking about Drupal core, or do we need to handle contrib projects too? Is it useful if it only handles core? Do we force you to take every single update so that you're ready for the next one, or only security updates? And what does that mean? Can we handle the case where your site is deployed across multiple web servers? And does this imply stricter backward compatibility requirements for Drupal core and contrib? Especially contrib: if we're saying contrib has to be automatically updated, then maybe it needs a strict backward compatibility policy like Drupal core has. Some of these questions may not be answerable now, and we may not want to do all of these implementations now, things like contrib versus core. But I think it's important that we at least ask them, so that we think about the potential next steps as any sort of auto-update infrastructure evolves. So I took the liberty of creating some personas for the different types of site owners that might be running Drupal. I call the first one "deploy and ignore": they push the site out, maybe do some theming work, install some modules, and once everything looks good for the customer, let's say they're a freelancer, that's it.
They don't touch the site anymore until the customer requests other changes. Then we have "diligent but with simple needs": a developer much more aligned with the best practices around updating, who continues to do maintenance on the site, but might deploy to a fairly simple platform, a single VM, or something like DreamHost or GoDaddy, where there's just a production environment. When they update, they update production directly, maybe taking a backup first. And then we have "the sophisticate", who might have build steps for their site: compiling CSS, actually using Composer with Drupal. This is a growing group, with very different needs from the first two. I think all of these are significant, but the first one is in some ways the most important to me for handling the long-tail problem: how do we actually make sure that all of those deployed instances out there are getting updates? Just to let people in the back know, there are plenty of seats towards the front, so please work your way forward. After talking with some people about the ideas here, I realized it might be worth articulating the premises we're operating on, going into considering what the viable options are, because some of these might be debatable. First, I think few sites run just Drupal core. Even though some of the most substantial updates happen to Drupal core, there's not much of a use case for plain Drupal core in most site builds. Second, contributed modules can have changing Composer dependencies, and we know that they have. This matters because if we start to update modules, there might be a recursive set of changes that have to get pulled down as well for contrib.
It would not be good for us to implement a system where auto-updating a Composer-dependent module just breaks things. If this breaks people's sites, people will just turn it off. Third, Composer dependencies themselves can require security updates. In fact they often do, in the sense that a Composer dependency is often larger than the module itself; it might be a library for parsing a ton of data or interpreting a certain data format. Certainly Drupal core has a ton of Composer dependencies, even though we manage those a little differently in how we distribute them. And also, in terms of how we deploy Drupal, we're assuming that we don't want to fundamentally change the way you actually deploy Drupal to a server or a platform or a host: if you have to create another user, or go through extra special steps to get an auto-updater running, considerably more than what we have today, then people probably won't do it. The combination of those last two premises is actually significant, because together they mean the auto-update framework needs to be possible to shoehorn into the existing primitives we have, like a writable files directory, a deployed code base, et cetera. Okay, so in considering the goals of an auto-update system, there are always trade-offs to be made, and we wanted to run through some of the options and considerations in weighing those trade-offs. One thing you can have is security: you guarantee the integrity of your downloads and the availability of your update server. Another thing you might want is ease of use and reliability: it's easy to install this auto-update system and have it work reliably. Then there's compatibility with your existing deployments. And, potentially, "proudly invented elsewhere":
do we want to reuse something someone else wrote, or do we want to invest the effort to write something that's more Drupal-specific and meets our specific needs? I think the concern, as with many similar frameworks, is that you can't have all four of these things simultaneously. At best you can pick two, maybe three. So we need to think about which of these are the priorities when we imagine an architecture for an auto-update system. One thing we want to take off the table, even though it has been successful in the broad sense of keeping WordPress sites up to date, is, at least as security team members and people who feel strongly about Drupal's security stance, a simple system where you basically take your existing code and overwrite it with a new set of code. People have been imagining you could do this: that somehow with Drupal you could just run composer update, or just download and install the new version of Drupal, and it would overwrite the existing one and you'd be done, you'd have your new version. But if you've spent time doing Drupal updates, even locally on the command line rather than on your live server, these things often go a little bit wrong: you might have a conflict with Composer, something doesn't install in the right place, you get files left behind. There are other problems here too. Allowing the web server to overwrite PHP files means that any compromise of your web server or of PHP means the attacker can now write PHP files and basically take control of your site. It's a very large attack surface if you allow the web server to write new PHP files in a way that's not validated.
Yeah, and api.wordpress.org is also a kind of command-and-control infrastructure for compromising sites, in the sense that the instructions for how all those WordPress sites auto-update themselves are a fairly vulnerable point to attack. If someone could acquire an api.wordpress.org certificate and intercept some of those requests, they could basically tell those sites to do anything. Right, and part of the problem with the way WordPress does things is that it doesn't actually do digital signature verification on the downloads. There was a blog post recently where people found a way that they believe could have gotten them remote code execution on api.wordpress.org, basically taking control of a third of the internet if they'd been malicious, as opposed to the nice people who reported it to WordPress. So in general, we don't want to replicate this; we don't think it has the security characteristics we want for a Drupal system. So, I guess I'll take this one, because I'm the one who really doesn't like Airship's system. I think a lot of their security decisions are not actually based on achieving sensible levels of security so much as maxing out each of these parameters. The algorithms they choose unnecessarily break broad compatibility. Some of them are good choices, but there's absolutely no good reason to choose them for this kind of infrastructure. There's no reason we have to be using elliptic curve signatures rather than RSA; RSA is perfectly secure with a sensible key size. And the effect is that the way they've designed their auto-update framework, they keep ratcheting these things up to whatever is the newest thing that PHP supports.
So every time there's a new PHP release or point release, they adopt the new hotness, and then it breaks compatibility with everything else, and there's no compelling reason to do that. They also have a lot of weird marketing around their auto-update framework, and it's not componentized, so it's not actually an option for us to just easily pick it up even if we wanted to. So I'm not a fan of adopting the Airship one, and it also doesn't actually solve any of the other problems we want to solve: the hard problems we have around things like Composer integration, or handling this mix of contrib and core, are not addressed by their framework at all. Okay. Another thing people have talked about for a long time, and that you could really do right now, is to set up two different user accounts on your web server, with a separate account that runs a cron job and does the work of updating Drupal core and contrib modules however you want. This option has been available for a long time. It used to be that you could do it with Drush; you could probably now do it with Composer, or in some way just download the updated files and move them into place. It has to run on the command line, and you need the separation of two different user accounts on your server to do it securely. It's hard to manage if you have multiple web servers. It's suitable for sites with core and custom modules, but a little more difficult with contrib, where you may have those recursive Composer dependencies, or a large overhead resolving your composer.json into a composer.lock. Also, if this were going to be successful, people would already be doing it: there's absolutely nothing stopping people from setting up another user account and having a cron job run Drush to update Drupal core every hour.
But yeah, this has been documented for a long time and is rarely used, so it doesn't seem realistic. This is not something we can say is going to solve the problem of the create-a-site-and-forget-it users, because they're just not sophisticated enough, and it's not automated: you have to go through several sysadmin steps to make it work. So one approach I think is interesting is, instead of looking at the code as stuff resident on disk that gets overwritten in place, like the WordPress approach where new code is written to disk and swapped in, to look at the code base that's running a site as something that's cached. The idea is to separate out the concern of bootstrapping the request, and hand off control to a code base that is a cache populated with all of the assets necessary for running the request. The advantage of this versus replace-in-place is that you can have a chain of trust from an immutable root: an immutable handful of PHP files starts processing the request and then does a careful hand-off to a particular code base, something closer to a vendor directory. We have some examples that I'll show here, but this is largely based on what's successful elsewhere. This is how tools like Chrome, and systems like Chrome OS and Cisco firmware, work: you update the other copy and then you switch to it.
And while they may not all treat it as a cache, I think the caching model might also support adding use cases to what our coverage can include, including some multi-server ones. If we treat it as something that has a manifest of the assets it needs to process the request, and it validates that those are available, periodically or during the request, then we can have it either freshen the cache as part of processing the request or as part of a cron run; it basically gives us a lot more flexibility to think about it that way. Oh, sorry, one more thing. Even if we thought about it as a cache, this wouldn't remove the ability to create built, immutable deployments: you could pre-build those cache assets, point Drupal at them, deliver them in an immutable way, and still deploy a much more locked-down configuration. So we wanted to run through a couple of options we initially thought about for how you could build such a cached-code system. One of the first things we considered was: can we build the entire code base of Drupal into a Phar file? For people not familiar with PHP, that's Phar as in PHP archive. You take the entire code base of a web application and package it up into one of these archives. They can be digitally signed, so you'd satisfy the requirement for signed code distribution, and then you'd be done. This might seem great: if I could take my entire Drupal code base, with my site's packages, and turn it into a signed Phar file, that gives me just one thing to download from somewhere and swap into place. But it turns out that while there's a pro there of simplicity, one file to manage for your entire code base, there are pretty heavy cons, including that it's a pain: you'd have to build that entire code base into one file.
You'd have to package every possible module and theme together. And, I think this is a general problem with any of the approaches we're suggesting, but especially for a monolithic output: if you have any kind of patch process where the patch might not apply and might break the build, you can't allow that, because if any one part is broken, none of it works. And also, none of us wants to build a service on Drupal.org to build fresh Phar files en masse for everyone's code bases whenever people need to do a security update. Right. So this would have pretty heavy infrastructure requirements, and it's a huge download; your web server might not even be able to fetch it all. [Audience] May I ask a question? In this option, or the previous one, how would we deal with assets, like JS files? Sure. The web server needs to read them, right? With Phar files, the web server, or at least PHP, can read the assets out of the archive, but you'd probably have to aggregate them or copy them out. One other option would be to have PHP serve those files, and then, assuming you have some kind of edge caching like Varnish or a CDN, they're cached and not being read frequently from the host file system in the first place. So Phar files do have a way to handle that, but it's definitely not the option we're advocating. So another option we thought about, again using Phar files because they have this property that they can be signed and validated before you unpack the archive: maybe we could have one Phar for Drupal core, one Phar for each module, and one Phar for each Composer dependency. That's nicely decomposed, and it might let us reuse the build architecture.
So we'd build each module once, rather than building once for each site. That gives us more cacheability and lower bandwidth, but it still has a lot of cons. The main one is that it increases the impedance with Composer: there's no system in the Composer world that has been standardized at all around building modular dependencies this way for different modules that have overlapping dependencies. Do you have a question, Cash? You would have to patch it and build a new Phar, or possibly there could be a way to deploy without Phars, without the auto-update. But this is also not an option we're advocating, primarily because of that impedance with Composer: while some people use Composer to build monolithic Phars, there's very limited support, a handful of community projects, around the idea of converting a Composer package into a Phar that can live in a dependency structure, and we would have to own a lot of the design of that process if we adopted this. Yeah, and a problem with both of these approaches of delivering code as Phar files is that you lose the ability to easily poke at your code base: to debug it, tweak it, patch it. That loss of developer transparency is an important consideration, because a lot of people get into Drupal and into the community by poking at their code base, being able to visibly see what's inside the files, rather than having to dive inside the contents of an archive. So we have a few recommendations in light of those options, and then an option we'd like to propose. I mentioned in the premises that contrib modules can have changing Composer dependencies, and that the Composer dependencies themselves may need security updates.
We probably want the ability for updates to happen without human interaction, for at least some subset of updates: at a very, very minimum, core updates that don't require schema changes. They also need to be reliable and not resource-intensive, which creates another constraint we've worked within: when I think through the DreamHost use case, where I throw Drupal onto a basic shared host and try to get things running, can we actually run this auto-update framework in that kind of context? I would also like there to be a path to eventually support multi-head setups, even if we don't do it in version one, two, or even three, just so we don't completely preclude the idea of deploying a code base out to multiple servers and having it pick up security updates. So if we have this wish list of recommendations, it has some implications. We need to be able to write this new vendor directory somewhere: the web server needs to be able to write it, preferably outside the docroot. Getting outside the docroot is tricky; we didn't ship Drupal 8 with the vendor directory outside the docroot, and right now we're writing Twig files into the files directory because that's basically the only place we have to put them. But we need to think about where we would write this code so that it's as secure as possible, and so that web hosts support writing there. As mentioned before, there's the problem of exposing assets: do we have PHP serve the assets and hope they're cached, do we copy the assets somewhere, or do we use something like JavaScript and CSS aggregation? How do we get the assets to work? As David said, we need essentially a bootloader: a new subsystem in Drupal that's able to pick between multiple code bases, multiple Drupal cores at least, and decide which is the right version of Drupal core to run for this request.
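To make the bootloader idea concrete, here's a minimal standalone sketch of the hand-off described above. Everything here is illustrative, not a real Drupal API: the pointer file, directory names, and layout are assumptions, and in practice the "active" pointer could be a database flag instead.

```php
<?php
// Sketch of a tiny immutable "bootloader" index.php picking between
// two cached code bases. Hypothetical names throughout.

// Set up two fake code bases just to demonstrate the hand-off.
$root = sys_get_temp_dir() . '/bootloader-demo-' . uniqid();
foreach (['cb-aaaa', 'cb-bbbb'] as $dir) {
    mkdir("$root/$dir", 0777, true);
    file_put_contents(
        "$root/$dir/autoload.php",
        "<?php return 'booted from $dir';"
    );
}
// The atomic "switch" is just rewriting one small pointer.
file_put_contents("$root/active.txt", 'cb-bbbb');

// --- What the immutable front controller would do per request ---
$active = trim(file_get_contents("$root/active.txt"));
// Refuse to hand off to anything outside the known cache layout.
if (!preg_match('/^cb-[a-z0-9]+$/', $active)) {
    exit('Invalid code base pointer');
}
$result = require "$root/$active/autoload.php";
echo $result, "\n";
```

The key property is that the front controller itself never changes: only the small pointer does, so switching code bases is a single, easily validated write.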
So that's something we need to think about: how do you make it reliable, and how does it pick the right version, one that has every file in place that it needs to run? We also need a secure manifest: some infrastructure so that when we want to do an auto-update, something tells us what the next version is, and communicates that list securely, in a way that someone in the middle can't interfere with it. To do that, we probably need some public-key infrastructure: core needs to ship public keys, so it can verify the signature on the manifest against a key it knows was originally generated by Drupal.org or somewhere. Part of the motivation for suggesting this bootloader with a selectable code base is reliability, especially on these kinds of shared hosts. Say someone deploys a site there and is using the sort of fake cron built into Drupal 8, where at the end of some requests it tries to do some of the routine tasks. If we're running updates in that kind of environment, we want to be working on a copy that is not the current live copy of the site. We basically want to be writing to something where we can iteratively download and apply the necessary updates, and only actually switch to using the updated code when it's fully in place. Especially as we start looking forward to things like contrib updates, it may not be that all updates can happen in one run; it may require accomplishing a few tasks, maybe even split over several cron runs, to do all the work to get everything in place. Right, so those are the first couple of points here.
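The signed-manifest idea above can be sketched with PHP's OpenSSL extension. In reality the private key would live only on the release infrastructure and the public key would ship inside core; here we generate a throwaway pair just to show the round trip, and the manifest fields are hypothetical.

```php
<?php
// Sketch: sign an update manifest on the release side, verify it on
// the site side before trusting any of its contents.

$key = openssl_pkey_new([
    'private_key_bits' => 2048,
    'private_key_type' => OPENSSL_KEYTYPE_RSA,
]);
openssl_pkey_export($key, $privatePem);
$publicPem = openssl_pkey_get_details($key)['key'];

// A hypothetical manifest: target version plus per-file hashes.
$manifest = json_encode([
    'core'  => '8.3.1',
    'files' => ['core/lib/Drupal.php' => 'sha256:<hash-of-file>'],
]);

// Release side: sign the manifest bytes.
openssl_sign($manifest, $signature, $privatePem, OPENSSL_ALGO_SHA256);

// Site side: verify against the shipped public key.
$ok = openssl_verify($manifest, $signature, $publicPem, OPENSSL_ALGO_SHA256);
echo $ok === 1 ? "manifest verified\n" : "REJECT manifest\n";

// Any tampering must cause verification to fail.
$bad = openssl_verify($manifest . 'x', $signature, $publicPem, OPENSSL_ALGO_SHA256);
```

Note this only covers integrity of one manifest; the freshness and rollback-prevention concerns TUF raises (an attacker replaying an old, validly signed manifest) would need timestamps or version counters on top of this.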
Yeah, incremental download and assembly of the code base is something we'd need to figure out how to do, and it gives us better compatibility with different hosting solutions. Core would have to understand how to read a lock file and download the specific Composer packages you need for that new version of core. Another thought we had, or David had, is that you might need to really rework the installer: instead of downloading an entire Drupal code base and installing it, the default way would be to download essentially just the bootloader and the auto-updater, and get your installed version of Drupal core via an auto-update. That would help guarantee, especially if you did it straight on your live server, that future updates would work too, because if you initially auto-update to get your code base at all, it's much more likely the configuration is correct for future auto-updates. For anyone with experience with Linux distributions, it would be almost like a netinstall-style process. Again, anything like patches, or using Git repositories in your Composer setup, would be much harder to support, especially for contrib modules, because those may break the build, and it's harder to be sure they're reliable to fetch. And again, looking towards the future: how do we handle multiple web servers for a single site, when we need to switch all the web servers, in a coordinated way, to pick the same new version of Drupal core at the same time? That means all of the web servers need to report that they've gotten the complete new code base, which might be local to each web server.
So each web server might have its own new local cache of Drupal core, and each needs to register itself: I'm in the process of building a new Drupal core; now I'm done and ready to switch. When essentially all the servers are ready to switch, the bootloader knows to go to the new one. And I'm content to just think about that problem for now, in the sense that I wouldn't expect a 1.0 to necessarily support it, but it would be nice to know the model could eventually evolve to support the different servers checking in and switching over atomically. One thing worth mentioning, because we keep saying Composer: a lot of what we're saying about Composer conflates two things. There's Composer the actual publicly available implementation, the Composer CLI. That's a fairly heavyweight system, and it's probably not appropriate to expect it to run on something like DreamHost. What we mean by a lot of this, composer.lock, vendor directories, and so on, is that we can adopt the Composer way of doing things, and the way Composer would place things, without necessarily having the actual Composer CLI run all of those operations. This gives us some fidelity between people who do real Composer builds for their site, and might need things like packages pulled from Git repos and other complex Composer features, and people who just extract Drupal onto a web server and then run the installer and updater: the directory and file structure would be the same for both. So even if we don't do "proudly invented elsewhere", or implemented elsewhere, we can do "proudly designed elsewhere" for some of this. So I tried to create a sense of how this sort of scheme would work.
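Adopting Composer's format without running the Composer CLI could look roughly like this: the updater fetches a pre-resolved composer.lock (itself covered by the signed manifest) and downloads each exact dist archive it lists, so no dependency resolution ever runs on the shared host. The lock content below is a trimmed, hypothetical example.

```php
<?php
// Sketch: extract exact download targets from a composer.lock without
// invoking Composer. Package data here is illustrative.

$lockJson = <<<'JSON'
{
  "packages": [
    {
      "name": "symfony/yaml",
      "version": "v3.2.8",
      "dist": {
        "type": "zip",
        "url": "https://example.org/dist/symfony-yaml-v3.2.8.zip",
        "shasum": "abc123"
      }
    }
  ]
}
JSON;

$lock = json_decode($lockJson, true);
$downloads = [];
foreach ($lock['packages'] as $package) {
    // One exact URL and checksum per package: no solver needed.
    $downloads[$package['name']] = [
        'version' => $package['version'],
        'url'     => $package['dist']['url'],
        'shasum'  => $package['dist']['shasum'],
    ];
}
print_r($downloads);
```

Because the lock file is fully resolved upstream, the on-host work reduces to fetch, checksum, and unpack, which is feasible even on very constrained hosting.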
Basically, Drupal would mostly ship with just an index.php and an install.php, at least in the most standard kind of tarball. The install.php would perform an initial update as part of installation, so installation would have two phases: downloading the assets for Drupal, and then the actual install phase. And we could validate that all of the files were in place, pulled iteratively as part of the install process, by the same means as an update. As much as this may create friction for people installing Drupal, I think it's actually a feature, in a way: successfully installing Drupal means you're successfully able to apply updates, or at least successfully able to register for them. And we can of course offer other ways of doing it, like a Composer-based install where you do a build process and deploy the build artifacts. But the people who know how to do that also know how to opt out of this stuff, and how to manage their site properly through CI and other deployment processes; they're not as much the long-tail target audience of the auto-update work. The way it would work is inspired somewhat by the Chrome OS approach. We could take the same philosophy we use for Twig templates and apply it to vendor directories inside the files directory. Because we already depend on executing PHP from Twig templates, this doesn't create additional surface area around code execution or writing: if you can write PHP to the files directory right now, you can overwrite a Twig template and take control of the site anyway. So if we're willing to do that, we can be willing to do this. And the idea is that we just create two vendor directories, basically vendor A and vendor B. I just smashed my keyboard for the names on this slide, because part of what we do for Twig is make the directory non-guessable.
But basically what we would do is the bootloader would just set the autoload parameters for the rest of the stuff and then hand off control. And index.php would ideally be very small. It would not actually be making web requests or anything like that — the updater itself would not live in index.php necessarily, but index.php could hand off to it. And the idea would be we'd be running maybe the first one, and then as we run auto updates, we freshen the second one. And when the second one is fresh, we just set a parameter in the database or elsewhere to basically switch. And if you actually want to do manual Composer builds, of course, you could put vendor wherever you want and then configure the autoloader to hand off to the vendor directory you've built. So there's a fairly consistent directory layout between people who use Composer hardcore and people who just extract Drupal and then install it. What about the database? Like, schema updates? So schema updates are a tough question. One thing that I've thought about for version 1.0 of this is to make it so the auto-updater just blocks — it does not execute updates that require schema updates — because the vast majority of updates, especially if we're talking about core security updates, do not require schema updates. So we could release an initial version of this that only does the freshening and switching if there's not a schema update, and blocks if there is one. And what I would probably do for a PSA is basically tell people, check your site — or even have Drupal email the site owner saying: you have an update waiting that requires a schema update, and I can't actually auto-update you when the security release comes out because it's going to be blocked.
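The bootloader-plus-pointer part of this can be sketched in a few lines. The talk suggests storing the active-slot flag in the database; here a pointer file stands in for it, and the atomic-rename trick (`os.replace`) stands in for the atomic database write. All names are illustrative:

```python
# Sketch: a tiny "bootloader" reads one pointer naming the active vendor
# slot, and the updater flips that pointer atomically once the other slot
# has been freshened.
import os, tempfile

POINTER = "active_slot.txt"

def read_active(default="vendor-a"):
    try:
        with open(POINTER) as f:
            return f.read().strip()
    except FileNotFoundError:
        return default

def switch_to(slot):
    # Write to a temp file, then atomically replace the pointer, so a crash
    # mid-update can never leave a half-written pointer behind.
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        f.write(slot)
    os.replace(tmp, POINTER)

switch_to("vendor-b")
assert read_active() == "vendor-b"
```

The point is that the front controller only ever sees a fully written pointer; there is no intermediate state where half the code base is new and half is old.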
Yeah, the most common example of that in a security release would be something where we need to rebuild node access for some reason. That's every one that I can think of off the top of my head. And for that we already don't run the rebuild on update for people — we just say, here's a flag, you need to run this really intensive, long-running batch data process. So it's similar to that. And I think that expecting people to have some hands-on involvement in the beginning is totally fine. And a lot of the same questions around the schema stuff — I don't know that there's any regression here in terms of how we would approach schema updates, because if you switch code bases and schema updates are available, then schema updates are available and they need to be applied. And it could also be, as with the security updates, that we just ignore the schema updates as part of solving this problem and have the site email the administrator saying that this needs to be run, or that there are follow-on things that need to be run. But for the most part, we can lock down security pretty easily this way. Honestly, for auto updates with schema updates, I would like to possibly wait until we see more broad-based deployment of transactional DDL in things like MySQL — I think that's available in the latest release. Transactional DDL is the idea that you can start a transaction, manipulate the schema of the site, check that everything went okay, and then either commit it or roll it back. So you can basically speculatively update the schema without committing to it. And I would be a lot more comfortable introducing automatic schema updates if we can start leaning on something like that. Yeah, one thing to mention: the Twig templates have a system for protecting them from being overridden, by checking the modification time of the directory and the files.
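The transactional-DDL idea can be demonstrated with SQLite, which (unlike most MySQL versions) can roll DDL back inside a transaction — this is a sketch of the *concept*, not of how Drupal's update system would actually do it:

```python
# Sketch: speculatively alter the schema inside a transaction, sanity-check
# the result, and commit or roll back.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None            # manage transactions explicitly
conn.execute("CREATE TABLE node (nid INTEGER PRIMARY KEY)")

conn.execute("BEGIN")
conn.execute("ALTER TABLE node ADD COLUMN langcode TEXT")
cols = [r[1] for r in conn.execute("PRAGMA table_info(node)")]
if "langcode" in cols:                 # "check that everything went okay"
    conn.execute("COMMIT")
else:
    conn.execute("ROLLBACK")           # schema untouched if the check failed

cols = [r[1] for r in conn.execute("PRAGMA table_info(node)")]
assert cols == ["nid", "langcode"]
```

With a database that supports this, an automatic schema update could be attempted speculatively and abandoned cleanly on any failure.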
So we'd have to extend that — right now it doesn't extend easily to a whole recursive directory — but we have the concepts in place already in Drupal core for how we do that. And so if you did overwrite one of those Twig files, it would be very difficult to make Drupal execute it. Yeah, and we can do fun things in index.php. We could be caching certain things as anti-rollback measures or anti-modification measures, even using something like APCu when it's available, to basically layer security as much as we can where it's available. So, part of this: we've said we want this to run on hosts where we don't have a lot of resources available. That means we may need to, if we want to do this in the future, think about some kind of infrastructure — possibly on Drupal.org — for dependency resolution, something that can do the heavier Composer process of generating the lock file. Some of these services might already be under consideration; people are already demanding them because they want better Composer support for building sites. But again, something we haven't had, and have debated quite a number of times over the years, is a public key infrastructure for Drupal.org, so that we could potentially sign releases, or sign the composer.lock files. So you'd say: if I get a signed composer.lock file and it has a file hash in there, I know that if I download the file and the hash matches, and the hash was signed, then everything is good. I should mention that if we do just core updates as part of iteration one of this approach, we could just statically generate this lock file and publish it — it doesn't have to involve dynamic dependency resolution until we start looking at including contrib. Right, so again, this problem space is made much easier if we only want to support Drupal core in any reasonable future time. But there's trade-offs, right?
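The signed-lock-file flow just described reduces to two checks: the signature on the lock is valid, and the downloaded file's hash matches the hash recorded in the lock. A minimal sketch — note the signature step is faked with HMAC purely for illustration; a real Drupal.org PKI would use asymmetric signatures (e.g. Ed25519), and all names here are assumptions:

```python
# Sketch: verify a download against a signed lock file carrying per-file hashes.
import hashlib, hmac, json

SIGNING_KEY = b"drupal.org-release-key"   # stand-in for a real private key

def sign_lock(lock: dict) -> str:
    blob = json.dumps(lock, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()

def verify_download(lock, signature, name, payload: bytes) -> bool:
    if not hmac.compare_digest(sign_lock(lock), signature):
        return False                      # lock file was tampered with
    return lock["hashes"][name] == hashlib.sha256(payload).hexdigest()

core = b"<?php // drupal core tarball bytes"
lock = {"hashes": {"drupal/core": hashlib.sha256(core).hexdigest()}}
sig = sign_lock(lock)
assert verify_download(lock, sig, "drupal/core", core)
assert not verify_download(lock, sig, "drupal/core", core + b"!")
```

This is also why a statically generated lock file works for a core-only iteration one: the client never needs to resolve dependencies, only verify.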
I mean, if people have contrib modules, a widespread Drupal module with a security hole could be almost as bad as a Drupal core security vulnerability. So again: some kind of web service to generate resolved dependencies as a composer.lock. And then how do we manage this for sites? Do they re-send their composer.json to regenerate their composer.lock? Do we automatically regenerate it for them in advance so the security update is ready? Do we provide high-level assets down to the updater that basically have all of the metadata they would need, but with individual units digitally signed, so they could assemble it on the client side efficiently? I think especially if we're starting with core, we don't have to answer all these questions yet. Right, so now we're getting to the point where hopefully you guys will jump in and weigh in a little, based on these considerations. So I think the biggest question is: is there really a clear and compelling use case, such that the return on investment would be worth the necessary effort? Do you agree with the recommendations we had — that we need this ability to atomically switch, and possibly roll back, code bases, so that this process is reliable and people don't turn off updates because it broke their site when they got half the things downloaded? Do we have the resources, the people, to develop this? And what version of Drupal could we possibly target? And are we willing to actually refactor Drupal core to be basically vendorized in a very Composer-ish way, so we can use that as the hub of the approach? Yes, okay, we can go home now. I've been trying to get Peter to take the last question off of here, because the question is Drupal 8 or 9, and — don't y'all worry about that?
Like, let's come up with what you want to do, and then we'll figure out how we can do it in Drupal 8, how we can begin to roll it out in a Drupal 8 minor release — just don't sweat it. I didn't see anything on the slides that made me say no, Peter, so don't, don't. I think the harder question is actually the one before that, which is: where is the return on investment worth it, and do we have the resources, in terms of people and focus, to do the hard work of figuring out how to make that happen? Yeah, I just like my boss for that one — but the last question, don't worry about it. Okay. Hi, my name is Mike Baynton, M-B-A-Y-N-T-O-N on Drupal.org. I've been working on this issue since 2016, primarily in a super long issue with zero .patch files on it, called, I think, "Implement automatic security updates" or "highly critical patches" or something like this. I have so many thoughts. Let's start with: we're asking Drupal 8, 9, or 10, and — oh, maybe Drupal 8 is okay — no, no, no, no, no. Okay, most of our sites right now are Drupal 7 sites. Okay, it was 200,000, right, that were on 8? So that would be 800,000 on 7. Keep going, keep going. So, okay, keep that in mind and I'll get back to it in a second, I guess. From that, the next thing I'd like to add, I guess: I have a lot of code that I've written toward this. It's all in repositories — hence the no .patch files. But in that time, while I've been monitoring the issue queues and all the people that have very ambitious ideas, nobody else has written a scrap of code toward this end. Now, Dries has talked about it more recently, since Vienna, but even after Vienna, nobody wrote any code, okay? So I would like to keep that in mind, and also just ask: is anybody that's come to this core conversation, who presumably has some interest in it — do you have enough interest that you feel like, as of today, you'd want to work on this, write code for this? Anyone? Well, but it hasn't happened.
It's having people that have enough vested interest in it. I believe the problem here is that everyone with the technical skills to make auto updates happen doesn't need auto updates, and so nobody's doing it. I'd say that maybe people don't know where to start, because it's such a big problem. So at least what I would propose as an initial start for some of this is to actually just fully Composerize core, in the sense that if we can get core to live entirely in a vendor directory, and we can have a handoff from something like index.php to the autoloader in the vendor directory, that accomplishes an enormous amount of progress toward being able to, say, switch the code base we're on in an atomic way, and toward harmonizing the challenges we currently face between people using Composer manually and people just running Drupal the classic way — downloading tarballs and extracting them to the file system. So, sorry, I'm back. I'd like to go back to my Drupal 7 reference that I brought up, just to wrap that one up in any case. And also go back to a slide where you pointed out that the Airship solution was not a good solution because it was so tightly coupled to their product. Yes. How about we build something that is built with Drupal in mind, but that is built in a way that solves the Airship problem, so that Joomla, or maybe even WordPress, could pick this up and use it as well — because it can be done better than what anyone else is doing. There's actually nothing that would be very Drupal-specific about what we'd propose. Exactly. Because the composer.json translating into a lock file, and then updating a code base to have that, and then switching to it, is something that you could theoretically do with anything Composer-based. Any Composer-based PHP web app.
So I think there's a high — this is not the latest rendition, or iteration, of a CMS that we're talking about making; we're talking about making a totally different product, and so we should make it as a totally different product, just because that's a better use of everyone's time, I believe. And it also would pretty much just plug into Drupal 7 then. You might do it with Composer or you might not, but not doing it with Composer is even easier. So if you solve all of the other problems — how do you securely overwrite files or update files, how do you have a batch framework, how do you do all of that stuff — it's a very big intersection of problem space there. So also, I don't think we explicitly said this in the deck, but part of why we're emphasizing the "let's get our Composer house in order first" is that the current state of the Drupal 8 ecosystem is a hybrid of the old way of doing things and the new way of doing things. We either need to write an update system that supports both, or we need to write an update system that supports the new way — and I'd rather not write an update system that supports the old way, because it will be intrinsically broken. So I think there's a good reason to call for: let's get our house in order, do things the new way, and then build an update system on that basis. Because, yes, Drupal 7 is a much easier case — you could do a completely Composer-free approach for Drupal 7 and you would be comprehensively covering most of the use cases. Sorry, I'll let some other people go. So, you're talking about use cases, and when you're talking about the boot loader, it almost sounds like you could really trivially write something like simplytest.me as a use case for having a simple install.php and index.php, and then even tie in being able to get contrib modules and themes, just like we do with locale.
So I think there's a pretty good use case for building your approach; I'd just recommend that. I'm partly calling it a boot loader because it's based on some of the secure-boot designs for computers and hardware devices, in the sense that you have something that understands how to do the handoff and knows how to validate the handoff. Yeah, I agree. So the other thing to consider here is that this process we're proposing would be easier if core is the only thing in scope, because if we put contrib modules in scope too, that means maybe we have to force contrib authors to rewrite or refactor their modules into a new layout so that it's compatible with this system. And that's potentially something that's eventually — I mean, that's why I have Drupal 9 on here, possibly, because if you want to get to — And this is initial scope. Right, so if the initial scope is only Drupal core, it's much more feasible to think about in the Drupal 8 timeframe; if we want to force contrib modules to change their file layout, change the way they do things, that's a longer time horizon. And basically I'd like the most limited scope we can do that doesn't require us to throw out the design to support contrib modules later. Sorry, I should have introduced myself at the mic before. I'm Jess, I'm xjm on Drupal.org, and I am one of the Drupal 8 release managers, which means that I can make decisions like "we can find a way to do that in 8" — you know, versus just saying no. So I want to step back to mbaynton's question about Drupal 7. My emphatic no there was not a "no, we don't have that many Drupal 7 sites." Four fifths of sites reporting to Drupal.org usage stats are currently on Drupal 7 — that's correct. However, Drupal 7 releases come out twice a year, sometimes three times a year, currently, and most of those are only security updates. There's like one or two people who review security patches for Drupal 7. There's a long RTBC queue of Drupal 7 issues awaiting commit.
So practically, innovations are not going to happen in Drupal 7. It's also essentially a long-term-support release: while we do want to add new features, do innovative things — possibly dangerous things, in an experimental way — within the framework of Drupal 8, that's not something that's going to be able to happen for Drupal 7. Whereas with Drupal 8, what David said about getting our own Composer house in order is something that needs to happen anyway, and people who talk about this initiative — people in decision-making, product-managing capacities — are treating that as a dependency already. So that's why I think it's okay to leave Drupal 7 off the table for this discussion. If it turns out what we build is useful, then of course we can open source it and it can be used by anyone. That's great, but let's make sure we're not going to shoot off our own feet first, and then that'll be good. Hi, Steve Persch, stevector on Drupal.org and just about everywhere else. I just wanted to point out somebody has started on the Composerizing-all-of-core thing. Talking with David last night, this reminded me of the Devel security vulnerability from like two years ago, where there was an insecure file in the Devel module, and a bunch of people realized: oh, this shouldn't be web-accessible at all, this should just be under the vendor directory. So that spurred some discussion, and someone has written a Composer plugin, Drupal Composer Paranoia. I just found it. He's right here — I didn't know who you were. You should be talking about this, not me. I just wanted to share that somebody already in the room is working on, or has written on, this subject already. That's all I want to share. It does what they're talking about. It moves — I think it moves all — yeah, so for the recording, it puts everything executable via PHP under the vendor directory, and it has handling for the static assets. Somebody's running it on 50 sites. We're further along than we even knew.
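The idea behind that plugin — no executable PHP in the web root except a couple of known entry points — can be approximated in a few lines. This is a toy illustration of the concept, not the actual plugin's code, and the allowlist is an assumption:

```python
# Toy sketch: flag PHP files in the web root that are not on a small
# allowlist, since all executable code is supposed to live under vendor/.
import os, tempfile

ALLOWED = {"index.php", "install.php"}

def stray_php(docroot):
    strays = []
    for dirpath, _dirs, files in os.walk(docroot):
        for name in files:
            if name.endswith(".php"):
                rel = os.path.relpath(os.path.join(dirpath, name), docroot)
                if rel not in ALLOWED:
                    strays.append(rel)
    return sorted(strays)

# Demo: a docroot with one legitimate entry point and one stray script.
root = tempfile.mkdtemp()
open(os.path.join(root, "index.php"), "w").close()
os.mkdir(os.path.join(root, "foo"))
open(os.path.join(root, "foo", "evil.php"), "w").close()
assert stray_php(root) == [os.path.join("foo", "evil.php")]
```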
This is amazing. Hey, so I had asked earlier who else is interested in working on this, and a bunch of people raised their hands, but I didn't figure out who they were. Is anyone planning on sprinting on this later in the week? And if so, should we maybe set up something more formal? I think we have a meeting at 11 o'clock on Friday to talk about it during the sprint. So that would be one point — though Friday's many days from now. Is that in the sprint lounge? Yeah, yeah — somewhere near the sprint lounge. Okay, and then the one other thing: there's been a lot of discussion here about the bootloader and so on. And I just wanted to point out, yes, that's very Drupal-specific, but don't discount my suggestion of doing an independent thing as well that just does the Composer stuff — so you'd have maybe an even better, enhanced version of auto updates given that you have a bootloader in Drupal, while also serving other applications that don't have that functionality. Oh yeah, I would have no problem with this being done as sort of an out-of-band implementation that eventually dovetails in: okay, we have a project happening here that is auto updates for Composer-based projects, even on lightweight systems, and then we have "get our Composer house in order" in Drupal core. And it just so happens that when these two projects come together — our Composer house is in order, and there's this foundation of how you invoke a Composer code base in a switchable and updatable way — then we would suddenly have auto updates for core, if not more. My name is Tim Whitney, Tim and Whit, everywhere. So you guys had a slide about personas, and I think that's a really important thing to go back to.
One of the main things on the issue that was mentioned — I think by you — was the persona we're going for here. It's not the person who is running Composer and trying to get their site updated, using Git, all that stuff. It's the person who just says: hey, I want the deploy-and-ignore type of deal. I'm trying out Drupal, I have a simple blog. I know we're not in that blog space anymore, but is that solidified? I know this is the architecture question, so there are a lot of bigger things, but in order to hit that 1.0 mark, or even that beta mark, it seems like we really gotta nail which persona, which user, we're trying to target for these auto updates. Yeah, absolutely. Agreed. Half of what I was going to say is what he said, but I'm Ryan Aslett, Mixologic. I'm on the Drupal.org infrastructure team, so part of the other half — the signing-key side of things. And by the way, that's something we definitely want to do, and can do completely independently of everything else: at least provide the signing infrastructure. But I did want to talk about the personas, because I think it is really important. When we talk about an automated update process, we're talking about a process first — we have to figure out what the process of updating our sites is, and we can only automate it once we get to the point where we don't need a human to make any decisions. And so those steps need to go into it. The sophisticated persona is always going to be someone that's going to need to make a bunch of decisions. They're going to have BDD tests, they're going to have something really complicated. So it would be great if, as we look through all of these requirements, we frame them through the lens of these personas — because when you say "I want this to work for multiple web heads," I'm like: who are the deploy-and-ignore people that need multiple web heads?
That's — so, as long as we look at all those things and consider that. But the other thing is, back to the Drupal 7 sort of thing: having been someone that looks at the way the current update system works now, and the way it's architected — it was built way back in Drupal 4 and 5, and it's kind of a heavy tax on our systems — we look at how sites are updating now, and with Drupal 7 we're talking about a ton of sites that are not updated. And so no matter what we build, those sites are not going to update. Like, the cat's already out of the barn there. Cow. So I consider the first case — or actually, in some ways, for a 1.0, I consider the second persona to probably be the most important, because if we're not automating schema updates, then we're going to be limited in our ability to support the first one with the initial thing. But it would give us the ability to send out a PSA that says: as long as your site is up to date as of this date, the security release will auto-deploy. And then people don't have to set their alarms for three AM halfway around the world because we're releasing an update then. Or constantly refresh a page after Drupal.org goes down. Oh yeah, that's another problem. I promise we're going to fix what happened with the last security release. You won't get the 500 errors next time, hopefully. I promise that too. So I agree on both points. I think that the sophisticated persona's use of this sort of system would require features we probably shouldn't put into 1.0. However, by doing the layout in a Composer way, we are at least friendly with the way they're laying out their projects. And then I think we can serve "diligent with simple needs" with 1.0, in the sense that as long as you're willing to actually do your schema updates, then you're positioned to get the security ones. And then we should iterate and eventually solve the first one too.
And I did have kind of one more question, because it wasn't clear to me what the scope was. Like, are we talking just security updates, or are we talking about being able to take core through minor releases? Just give me the mic, give me the mic. So, initial scope: definitely not. So, in order to answer Ryan's last question, I'm going to add a persona to your list that I think is missing. So between "deploy and ignore" and "diligent, but simple needs": I only want to update my freaking site once a year. If it's once a year, I can do it. And so there are other things that we're doing from a process standpoint to try to get to the point where that can be. Because if you've got a small university or a department site, it's possible to plan for: okay, we're going to have our two weeks out of the year where we set aside a little bit of time to get our site updates done. That's feasible. Thank you. And so I think that — "deploy and ignore" would require automated minor updates. If you want to see me have a meltdown, try to add those. So we need to not try that, possibly ever. It's definitely not something that we're ready to do at this point; we just can't. If you don't have to run updates across a minor release, patch-level updates — which include security updates — are supposed to be very non-disruptive. If something breaks in your site after you apply a patch release of Drupal 8 core, please find me. Don't get angry in my face, but calmly say: Jess, something went really wrong here. Or actually, don't find me in person — file an issue in the Drupal core issue queue and mark it critical, because that should not be happening. If we break something in your patch release, we screwed up. And this is also tied to the point about database updates. As a policy — in certain circumstances we'll make an exception, if it's a really important, major, critical issue — but as a policy, we don't put anything that requires database updates in patch releases.
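Put together, the policy above implies a simple gate for an auto-updater: apply a release automatically only if it is a patch-level (z) update of the running minor and declares no database/schema updates. A sketch with a hypothetical function — the naive version parsing is deliberate:

```python
# Sketch of an auto-update eligibility gate: same minor, strictly newer
# patch level, and no schema updates declared by the release.
def auto_applicable(current: str, candidate: str, has_schema_updates: bool) -> bool:
    cur = current.split(".")
    cand = candidate.split(".")
    same_minor = cur[:2] == cand[:2]          # e.g. 8.5.x -> 8.5.y only
    is_upgrade = int(cand[2]) > int(cur[2])
    return same_minor and is_upgrade and not has_schema_updates

assert auto_applicable("8.5.0", "8.5.1", has_schema_updates=False)
assert not auto_applicable("8.5.0", "8.6.0", has_schema_updates=False)  # minor jump: hands-on
assert not auto_applicable("8.5.0", "8.5.1", has_schema_updates=True)   # block and email the owner
```

Everything that falls through the gate — minor updates, schema updates — stays a manual, once-a-year-style task.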
So a lot of these problems get solved if we allow patch-level updates to be run by the auto-updater, starting with security releases, because there's a very real need for that — one the security team cares about. And then patch releases should have mostly the same limited potential for disruption. We need to get that working. And there are proposals under discussion for changing the way we provide security coverage for minor releases currently, which would allow those patch-level updates and security updates to putter along by themselves, without a person, for a year. And if it does get stuck on a Composer build or something for someone who is actually a hybrid of those two — or a hybrid of the last two, in that once-a-year thing, which I think a lot of sophisticated organizations actually want anyway, to only do their stuff once a year — then that addresses that problem too. Yeah, but no to minor updates — I mean, that's not even something I think we should discuss at this point. And also, about "diligent with simple needs" and the scope of what those people need: I have an actual picture here, with actual data. So we very recently had a highly critical security update — and I'm going to turn around so I can look at the room. Who here is running SA-CORE-2018-002 on their Drupal 8 sites, or their Drupal 7 sites, already? If you have a Drupal site and your hand is not up, leave right now and please go apply this update. And then you can come back and listen to the rest of the session, or the next session. I'm not joking. If you didn't raise your hand, the only excuse right now is that you don't maintain sites yourself, which is great — or you weren't listening to what I said. If you haven't run security updates on your Drupal 6, Drupal 7, or Drupal 8 sites: there's a nice man here who's part of the LTS program, and there are patches available.
If you're on Drupal 8.3 and you're like, "I can't update to 8.4.0 yet" — there's a release available for that. Please, please, please run the update. So the data I have here — I'll turn it back around — is that we had this highly critical security release. We put out an announcement a week ahead of time: this update's going to happen, it's very important. Lots of news and noise on the internet. After 10 days, about 40% of Drupal 8 sites had updated to this security release. I can't speak to Drupal 7. And that is huge — that is way more sites than we usually see within that window. And yet we're talking about still less than half of the Drupal 8 sites that exist. And these aren't the Drupal 7 sites that have been sitting around since 2012 — Drupal 8 has only been out for two and a half years. So there's a very real need for security coverage that fits the top couple of those personas, and that's something we can solve. And I think we can solve it in Drupal 8. I think we could also treat the suggested persona almost as if they take a once-a-year break from being "deploy and ignore." In a sense — is that accurate? Okay, and then of course we would need to have the patch release availability go back 365 days, in order to ensure that people who touch their sites once a year would have an available path, right? Okay, perfect. I just wanted to make sure I understood. Okay, two real quick things. One, I really like that idea with the year-long option. The other thing was, I would just like to call out some of the work that I have already done on making it a separate product. So it turns out that PHAR files are a really good way to go for that, because you can take all that complexity and put it in one file. Your application that does the update is small enough that that's viable, and then users only have to put one file in place to have all of this updating capability.
And I've already done a lot of proof-of-concept work on things like: how does that link into Drupal's administrative UI? How do you authenticate a user between Drupal and some other application? So that's all stuff that I'd be happy to go over with anyone that's interested in pursuing it. Thanks. Okay, last question, quick — or we can take it offline. Yeah, I know. Okay, all right: please evaluate our session — go to node 20, 10, 10. Thank you all for coming.