Hi. Hello. I just wanted to thank you all for coming to our session on how you can build passive income by selling Pantheon to the friends and family who already need it. Oh, yeah. It's a great turnout for our Spon Con. Thank you for coming to a sponsored session. This is the most people I've ever seen come to a sponsored session, so I feel pretty good. For those of you who don't know us, my name is Josh. This is David Strauss. We're co-founders of Pantheon. We were there from the beginning, built a lot of stuff together, and now we're working on the next generation. And if there are any direct competitors in the room, can I just ask you to not listen? We are going to explain what we're doing with the next-generation architecture, and it is a little like, if you want to copy our whole business and try to beat us to the punch, this is the playbook. We do this in the spirit of openness and transparency, trying to show the way, because there are lots of people who are trying to solve the same WebOps-type problems we are but can't use our platform for whatever reason. So if you have to do it yourself, this is not a bad place to start. It's all about moving on up. So that's the theme: how are we reinventing ourselves, or rather, how are we in the process of reinventing ourselves, on GCP? For those of you who don't know about Pantheon, we emerged from the Drupal community, founded in about 2010. Today, we have around 20,000 active users that are regularly engaging with the platform: building websites, launching websites, maintaining websites. The websites running on Pantheon reach about a billion unique visitors monthly, a little bit over that now, actually. We power Drupal sites. We also power WordPress sites. And we're now powering JavaScript-based frontends, whether they're attached to a Drupal or WordPress backend or just running on their own.
And our engineering team has grown to over 100 full-time engineers, plus designers, product managers, QA, and more. So we've got a lot of momentum behind this thing that we've been working on, and we're very proud of it and happy to share it with all of you. David, I don't know if you want to add anything. No, I think that sums it up great. OK. And we've got cool new colors in our graphic design scheme now, which I'm super thrilled about. This is way better than just yellow and black. So we'll do a little bit of history, just to set the stage and context. We had to create a really significant amount of net new technology in order to deliver Pantheon originally. Between 2010, when we first got started, and 2013 or 2014, we invented a lot of stuff. We didn't have the camel-case word for it at the time, but we were trying to deliver this idea of WebOps, which is: hey, everybody should have a totally scalable production environment, and everybody should have non-production environments that are available on demand and that are the same as the production environment. And you can't do those things if you're delivering a platform in the old way, which at the time, and it's still the state of the art in many cases, was slinging virtual machines and building clusters. It's just not possible to replicate architecture affordably at that scale. That's your point about generosity, I think. Yeah. A lot of it we conceive of today from a different, more financialized perspective, but from the beginning it's always been this concept of being able to be generous with customers, because if the infrastructure costs can be managed well enough by us, then we can be generous to customers in terms of building tools like multi-dev, because there's fundamentally an easier problem to solve around the continual cost of that infrastructure and the capacity planning for that infrastructure.
And we've been able to get our efficiency to some really interesting levels with kind of scale-to-zero-style tech around the stuff that we've been doing for 10-plus years now. Yeah, and at the time that we started, we had to roll our own for an awful lot of things. We wanted to be able to sustainably give away free sandbox sites. We wanted to be able to give people as many development environments as they needed, without having to charge extra for those things. In order to do that sustainably, not just lighting a bunch of VC money on fire, you have to figure out technology that lets you do it in a sustainable manner. So we had to dive deeply into container technology. At the time, Docker didn't exist; it was a different startup that hadn't pivoted yet, called dotCloud. We built, and still run to this day, honestly, our own Git infrastructure, because at the time, in 2010 and 2011, GitHub hadn't won yet. We were pretty sure that Git was the winner, but GitHub was one of a few options, and we couldn't force people to sign up for another product just to use ours. When we first started, we used Rackspace as our underlying infrastructure provider. We did that because AWS was slow as balls at the time: very, very poor disk speed and not-so-great CPU speeds. And we wanted Drupal to be fast. So we put in extra effort to use a more old-school infrastructure provider, because they could give us solid-state disks and faster CPUs. The orchestration of this involved marshaling an army of over 500 Jenkinses. If anybody's used Jenkins, it's a pretty cool tool. Using 500 of them coordinated together is also pretty cool, but it's a lot of work. The Hudson project finally closed up shop, by the way, for anyone old-school enough to remember. RIP Hudson. We can pour one out. Yeah, Larry Ellison finally did it in.
And last but not least, we had to write our own file system, which was a surreal experience that continues to be surreal to this day. It was a moment where we realized that there were no good network-mounted file system options for the scale and the agility we were looking for. We sat in this room and it was like, do we have to write our own file system? Yes. Yes. So we did all this stuff, right? And we're trying to get rid of as much of it as possible, and we are making progress getting rid of it. One of the things that we did originally was integrate Varnish into the stack: every website had a Varnish in front of it, because that's the way it should be. But that's something we don't really want to run if we can help it, and wouldn't it be better if Varnish were distributed around the world rather than just co-located with the same stack of resources serving your Drupal website? So that's what we did with Fastly. We partnered with Fastly and Let's Encrypt starting in 2016. So you get HTTPS out of the box as soon as you point your domain name at Pantheon, and it's globally distributed, and the cache invalidation and all that stuff just works. And this is actually a way for us to reduce our operational overhead and lighten our burden while delivering more benefit to developers and customers. Yeah, and one of the biggest things I like to point out about the value we've achieved here is that it's not just an additive thing of one plus one equals two. This is more like one plus one equals three with the integrations that we have, because you don't want HTTPS deployed to just your origin for your main website. You want it deployed to the edge. You want the integration between the cache invalidation and your content sources.
So we've been able to ship a lot of integrations that, no matter how big the budget has been for a project, I don't think have ever been done with Drupal outside of the sort of integrated stack that we ship this stuff on. Yeah, I'm sure other people have done it on a one-off basis, but we make it easy out of the box. The other thing we did was realize, okay, Rackspace isn't it. And they told us that too; they were basically getting out of the business. They ran up the white flag and said, we're just gonna be a support company, we're helping all of our customers move to these clouds, can we help you? And we said, no, we'll figure it out on our own. And we moved all of our workload from Rackspace to Google Cloud Platform, which we're gonna talk about more in depth here. The bulk of that migration occurred in 10 days. The time scale on this is not July 2010 to July 2022; it's just July 10th to July 22nd. Summer of 2017, we moved like 45,000 active websites in two weeks. A lot of work and planning went into that, but it's an example of the operational efficiency we get with our platform at the level that it's abstracted. And it was also really good for us to move up to an infrastructure provider in GCP that could offer us a lot more than just slinging fast virtual machines. I don't know, David, you wanna talk to how we chose? Why not Amazon? Well, I feel like we will get into a lot of the details of that, but fundamentally, GCP is built from the opposite direction of AWS, in the sense that AWS started off with virtual machines, started off with cloud storage and block storage, and then built all the services upwards from that. Google, on the other hand, was building upwards from different foundations: foundations like Bigtable and Borg, things that were already internally utility-managed services.
So their conception of how infrastructure works, particularly data and on-demand compute, is just fundamentally different, in the sense that it's less about stamping out footprints of infrastructure and then dedicating them to a purpose, and more about reconceiving the implementation of infrastructure so that it fits a model that is scalable and deployable on a more dynamic foundation. So yeah, Amazon made the smart move to meet the market where it is: move all your machines to the cloud as virtual machines. But Google is playing a longer game, and we think that has the potential to leapfrog them, and Google is doing so with some of the services they offer. And really, the interesting thing is, I'll just back up for a second: this graph was the graph of our move, but it was a lift-and-shift move. We went from virtual machines at Rackspace to virtual machines at Google, with the idea of moving providers, making sure everything still worked, making sure all the networking was good, and then we would thenceforth be able to invest in modernizing things as we were able to leverage things from Google. We did a lot of that internally over the first few years, and frankly our internal engineering organization is still getting benefits from being more Google Cloud native. But we're finally getting there for our customers, where we're able to leverage higher-level things at Google that can deliver the same value that Pantheon delivers with its proprietary technology, and potentially more, which is what we're gonna talk about. Starting with this idea that, as we deliver in this next-generation context, we can draw a Google project box around every single customer, which gives us a future ability to be flexible, which is really, really enticing and interesting. And this is, again, something that would be hard to do with a different cloud provider.
It was kind of a surprise to us that we could do this, because we basically started designing these tools, looked at various quotas and limits on GCP projects, and were like, the only way we could possibly make this work is by spawning thousands of projects on GCP; there's just no way that could happen. And then we talked with them and they were like, no, that's totally cool. So we have built our whole approach around some of these new products, around basically stamping out extremely isolated infrastructure at the project level, but built from these scale-to-zero foundations that allow a mix of isolation and efficiency. Yeah, so the benefits will be that we'll have flexibility and we'll be able to offer stronger guarantees to customers. Right now, today, with our proprietary technology, we have one big platform that all of our customers share. In this next-generation model, we'll be able to give people much stronger guarantees around their resources, their network space, their data storage. And Pantheon is very secure; I have no real worries about people getting at each other's data on our shared platform, but you gotta kind of take my word for it. And you all know I'm trustworthy, but not every CIO knows that I'm trustworthy, and they've only got 30 minutes to make the decision. So it's way better for us to say: Google's got it. The level of assurance they can bring is significantly higher. And we didn't take these approaches previously because a lot of them would require stamping out pretty lightly used infrastructure on a wide-scale and cost-inefficient basis. It just doesn't make a lot of sense to deploy infrastructure that's gonna be used five percent of the time, or at five percent of its capacity, on an exclusive basis. But we're no longer forced to choose between these things. Yeah, we can kind of have our cake and eat it too, which is great.
And the other thing that's nice about this is not just the efficiencies David's talking about, like costs and our ability to offer things for free, or not have them be metered, like with additional environments, but also the agility and quickness with which we're able to deploy these things. Again, when we were first designing Pantheon, we wanted to give the people on these teams doing the WebOps as close to a real-time experience as possible. It's not instantaneous, but you can spin up a new project in just a couple of minutes and get another environment in 30 seconds or less. That sort of speed to provision is now also where these cloud platforms are. Whereas, I haven't actually run any benchmarks, but stamping out a new virtual machine from a machine image can still take one to five minutes, sometimes an indeterminate amount of time, which can be really frustrating for users when you're waiting for a thing to provision. But let's get into what it is we're actually doing. The first thing we're doing is embracing Cloud Build as a cornerstone of the new architecture and the new developer experience. Importantly, this means that you will no longer be forced to use our janky Git infrastructure that we stood up in 2011 and haven't really modernized since. You will shortly be able to bring GitHub and friends to the equation, which is what everybody wants now. We realize that our Git infrastructure is in the way, and we're gonna get it out of the way so that people can bring whichever of the popular social coding platforms they prefer, connect it to Pantheon, which will connect to Cloud Build, which will run all the processes to deploy code when it's pushed, when a pull request is created, when a branch is created, et cetera. I don't know if you have anything to add.
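To make that concrete, here is a minimal sketch of what a Cloud Build config driving this kind of push-to-deploy pipeline could look like. The step images, service name, and region are illustrative assumptions, not Pantheon's actual configuration; only the `$PROJECT_ID`, `$SHORT_SHA`, and `$BRANCH_NAME` substitutions are standard Cloud Build variables.

```yaml
# Hypothetical cloudbuild.yaml, illustrative only.
steps:
  # Install PHP dependencies for the site being built.
  - name: 'composer:2'
    args: ['install', '--no-dev', '--optimize-autoloader']
  # Package the site as a container image tagged with the commit SHA.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/site:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/site:$SHORT_SHA']
  # Deploy one Cloud Run service per branch.
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'gcloud'
    args: ['run', 'deploy', 'site-$BRANCH_NAME',
           '--image', 'gcr.io/$PROJECT_ID/site:$SHORT_SHA',
           '--region', 'us-central1']
images: ['gcr.io/$PROJECT_ID/site:$SHORT_SHA']
```

A trigger wired to a GitHub repository would run this on every push or pull request, which is the workflow described above.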
I'm just trying to think of a joke about us opening up Bazaar again as a coding platform. That's right, secret Subversion support. No, never again. So what happens in Cloud Build? You can think of Cloud Build as a replacement for the army of Jenkinses. It does a build process and will deploy. It will deploy to Cloud Run. Cloud Run is ready, we think, and our early testing confirms this, to replace the number one thing we spend money on at Google right now, which is our massive container matrix that runs all the PHP environments. And this is something, again, that we really can only get from Google. The equivalent, nominally competitive tech from the other cloud platforms is still one or more generations behind, primarily in the ability to do things like scale upward and downward, in the sense that we can deploy things onto Cloud Run, have the container image available, and have it actually scale down to zero containers deployed. This is less important for production environments, where we might actually still force a minimum higher than zero. But when it comes to pre-production environments and our ability to provide features like multi-dev, it's important to be able to deploy a stack and not have that stack sitting around doing almost nothing except when the developer's working. Yeah, and to be able to create more on demand and not have resource limitations around that stuff. Cloud Run also performs really well in all of our benchmarking, and it's only recently gotten the maturity it needs to actually handle the full use case that a Drupal runtime would require. And then, of course, there are some things that we had to invent ourselves in our container tech; it's not all Google. We're still deeply investing in our relationship with Fastly. They have a new product, which is called Compute@Edge.
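The scale-to-zero behavior described above is expressed through Cloud Run's autoscaling annotations. A hedged sketch of a service spec follows; the service and image names are made up, but the `minScale`/`maxScale` annotations are the standard Knative-serving knobs Cloud Run honors.

```yaml
# Hypothetical Cloud Run (Knative) service spec, names are illustrative.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: multidev-feature-branch
spec:
  template:
    metadata:
      annotations:
        # Pre-production: scale to zero when the developer isn't working.
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/example-project/php-runtime:latest
          ports:
            - containerPort: 8080
```

A production environment would presumably pin `minScale` to "1" or higher, matching the "minimum higher than zero" mentioned above.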
Think of it as a way that anything you can compile to WebAssembly, you can deploy to Fastly, and it will run in the same computing environments where they run their Varnish. So basically, it's a way to deploy a lightweight application to now upwards of 80 points of presence around the world. And this will allow us not to retire our routing technology, because we're still gonna have an application that gets deployed there, but to really move it, again, up the stack. That wasn't a bullet on the other slide, but we had to figure out a routing approach that would allow incoming requests to Pantheon to be sent to the right ephemeral container instance on our proprietary matrix. And that's, depending on the time of day or week, between 800,000 and 1.2 million different runtime containers. There are not a lot of regular load-balancing tools that will take that job. So we had to invent some of our own technology there for load balancing and managing self-healing and other stuff like that. We'll be able to push that up to the edge. And it means that a request will come in, get TLS-terminated, go through the caching configuration, and then ultimately, if it misses the cache, get routed directly to the Cloud Run container pool that will serve that request, without having to go through any additional layers of, say, multi-tenant load balancing. Yeah, and it lets us do pretty neat things in the future, like potentially targeting load geographically and being able to actually spread applications out across wider areas. It'll be pretty cool. I would also say, we're not gonna use it, but if you're thinking about large-scale routing, open source is also catching up to what we had to invent ourselves, this tool we called Styx, which takes a request from the land of the living to its, what did you say?
Well, to its final destination. To its final destination. Yeah. The stuff that they're coming up with now with Istio and parts of the Anthos framework can handle that type of scale now, or they're getting close. So if you have that type of problem, you don't have to use Fastly or Compute@Edge; there's an open source solution that's pretty good in that regard as well. I think we actually kind of anticipated Web3 with Styx, because the whole concept is that you're supposed to pay a token to the person who's running the, I forget who runs it, but in the mythology, you pay a token for your request to go through. Right, yeah, to get over. I think it's a fungible token, though. A fungible token, okay. Yeah, okay. No, no, no. Anticipated, anticipated. We're riffing; it's a joke. So then, okay, behind the runtime, behind the PHP environment, there's all the other stuff that Drupal will need to talk to to actually serve content. And that's frankly more commodity, right? Cloud-based SQL: lots of people have that. A key-value store: lots of people have that. A search backend. We're really, really thinking about making Elasticsearch our go-to rather than Solr for the next generation. There's just much, much wider support for Elasticsearch in the broader world, off of Drupal Island, and I think we could probably help level up some of the search capabilities within this community. I'm not saying this is for sure, but we're definitely considering it. But these are all things that we can just assemble and orchestrate from Google Cloud without having to manage the underlying infrastructure and machines. The punchline on all this is that we can get more of our engineering organization up the stack, which means they can focus more on things that are gonna matter to you versus just making sure the platform is stable.
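The routing problem Styx solves, mapping an incoming request to the right ephemeral runtime out of a very large pool, can be illustrated with a toy sketch. Everything here is hypothetical (the routing-table shape, hostnames, and affinity scheme are not Pantheon's actual design); it just shows the core idea of stable, table-driven request routing.

```python
import hashlib

# Hypothetical routing table: hostname -> pool of per-site runtime endpoints.
ROUTING_TABLE = {
    "example-site.pantheonsite.io": ["10.0.1.5:9000", "10.0.1.6:9000"],
    "another-site.pantheonsite.io": ["10.0.2.7:9000"],
}

def route(hostname: str, client_ip: str) -> str:
    """Pick a runtime container endpoint for this request.

    Hashes the client IP so a given client deterministically lands on
    the same container (cheap session affinity), while different
    clients spread across the pool.
    """
    pool = ROUTING_TABLE.get(hostname)
    if not pool:
        raise LookupError(f"no runtime pool for {hostname}")
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(pool)
    return pool[index]
```

A real implementation adds health checks, self-healing, and live table updates, which is where the hard engineering mentioned above actually lives.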
Which of course everyone cares about, but it's not that strategic. Yeah, not much to say. Not this one. Yeah. And then the nice thing about Google is they have this cool marketplace capability, which lets you bring in other services. So if you are really into Solr, SearchStax is there. They're great; they do a really good job at delivering Solr as a service, and they're available through the GCP Marketplace. So in this future world of Pantheon, if there's something that you need that we're not providing out of the box, you'll have access to this marketplace to be able to get it and have it plug into, again, that project that's gonna surround your application on Pantheon. And it will no longer be this somewhat more complex thing of doing weird secure-gateway stuff to connect other services. And this also goes for the other cloud services at Google. For instance, a lot of people are using Google's Bigtable. If you do anything with even medium data, let alone big data, Bigtable is just fricking awesome. And BigQuery is even more awesome, because it gives you this great interface on top of it, and you can write SQL queries that have embedded machine learning models in them to do predictive modeling and so on. It's really awesome stuff. And we have some customers that are really leveraging that as part of their web mission and their web projects. And it would be way better for them if they could just plug that into Pantheon versus having to do a rather convoluted data extraction and load process today. Yeah. Simply by conceiving of our projects and deployments in a way that is more legible to GCP and its integrators, those services become available much more directly off the shelf, both to us and our customers. And then we return to the file system, which we codenamed Valhalla when we created it, and we're still probably gonna have Valhalla. Yep.
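The "SQL queries with embedded machine learning models" point refers to BigQuery ML, where model training and prediction are plain SQL statements. A hedged sketch, with entirely made-up dataset, table, and column names:

```sql
-- Hypothetical BigQuery ML example; dataset and columns are illustrative.
-- Train a logistic regression model on past web sessions.
CREATE OR REPLACE MODEL `mysite.analytics.conversion_model`
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  device_category,
  landing_page,
  session_duration_seconds,
  converted AS label
FROM `mysite.analytics.sessions`;

-- Then score recent sessions inline, directly in a query.
SELECT *
FROM ML.PREDICT(
  MODEL `mysite.analytics.conversion_model`,
  (SELECT device_category, landing_page, session_duration_seconds
   FROM `mysite.analytics.sessions`
   WHERE session_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)));
```

No separate training pipeline or model server is involved, which is the "great interface on top of it" being praised above.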
We have done an experiment on this: we tried out Google's off-the-shelf file system, which is Filestore. It works great, but it has a minimum of two terabytes, has no copy-on-write, and has a whole bunch of other limitations where, yeah, we can get a site running, but to truly provide that Pantheon-grade experience from a workflows perspective, and the ability for us to just spawn these environments out, this is probably one of the few places where we're gonna have to keep some of our own tech. So yeah, we're evaluating. We've got like a red-phone line to the folks at Google now, because we've worked with them for quite a while and we're partners, but I don't think they're gonna be able to solve this problem on our timelines. We'll probably end up having to solve it again. That's okay. And the other thing is, this is our next-generation Drupal infrastructure that we're working on, but it's also the current generation of the infrastructure we're using to serve JavaScript frontends today. You can see it at the booth; Steve's got a great little demo of what this looks like. You connect your GitHub repository, it goes through Cloud Build, deploys to Cloud Run, and we handle the routing at the edge. We don't have any of the content services in here, there's no database or file store or anything yet, but this toolchain is being used in production today by customers who are running JavaScript frontends on Pantheon. It's a handful of customers for now, because we're still in the very early phases, but that's gonna open up a lot over the summer. And we're gaining confidence in this architecture because we're putting it into production now, and we expect in the next year or so to start to put CMS sites on the same type of architecture in production, and eventually it'll become our default, or probably our default or go-to, architecture.
And we're keeping some of the Pantheon goodness in this process too. Rather than it being a dashboard where you see your Git branches from our repositories and can then promote them into multi-dev environments, we have it tied in with the pull request process. So you create your branch, you do a pull request, that fires up a new stack out in Cloud Run after doing the build process, and then you get a URL so you can go to the stuff that represents that branch. So we're still keeping the whole fork-the-stack kind of capability here. Yeah, and this involves us owning different pieces of technology. So now we run a GitHub app, and we have a whole abstraction layer so we can do GitHub, Bitbucket, and GitLab, and we have to manage events across all these things, and we have a whole orchestration layer. We had an orchestration layer before, but it's a more complicated orchestration layer now. But it's better, because we're moving our engineering resources closer to the things that you and your customers really care about, and we're able to offload or outsource a lot more of the drudgery-oriented or high-stakes, security-sensitive labor to Google, who are pretty good at this, so we can trust them as a partner. And I think it's a way for all of us to continue to evolve and get better at the work that we do. I believe that is the presentation. We just sort of whipped through it. We're happy to answer any questions you have about this content or anything else; if you just want to jibber-jabber, we're happy to do that too. But does anybody have a question? Show of hands? Yes. So the question was, is there a difference in pricing when it comes to the Jamstack approach versus doing things the traditional way in Drupal? The answer is yes, and I don't know exactly what it is yet, because that's one of the things we're figuring out.
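The event-abstraction layer across GitHub, Bitbucket, and GitLab can be sketched as a normalizer that reduces each provider's push webhook to one common shape. The field paths below reflect the providers' documented push payloads, but the function and output shape are illustrative assumptions, not Pantheon's actual code.

```python
# Hypothetical sketch of normalizing push webhooks from three providers
# into one {"branch", "commit"} shape an orchestration layer could consume.

def normalize_push_event(provider: str, payload: dict) -> dict:
    """Reduce a provider-specific push webhook to a common shape."""
    if provider in ("github", "gitlab"):
        # Both use "ref" like "refs/heads/main" and "after" for the new SHA.
        return {"branch": payload["ref"].rsplit("/", 1)[-1],
                "commit": payload["after"]}
    if provider == "bitbucket":
        # Bitbucket nests branch info under push.changes[].new.
        change = payload["push"]["changes"][0]["new"]
        return {"branch": change["name"],
                "commit": change["target"]["hash"]}
    raise ValueError(f"unknown provider: {provider}")
```

Everything downstream (firing up a Cloud Run stack, posting the preview URL back to the pull request) can then be provider-agnostic.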
So, show of hands: how many of you have used Netlify or something like that? Yeah, OK. So I'm very interested in understanding that. I think they have a great product, and basically we want to compete with them and bring Drupal and WordPress to the party too. I think the way that they price, based on how many seats you have, how many environments you create, how many build minutes you use, is something people find frustrating, because it's hard to predict and hard to understand. It is tied to their underlying infrastructure costs, so I get why they do it, but it doesn't seem particularly good for customers. So we're in the process of figuring out what pricing for a good Jamstack product looks like. And I think we're gonna try to keep it simple. Certainly for simpler projects, the cost basis should be low, because, especially if it's a static site or something like that, it's pretty easy to run and scale. But we don't have that 100% figured out yet today. We're open to hearing people's thoughts about it, for sure. Yes. Is there a next-generation Varnish on the horizon? I don't think that there's a next-generation Varnish in the sense of a new caching technology. Honestly, from a performance standpoint, if that's what you're thinking about, the work is more in optimizing how people actually build their frontends, because Varnish does a really good job. It's hard to imagine something being that much better. Fastly is Varnish in a local point of presence, and it will send you the bits as fast as it fricking can in response to your request. So there's not much upside to be gained in sending the bits faster or from a closer location. But the contents of those bits and how they're interpreted by a browser is where the battle for performance is being waged right now.
So it's about understanding how to make that payload as small as possible, ensuring that all the images are optimized, that there's no render-blocking JavaScript in it. It's more about actually thinking about it from how a web browser or user agent runs and optimizing the execution of the payload, not the delivery, at this point. It's a good question, though. And in case this is at all flavoring the question, we are still continuing to partner with Fastly for the delivery infrastructure for decoupled frontends too. So whether you're deploying a Drupal site to Pantheon or a JavaScript frontend in front of it, they will go through the existing Varnish-flavored infrastructure for caching purposes. The question was, does the current Composer build process already use Cloud Build? And the answer is: sometimes? Yeah, I think it's sometimes. We have an abstraction layer above that build system that is able to use different delivery backends, among them Cloud Build, and I'm not sure where we've turned the knob on that yet. It's less of an implementation thing and more of a cost and isolation-control question. Yeah, when we first implemented it, the answer was no. And what we've been doing is looking at ways to embrace moving up the stack, moving on up, and leveraging configuration and implementation around Cloud Build as a generic job runner, which it can totally do. So today, I'm not sure if 100% of the Composer builds go through that process, but that would be our aspirational goal for sure. I can say 100% of the builds for our decoupled frontends go through that, and those builds happen in the same project that they get deployed to. There was a question here, yeah. The question was, are we planning to continue to support upstreams generally as part of this? The answer is yes.
We didn't talk about it a lot in here because we're mostly talking about the hosting part of Pantheon, but the value propositions we bring that are actually differentiated from hosting, in my opinion, are about productivity, principally multi-dev, dev-test-live workflows, stuff like that, and then the ability to manage a portfolio at scale, which is really the ability to have one code base that you can deploy a hundred times or a thousand times and manage a release process around, and so on and so forth. And that we're 100% committed to keeping. The particular details of, okay, how do we link a GitHub repo, and are we helping people manage their own GitHub repo hierarchy, or are we still last-miling it? I'm not sure yet. We have to get through a lot of the details and product development there. But that value proposition of being able to manage many, many, many instances of the CMS that are mostly the same, although potentially a little different as needed, is gonna be something that we're very focused on continuing to deliver and hopefully improving. Frankly, when we have this next-generation build pipeline in place, we should be able to make real products around release management. People do this on their own with Terminus today, but wouldn't it be cool if Pantheon let you tag three sites as your canary sites, auto-deploy to them first, and have Autopilot test it for you? Wouldn't it be cool if there were some other sites you could mark as hold-these-until-the-very-last, because we know they're heavily customized or highly sensitive sites? Basically, helping people do common deployment workflows at scale with a kind of opinionated approach, in the same way that we have an opinionated approach to releasing changes for a particular site.
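The canary-and-hold release idea above can be sketched as ordering a site portfolio into deployment waves. The tags, site names, and function are hypothetical, not a real Pantheon API; they just illustrate the opinionated ordering being described.

```python
# Hypothetical sketch: group a portfolio of sites into ordered
# deployment waves, canary sites first, held sites last.

def deployment_waves(sites: dict[str, str]) -> list[list[str]]:
    """Return waves of site names in release order.

    `sites` maps site name -> tag ("canary", "default", or "hold").
    Sites within a wave are sorted for deterministic output.
    """
    order = ["canary", "default", "hold"]
    waves = []
    for tag in order:
        wave = sorted(name for name, t in sites.items() if t == tag)
        if wave:
            waves.append(wave)
    return waves
```

A release tool would deploy wave by wave, running automated tests between waves before letting the rollout proceed.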
So long answer: it's going to get even better. Short answer: absolutely yes. Is it a lot faster? There's not a lot of gap between the compute resources that we're already deploying today and the kind of compute that you're going to get on something like Cloud Run. CPU is usually the bottleneck on the PHP side. On the database side, it's usually memory and I/O throughput. We may see some improvements through Cloud SQL, in the sense that there's some joint optimization between the persistence layer and the database engine on top, but from our own testing, we're actually seeing pretty much parity performance. One of our earliest evaluations of this stuff was: wire this all together, deploy a Drupal site to it, throw some New Relic on there, and develop a load test plan that touches the different aspects, everything from file creation to database writes to regular content creation in the database. And we benchmarked it, and it's basically at parity. This is definitely not motivated by unlocking a new tier of performance. It's more about the capabilities of the infrastructure, the reliability of it, modernizing aspects of the developer experience, and ultimately being able to refocus some of our efforts higher in the stack. Is there a lot of speed in those workflows? Oh yeah. I'm not going to promise, because again, we are going to have to build a more complex orchestration layer than we currently have today, but if we do a really good job there, it should be able to give you new environments faster. And to be totally transparent, don't point and laugh too much: some of the bottlenecks in that experience are actually in the user experience part, in the dashboard. It's our own JavaScript app that's not optimized, and we're replacing that too, but it's not really part of this story.
So yes, the developer experience and how long it takes to go through these workflows: we absolutely want to bring the time down on that. We're absolutely looking at what build time looks like with Cloud Run. There are some things that we can do faster, and there are some things that might get just a little bit slower, but we're taking a really close look at that and doing our best to make sure there's no degradation. And in some cases we may choose confidence over speed, in the sense that we would very likely choose an approach that is 99.9% successful in its completion rate over one that's faster but breaks 10% of the time. Yeah. The other thing I would say is almost certain as we get to deploying this for CMS sites: the deployment pattern we'll take as a default will be similar to the one we're doing with the Jamstack project, which means there's no requirement to go through a Test environment before you get to production. You'll be able to go directly from a feature branch, if you want, to production. The first environment that spawns is attached to your default branch in Git, and then that is what you fan backwards from in terms of your development process. Yes. So the question is maybe about the demo that we have at the booth: what is this thing you're talking about with personalization at the edge? I can give you my best answer on that. The approach we're taking to content personalization is one that's based on really observing what has not worked well in the market over the past decade. And that takes several forms. There are certain technical challenges that we can solve with the architecture, doing things at the edge instead of in the user agent, that are going to make it easier to guarantee a good user experience.
There are certain things that are strategic. You get a lot of people who buy a personalization tool without a clear use case, and lo and behold, a year later they're not doing much with it. That's something our marketing and sales teams can help guard against, and we can try to give people a more opinionated product. But in terms of what we're actually doing there: the product we have today is really a software development kit. It assumes that there's going to be significant implementation work done, because you need to wire this up to your content editing stack in Drupal. Are you a Paragraphs shop, or something else? What we want to do is make it as short and easy as possible to connect. You can think of the Pantheon component as all the tricky wiring between the CMS and the edge to do this orchestration: cache the variants, identify audiences, and deliver the right variant. We want to make that something people don't have to think about too much. But you do have to think about where your content editors see the option to switch from one experience to another. And we've talked with enough people, and there are enough diverse opinions about the right way to do this, that we didn't feel like the place for us to start was to say: here's the thing, this is what your editorial users must use. I think we'll probably get to a point where we have something that's like, if you want it, you can use it, but it will be built on top of, in kind of the Drupal way, an SDK-level module that you can do your own custom implementation on top of. Because some people want to vary each paragraph. Some people might just want to switch to a whole different page. There are a lot of different use cases. So we're kind of in learning mode, to be totally honest.
We're looking for partners who have projects with a clear use case, so we can help them deliver quickly on a successful use case, close the loop a couple of times, and then we'll develop stronger opinions. Yeah, we basically want to improve that dynamic content assembly process at the edge. Which reminds me: how many people in here have successfully deployed ESI as part of their websites? There we go. How many people have had a conversation about using ESI as part of their website? Okay. Well, it turned out to be so ineffective that I think it actually fell out of the Overton window of plausible performance techniques for Drupal sites. It's true. But there was an era where we were all about this idea of edge-side includes: splicing together content from individual fragments that are mixed and matched at the edge. And the problem has always been actually making productive use of it. That's really what the question comes down to for some of these personalization and dynamic edge tools. Not "does it technically work," but "is this something that will actually fit into the workflows that teams have, and allow them to get business value with the resources and expertise they have on the team?" Yeah. Brittle. Yeah. The last thing I'll say is the fun thing about the edge integrations roadmap: once we get confidence in the pattern we're using, and some confidence around what content editors, Drupal developers, and the non-developer content editor personas need to succeed, there are a bunch of adjacent use cases that are really fricking cool. There's "let's personalize this homepage for audience A, B, or C," like a returning visitor, or a person from the UK, or whatever. But that's actually technically no different from "let's deliver a paywall for non-paying subscriber customers."
So there's the idea of being able to deliver actually gated content, or quote-unquote logged-in experiences, that can scale to hundreds of thousands of concurrent users, because they're not really getting different content. They don't need Drupal to deliver the page to them, because the page they're getting is functionally the same as everybody else's in the same class of user. There are a bunch of really cool use cases we can knock out that are adjacent to content personalization. This is part of why Fastly is so popular in the media industry: it allows making these dynamic content delivery decisions at the edge, even paywalling content and other personalization or customization of it as it ships out, without having to go back to origin for it. Anyway, we digressed. Other questions from the room? I'm gonna do my, oh, there we go. Is there a reason why? Actually, I hadn't thought about the fact that there's a preponderance of afterlives. The mythology thing is just because it's Pantheon and we thought it was clever. And I'm turning 43 next week, and we started this thing 11 years ago. One thing I've learned between age 32 and 43 is that if you are being clever, the odds are very high that you're basically screwing over your future self. But yeah, we thought, you know, it's fun, right? You could name your projects after deities and so forth. But maybe there's something subconscious going on with Styx, Valhalla. I don't know, we don't have a Hades. So. That was sort of part of the origin of it. Yeah, because the whole idea around Project Mercury, the predecessor to some of the Pantheon stuff, was that there would be other things in that space named after other gods, and that we would assemble them together. Wasn't there a whole idea that there would be a CI one or something? Yeah, we had one. It wasn't very widely adopted.
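The core trick behind scaling a paywall or personalized pages at the edge, as described above, is varying the cache key by audience class rather than by user. Here's a toy sketch of that idea; the function names are hypothetical and this is not Pantheon's SDK, just an illustration of the caching pattern.

```python
def cache_key(path, audience):
    """Vary the cache key by audience class, not by user. Every visitor
    in the same class ("anonymous", "subscriber", ...) shares one cached
    variant, so the edge never goes back to the Drupal origin for them."""
    return f"{path}|{audience}"

def classify(request_headers):
    """Toy audience classifier: a paywall is just another audience.
    Real edge logic would verify a signed session token rather than
    trusting a plain cookie like this."""
    cookie = request_headers.get("cookie", "")
    return "subscriber" if "session=paid" in cookie else "anonymous"

edge_cache = {}

def handle(path, headers, fetch_from_origin):
    """Serve the per-audience variant from cache, hitting origin once."""
    audience = classify(headers)
    key = cache_key(path, audience)
    if key not in edge_cache:
        edge_cache[key] = fetch_from_origin(path, audience)
    return edge_cache[key]
```

With this shape, a hundred thousand paying subscribers hitting the same story cost one origin request, because they all resolve to the same variant.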
We had a Jenkins-based thing called Vulcan. It was the forge. So you've got your pantheon of different things. And yeah, again, cleverness. Name your company Pantheon: everybody we've ever hired into marketing at this company is like, what are you doing? You can't get that name. It's not available as any domain. You got a .io, good for you, great job. You're never getting the .com. You're never winning first rank in Google unless somebody pairs it with a technical keyword. But here we are, having been a little too clever by half. Yes. Thank you. Well, thank you for your moral support on our naming choice. And I'm gonna let David answer the question of "what about a web IDE," because he has thoughts. So yeah, pretty strong opinions, actually. One of the things about Cloud Run, as compared to our existing container foundation, is that it does not have much support for a mutable environment. There are ways we could kind of kludge one in there, but it's really time for us to revisit how this development experience works. And I've been setting my sights most frequently on Dev Containers, which is also the underlying infrastructure that runs things like Codespaces at GitHub. It's a way to specify a container, or a cluster of containers with a Compose file, so that you can have the whole constellation of services and fire them all up with one click on, say, a GitHub repo, or locally on your machine with Visual Studio Code. And I do say Visual Studio Code, because you have to use the official releases to have support for this. But it provides a lot of portability. It works as a cloud IDE, because Codespaces will fire it up and provide a VS Code experience in the browser. But also, if you have Visual Studio Code installed on your desktop, you can pull it down there and run the entire local development experience as well.
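A minimal Dev Container definition along these lines might look like the following. This is an illustrative assumption, not Pantheon's actual configuration; the service names and extension list are placeholders (the Dev Containers spec permits comments in this file).

```json
// .devcontainer/devcontainer.json (illustrative sketch)
{
  "name": "drupal-dev",
  "dockerComposeFile": "docker-compose.yml",
  "service": "php",                       // container the IDE attaches to
  "workspaceFolder": "/workspace",
  "customizations": {
    "vscode": {
      "extensions": ["xdebug.php-debug"]  // step-debugging support
    }
  },
  "forwardPorts": [80, 3306]
}
```

The same file drives both a Codespaces browser session and a local VS Code session, which is exactly the local/remote portability described above.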
And it's exactly the same experience in the sense of the container images and the IDE. What's really cool about some of these approaches is that it also allows tying in things like step-based debugging and other tools that have proven really hard to connect from cloud infrastructure to IDEs, because they often connect in challenging ways. So it's very likely that we will continue to explore that space, whether it's literally Dev Containers versus Gitpod versus Eclipse Che; those are all different approaches to cloud IDEs and portable development experiences across local and remote. And we're probably going to look at that pretty much from scratch, in the sense of how to provide a great dev experience. Yeah, it gets into two things that we feel pretty strongly about, that are true today of Pantheon and need to be true tomorrow. One is, inevitably, you build the product for yourself, right? So when David and I were first architecting the experience of Pantheon, we were very much shaped by our experience being technical leads at agencies, having to hop from project to project, often multiple projects in one day. And the overhead of having to recreate your local toolchain to come in and help someone debug something: we just didn't want that. In fact, at our various agencies, with our own hobby infrastructure, we had figured out ways around that. And it was very important to us that you could debug something on the platform quickly and effectively. So we need to keep that. The other thing we wanted to ensure gets at one of the big promises of Pantheon: you can deploy with confidence, right? The environments you're working with in pre-production should behave exactly the same way as the environments in production.
So if your tests pass, you can hit the button, and it could be the middle of the day on a Wednesday and you don't need to worry about it. There's lots of work for us to do to make sure that we're still meeting all of those guarantees with this new infrastructure, but we'll take that really seriously. That's a core promise we make to all of our users. Any more for any more? By the way, the stuff that we do on the IDE side will probably occur first for the decoupled support, because it does not have an SFTP mode, for the reasons I've already outlined around Cloud Run. Yeah, so that's another way to think about it. If you're curious to track what we're doing with this, you should get into the early access program for the decoupled stuff, because a lot of the developer experience we build out there will then be brought to Drupal and WordPress projects as we extend the platform to include all the other things those need to run. Also, to answer your other part: no, we did not cover that in the presentation. All right, I think we're good. Thank you all for your time. Thanks for coming to our Spon Con. Looking forward to building the next generation of the internet together. It's been a pleasure.