Think about how your organization delivers applications, not just now, but how that could change in the future. Today, delivery means coordinating specialized teams of people, unique configuration, and dedicated tooling at each stage. By connecting it all together, we're aiming for a stable, reliable pipeline: one that keeps new features developing quickly, monitors production continuously, and enforces security requirements at every stage to keep the process compliant. That maturity would allow us to clearly define and trust our non-failing system, while understanding when and how to respond to failure scenarios, including those we don't yet test against. But part of what makes this complex is that the pipeline doesn't just need to be stable, but also flexible. An essential expert at the packaging stage may leave your organization, or a tool you depend on may become obsolete, and how you piece together each stage needs to remain flexible enough to adapt. Enforcing compliant standards is one thing, but modifying a pipeline for new requirements is another: needing to switch to a different cloud provider, for example, or maintaining some apps on AWS while others run on Google Cloud. Balancing stability and flexibility against this complexity leads teams to see that some standardization is badly needed here. That means abstracting away tooling, cloud providers, secrets, everything, into a generalized internal platform. Something that's stable, but adapts to the reality of evolving dependencies, onboarding and offboarding, and how the needs of the business are inevitably going to change over time. But this isn't a small task, because doing DevOps has evolved too. There aren't a handful of concepts your teams need to understand to be both stable and flexible; there are hundreds. In the best case, there will still be 10 to 15 integrations per pipeline you'll need to manage forever. And things will continue to change.
More applications will depend on this internal platform, even those that use different frameworks and programming languages. That just means more tools to connect to a common API, more concerns to understand and address, and more time. A lot more time. Time that's spent either building this platform to address all the scenarios that are hard to anticipate about how software changes, or time spent reinventing the wheel endlessly, creating variations of your pipeline for every new use case. And all that time could be better spent on applications. Platform.sh provides a solution that addresses this complexity and strikes the balance of stability and flexibility you're trying to achieve internally, with one unified, secure, enterprise-grade platform. It's an abstraction that relies on Git, which you're already using. Commit your infrastructure and follow a few simple rules, and you get the packaging, provisioning, and deploying stages of your workflow taken care of for all of your applications, plugged into your existing pipelines. It provides the mature platform you'd engineer internally, giving you back time to work on your applications. Then you can leverage our ecosystem of tools like DDEV and Blackfire to move towards optimization, whether that be application performance, how you collaborate, all of your operations, or the environmental impact of your resource usage. Let me show you how it works, starting with our workflow. Here's the main configuration file for deploying a Drupal application on Platform.sh. We'll define how we want to build and the infrastructure to provision. We name the container and set it to use PHP 8.0. There's no patch version. Platform.sh manages these images, and when a security patch is released, containers just get updated in the background automatically. This helps because when I think about compatibility, I'm usually concerned with PHP 8.0 versus 8.1, not 8.0.1 versus 8.0.2.
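As a sketch of how that main configuration file can start, here's a minimal `.platform.app.yaml`. The container name and block values are illustrative, not the literal file from the demo:

```yaml
# .platform.app.yaml -- a minimal sketch; names and values are illustrative.
name: app

# Major.minor only: patch releases are applied automatically in the background.
type: 'php:8.0'

build:
    # Install dependencies with Composer at build time.
    flavor: composer
```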
I want to have these extensions enabled, then install my dependencies with Composer. For deploys, I'll activate Drush, rebuild the cache, apply database updates, and import configuration once Drupal is installed. I configure the web server and add a block that defines which services this application can access: a database and a cache service. I don't have to figure out how to deploy these services. I add another configuration file that defines them. So here's MariaDB and Redis. These are managed services, and I can add anything I need here from what's available with Platform.sh. Here's another block that says how much disk I need and which directories need write access at runtime. Best practice is read-only at runtime, and Platform.sh is going to leverage that to handle environments and build images later, but I can define those exceptions here. The last thing to define is our cron schedule. Before we commit all of this, I'll add one final file for a router container. In this case, it's just directing all requests to the app container. So, two things to notice here. First, all access has to be explicitly committed. An application can't access a service without a relationship, and the world can't see an application without a route. Second, when we did define those routes, we used this {default} placeholder. We'll see in a bit how this, with that read-only expectation, enables environment-independent build images. All these files are validated when we push, and Platform.sh provisions this cluster of containers to make up my production environment. Inside the app container, we have a built-in environment variable called PLATFORM_RELATIONSHIPS, which is going to provide credentials. Credentials I can then use in my settings file to connect to those services. So here's the question: in your pipeline, how many people are involved, and how long does it take to upgrade infrastructure? Say, using PHP 8.1 instead of 8.0.
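To make the pieces just described concrete, here's a hedged sketch of what those files can look like. The service names, versions, and disk sizes are illustrative:

```yaml
# .platform/services.yaml -- the managed services (sketch)
db:
    type: mariadb:10.4
    disk: 2048

cache:
    type: redis:6.0

# In .platform.app.yaml, a relationships block explicitly grants access:
relationships:
    database: "db:mysql"
    redis: "cache:redis"

# .platform/routes.yaml -- the router, directing all requests to the app
"https://{default}/":
    type: upstream
    upstream: "app:http"
```

Without the relationships block, the application has no way to reach the services; without the route, the world has no way to reach the application.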
On Platform.sh, every developer can, by creating a branch and updating this type definition. The new branch results in an environment, which is actually a byte-level copy of what we just deployed into production. Platform.sh recognizes the branch command, but then also detects that the hash for the current state of the branch matches production. Of course it does, we haven't pushed commits, but we also have a build image already available for that hash: the image we're using in production. So we can just reuse it for this environment. This is the environment-independent builds idea. Copies of the same code, the same infrastructure, and also a copy of all that production data in our services is now available in isolation. No tickets opened, just create a branch, and now we have a true staging server. I can edit the configuration to use PHP 8.1, update my dependencies, and push up to the environment to review the upgrade. I could connect this whole process to my pipeline in GitHub and make sure all my tests still pass, but this looks good. So let's merge and promote this to production. Here's the command, and here's the production environment now running on PHP 8.1. Similar to branches, Platform.sh identifies the merge commit, connects the unique hash with the build image from our staging environment, and reuses it. This is good. We've already tested this change using production data, and there's no chance of an unanticipated post-merge build error happening here. It's the same tested image, we're just reusing it. I forgot to ask before: with your own pipeline, would you ever deploy on a Friday? I hope that from what I've shown, you can see that on Platform.sh, you can. By treating Git as the source of truth, committing infrastructure, reusing builds, and tying branches to environments for code and data, we get true staging environments for every feature we work on. We didn't need to open a ticket to get them, and we didn't need to configure a bunch of external tools.
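The PLATFORM_RELATIONSHIPS variable mentioned a moment ago is a base64-encoded JSON map from relationship names to service endpoints. In a Drupal project you'd read it in settings.php, but the same idea sketched in Python looks roughly like this (the relationship name "database" and the endpoint fields follow the earlier example and are illustrative):

```python
import base64
import json
import os

def get_relationship(name: str) -> dict:
    """Return the first endpoint for a named relationship.

    PLATFORM_RELATIONSHIPS is a base64-encoded JSON object mapping each
    relationship name (as committed in .platform.app.yaml) to a list of
    endpoints carrying host, port, and credential fields.
    """
    raw = os.environ["PLATFORM_RELATIONSHIPS"]
    relationships = json.loads(base64.b64decode(raw))
    return relationships[name][0]

# Only runs inside a container where the variable actually exists.
if "PLATFORM_RELATIONSHIPS" in os.environ:
    db = get_relationship("database")
    print(f"mysql://{db['username']}@{db['host']}:{db['port']}/{db['path']}")
```

Because the credentials come from the environment rather than from committed code, the same build image can be deployed to any environment and still connect to that environment's own copy of the services.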
We just created a branch and started working on the feature. If we did need to rely on a ticket to get this environment, we'd also likely have to stick to the dev-stage-prod workflow we're all familiar with. But here, we can instead have as many environments developing in parallel as we have features. This, on its own, can add a lot of flexibility to how our organization works. It can also be difficult to anticipate the kinds of apps that we'll deploy. Sure, today it may be just Drupal, but next year we may want to decouple. Five years from now, we may be more interested in a collection of microservices written in PHP, but also Python, Ruby, or Go. Being flexible enough to do this with an internal platform means learning and integrating entirely different tools and vendors than we're using now. But everything about the workflow I've shown on Platform.sh works for every major language, not just PHP. And that means that if we want to try a new piece of technology, like decoupling Drupal with Next.js, we can just create a branch, add the app, and test that change. Here's an environment where I've added that new front end. We can experiment here in isolation until everyone has signed off on the change, and promote it to production just like I've shown before. If for some reason we decide to go with a different front end, or abandon decoupling altogether, no sweat. We can deactivate the experiment for now and revisit it later. This leaves us room to modify our projects as our business needs change. We can add more specialized applications, greater redundancy, more resources, and provide more environments for more developers as our organization grows, all from a single platform. Another thing that's difficult to anticipate is adding sites, sites that need to adhere to the same standards from the beginning. Defining and enforcing these standards across vendors becomes a source of complexity that needs to be accounted for in an internal platform.
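As a hedged sketch of what the decoupling experiment can look like in configuration, the routes file can split traffic between the new front end and the existing Drupal app. The app names frontend and app, and the api subdomain, are illustrative:

```yaml
# .platform/routes.yaml -- sketch of routing for a decoupled setup.
# "frontend" would be a Next.js app added in its own directory;
# "app" is the original Drupal application, now acting as the backend.
"https://{default}/":
    type: upstream
    upstream: "frontend:http"

"https://api.{default}/":
    type: upstream
    upstream: "app:http"
```

Abandoning the experiment is just as declarative: remove the route and the app directory, and the cluster goes back to what it was.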
The workflow I've shown here applies to every project in your organization. A fleet of projects, say all of your clients' sites, can share common environment variables, team access limitations, and even environments and code bases with a single API. This organization of three Drupal sites shares the same code base, and I want the same team of developers to be able to work on all of them. I could write a script that wraps around the user:add CLI command to verify and update Alice's access to production, staging, and development environments for every one. I can run it once and she's onboarded. Then I can define custom API endpoints for every project by adding a source operation to the code base they share, endpoints that give my team greater ability to schedule tasks every project needs to execute regularly. This operation updates Composer dependencies, running composer update and committing that change to a dedicated non-production environment. This script wraps around a CLI command to trigger that endpoint, something I can run within a pipeline to update dependencies for every project, say once a week. We can set this up to notify the development team in Slack when the update environment is ready for review, or auto-merge the changes when they've passed all of our tests. Another example would be a custom endpoint that leverages a shared environment variable, an upstream repo, to pull the latest changes from that shared code base. At some point, maybe we do decide to decouple and add a Next.js front end to every client site. Once I add the new variable to the part of my pipeline that audits and updates variables that are shared across every project, that operation can be scheduled to pull those upstream changes periodically. After that, we'll know that any new feature merged into our primary development code base for the organization will be dispersed to every client site in our fleet.
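A sketch of the onboarding script described above, wrapping the platform user:add CLI command. The project IDs, the email address, and the exact flag spellings are assumptions; check platform user:add --help for your CLI version:

```python
"""Onboard a team member across a fleet of projects (sketch)."""
import subprocess

# Illustrative project IDs for the three client sites.
PROJECTS = ["client-site-one", "client-site-two", "client-site-three"]

def onboarding_commands(email, roles):
    """Build one `platform user:add` invocation per project in the fleet."""
    commands = []
    for project in PROJECTS:
        cmd = ["platform", "user:add", email, "--project", project, "--yes"]
        for role in roles:  # e.g. "production:viewer", "development:contributor"
            cmd += ["--role", role]
        commands.append(cmd)
    return commands

def onboard(email, roles, dry_run=True):
    """Run (or preview, with dry_run=True) the commands for every project."""
    for cmd in onboarding_commands(email, roles):
        if dry_run:
            print(" ".join(cmd))       # preview what would run
        else:
            subprocess.run(cmd, check=True)

onboard("alice@example.com", ["production:viewer", "development:contributor"])
```

Run it once and the same access rules are verified on every project, instead of clicking through each one by hand.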
So, Platform.sh eliminates the need for you to build the internal platform that implements consistent standards across all of your apps. The packaging, provisioning, and deploying stages of your workflow are built in, giving you reliable environments without managing integrations between tools and vendors. For every environment, the only entry points are Git and SSH. You can't tamper with a live environment without pushing a commit, and you control which team members can push to which types of environments. Access, secrets, and other environment variables get rolled into the inheritance built into Git. Everything is explicit. We already fulfill the majority of your compliance requirements, and you can leverage this inheritance to enforce new ones across every project and for every cloud provider. There's no difference in the platform experience wherever your apps need to deploy to. Hopefully by now, I've been able to show that Platform.sh fills the role of what you might hope to build with an internal platform. It brings stability and flexibility that has a measurable effect on how teams collaborate, how often they deliver features, and how responsive the whole organization can be to vulnerabilities, evolving standards, and potential pivots in objectives that require new technologies and more applications. You can't optimize what you can't measure, and Platform.sh is all about optimization. You get a single API that allows you to scale, iterate, and govern every kind of application, so DevOps teams can focus on optimizing not just your operations and how your team collaborates, but also application performance and resource usage. Platform.sh provides infrastructure metrics to give insight into the trade-off between the resources you're paying for and the resources you actually need. You'll get notifications when it's time to increase them, and auto scaling for enterprise applications does it for you.
When traffic normalizes, you can quickly scale back down, giving you fine-grained control over the resources available at any time across your whole fleet. Blackfire provides even more insight to implement an optimization strategy in three parts: application performance monitoring, profiling, and performance testing. The APM allows us to define and respond to performance issues occurring in production, letting us know when something is wrong that needs our attention. It will identify problematic transactions now, and those that have a pattern of degradation over time. We can then use the profiler to get to the root cause of that performance bottleneck. It provides metrics for every transaction that occurs across the lifespan of a request, and even allows us to define custom metrics unique to our business logic. Once we've uncovered and fixed the bottleneck, performance testing paired with that fix ensures that future work doesn't ever introduce a regression. With these tools, we can think about a performance budget for all of our applications, an upper limit that's enforced by our tests and cannot be crossed by any new feature. We can better understand the state of our code and the resources it actually needs to function, putting us in a very good position. We're not just throwing more resources at performance problems anymore. That's cheating, and resources cost something. Money, of course, but across every app and every organization there's also a real, non-trivial impact made on our environment. Platform.sh is doing its part behind the scenes, optimizing regions, environments, and container density, but getting to this point of optimization for your organization is crucial so that you can help too. If you can make all of your applications faster by upgrading infrastructure easily, by scaling responsibly, and by adopting a continuous observability strategy with a well-defined performance budget and good measurement, it'll result in real change to your organization's impact.
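The performance-budget idea can be written down in Blackfire's build tests. Here's a hedged sketch of a .blackfire.yaml; the paths and thresholds are illustrative, not recommendations:

```yaml
# .blackfire.yaml -- sketch of an enforced performance budget.
tests:
    "All pages stay under the wall-time budget":
        path: "/.*"
        assertions:
            - "main.wall_time < 500ms"

    "The home page keeps its SQL query count in check":
        path: "/"
        assertions:
            - "metrics.sql.queries.count < 30"
```

Once assertions like these run against every environment, a feature that blows the budget fails its tests before it ever reaches production.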
And that's a great thing. So thanks for letting me talk about Platform.sh for a bit. I hope that you give it a try. If you have any questions, feel free to join our community of users and Platformers, all trying to write better code with less impact, and check the links in the description for more info. Until then, I hope you have a wonderful day, and deploy Friday.