the Continuous Delivery Foundation Interactive Landscape and what we can expect for the future in terms of the CDF. But first I think what we'll do is, I'll do some shameless promotion of myself. Again, I'm Tracy Ragan. I am the CEO and co-founder of DeployHub. We are a microservice management platform that really specializes in putting configuration management back into a microservice architecture. I'm on the board of the Continuous Delivery Foundation, and I was on the board of the Eclipse Foundation as one of the founding members, which was a great experience to learn about governance and open source. I live in New Mexico, I spend a lot of time with my horse and my dogs, and I am a serial volunteer. I do a lot of volunteer work for the community, and that's probably why I love open source so much. So let's talk just a little bit about the Continuous Delivery Foundation. It's a pretty exciting new foundation. Under the Linux Foundation, the Continuous Delivery Foundation seeks to improve the world's capacity to deliver software with both security and speed. We're always thinking about better ways to push and progress software through the lifecycle, and there is no better time than now to have this conversation; the CD Foundation provides a platform for it. Our goal is to work on establishing best practices, to propel the education and adoption of Continuous Delivery, and to facilitate cross-pollination. And when we talk about cross-pollination, we're not just talking about between vendors but also between end users. Our end user community is very, very important. So if we think about how the CD Foundation sees Continuous Delivery, we see it as an engineering approach in which teams produce software in short cycles, ensuring that software can be reliably delivered at any time. 
Now the rise in microservices and cloud-native architecture has changed how that Continuous Delivery process works, and that is what is so important right now in terms of this bigger conversation about how we're managing the pipeline, how it changes, and what we need to be thinking about for the future. So I know, I hear this all the time. I ask people about doing Continuous Deployments or Continuous Testing or Continuous Integration, and they say, yeah, we've got one and it's quite nice. And I know you have a pipeline, and I know you may have been doing Continuous Integration, most companies are, but your quest for perfection is not quite over. We have some work to do, and the whole idea of CI and CD will be shifting. So what is driving this change? I think what we should think about is modern architecture and how it really is different from our traditional monolithic approach. AI and machine learning need modern architecture. They need the ability to be fault-tolerant, self-healing, and able to autoscale. Think about the research on the future of employment and how susceptible jobs are to computerization. Now, while that's all really scary for many people when it comes to robots, robot baristas, and the automation of everything from accounting on down, it means that we're gonna have a pretty interesting road ahead of us as software developers. We have a lot of software to write in this new world. But the truth of the matter is, the way we put software together has now been broken apart. When it comes to microservice development, according to an O'Reilly survey, only 15% of companies report massive success with microservices. And there is confusion about how microservices are managed, in particular tracking which service is being used in which cluster and what the logical view of the application looks like. 
Or in other words, according to Randy Heffner, microservice development can fall apart without a coherent, disciplined, and managed approach. Now that approach is going to rely very heavily on the CD process. The CD process will be critical in how you manage your microservices in the future and figure out how this stuff is put together. In traditional methods, we have sorted out a lot of this business. But in a broken-down, decomposed microservice world, it starts to change. First of all, developers tend to struggle with sharing and finding microservices. DevOps teams can't necessarily see the entire picture. And what we're trying to achieve is some level of business agility as it relates to these ML and AI types of applications. Without being able to build a CD pipeline that can support many moving parts moving across the pipeline very quickly, we will fail. So think about it as taking a wine glass, taking a hammer to it, pointing to it, and saying, there's your wine glass. It's still there. The application is still there; everything we need is still there. We have a user that's actually using that wine glass, but it's all broken into small parts. So how do we start thinking about the CD pipeline in terms of managing that wine glass and all those broken parts? This is the biggest problem that I see with companies moving from a monolithic to a microservice approach. When you're monolithic, you have sorted out much of this stuff, and we base most of what we do on an application version. We build an application version, we test it, we track it, we deploy it, we track tickets against it, we do change requests against it. But in a microservice environment, we're moving away from that. I like to say we have thrown the baby out with the bathwater. We still need the application, and the application is still there. 
We just need to start seeing it as a logical collection of microservices and treat it the same. It's no less of an application if it's built as lots of microservices, but we have to start treating it that way. The key to understanding microservices, if you haven't started working on them yet, is that we've spent a lot of effort on managing our Kubernetes environments and made a lot of progress there. We now need to turn our heads and our focus to how we manage the applications running there. And to really emphasize the challenge here: microservices are immutable, unlike the way we used to do things, where we would take a jar file, deploy it, and copy over it. So if you're really working in a microservice environment, you're struggling with keeping track of how microservices connect to each other, and on the production side of the house, we're learning to track things in ways that let us start building out our CD pipeline to support it. So let's really talk about the pipeline, the landscape, and how that's going to start changing. Now let's talk about pipeline orchestration. Continuous integration and continuous delivery is what I like to refer to as pipeline orchestration. Pipeline orchestration is an important piece of what we're trying to address. And when I think about pipeline orchestration, I am talking about tools like CircleCI and Jenkins. 
These tools are really managing what you're doing within your pipeline. They don't do testing themselves, they don't do compiles themselves, but they orchestrate it. The orchestration of the pipeline will begin to change, and the way it changes is that we start having lots of workflows in the pipeline. Instead of managing one big jar file, you're gonna manage lots of little pieces, lots of little functions. So you may go from one workflow per application version, for example, to 10, 15, or hundreds of workflows that aggregate up to an application version. Now, what I'm seeing in the industry today is a shift to create better templating services so that you can support multiple workflows: if you have lots of workflows, you make a change at the high level, and from that change any child workflows pick up the fix. That's gonna be really, really critical for CD pipelines in the future. The other thing is events. This whole idea of event processing I find fascinating. I feel like it's going to be really critical, because if you have 15 workflows running at a time, that can work just fine, but if you're running a hundred or a thousand, you're gonna want events that can be parallelized, processing things all at the same time. So that's gonna change: look for templated workflows and CD pipelines that are event-driven. Among the tools we're seeing now, you can think about how Jenkins X has pushed everything to pipelines to create templated environments. Tekton is really an event-driven kind of solution. And then Eiffel is out there; that's a messaging kind of protocol. These are the kinds of tools we're gonna be seeing in the future that are really gonna change the way we see our pipeline orchestration. It's a pretty big shift. 
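To make the templated, event-driven idea concrete, here is a minimal Python sketch of stamping one shared workflow template out to many microservice pipelines that run concurrently. The service names, the stage list, and the asyncio structure are all illustrative assumptions, not any particular CD tool's API; a real system would trigger each stage from an event (a Tekton trigger, an Eiffel message) rather than simulate it.

```python
import asyncio

# Hypothetical stage list: every microservice workflow is stamped out from
# this one template, so a change here propagates to all child workflows.
TEMPLATE = ["build", "scan", "test", "deploy"]

async def run_workflow(service, results):
    """Run one templated workflow. In a real pipeline each stage would be
    an event-triggered task; here it is simulated with a yield point."""
    completed = []
    for stage in TEMPLATE:
        await asyncio.sleep(0)  # yield control, standing in for real async work
        completed.append(stage)
    results[service] = completed

async def run_all(services):
    results = {}
    # Fan out: one workflow per microservice, all running concurrently,
    # rather than one serial workflow per monolithic application version.
    await asyncio.gather(*(run_workflow(s, results) for s in services))
    return results

services = [f"svc-{i}" for i in range(100)]
results = asyncio.run(run_all(services))
print(len(results))  # 100 workflows completed
```

The point of the sketch is the shape, not the scheduler: one template, many child workflows, and parallel execution so a hundred or a thousand pipelines don't run single file.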
And I think the sooner we all get our heads around it and understand it, the better off we're all gonna be. Okay, software build and release. We used to have these awesome unsung heroes that sat in the corner and figured out in a very methodical way how our binaries would be created. They thought about what libraries should be brought down, what source code should be brought down, what compile flags should be used. We might have used something like Artifactory or a Maven repository to check out the transitive dependencies. But at the end of the day, we had a binary that we could trust. I used to tell everybody the way to freak out a developer would be to delete their executables, because sometimes it took so long to get that build together, the last thing you wanted was to delete all the executables. That is changing. We're still gonna have these unsung heroes, and I think most of them are shifting to the role of SREs, but in essence those builds are going away. The builds are gonna be different. Now think about how. They're gonna be smaller. In a microservice world, you're probably gonna have things like a Python function. Even if you're using a language like Go, it's gonna be compiled, but it's gonna be a small compile. You're not gonna be doing this decision-making to sort out what files and libraries should be put in and how you build your entire jar file, because that is actually done at runtime. All the linking is literally done through APIs at runtime. So your functions are smaller and your builds are much quicker. Everybody will be able to achieve a 10-minute build, which is something we've tried to do for quite a long time in the agile community, and this is definitely the direction that microservices are taking us. But what's missing is that we got rid of all of the SCM. Software configuration management is still important. 
So for example, when we did a compile, we'd have a bill of materials report. And BOM reports are still discussed; this is still a very important topic. What a BOM report showed was what went into your build and what versions of the software went into your build when you created an application version. And we used that to do diff reports. We'd say, okay, it's broken today running in production, we just did a release, what changed? Well, we used a diff between two BOM reports to answer that. We could look at two different application versions and see what the differences were in order to start chasing down a problem. This is very, very typical of software configuration management. We also had the idea of impact analysis, where we would say we're gonna change a library that may be used by multiple people or multiple application teams. If we make a change to that library, what is the overall impact? We still need to do that. Even though microservices should be backward compatible and not have dependencies between them, none of us are perfect microservice developers. We are not purists. We're there to get the job done, and we know that we're gonna have those kinds of dependencies. It just happens. So what we're building is what I like to call the death star. There are pictures of both the Amazon and Netflix death star diagrams; you can find them on Google, and they're pretty fascinating to look at. Netflix, I think, has over 4,000 microservices they're managing, so you can imagine what kind of job that is to track. It's pretty impressive. Now, the other thing that changes is the release process. So I have now named three main core competencies of our CD pipeline: orchestration, build, and now release. With the release process, we are gonna do releases far more frequently. We've always said that we're gonna do that. 
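The BOM diff idea described above can be sketched in a few lines of Python. The component names and versions below are hypothetical; the point is that comparing the bills of materials of two application versions tells you exactly what was added, removed, or changed between releases, which is the first question asked when production breaks.

```python
# Hypothetical BOM records for two application versions:
# component name -> version that went into that release.
bom_v1 = {"auth-svc": "1.4.0", "cart-svc": "2.1.3", "openssl": "1.1.1k"}
bom_v2 = {"auth-svc": "1.5.0", "cart-svc": "2.1.3", "logging-svc": "0.9.0"}

def diff_boms(old, new):
    """Answer 'it broke in production right after the release -- what
    changed?' by diffing the two bills of materials."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(k for k in old.keys() & new.keys() if old[k] != new[k]),
    }

print(diff_boms(bom_v1, bom_v2))
# {'added': ['logging-svc'], 'removed': ['openssl'], 'changed': ['auth-svc']}
```

In a microservice world the inputs to this diff come from tracking deployed service versions rather than from a compile step, but the report itself is just as valuable.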
We're really proud if we can do a release every day, and for companies who are doing that, that's amazing. But with microservices, you're probably gonna be doing literally hundreds of releases per day if you're doing them right. It's like we're building this giant transformer, and we get to transform it on a regular basis. It's not like a Lego set where we put everything together, sit it on a shelf, and say, don't touch it; it's really cool to look at, but it's not something to be played with. We're gonna be playing with our software from now on, and our software's gonna look more like a transformer than a Lego set. So that means that every microservice gets pushed out to the cluster independently, and you're gonna have lots of clusters. I think a lot of companies are experiencing what I like to call cluster sprawl. Developers may want their own cluster, you may have several clusters for testing, and you may even have several clusters for production. Now, while that may change as we move into service mesh, today that is the case. So while the actual deployment of containers is much easier than, say, pushing a jar file out to hundreds of physical servers, or even pushing it out to an image, and updating a container is super simple, tracking it and understanding the pieces and parts is the hard part, and that's what we have to start thinking about in terms of how the landscape changes. So if we add some of these tools: tools like Helm have helped us tremendously with creating deployment files for pushing things out to containers. You have container registries like Quay. You have Ansible for doing configuration management. You have tools like DeployHub, obviously an area that I'm passionate about. These are the new tools in the market that you can start looking at in terms of managing a modern CD pipeline. 
Now there are certainly other really important pieces of the CD pipeline that we don't have time to cover today, but let me just pause and go through them. We're still gonna have our version control and our issue tracking. That is still going to be standard, but instead of having a version control or GitHub repository for your entire application, you're probably gonna have a GitHub repository for a single microservice. In live events, I've asked the question, and most people say that they have one repository for each microservice. Now, issue tracking changes a little bit too, because normally we track issues as they relate to an application version, but issues now need to be identified based on these smaller units of work, these microservices. So issue tracking is gonna have to get smarter about how to trace an application problem to a microservice, and issues will be opened against microservices. We need to be able to track a ticket to a microservice as it relates to an application. The other thing that is becoming much more popular, and I feel like there's a lot to discuss around it, is container security. Container security continues to be a focus for most organizations. Security is really hot right now; DevSecOps is a really hot area. And the problem is not necessarily with running container security, the problem becomes the volume and the scaling. Instead of just running a scan against a jar file, you're gonna be running scans against containers for hundreds of microservices. So testing, issue tracking, and security scanning are all impacted by sheer volume. Remember, in our CD pipeline we're gonna have many workflows, and those workflows are gonna be calling security scanning and relating back to issue tracking at a much higher volume. Instead of just one or two, we're gonna have lots of workflows. 
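The idea above of tracking a ticket to a microservice as it relates to an application can be sketched simply. Everything here is a hypothetical illustration: the catalog maps each microservice to the logical applications that consume it, so an issue filed against one small service can still surface in each application's view.

```python
# Hypothetical catalog: which logical applications consume each microservice.
consumers = {
    "auth-svc": ["storefront", "admin-portal"],
    "cart-svc": ["storefront"],
}

def applications_affected(ticket, consumers):
    """Trace a ticket opened against a microservice back to every logical
    application that includes that service, so the issue is visible at the
    application level even though it was filed against one small part."""
    return consumers.get(ticket["service"], [])

ticket = {"id": "BUG-101", "service": "auth-svc"}
print(applications_affected(ticket, consumers))  # ['storefront', 'admin-portal']
```

The hard part in practice is keeping that catalog current as hundreds of services move through the pipeline, which is exactly why this bookkeeping has to be automated rather than maintained by hand.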
So the orchestration in the CD process becomes even more critical, because automation is required. We're not gonna be able to do this by hand; this is not an email solution. In order to succeed here, we have to build out a very solid CD pipeline that can really connect the dots for every single microservice. So let's review what I just went over; there's quite a bit of information in there. What's gonna be different in a Kubernetes pipeline? This is basically what we're talking about. Code will be smaller, and think about it, there's gonna be less branching and merging. So while we are still gonna version our GoLang or our Python scripts, the version becomes less critical because we're not relying on it to do so much branching and merging. In fact, the concepts of branching and merging sort of change with a microservice architecture. Library management and versioning will shift to runtime. That is a pretty big change. Again, when we do a build, we're not doing the linking at that time and making decisions about what libraries we wanna include in our big binary. We still have to think about that, but in a different way, and the versioning aspect is critical: what versions of my microservices make up my logical view of my application? These are the questions we have to start asking ourselves. And builds themselves will be super short, you know, five to ten minutes; everybody will achieve that 10-minute build. Jez Humble will be proud of all of us, because in every talk I've ever heard him do, he talks about the importance of clean, fast builds. Well, we're gonna get there. And when you run a build, it's gonna be about creating a container, and that container is gonna be registered. So our binary repositories now will be container registries. What we get is a lot more sharing and reuse between these functions. 
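One way to answer "what versions of my microservices make up my logical view of my application?" is to treat the application version itself as a derived value. The sketch below is a hypothetical fingerprint scheme, not any product's implementation: the logical application is just the set of service versions, and changing any one of them produces a new application version.

```python
import hashlib

# Hypothetical logical application: a named collection of microservice
# versions rather than a single compiled binary.
app = {"auth-svc": "1.5.0", "cart-svc": "2.1.3", "ui": "3.0.1"}

def application_version(services):
    """Derive a stable fingerprint for the logical application from the
    exact set of microservice versions it currently contains. The sort
    makes the fingerprint independent of insertion order."""
    canonical = ",".join(f"{name}={ver}" for name, ver in sorted(services.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

v1 = application_version(app)
app["auth-svc"] = "1.6.0"   # one microservice is updated...
v2 = application_version(app)
print(v1 != v2)             # ...so the logical application version changes
```

This keeps the application version we relied on in the monolithic world, without reintroducing a monolithic build: the version is computed from what is actually deployed.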
I didn't even go into talking about what's often referred to as domain-driven design. If you're moving into a microservices world, let's say you're looking at your existing application and you wanna start thinking about breaking it down into individual components, I highly recommend that you take a look at microservices.io. It's just a fabulous site, and there's training you can take through it about microservices. One of my favorite areas it covers is domain-driven design. Now, we tried to do this in object-oriented programming, but the reason we did not succeed is that we were not good at figuring out how to manage shared libraries in a compile and link step. So what we did instead was rename our reusable objects so we had private versions of them. It didn't become reuse, it became copying. We branched it all over the place, and oftentimes we didn't even branch it using version control; we created branches by creating new names, which is even worse. We don't want that kind of solution around microservices. We don't want everybody renaming the microservices they're using to get a private version. That is not the intent of microservices. So as you stop and start looking at your CD pipeline, you have to deal with the areas I just discussed, but you also want to think about a practical way of breaking down and decomposing your application into microservices. And what you're gonna start finding, especially if you decompose two or three applications at a company and really look at them, is patterns. We're gonna hear that word more and more: this concept of patterns, organizational patterns, and how we share those patterns across the separate silos in these larger enterprises. In other words, how does the person on the eighth floor know what the person on the fourth floor wrote? 
If they wrote something that others can use, a pattern, a single sign-on, or a data access service that retrieves common data, how do we know that? Domain-driven design helps solve the sharing problem because it allows you to organize your microservices into solution spaces, often referred to as problem spaces; I like to call them solution spaces. And once you start doing that, you're much farther down the road than you realize. If you can start seeing your decomposed applications in terms of domains, and how those problem or solution spaces should be shared across your organization, you're going to avoid the pitfalls we saw with object-oriented programming and the lack of a good structure for managing reusable components. And then lastly, really think about your application as a logical collection of microservices. It's still there; you just have to see it in a different way. It's made up of a collection of components, and those components could be database updates, infrastructure updates, or microservices. But the application version is still there, and you don't throw it out with the bathwater. So in conclusion, to leave us plenty of time for the quite a few questions that have come across here: we are morphing. The CI/CD process is really morphing. It needs to move faster. We need to be able to progress these tiny microservices through the CD pipeline at a much faster pace. It will require lots of automation. We need to move away as much as possible from one-off scripted processes, and we need to start thinking about how these tasks can be event-driven so that we can execute many, many events at the same time. 
When we're talking about progressing small moving parts with potentially many connections, we need to rethink the configuration management problem and understand that we lose some of that core configuration management data, because we're not doing a software compile and link process anymore, where we had BOM reports and diff reports. And then I would say, watch for new tooling. We're gonna see new tooling entering the market, and I believe we're gonna start seeing some of the back-end processes, the ops processes like APM, having more of a conversation with our CD engine. The CD engine should be smart enough to indicate that a microservice has never been used or can be deprecated. It's that kind of information that we might be pulling from these APM tools, based on a cluster, to make better decisions in our CD process. So I guess what I'm saying is, maybe tomorrow's CD pipelines will be smart. They'll have truth tables, and they'll look very different from what we're doing today, because we have to manage a lot of little pieces all at the same time while still recognizing that we're providing application solutions to our end users. Now, if you wanna see the landscape, I highly recommend you go out to the CD Foundation; here's where you can find it. And we encourage you to update it. You can update the landscape if there's a new tool out there that you feel fits into it and that other people should know about. This is an open source community; you can create a pull request to add a company or even an open source solution that you might be using to the landscape. And we encourage you to do that, because this is how we begin sharing information, and this is how we begin to morph the CD process to support a cloud-native microservice environment. So again, I keep referring to the transformer. That is what we're doing. 
And as we've heard, the best companies have transformed themselves from monolithic to microservices, companies like Netflix and Google and Facebook. Thank goodness Netflix is running microservices; imagine how many people are watching Netflix these days of COVID. And I know your company's next. So how do you get there, and how are you successful? It's really gonna depend on your domain-driven design, how you morph your CD pipeline, and how well you can automate to address the volume of changes that will be pushed across that pipeline. Talk to me, I love having conversations. I try to block my calendar out for at least the mornings, generally, to talk to people about what they're doing in this new CD space. I find it fascinating. The more I talk to individuals, the more I learn what they're addressing, and the more I can help others by sharing that information. So please reach out. I would love to have a conversation with anybody on this call, and we can geek out about microservices and CD pipelines. And on that, I think I will go through some of the questions. Really, everybody, thank you for the great questions that have come through here. There are several; some of them are personal ones, and I think I'll start with those first. I was asked: I want to know about a DevOps career, what should I do to prepare for it, and how did you get into DevOps? As I said in a podcast recently, I stubbed my toe and found myself in DevOps. I came from the mainframe world. Right out of college, I was a COBOL programmer working on Wall Street, writing trading applications on the mainframe. And we had something called Endevor. Endevor was a tool that you would check your COBOL program into; it would automatically do the link edit step, and there would be an approval process to push it across to the LPAR it should run in. Yes, Endevor was the first DevOps tool. And in fact, the name stands for environment for development and operations. 
It's still running today and runs most countries, to be quite honest. It was an extremely important tool for the mainframe. And when I left the mainframe, I was shocked that I had to write my own compile JCL, basically. I was like, I have to write my own compile JCL? Why can't I just check my code into something and it compiles it, links it, and sends it off for me? I had been spoiled. So I worked as a software developer for years, and I learned to write make files, and we wrote scripts to do the deployments. But I always thought that we should be able to get back to a continuous delivery model like they had on the mainframe. So that's how I got into this business. I worked as a contractor, and I started getting jobs where I was managing the build and release process, as well as the testing process. And I found that I was really good at it, because I have a mind for puzzles and I like putting things together. Now, for anybody moving into this area, I feel like it's really important right now to understand Kubernetes as a whole. Getting certifications in Kubernetes and getting certifications in AWS is sort of the core. When you understand how the production environment runs, or I should say how the runtime environment operates, you get a good foundation to build on top of. Once you understand that, you can start understanding how applications run, because microservices themselves are core to the new way we're gonna develop applications, and understanding how those microservices operate in the runtime environment is going to be critical. So my suggestion for anyone looking to get into DevOps is to start on the ops side and look at how you can get certified in those areas. There are some really good online classes that you can take for learning that. 
Another question is more of a technical question, basically asking: what's the difference between continuous delivery and continuous deployment? Thank you very much for asking that, because it is a confusing topic, isn't it? Continuous delivery, in my world, is about the orchestration of the continuous delivery process as a whole. Those are tools like Jenkins and CircleCI and Jenkins X and Spinnaker. They are orchestrating the process, and there is an ecosystem in that process; what they're orchestrating is the ecosystem. So continuous delivery is the orchestration. Continuous deployment is actually moving your code out to your runtime environments. Tools that do continuous deployment are tools that focus on the release problem, not orchestrating everything at the top level. So when people talk about continuous delivery and continuous deployment, they sometimes mix those concepts. But in my world, delivery is orchestration, because we're talking about the progression of software from dev through prod. Deployment is actually doing the updates to those runtime environments. So a deployment is updating a container in a cluster, and while that seems very simple, there are other things that continuous deployment tools do, like tracking when it was released and when it was changed. There's lots that continuous deployment tools do that continuous delivery tools don't. Think about it as continuous test, continuous deploy, and continuous delivery being three different things. And I like to talk about continuous configuration management too; it's not continuous delivery, it's a part of the ecosystem. What I was going to show before I dropped off is that this landscape defines that. 
So think about continuous delivery as everything, and tools like Spinnaker and Jenkins and CircleCI and CodeFresh as orchestration tools. They call these other tools inside the ecosystem that do a lot of the heavy lifting and are critically important to a successful automated CD process. Let's see what other good questions there are. I guess I'll do one more. The question relates to the build process and why it is so different. If you've ever worked on the build side of the house, you would have learned to write what is often called a build script. It could be written for Maven or Ant or traditional make, or it could just be a script that calls things in an order. When you pull things together at that point, you are defining ahead of time what libraries should be included in what you want your end user to actually execute. With microservices, that up-front definition goes away, so you can't as easily understand what your application's configuration is as you could in the build process, because a microservice could be updated. Let's say you're using a security microservice; it could get updated, and that creates a new version of your application. In the old days, you would have compiled and linked with that new version, and you would have known ahead of time, before you ever deployed it out to production, that it changed. And I think those are probably the core of the questions; most of the rest have to do with the places where you lost me, and I'm so sorry that happened. Again, if you wanna chat with me, reach out; here's my information, and I am always available. There's my coffee chat link. Just pull that down and schedule a time with me, and we can chat for 15 minutes. And thank you to the Linux Foundation and the Open Source Summit for inviting me. I'm honored to be able to speak. Everybody stay safe in the world of COVID. Wear a mask. Thank you.