So when I say no one's really doing microservices, it's because they try to skip all these steps and jump to the end. And if you skip these steps and jump to the end, you don't actually end up with microservices. You end up with baby monolithic applications. They're just smaller. Or you end up with what's known as a distributed monolith, which is the worst-case scenario. In other words, you just end up having ported some Java code to Spring Boot, and you call it a microservice. Just because you're no longer using WebSphere or WebLogic and you have Spring Boot, that is not a microservice. Just because you went to Node.js, that is not a microservice. So there are a lot of key rules to microservices, and we're going to talk about those in a session. But you do want to think about these early principles, these early techniques, as we get there. So agility is the key phrase in this presentation today. In your mind right now, you're thinking, why is he saying we've got to do DevOps? We're not doing DevOps in my company. We will never do DevOps in my company. I appreciate that. Most people will never do DevOps properly, okay? So therefore, that would prohibit you from doing microservices properly. That's what I'm saying. But here's the reason you are willing to make those hard decisions and hard changes. You have to ask the simple question: does it make you more agile or less agile? Because the reason you do microservices is for vastly greater agility, and no other reason. Microservices are harder. Microservices are much more painful from an ops standpoint and a dev standpoint. It'll cost you a lot of extra money to do microservices. But you're willing to work that hard for it because it's going to give you vastly greater agility, the ability to change rapidly. But you may not need that. There are plenty of applications, and plenty of application development and delivery teams, ops and dev, that have no need for this level of agility. You deploy every three months like clockwork.
Your team is awesome. You do nail that three-month deployment delivery every time, don't you? There are people here that do. You guys are professionals. You deliver every three months, and you never miss it. Be happy in that three-month deployment interval. You do it awesomely. Why would you want to do anything more than that? Because you might want to do more than that if the business needs you to. Or better yet, you might need to do more than that when a critical vulnerability shows up in your code base, like the Struts 2 vulnerability, and you have to change now. So that's when you might have to think a little bit differently: you have to have greater agility to change when you need to change. The business has a new API or a new need? You might want to change right now. You have a CVE that has to get patched right now? Change right now. You can't wait three months. Even though you're good at three months, you might have to do it in one day. You might have to do it in one week. You might have to do it in one minute. Now you have to think about microservices architecture. So let's jump into this. And you'll notice also I say continuous delivery, deployment, and improvement. So it actually is CI, CD, CD, CI: continuous integration, which we should be doing properly by now, continuous delivery, continuous deployment, and then continuous improvement. That's the ultimate goal. We're moving so fast, we're learning so fast, we're failing so fast, and we can fix so fast, that we know how to improve ourselves and our applications and our APIs and our end-user experience, and we can rapidly evolve and iterate toward the goal, okay? That's what we're talking about here. But here's the KeyBank story. We actually documented this back in 2016. I sat down with one of the lead architects on this project, because someone came to me inside of Red Hat and said, Burr, we know you're looking at microservices architecture.
We found a great case study with this big bank in the United States that did microservices. I said, great, I want to talk to them. I didn't know too many people back in 2016 that were really doing microservices. And I sat down with the architect on this project and we went through all those things in my little evolutionary chart. What did you guys do about this? What did you guys do about that? And we made note, and it's all on this blog, by the way. And here's the funniest part: it turns out they never actually made it all the way to microservices. They did all the other things, though. They embraced DevOps, and I'll tell you how they did it. They specifically made one cultural change that fundamentally was a game changer, that allowed DevOps to blossom in their organization. And I don't mean DevOps from a tooling standpoint, like, oh, we have Ansible now, and Puppet or Chef, that's DevOps. No. I don't mean a deployment pipeline based on Jenkins. That's not DevOps either. I mean cultural change, social DevOps. It was one simple change. They basically said, all you programmers want to change things all the time, because developers want to change things all the time, right? And Gene Kim has a great joke in this category. You guys know Gene Kim? The author of The Phoenix Project and all those other books I put up earlier. He looks like me: he has a goatee, he's an Asian guy. So often when we present at the same conference, people confuse us. It's very funny, actually. People come to me and go, I love your book. I'm like, I didn't write a book. I do the demos. But in any case, he has a great joke. How many operations people in the room right now? Oh, most of our ops people are on the Kubernetes side, but you'll appreciate this. From an ops standpoint: show me a developer who's not causing a production outage, and I'll show you a developer who's on vacation, right? Yeah, you know. And so think about that.
That's how ops often thinks of us on the development side. We're always breaking stuff because we're trying to change things, but that's our job. Our job is to morph the code base into something cooler. We want to add new things. We don't want that old clunky WebLogic anymore. We don't want Java 6 anymore. We want Java 8 and 9 and 11. And we want the latest version of Linux. We want the latest version of everything that helps us build better, faster software. So the change they made at KeyBank that's incredibly important to understand for embracing DevOps is they said: okay, developers, you want to change things all the time? You're on the pager now. One simple change. Ops doesn't have the pager anymore. If the app or API is misbehaving, slowed down, not working, not responding, the developer is the first line of defense. You get the call. And guess what? You know what happens when a developer gets a call in the middle of the night? They write vastly better code. I am not kidding. I've seen it myself when I managed both dev and ops, and I think I told that story in the keynote. So simply making the developers accountable and customer-facing changes everything. Because guess what? When you're woken up in the middle of the night a couple of times, and it's because of a bug, or mostly because you did not think through the configuration and load testing of your application, you did not think through the use of the framework so that it scales correctly under certain conditions, you didn't test it for certain edge cases that turn out not to be edge cases but normal cases, you fix those things so you're not woken up anymore. So that's what KeyBank did. They said, you developers, go to town. You're on the pager now. The developers said, okay, we want to see our stuff actually run. I think I mentioned this, right? Developers who are creating things want to see their code run. They want to understand that it's actually serving a user in a better way.
So the developers took that task on, and guess what? They wrote better software, and they deployed it in a container inside of OpenShift, and they owned the API and they owned the app. The operations team owns the underlying infrastructure and what's below that container. So the infrastructure has to stay up, but the way you know if the infrastructure's up is if the app's not responding. So call the developer first, because nine times out of ten it's his or her bug before it's an operations problem. I love that concept, and it actually changed everything. With these changes, they embraced things like automation. They embraced things like Kubernetes, and OpenShift in particular. They embraced things like an automated deployment pipeline. They embraced all these things. And guess what? They went from a three-month deployment cycle to a one-week deployment cycle, and for them that was as fast as they needed to go. And there are actually a number of blogs on the internet now that talk about this. Not this one, but there's actually a great one from David Heinemeier Hansson. Have you heard of DHH? He's Mr. Ruby on Rails, now Basecamp, 37signals. He wrote a blog post about how they deploy their monolithic application as fast as every week. This company can deploy their monolithic application as fast as every week, and for them that's as fast as they need to go. So I'm here to tell you, if you're at three months and you can move to three weeks, your business probably would be pretty excited by that. You can respond to their needs within three weeks instead of three months. That is awesome. But what if you could respond as fast as once a week? Now here's the trick. In Silicon Valley land, in the case of Amazon in particular, they deploy every second. Every one second. Yeah, it used to be slower. It was 1.5 seconds. They've increased their speed in the last few years. All right. So now you're dealing with companies who can respond very rapidly to different changes.
So you've got to think about that. With a three-month deployment cycle, that's a three-month interval before you get feedback and you can learn. That's really why the speed matters. The speed doesn't matter for speed's sake, unless you're dealing with a critical vulnerability like the Struts 2 vulnerability, which burned Equifax. Yes, you'd like to be able to patch that faster, not wait several months while everyone's gathering all your data out of your system. Okay. Heartbleed, Shellshock, whatever it might be. You'd like to patch these things instead of leaving them in production for months on end. I bet you still have Heartbleed or Shellshock in your production infrastructure now. And I bet you still have the Equifax problem with Struts 2 in some of your infrastructure today. We never get around to patching these things. We forget about them, and hopefully no one takes advantage of them too badly. Okay. But the problem with the three-month deployment cycle is you only learn once you're in production. Here's one of the things I like telling my developers. Some of you are operations folks, and the rest of you might be on the development side, but here's something really profound that might blow your mind. I know you work really hard crafting that code. You're typing on that keyboard. You're inside your IDE. You're nailing it. You've got your indentation just right. Your curly braces lined up perfectly. You've got your linting software going so it makes all those things look so beautiful. And you've got your algorithms nailed, and you check it into the source code repository, and you think your job is done, don't you? I'm here to tell you it's not. I'm here to say that you actually added zero value that entire day. Your value is only realized when that code hits production and a user can use it. Then there's value, real business value. The value you perceived when you were typing up that elegant code, that's value to you, not to your business.
It's only valuable to your business when it lands in production. I don't mean staging. I don't mean the dev server. I don't mean the CI server. I mean real production. And that's really the point we're making here. Until you get into production, you cannot learn. If you cannot learn, you cannot grow. If you cannot grow and become better as an organization, you cannot win in this digitally transforming world we're dealing with. So here's what it looks like. If you deploy every three months, this is your trajectory. If you deploy every week, this is your trajectory. You're learning faster, getting better faster. Simple as that. It's all about trajectory, because you are going to make mistakes. You're going to fail. You're going to have to fix things. You're going to get it wrong. You'll have to rewrite that algorithm. You shipped a poor API; you need a new API. Well, guess what? You can do that at one-week intervals and increase your speed by learning, okay? So: those who are unaware that they're walking in darkness will never seek the light. Think about that for one second. Hopefully at this point, based on what I just said, you might have been shown a little bit of light, and you're like, hmm, my code doesn't offer any value until it lands in production. I wrote some really pretty code. Yeah, I don't care how pretty it is. I don't care if it's ugly. What does it do in production? So let's talk about some patterns when it comes to these microservices architectures. This is the most common pattern. I have three of them here. This is what I call the browser aggregator pattern. This is the most common one I see. How many people actually have this solution in their architecture today? Meaning, your browser client, desktop client, or mobile client, or whatever it might be, talks to several independent back-end services. How many people have this architecture now? Okay, good. This actually is the most common.
And this is actually what most people think of as microservices. It's actually not quite microservices, but it is very valuable. It's incredibly valuable in a retail setting. I actually documented this several years ago with Walmart. I sat down with a friend of mine who was an architect at Walmart at the time, and we basically sat at the whiteboard and said, help me understand how you guys do mobile walmart.com, or whatever. And basically it looked like this, right? I just painted this on my own, but you have this description, you have your thumbnails of images, you have your actual pricing, your star rating, your add-to-cart. Look at this right here: in-store pickup, 15 are available in your store right now, based on the GPS in the mobile device. We know where you're at. And we also have a recommendation engine that hopefully gives you other things you might buy. These are actually all independent invocations of back-end services. In the case of Walmart at the time, it was Node.js back-end services that then talked to a mainframe in some cases. How do we know the inventory in your store? Let's talk to the mainframe. Let's find out. Because the mainframe gathered the data overnight from all the barcode scanning at checkout processing. They know how many units they have on the shelf in that store. So it's pretty awesome. But the thing is, these are independent invocations. Now here's what's cool. Anybody here on the website side of the house know how to do HTML and CSS and div tags? Anybody done a div tag before? Wow, you guys are real back-end people. Here's the cool part: if you do use this architecture, all you have to do is manipulate the div tag. If, in fact, something does not respond, let's say that mainframe does not respond, you basically can change the user interface like this: I can't tell you if any are available; the mainframe didn't respond in the time I needed.
But asynchronously, I can throw up a div that basically changes the UI slightly. I can just hide it if necessary. So if you actually are paying attention, you will see that the page will actually paint in chunks, based on when those other back-end services respond. So that actually is kind of cool from a scalability standpoint, especially with a mainframe on the other end, which can be very slow at times to actually process that transaction. And actually, in their case, it might not respond at all within the SLA they've given it, in which case they will just give a default response to the user transaction. So they don't know what is available, but they can still tell you, based on the GPS that's part of the phone's architecture, what your closest store is. That's easy; that inventory was hard. Okay, so you notice I said the word fallback here. In the case of a microservices architecture, the thing you're thinking about is always planning for failure. Assume failure is going to happen, and assume you have a fallback mechanism to deliver some user experience even in the face of failure. That's one of the key principles of any kind of microservices architecture: what happens when it fails. Now, here's another pattern. Instead of making the browser the ultimate aggregator, let's push some of that to the server side, and we put this thing on an API gateway. As a matter of fact, if you're a Java programmer, we use this thing called Zuul from Netflix. Netflix, of course, gave us all these awesome tools, and here's another one called Zuul, and we can programmatically build an API gateway. You could also use Vert.x. I mentioned it in the previous session, but Vert.x is a really awesome reactive toolkit, all async, non-blocking, and therefore I can use that as an aggregator. Or you could simply write Lua and do this with Nginx. It would work. Anyone here written Lua and tried to extend Nginx? You have?
Yeah, did you do it for this kind of use case, where you're aggregating endpoints on the other end? Yeah, just changing routing rules or filters. But what? The Kony framework. The mobile framework. Okay, very cool. So this is an example where you put that server-side logic together. Instead of the browser being the aggregator, you make a server-side component the aggregator of those ultimate calls, and then of course, if something fails, you know to trap it at the appropriate point. Then there's this architecture, and this is the Netflix way of doing things. This is basically where it calls into the API gateway, but then it calls a chain of services, and those services all have to respond to give back an aggregated response. This is where the architects in the room go, holy crap, that looks like a really bad idea. Because what happens when one element in the chain fails? Does it become a cascading failure to the user? And the answer is, by default, yes. It does become a cascading failure. And so you have to guard against that cascading failure with your life, if this is the architecture you wish to employ, or even this one, where you can mix them. Most people do have a mixed model, if you will. They have some browser aggregation, some API gateway, and some various forms of chaining. But you do want to figure out how to mitigate all the failures. If you saw the Istio presentation earlier, that's where we focused on those kinds of issues. Basically, Istio helps protect you from certain types of failures by giving you additional resiliency. And let me see, maybe I can even illustrate that point. Let's see if I can, real quick. Let's see if I can talk through it and make the point. I might not be able to. So I have an application right here: customer, which calls preference, which calls recommendation, and it's either recommendation one or two. So I don't have a blue-green going just yet.
I do have two different recommendations, though: version one and version two. And you can see there's version one, and there's version two, going through the system there, okay? And you can see it is a chain of microservice invocations. These are three different, completely separate services. And of course you can see that there are four pods, because this service was deployed twice. And so now, from an Istio standpoint, you can do all kinds of fancy stuff. Let's see here. I'm just going to run my scripts to make this easy. So let's see here, seven. Oh, and by the way, if you want to play along, if you're on your phone, you can go to bit.ly Istio customer number four. You can go there and you can run this yourself, okay, on your phone. Let's see here. So let me put that command in. Oh, it helps if you spell it correctly, Burr. There we go. It'll look like this on your phone, just like that curl command, okay? And you'll notice I'm just going back and forth between one and two. So this is an example where you have chaining, and I'll see if I can just show you this basic principle. All right, so anybody got it on their phone? Anybody have it up? Okay, and you just hit refresh in your browser to see the change. But let's see here. Now, let's see, I want to make everybody version one. Everybody goes to version one, if I've got that taking effect correctly. Da, da, da, kubectl get, all right, this thing is slow. Okay, but there it is. Version one. You notice everybody's on version one now? You're on version one as well? Okay, let's see here. Version one, version one, version one, okay? So with Istio, I can manipulate the routing logic and go beyond the basic Kubernetes service. With a basic Kubernetes service, every pod you have is just part of the load balancer, as long as it matches the label set, and I showed you that in the first presentation. Here, I'll show it to you again.
kubectl get pods... get pods --show-labels, come on, we're tight. All right, and you can see recommendation one is app=recommendation, and recommendation two is also app=recommendation. kubectl get services. Here, let's look at our services. There's the recommendation service. We go kubectl describe service recommendation, and if we see here, the selector is just app=recommendation. So any pod that has that label on it will basically show up in the load balancer. And because there are two pods, it's 50-50. If there were three pods, it'd be 34, 33, 33. It's a basic round robin. If you have 100 pods, you get 1% each. So what happens if you want to do a canary deployment, but for only 1% of your users? You'd have to have 100 pods. If you want to do a 25% canary, four pods. So it's actually fairly basic, but it is fairly powerful from that perspective. Let me run this poller again, okay? So right now we're all at v1. Let's actually open it up a little bit to v2. We're going to make 25% of our transactions go to version two. And there are some version twos showing up now. Also on your mobile phone, you might get some version twos coming through. But again, those transactions are kind of shared, so it might take a while to get a v2 on your phone, or for me here, you know? It's just 25%. So there we go, all right? But you can actually change the percentage of that with Istio, right? You can actually say, I want 1% of my users to get version two. But it's actually more powerful than that. Let's see here, let's go back to version one only, okay? You can see it's just this weight factor that you change on your virtual service, the weight factor, so we're all back to version one. But let's try this, do, do, do, do, do. Oh yeah, let's actually try Safari only. You can change the rules so it's a little bit more special than that. I want to make it Safari only.
So basically what that means is everybody, probably on your phone right now as well as here, you're going to get... oh, this is Chrome. Guess what? Chrome actually says it's Safari in its user agent, so you should not use Chrome to test this. Let's go to Firefox, okay? So Firefox shows version one there. If I bring up a Safari window, we'll see v2 right here. There we go. So Safari is v2 only. So you can actually inspect the HTTP headers and determine who gets to see what, which is awesome. This also means you can say, well, the user has to be logged in, has to have a certain cookie set, has to be in New Zealand, has to have a certain browser type, and you can get really fine-grained about who sees the new version. And I actually talked to one customer about this specifically, and they had implemented it so that only employees see the new version, not any customers, until they've decided the employees have used it for a day or two and they feel that it works really well, and then they roll it out to beta customers, and then they roll it out beyond beta customers. It's a pretty cool idea, right? So you can use whatever attribute you want to determine who sees what. But that's not really the point of what I want to show you. What I want to show you is this concept. Let's see if I can make this obvious. Let's see here. So, kubectl get dr. Okay, kubectl get vs. These are the things that were created. So kubectl delete vs recommendation. Okay, so that removed the virtual service, therefore we're going to go back to the normal Kubernetes service, and let me delete the dr, which is the destination rule. So I'm getting rid of all the Istio stuff now, going back to normal Kubernetes behavior, and you can see it change there. But let's see if we can get this working. Oh wait, I don't think this is going to play; I don't think this is as visual as we might want it to be. Okay, let me delete that pod.
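For reference, the kind of routing rules my scripts are applying look roughly like this. This is a sketch under assumed names (a `recommendation` service with pods labeled `version: v1` and `version: v2`), not the exact manifests from the demo. The DestinationRule names the two versions as subsets, and the VirtualService sends Safari traffic to v2 while splitting everyone else 75/25:

```yaml
# Assumed names: host "recommendation", pod labels version: v1 / v2
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  subsets:
  - name: version-v1
    labels:
      version: v1
  - name: version-v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  # Rule 1: anything whose User-Agent mentions Safari goes to v2
  - match:
    - headers:
        user-agent:
          regex: ".*Safari.*"
    route:
    - destination:
        host: recommendation
        subset: version-v2
  # Rule 2: everyone else gets a 75/25 weighted canary split
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 75
    - destination:
        host: recommendation
        subset: version-v2
      weight: 25
```

Applying a manifest like this with `kubectl apply -f`, and later deleting the VirtualService and DestinationRule, is what flips the behavior you see in the poller: change the weights for a different canary percentage, or change the header match for cookie, locale, or login-based rules.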
So I deleted the version two pod there, which means it's going to fail over to version one. You notice there was no error there. That's actually an example of Istio giving you resilience for free. Part of the infrastructure just died; part of the application it was load balancing against went away. Istio is clever enough to say, wait a second, let's route around that error for a moment. And so you can see it's all v1 until v2 eventually comes back online, and then it'll bring it back online with zero downtime from a user standpoint, which is very cool. So that concept of building for failure: in this case I wiped out that pod, and it was still okay. The application did not fail. So that concept of a retry, Istio actually has built in: try to connect to that service; I can't connect to that service; connect to a service that does work and give my user a good answer. And you can even set that timing interval, right? You can actually decide how long to try a service that does not work before giving an answer to the user. And you can see now that version two is up and healthy again, it puts it back in, and you see it. Okay, so that's all the point I was trying to make with that. Let's talk about a few more things here. Okay, we showed you this in the previous presentation, so we won't walk you through it, but one common question we get is, well, how do I deal with all these databases in a microservices architecture? Well, this is the book written by Edson Yanaga on that topic: tips and techniques for dealing with monolithic databases in a microservices world. Just tips and strategies for dealing with certain things. One simple example: you have read-only databases, you know, not read and write. Simple thing. We also have a unique project called Debezium that'll help you basically do change data capture. So that might be an open source project to check out, Debezium, right here.
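Jumping back to the failover demo for a second: that retry behavior, and the timing interval I mentioned, can be tuned explicitly on the route. A minimal sketch, again with the assumed `recommendation` service name:

```yaml
# Assumed name: host "recommendation"
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
    # On failure, retry up to 3 times, giving each attempt 2 seconds,
    # before returning an error to the caller
    retries:
      attempts: 3
      perTryTimeout: 2s
```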
So it's also mentioned in the book, but it's another open source project that basically intercepts transactions happening in one database and replays them through Kafka to someplace else. In other words, all the transactions are captured, in order, into Kafka, and then you can put them anywhere else in the world you want. So it's a very neat little project, and it handles change data capture issues. So that book just addresses these kinds of things: how do you deal with this kind of solution? And let's go here. Oh, this is an important point. One thing to understand about microservices is that these are all not only independently deployable, they also have independent pipelines and workflows to deliver them. Independent teams, independent pipelines. So keep that in mind. That's an important part of it as well, because the key element to understand is you have to have independence at a minimum, right? Independent means I can deploy my component with no downtime for the application. The application is 20 of these things. The application is 100 of these things. Zero downtime for the application, though my one component may fail because I'm deploying it right now, okay? So that's why that Istio technology helps you think through some of these things. You can basically bring it up in a canary deployment, or better yet, here, let me run you one more demo if I can. And again, I have scripts for this, but I've got to get the right number. There we go. Okay, well, I see my poller over here. What I just did is something called a dark launch. So all our users, again on your phone, are seeing only version one. Version two is deployed. Version two is logging over there. Version two is being hit, yet no user is seeing a response from it. This concept of mirroring is part of Istio. You refer to it as a dark launch. It means you can deploy a canary component in production and no user sees it.
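A dark launch like that maps to Istio's `mirror` field. A sketch with the same assumed names (the subsets assume a DestinationRule mapping them to version labels): all live traffic is answered by v1, while a copy of every request is also fired at v2 and its responses are thrown away.

```yaml
# Assumed names: host "recommendation", subsets version-v1 / version-v2
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1   # only v1 responses ever reach users
    mirror:
      host: recommendation
      subset: version-v2     # v2 receives the same traffic, fire-and-forget
```

So v2 gets real production load, real production requests, and you can watch its logs and metrics, while no user sees a single response from it.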
So if you are practicing this really fast cycle time, deploying all the time, how do you do that with zero disruption to your user base? Maybe a dark launch is one of the solutions you want for that. So that's another technique for basically a fast deployment, okay? Notice it says optimize for replacement here. That's a key aspect. Organized around business capabilities. You think about products, not projects. That actually will break your back, number three right there. This is actually where things fail for most microservices implementations in the world. Everybody thinks in terms of a project, meaning: I've got three months and these six developers and these DBAs and UX people and whatnot, and we're going to run that three-month project, and then we're done. Then we disperse the team to other projects. And we budget over the next 18 months for these projects: a six-month project, a nine-month project. And that's not the way to think anymore. In this world you think in terms of a product, meaning these people are dedicated to this application and API forever. And that also makes them more accountable, especially if they're on the pager, to make much better software that stands up even longer. So it's a different mindset. Instead of thinking of developers as disposable entities who move from one project to another to another, you now say: this is the thing you're going to be working on in this company; make it brilliant and keep it so. You see the difference in attitude? Oh, you're just disposable, we'll just toss you over here, toss you over there. You've worked for companies like that, haven't you? It should upset you. Don't work for a company like that. Work for a company that says, I want you to build awesome software. This component is yours; make it magical and stick with it until it is. You'd rather have that, okay? That's what number three means. But number three is one that most people actually fail at. Numbers six and seven are also things people fail at.
Decentralized governance? Oh my God, no. The enterprise architecture team sitting in their ivory tower will never allow us to do that. Well, guess what? If the answer is never, because of some enterprise architect, the answer is you don't want microservices. You don't want agility. You should be really happy with three-month deployments, slow cycles. That's okay. There's nothing wrong with doing everything every three months, except that your competition is doing things as fast as once a week, or once a second. That's the way you have to think about it, all right? So these things can be very critical and can definitely break your back, from the perspective of will it work or not work, right? Design for failure? Hopefully I illustrated that point when we showed you the little demonstration. Think about failure as a first-class citizen. You will fail. You're going to fail simply because you're deploying so fast. There are still failures when you're taking services up and down, because you're deploying every second or every week or whatever it might be. How do you deal with that, and how do you design the infrastructure accordingly? As a matter of fact, one of the things I tell people in our deep-dive Istio session: the ultimate goal, by the way, is that each delivery team, responsible for their different microservices, deploys at nine o'clock on Tuesday morning. Think about that for one second. We in IT have been told forever: you deploy Friday night, or sometimes Sunday morning at 2 a.m., right? That's when you deploy, because you don't want to interrupt anything. What happens if things go badly? But in the new world, you've instrumented things so well, you've automated things so well, and more importantly, you have a resilient infrastructure like Kubernetes plus Istio, so that you can actually deploy at nine o'clock on Tuesday morning and you'll know it'll work. And more importantly, if it does fail, you've got all the right people there to solve it.
You don't have to wake anyone up. They're awake, they've already had their coffee. And better yet, with technologies like Istio, as I just showed you, you can just change the rules back. Go back to the version that does work. Forget the one that didn't. So think about that for a second, right? It's a different way of thinking and a different way of deploying software, but the new way of doing it means meeting these principles. This is why I say people don't really live microservices—because most of these tests you cannot pass. Is that fair? Okay, any questions on those ideas? Yes, sir? CQRS, right. So what he's talking about is CQRS, or event sourcing, or both. That is a common pattern for dealing with writes to databases, because as I mentioned earlier, the safest thing to do with these databases is make them read-only. But if you need read and write, then you split reads and writes: query on one side, command—the write—on the other side. We don't have time to spend a lot of time on that pattern, but Edson does mention it in the book as well. So you can leverage that pattern. Though keep in mind, it's a lot more code to write. If you wanna implement it, you can, but it's even harder on the development team—though I respect people who definitely try it. It does help you with the read-write issue, right? And think about this: these patterns came from Netflix, all right? All this architecture we're talking about came from Netflix back in 2012. Guess what? They're read-only. You're getting your recommendations, you're streaming content, right? There are no writes at all. So if you live in a read-write world, you gotta think about how you deal with those writes and the duplicate data.
So Debezium helps you with one side of the writes, because with change data capture, you write to a single database and it replicates to all the other databases. And in the case of CQRS, you've split your reads and writes so you know exactly where your write code is and what database it's going to, and then you know what to do from there. The most common pattern in that case is not necessarily to use Debezium, but to write to one location and simultaneously write to Kafka. And therefore you have a change record of that data, and specifically you have the delta, right? Instead of saying inventory is now 10, you say inventory is minus one. You see the difference in the type of transaction? And then you replay that minus one everywhere else. That's where the CQRS and event sourcing ideas come in, and people do that with Kafka. Did you have a question? [Audience] Users are not affected—if you have this idea, you can deploy any time. Well, it's not just the idea, it's all the right instrumentation, monitoring, architecture, right? Everything matters. [Audience] Even a downtime of a few seconds matters a lot in some cases. It might. SLAs are there; if downtime goes past 10 minutes, you have to pay the customer back. So in that case, you do it off-peak so these things can be avoided. Right. [Audience] Even with a microservices architecture—take the case of the stock market. From nine to five the market is open, and something could go down during that time. Right, and you try to do your deployment outside those hours. But do you work on a trading application? [Audience] No, I'm not working on trading. Okay. So here's the funny part. Everyone loves to say: but if you work on a trading application, these ideas don't apply. Yet no one here actually works on those applications. I'm not talking about trading applications.
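The delta-versus-absolute distinction above is the heart of event sourcing. Here's a minimal plain-Java sketch of the idea—an in-memory `List` stands in for the Kafka topic, and the class and method names are hypothetical, not any real Kafka or Debezium API:

```java
import java.util.List;

public class InventoryReplay {
    // Each event is a delta ("inventory minus one"), not an absolute value
    // ("inventory is now 10"). Replaying the ordered deltas on any consumer
    // rebuilds the same state -- that's what makes the event log safe to
    // duplicate across services.
    static int replay(int initial, List<Integer> deltas) {
        int state = initial;
        for (int d : deltas) {
            state += d; // apply each delta in order
        }
        return state;
    }

    public static void main(String[] args) {
        // Start with 10 units; three sales and one restock arrive as deltas.
        List<Integer> eventLog = List.of(-1, -1, -1, +5);
        System.out.println(replay(10, eventLog)); // 12
    }
}
```

Any number of read-side services can replay the same log and arrive at the same answer, which is why the talk pairs this with splitting reads from writes.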
But I'm simply making fun of the fact that you're bringing up the oh-my-God worst-case scenario. I have yet to meet the guy who actually works on those applications. But that's okay. So here's the point—this is actually critical to understand. You see this failure scenario here? If that thing was down—and it might have been down for just a microsecond, down because we were updating at that moment, or the mainframe didn't respond in time, whatever it might have been—we have now decided what our business logic is going to be to deal with that failure. And our business logic is: we can't tell you your inventory, but we can tell you where your closest store is. If you refresh your browser, it may come back with the inventory now, but you have to refresh your browser as a user. You might even build a little Ajax request that auto-refreshes it. I've seen that done too. But you have to build for failure. And the failure in this case is typically a business logic requirement. The most famous of these is actually one... [Audience] So here you've got a valid point, okay: in the case of microservices, everything is broken down into small chunks, and you can just go and pinpoint—okay, the failure is over there. Earlier we also had architectures with components. If something went down, we said: okay, this component is not working. Not necessarily—because blue-green is probably not the best solution here. Canary is better than blue-green. And here's why. I could show you blue-green, but blue-green is very straightforward, right? You do it in regular Kubernetes: you're moving your router from blue to green. And if green is not responding correctly, you're gonna get a bunch of failures while all those transactions come in, before you can cut back to blue, okay? So 100% of your traffic goes to green, then back to blue.
That's why blue-green is not the best strategy for this kind of rapid deployment in a microservices architecture. Canary is a better one. Canary basically says: not 100% of transactions fail—only the 1% of transactions you routed to the canary would fail. Or better yet, the dark launch, where you see the transactions in a read-only way and don't respond to them. In that case you can monitor for failures—like one of the examples I show in the deeper dive: okay, here's memory going off the chart, CPU going off the chart, the log filling up with errors. Well, don't send any more users over there; send everyone back to the other one. So it has more to do with the percentage of traffic than anything else. Okay, have we addressed your point? [Audience] I think you also said monolithic applications—and yes, here I am talking about microservices architectures. [Audience] Earlier we had components, and if something happened, we said: okay, fine, this component is not working, and we'll make the correction over there. So a similar kind of capability was there earlier also. Right—you could do some of these things with a big F5 router. You could. [Audience] The other thing is, when it comes to actually running and implementing all this, it's a big, big headache, and you don't follow everything out there. I did say up front: this is going to be harder and cost you a lot more. And that is true. It's like Agile—you were just saying Agile was a problem, right? We tried to implement Agile, it sucked, it actually was harder and cost us more. True. But there are massive benefits if you succeed in implementing these things. So you have to decide: are you willing to pay a price today to have a better future? That's all. Okay. There's nothing wrong with loving thy monolith. I even say it right here. Okay.
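The traffic-percentage point is the whole trick of a canary. In practice Istio's VirtualService weights do this declaratively; the following is just a plain-Java sketch of the routing decision, with hypothetical version names, to make the contrast with blue-green concrete:

```java
import java.util.Random;

public class CanaryRouter {
    // Send roughly `canaryPercent` of requests to the new version and the
    // rest to the stable one. Unlike blue-green's all-or-nothing cutover,
    // a bad canary only hurts the small slice of traffic routed to it.
    private final Random random;
    private final int canaryPercent;

    CanaryRouter(int canaryPercent, long seed) {
        this.canaryPercent = canaryPercent;
        this.random = new Random(seed);
    }

    String route() {
        return random.nextInt(100) < canaryPercent ? "v2-canary" : "v1-stable";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(1, 42); // 1% canary
        int canaryHits = 0;
        for (int i = 0; i < 10_000; i++) {
            if (router.route().equals("v2-canary")) canaryHits++;
        }
        // Roughly 1% of 10,000 requests (about 100) land on the canary.
        System.out.println(canaryHits);
    }
}
```

If the canary's error rate or CPU spikes, you set the percentage back to zero and nobody else is affected—that's the monitoring loop described above.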
You can love your three-month deployment, and you can love that monolithic application. That's actually why I use the phrase elephant in that presentation—teaching your elephant to dance. Okay, you can make your elephant more nimble. That's the KeyBank story. You can make it go from three months to one week, but that's actually as fast as you can go. And it's based on the simple concept of the Agile sprint: if your sprint cycle is one week, and the end of that sprint is a deployable artifact, that's as fast as you can go. The only way to go faster is to break up the artifact into multiple threads of execution—multiple teams executing on their own path. So you don't have to go here. As a matter of fact, I would not recommend it for most applications. But if you do decide that you have to change your business rules faster to respond to a competitive need in the market, or your business needs you to respond faster to critical vulnerabilities, or whatever it might be, think about which of those components you would break off and make fast, versus the overall thing. And let's talk about that for a second, because that's actually the next part of the presentation. You must be shapeless and formless, like water. I've got all these great Bruce Lee quotes. So here's the number one way to think of your old architecture, okay? In the old-school world—for those of you in development or architecture today—how many people are working on existing code bases, meaning you didn't give birth to this code base, it's something you inherited? Oh, a high percentage of you, okay? So I feel for you. And here's something that happens, depending on when you started with this code base you inherited. Let's say you inherited it only three months ago, or last week. Whenever you made that inheritance—meaning, oh, guess what, you're now on this new project and you didn't see the code base until today—you're cursing that previous development team, aren't you?
You're sitting there: God, what was wrong with that person? Why did they pick this crazy framework out of open source? I'd never even heard of this thing called Wicket—where'd Wicket come from? Or they used Struts 2, for God's sake. Why did they use Struts 2 to begin with? And now we have this critical vulnerability. So you're cursing that previous team. But I want you to think about this and have a little empathy for a moment. If you are in fact on the startup team, the team that created the thing to begin with, you don't know what constraints you're under until that moment in time. At the moment you started a new project, someone came to you and said: you gotta rewrite all this in Go—and go. And you don't even know Go. Well, guess what? It's gonna be a pretty ugly code base, okay? Or that team you inherited the Java code base from—they were all brand-new Java programmers too. Or better yet, the architecture people said: oh, we drew some pictures, and based on our pictures you have to use JMS queues for all interactions across components, because they're cool architects. Well yeah, that sucked too, okay? So you don't know what constraints they were under. And in many cases, they actually tried to do it right early on. They tried to build a nice, clean three-tier architecture, right? They had their UI tier, their logic tier. They tried to make it nice and modular. They had a common code base; the file structure for the code even made sense; the packages even made sense, right, in Java. So it was nice and clean when they did it on the whiteboard. But over time, it ended up looking like this. Because when different cooks come into the kitchen, they mix the recipe up a different way. And even here, this is a good example. This should have been a nice server-side component—let's say this is our Spring MVC, right? Let's say that's the controller right there, a Struts action.
And this was our stateless session bean, for the people who've been around Java for some time. And this is our nice stateful session bean, or JPA, or whatever it might be. But here, they're like: oh, I need to call some other code. Let me just reach over and call that code. And they went around the architecture. As a matter of fact, you might have more databases over here, and you go straight to the database, bypassing the data access tier. It just happens like that over time. And if you're not constantly refactoring, the code base ends up like a hairball, like this one here, okay? So here's the problem. You had all these different people involved in this project over time, and therefore you have a history. We've heard of this as a monolithic application; it is our legacy, but it is what it is. So here's a strategy you might decide to employ. It is known as the Strangler pattern. You've probably heard of it. It's very famous at this point in time, but I'll try to explain it maybe in a different way. The Strangler pattern basically says: I have this ugly, hideous thing, but my business has come to me and said, can we add something new? And you're like: God, I can add something new, but it'll take me four months to figure out what this hairball looks like to add that new business logic. Or you can simply add something new off to the side, okay? It's just a new feature, a new thing to show up on the screen, right? I want to show the current available inventory on the screen. If you could do that for me—my competition doesn't know—I'd like to add it to our screen. Well, instead of figuring out the hairball, maybe just add it off to the side. And your browser now makes that second invocation to that server-side component, to a microservice that is fully independently deployable, that meets the things we talked about in our evolutionary chart. We have automated it. We've made it run as a container.
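The routing decision at the heart of this strangler approach can be sketched very simply. In real life a router or Istio VirtualService makes this choice declaratively; this plain-Java version, with hypothetical paths and backend names, just shows the shape of it:

```java
public class StranglerRouter {
    // New paths go to the new green-field microservice; everything else
    // still hits the old monolith. Over time more prefixes peel off, and
    // the monolith shrinks behind the same router.
    static String backendFor(String path) {
        if (path.startsWith("/inventory")) {
            return "inventory-microservice"; // the new, independently deployable piece
        }
        return "legacy-monolith";            // the untouched hairball
    }

    public static void main(String[] args) {
        System.out.println(backendFor("/inventory/item/42")); // inventory-microservice
        System.out.println(backendFor("/checkout"));          // legacy-monolith
    }
}
```

The key property is that the monolith never has to know the routing rule exists—adding the new feature required zero changes to the old code base.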
We've actually got all this great CI/CD now, right? We actually have automated testing for it. You'd do it right, because you now have the opportunity to do this greenfield thing right. And it's just sitting off to the side. And if you do have to communicate back to the old ugly thing, there's actually a really great solution for that, okay? Anybody here use an enterprise service bus? Okay, only a couple of you. Yesterday we had an audience that was like: what about my enterprise service bus? And you might be thinking that too, so I'll show you—this is from a different slide deck, but I'll show it, because people do run into this issue, okay? So this is my really old presentation, right? Break up your application into different sprint cycles, different teams, each with their own process. But if you look at the application architecture, you might see something like this, right? Your independent team, your independent component, deploying that independent component. But what we did was put this enterprise service bus in the middle. And the problem is, the enterprise service bus technology was not a problem by itself, but we then organized a central monolithic team around it. In order to get anything changed around the enterprise service bus, you had to file a ticket. And then you had to wait weeks to get the ticket solved. It might be something as simple as a different transformation or connection. It might have been something as simple as a different route rule or a different filter—just a different transformation of the XML or JSON, whatever it was. But now a central team was formed. So the new way of thinking is to distribute the enterprise service bus to all the endpoints, all right? Don't have a smart pipe; have a dumb pipe. You might have seen that in this presentation here. Let's go back here. This is actually a pretty important one. Whoop, here it is. See where it says smart endpoints, dumb pipes?
Don't put the logic in the enterprise service bus; put it out in the microservice. And so even in this case, in this architectural pattern here—where you see this, dun, dun, dun, dun—if you have to integrate that microservice back into the whole, use an embeddable transformation and integration technology. That could be Spring Integration or it could be Apache Camel. We, of course, recommend Apache Camel. That's actually the session happening next door right now: how you deploy integrations as Kubernetes services. So literally, your whole integration logic can be built in a simple file, and you can deploy it as a pod, and it'll run all the time in the background, always running within a Kubernetes cluster. Fixing things, moving stuff from here to there. FTP here, JMS over there. JMS from here, transform it, send it out as HTTP over there. HTTP to HTTP, no problem. HTTP fan-out, no problem; fan-in, no problem. Things like that. All those enterprise integration patterns still apply, but they're not in a bus with a monolithic team and a monolithic architecture—they're distributed into each endpoint. Smart endpoints. You can then keep adding these components, and by the way, I call this hug instead of strangle. Doesn't strangle sound very violent? Yeah, you didn't think about that when you first heard the word? You're like, yeah, let's strangle it. I hate it. I call it hug. So let's hug it, let's embrace it. You give it a nice friendly hug and you just keep wrapping around it, taking pieces away—you can see the complexity melting away from the old code base as we add new, properly architected applications—and then we shrink that old monolith down to what it needs to be, essentially making it a microservice itself.
And then once you've done that on your existing application component—you've nicely built it so that it deploys easily—you can now go embrace others: find other big old ugly applications and start solving them the same way. This is the most common strategy you see for dealing with an old ugly code base, but it's not easy, okay? You'll notice one trick up there is that router—see, right there? You're thinking: well, how do I even get that? Well, if you take your monolithic application and drop it into Kubernetes or OpenShift—because you can, that's the cool thing about Kubernetes: it can run microservices or monolithic applications, it doesn't care; as long as it runs on Linux, it'll run—then you have that router built into the service fabric, built into Istio's virtual service. You have the router now. You get it for free, just by running in Kubernetes. So put the monolith there and then add your properly executed microservices around it. And then you can do all these patterns we've been talking about. Does that make sense? Is this helpful? Because it can be relatively simple. By the way, the journey I just painted in these pictures is not easy. It will take real work. But if your goal is to go faster and be more agile, you're willing to put that work in. Because now you can make the changes you want here. I used to actually talk a lot about what's called a business rule engine. We have a business rule engine at Red Hat called Drools. It's very, very popular—the most popular rules engine in open source. And when do you use a rules engine? When you have dynamic business rules that have to change pretty rapidly. In other words: I have a pricing engine, and based on our competition, we gotta change prices almost daily. So you put that in the business rule engine instead of the Java code.
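A decision table, at its core, is just rows of conditions mapped to outcomes that a business user can edit like a spreadsheet. Here's a minimal plain-Java sketch of that pricing idea—the tiers and discounts are made up, and this is not the Drools API, which would load the same table from a spreadsheet file instead of code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PricingRules {
    // Each entry is one "row" of the decision table:
    // condition (customer tier) -> outcome (discount percent).
    static final Map<String, Integer> DISCOUNT_BY_TIER = new LinkedHashMap<>();
    static {
        DISCOUNT_BY_TIER.put("gold", 20);
        DISCOUNT_BY_TIER.put("silver", 10);
        DISCOUNT_BY_TIER.put("bronze", 5);
    }

    // Apply the matching rule; unknown tiers get no discount.
    static double price(String tier, double listPrice) {
        int discount = DISCOUNT_BY_TIER.getOrDefault(tier, 0);
        return listPrice * (100 - discount) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(price("gold", 100.0)); // 80.0
    }
}
```

The agility win the talk describes is that with a real rules engine, changing a row means saving a spreadsheet, not recompiling and redeploying this class.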
Because Java code requires unit testing, compilation, building of an artifact, deployment to WebSphere, WebLogic, Tomcat, JBoss. Instead of doing all that, you simply change the rules in a spreadsheet known as a decision table. You literally edit a spreadsheet to change the new pricing rules and hit save. And now you've changed it. That's a big win for the business. And why did you do that? So you could have greater agility. And that can apply here too. Our rules engine and our process management engine and our integration engine are all embeddable in a microservices architecture—or a monolithic architecture, doesn't really matter. They can be embedded in a Spring Boot application or a Quarkus-based application. Okay, they can't be embedded in Node.js, but it wouldn't be hard to bridge the Node.js world in if you had to. All right, let's see, we've got a couple more things to talk about here. Here are the different ways to build a Java-based application and get that microservice you're interested in. Dropwizard was the first on the market that really made this idea popular. Dropwizard and Vert.x came out about the same time with this concept of a fat jar: everything you need for your application is in a single jar file, and it's a self-executable jar file. Very, very popular concept. A couple of years later, Spring Boot came out doing the same thing. They also added this really cool starter POM, so you can go to start.spring.io, just drop in these little starters, and boom, it creates a fat jar and you're off and running. And this was way cooler than what we had with old-school application servers—and I'm an old-school app server person—because I didn't have to build and install my WebSphere or WebLogic or my JBoss and then load an EAR or WAR into it.
I could just take my jar file, put it on a flash drive—or better yet, email it as a file attachment to somebody and say: run it. All they had to do was have a JVM, java -jar, and run it, right? That was the nice thing about it. There's also Thorntail, based on MicroProfile. That project, of course, is a Red Hat project that we're gonna be moving away from a little bit because we're now moving into this new world. And Micronaut was the cool kid on the block until Quarkus happened, and it's the newest, coolest kid on the block. I say that not because we worked on it, but because it actually is where you can take your Java-based application and not only does it run ultra fast, it can also be compiled to a native executable and run that way too, okay? And what I showed you in the previous presentation was that it can run as fast as Node.js, scale as fast as Node.js, and be smaller than Node.js, as an example. We focused on that because we wanted to make it happen for Kubernetes—this dynamic architecture needs a faster runtime. That's a way to think of it. We now have this amazing infrastructure called Kubernetes that allows you to scale, burst, move, change, and we had this old clunky JVM thing that just didn't wanna move that fast, so we had to fix that. The nice thing about Quarkus, though, is that all your traditional programming models that you're used to are there. You wanna build a CRUD-based application, you wanna do CQRS, you wanna talk to Kafka at the same time you write to your database—all good, okay? And actually, one of the things you probably did not see on the Quarkus side, I will show you real quick because I'm very excited about it. But I don't do it as a demo just yet because it's still kinda new. Let's see here, is it this one? Yeah. All right—oh, my dark launch is still running. Okay, let's turn the dark launch off. Get the DestinationRules—okay, wow, that cluster is slow.
I just gotta delete the DestinationRule and VirtualService—we'll just ignore that for now. But here's a good example of something I've been working on. It's part of Reactive Messaging, a new API that comes out of MicroProfile. This is an example where I built a set of microservices without using HTTP; I used Kafka instead. That's actually one of the things Edson will be talking about next in event-driven microservices: can you leave the world of HTTP behind and go into a world where HTTP is not required for your microservices architecture? And this is a demo I've been working on. You kinda see, in reactive messaging you have this very simple thing: your incoming pipe and your outgoing pipe. If you've worked on enterprise service buses, you're like: oh, that makes sense. If you've worked on Spring Integration or Camel, you're like: oh, that makes sense. So the concept is very straightforward. I have a simple piece of reactive code here that filters the inbound transactions and transforms the inbound transactions, and out the other end to a different topic they go. It's really kind of a neat idea. We don't have to run all this, but you can see this is a new programming model we're working on from a Quarkus standpoint, and this, of course, is using Kafka. But the cool part is: if I switch out Kafka for AMQP, like ActiveMQ, no problem. The programming model—and that's actually the best part—is the same code for Kafka or a regular broker, instead of the two different programming models that exist today. You can still use Kafka Streams and things like that if you want to, but in this case, it's the same code for both. It's pretty nice. And in this case, it literally maps and transforms and sends things through, okay?
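The filter-and-transform step just described can be sketched in plain Java. This is not the MicroProfile Reactive Messaging API itself—the real demo uses `@Incoming`/`@Outgoing` annotated methods on a Kafka topic—here a `List` stands in for the topics, and the payloads are made up:

```java
import java.util.List;
import java.util.stream.Collectors;

public class MessagePipe {
    // Take messages off the incoming "topic", keep only the ones whose
    // payload ends with "8" (the demo's silly accept rule), transform them,
    // and emit them to the outgoing "topic".
    static List<String> process(List<String> incoming) {
        return incoming.stream()
                .filter(payload -> payload.endsWith("8")) // the filter step
                .map(String::toUpperCase)                 // the transform step
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> out = process(List.of("order-17", "order-18", "order-28"));
        System.out.println(out); // [ORDER-18, ORDER-28]
    }
}
```

The point of the incoming-pipe/outgoing-pipe model is that this same function body could sit behind Kafka or an AMQP broker without changing the logic.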
And you can see my accept code is really silly—it's just looking for something that ends with eight. That's all. But it's looking at the actual inbound JSON payload, figuring out what to do with it, then modifying it and sending it back out, okay? So this is a good example of where I built my microservice simply to provide this transformer-and-filter business logic and just dropped it onto the bus, right? The Kafka bus, in this case. That's an example of what you can do there. Again, we talk about it from a MicroProfile standpoint. We are working to standardize all these APIs. That API I just showed you—you'd probably be afraid of it. You're like: wait a second, that's a Red Hat-only API. Well, no—we do the innovations, but we try to move them back into an open standard. In this case, MicroProfile is the open standard for those sorts of innovations, and the thing I just showed you is literally called Reactive Messaging under MicroProfile, okay? Open source and open standards are certainly something we're fundamentally about. We mentioned this in the earlier presentation on the history of microservices. Do note that this world really kicked off in 2013 and 2014, when Spring Boot, Docker, and Kubernetes were born. But the stuff that we used, Netflix OSS, was born back in 2012, okay? So if you're using those annotations from Spring Cloud, you're using 2012 code, in many cases. That was the old way of doing it—before we had a Kubernetes, before we had an Istio, before we had these more modern architectures. And if you go dig through some of the Netflix recordings, they'll actually say: had we had a Kubernetes, we would have done it differently, okay? They based all that architecture on a virtual machine architecture too. I mentioned this in the keynote presentation. It was a great example, though, of how you can launch things at great scale. That's one thing to be noted, right?
You have your Netflix OSS: Eureka for service discovery, which is now part of Kubernetes. You have Ribbon for client-side load balancing—Netflix Ribbon as opposed to what Kubernetes gives you. And there's Spring Config Server, as opposed to keeping that in your business logic. But when it comes to service discovery—how does one service find another?—let Kubernetes deal with that. You don't have to have a library for that. How do you make that invocation across the wire? Is it elastic? Is it resilient? Let Kubernetes and Istio deal with that, okay? You shouldn't have to manipulate DNS on your own. You shouldn't have to figure out how to keep a separate service discovery engine running—literally a whole other service discovery service, like Eureka—when Kubernetes keeps those services in its own memory. It knows where those services are. And therefore you can just talk to Kubernetes and DNS to say: hey, call that service. So it's resilient and scalable. Can it scale out, scale back? All these things are key principles you'll want in a microservices-based platform. You want pipeline automation, you want authentication and authorization, you want logging across the stack. In the case of OpenShift, we implement EFK out of the box—Elasticsearch, Fluentd, Kibana—so you get all your logging automatically, aggregated logging. We implement Prometheus and Grafana in the case of monitoring. Not to mention we have a new thing called Kiali—you might have seen that in Kamesh's presentation earlier today. I think I have Kiali still running. Let's see here. Yeah, let's see if my Kiali's running. Okay, yeah, there it is. I don't have it animating right now. But remember the customer-preference-recommendation demo that I showed you? Kiali is showing you exactly: are they healthy? Are they happy? Are they behaving well? What is the proper transactional path through that set of microservices?
And you get to see it, right? It's showing you your service graph—which services are calling each other, in the right order—and is everybody healthy and happy? So this additional concept of observability and monitoring is a key aspect of a microservices platform. We've had multiple questions about what happens if something fails. The goal is to fail fast: to fail with a dark launch, to fail with a canary, so that you fail small and fast, and you monitor, and you know. And then you fix. You see the difference in the way we think about it? In our monolithic deployments, we are so worried about failure that we take extra time to ensure we don't fail—and yet we still fail. In this world, you know you're gonna fail; you just plan accordingly, okay? So let's see, we're almost done now. A couple more things—that was an important one. One last slide here for a Bruce Lee quote. This is my last one of the whole conference; you should be happy. But I love it: a wise man can learn more from a foolish question than a fool can learn from a wise answer. Think about that—that's why I like getting questions, because it teaches me. Honestly, I learn a lot from them. And then here's one more slide I'll hit you with, because this is one of the most common questions in the back of your mind from being here all day: what is the difference between OpenShift and Kubernetes? OpenShift is Red Hat's distribution of Kubernetes. It is Kubernetes, yet we also make it enterprise-ready—ready for deployment in your data center, in your cloud, public or private—and therefore we package up a bunch of things that we know you're going to need to be successful. For instance, it's secure by default. You gotta log in to get in, okay? With most Kubernetes distributions, there's no user password, right? You just get in.
We have a special console and user experience, a service catalog—I showed this in the previous presentation, where you can just point and click your way through a deployment, and we use S2I to do the dynamic build. You don't have to worry about all the command-line tools. We have an ingress solution out of the box. Every Kubernetes vendor you deal with needs to give you an ingress solution—how do you get traffic from the outside internet, outside the cluster, into the cluster? That's known as ingress. We have this thing called a route, based on HAProxy. You basically say: I want a URL—and boom, you've got one, and users can now talk to your application. Log aggregation, which we just mentioned—EFK. Prometheus and Grafana metric aggregation. Image building tools, an image registry—where are you gonna store the images that are getting created? We have a place to store them internally. And of course a software-defined network, as well as multi-tenancy. A lot of our customers are really big customers with massive clusters, with tens of thousands of applications, and guess what? You actually start treating all those internal customers like they're different customers, you know what I mean? So you treat them as separate tenants. You want to bill them separately, budget for them separately, give them different resources—CPU and memory—separately. Those concepts of multi-tenancy have always been a core part of the OpenShift distribution of Kubernetes. And most of this stuff, by the way, also continues to move upstream. We're the second-largest contributor to Kubernetes outside of Google themselves, and a lot of things you've seen today are based on Red Hat contributions to those upstreams. But I thank you all for your time. I'm being told the time is up, but if you have other questions, I'll be around a little bit after this, so thank you so much.