Hello and welcome to this session, entitled Bring Your Own Infrastructure, or Who Needs to Run a Control Plane Anyhow? As you can see, I have a Halloween-oriented theme. My name is Bruce Basil Matthews. I'm a senior solutions architect for Mirantis USA, and I've been doing this since before there were computers, when we used abacuses and slide rules. I have with me today my good friend Count Smash to talk about these things, because he is the premier authority on service meshes, and I think there's some applicability to this. Would you like to introduce yourself, Count? "Hello, my name is Count Smash, and I'm doing this for Bruce as a favor. I had to get out of my coffin; I hope I can get back in before the sun comes up." Thank you, Count Smash. I know that your full name is Count Service Mesh, but they call you Smash, just to make it easy. Hopefully you'll be around at the end of this to answer some questions.

So today, let's talk about what we're actually going to be talking about: where we were, where we are, and where we will be. We'll start with where we've been in writing applications. We'll talk a little bit about microservices and service decomposition, which I think is very important. We'll do some comparisons of bare metal, virtual machines, and containers, plus a category I call "who cares," which I'll unveil as we present more information. We'll make the case for each, and for services and more services down the road, and then we'll talk about some of the mechanics of the "who cares" category, which I think is an interesting side route for us.

So let's start with where we were, are, and will be. We started out doing things with punch cards and paper tape. Finally the green screen became a really fantastic advancement in computing history: no more waiting for a card stack to process, a minicomputer processor just ran it.
It was a huge advancement when it got there, and when you could set up cron jobs, so you didn't have to sit there and watch a job start off and finish, that was even better. And debuggers have gone through a tremendous amount of improvement; at that point in time they wouldn't even stop at the line that failed, you had to figure that out yourself. And we said that when it no longer mattered where you had to run something, we would have reached nirvana.

Writing applications on bare metal was a more straightforward chore. You had application programs that dealt with your business logic and had nothing to do with how the computer ran. And because computing resources were finite, they were like gold; you had to make sure you used as little as possible to get your process done. You had to write special drivers for things like printers and storage and all that kind of stuff, and they had to be initialized in your environment when you were running something. And you had to run code through hundreds of cycles of testing to make sure it wasn't going to fail once it was executed in production, and if it did fail, it was always the hardware's fault.

Once we got to virtual machines, there were a lot more computing resources available, so people didn't have to be so picky about code size, and as a result we started writing bloated software, and then we started sticking it in libraries so we could move it from one place to another. Debuggers actually did improve, to the point where they were pointing at lines and giving you an idea of what you might need to fix. But because there were different flavors of Unix, and none of the Unix providers had come together and decided on a single set of libraries, we compiled for each one: one for HP-UX, one for AIX, one for SunOS, one for Solaris, because Sun had two flavors. And the idea of recoverability came into play and you needed to deal with that. But if code failed, it was because of the hardware, definitely.
Then containerization started to come in over the last two to three years, and we're going to talk a little bit about microservices and the microservices architecture, and about the containers themselves and their configuration. We'll point out some of the differences between containers and virtual machines, and a little bit about service decomposition, because that's my favorite perspective on the thing. And recoverability was more my job now, as the application developer, on containers. But we had some shift-left, self-healing rules to rely on, and we would use those to fill that void. Since I don't have an understanding of where the actual container may be running, on what physical hardware or anything, I can't really blame it on the hardware anymore.

Okay, so let's talk about microservices and microservices architecture a little bit. A microservices architecture is a set of services that communicate either synchronously or asynchronously to maintain their connectivity to each other in a stable way. Data consistency is passed from one datagram in a container to the next datagram in a container, and so on. Microservices can then be developed independently of each other, so your coders can be hundreds of miles away, or thousands of miles away, and never actually deal with each other, because the schemas are well known at that point for the different containers. Persistent storage comes with each container, so it's self-contained and you can decouple it easily and move it to another service set. And consistency is event driven; we'll talk about event driven in a bit. Generally speaking, it's up to me to ensure that data consistency across services exists; that's part of the developer's responsibility now.

Okay, so let's talk about the event driven architecture for a cloud native environment such as Kubernetes. Events occur, they are captured in queues, and they are passed off to a mediator or mediators.
Those mediators have channels registered with them, requested by the consumers. Each event consumer takes a look at the data being passed to them in the event, and if it's applicable to them, they process it. If, in that process, they don't change the data set, that's the end of the event queue for that particular event. If there is a change in the data set as a result of being processed by the event consumer, that event comes back to the front: it goes through the queue, the mediator, and back to all the consumers again. So you get the idea; this is kind of how this event driven stuff works.

But the really important part of this is the ability to do decomposition. You have to start here, whether you're moving from bare metal or mainframe to virtual machine, to container, and beyond to the serverless world and beyond that. So here are some guidelines for it. You do it at all of those levels: regardless of what you had done before, you take a look at it to see what can be decomposed into microservices. You can look at it from a business capability standpoint. You can look at it from a design subdomain standpoint, so that there would be a hierarchy of them; it's easier to find things that way. You can also do it by taking the actions in the code you're decomposing and isolating each one of them as a microservice (verbs), or by nouns: there are several resources in a grouping, and those resources can then be identified in a service catalog and things of that nature. I like to use the verb and noun cases for doing this, because then I can write code that emulates human language to accomplish it, very quickly. When you've done this effectively, everything's got a single responsibility principle, much like the group of utilities in the old Unix world: they do one thing and they do it very well. They're very loosely coupled, because they have their own isolated persistent data sets.
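The queue, mediator, and consumer flow described above can be sketched in a few lines. This is a toy, in-process illustration under my own assumptions, not any particular framework; the consumer names and event fields are made up:

```python
from collections import deque

# Toy mediator: events land in a queue, the mediator fans them out to the
# registered consumers, and any changed data set re-enters the queue.
class Mediator:
    def __init__(self):
        self.consumers = []   # registered consumer callbacks (the "channels")
        self.queue = deque()  # captured events awaiting dispatch

    def register(self, consumer):
        self.consumers.append(consumer)

    def publish(self, event):
        self.queue.append(event)

    def run(self):
        while self.queue:
            event = self.queue.popleft()
            for consumer in self.consumers:
                changed = consumer(event)
                if changed is not None:  # a changed data set goes back through
                    self.queue.append(changed)

seen = []  # what the read-only consumer observed

def pricing(event):
    # Applicable only to orders without a total; enrich and republish.
    if event.get("type") == "order" and "total" not in event:
        return {**event, "total": event["qty"] * event["price"]}

def audit(event):
    seen.append(event)  # reads the event, never changes the data set

mediator = Mediator()
mediator.register(pricing)
mediator.register(audit)
mediator.publish({"type": "order", "qty": 3, "price": 5})
mediator.run()
```

Note that the enriched order goes through the whole fan-out a second time, and the run terminates because no consumer changes it again, which is exactly the "back to the front, then done" behavior described above.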
Each one of them publishes an event on its data chain when it changes the data, and that event gets processed as we talked about: other services can consume that event and process it in an event driven architecture.

Are microservices the way to go for everything? The best answer I can give you is: it depends. If you have achieved everything that you intend to achieve, in terms of being able to flexibly change the code and roll out upgrades and updates rapidly and all of those things, with simple decomposition, you can stop there. Because every layer that you add thereafter, from containerization on, becomes a layer of complexity and expertise that needs to be acquired by your organization. Additionally, the networking would require some additional layers of complexity. So I'm always asking myself whether the value added by containerization and microservice generation is greater than the complexity it introduces. If it is, it's probably a good idea to put it into a microservice; otherwise, leave it as a virtual machine driven application.

Okay, let's talk about the actual platforms themselves. We're moving along really quickly, because I need to get this done pretty rapidly, they told me. All right: in the bare metal world, the host operating system sits on top of the infrastructure, and apps run on top of that using the host operating system's libraries and binaries and all of that kind of stuff. In the virtual machine driven world, they've introduced a hypervisor in between: Hyper-V, KVM, Citrix, take your pick. Sitting on top of that are complete guest operating systems of different flavors, so you can have Windows and Linux and different flavors of Linux, and each one of the apps uses the isolated binaries and libraries presented by that guest OS, versus the host operating system that's supporting the hypervisor.
In the containerized world, they put together the same level of infrastructure using a single operating system, and have introduced some kind of container engine; in this case the Docker engine is depicted. Each one of the containers has its own binaries and libraries built into it, running a different application in each containerized microservice, and they all rely on that single operating system to run within the container engine.

Okay. Bare metal has a lot of benefits when the workload demand requires it: say an application has some special need for a GPU, or an SR-IOV smart NIC for networking. There are no noisy neighbors, there are fewer moving parts, and the network has less complexity; if I plug in the NIC and it works, I'm good.

The virtual machine world helps you do that decomposition by allowing you to utilize more of a physical host, hosting separate application services within different operating systems. Since it runs on top of the physical servers, the hypervisor emulates the physical hardware and virtualizes everything. The hypervisor lets you monitor, create, and run virtual machines, and it acts as a layer between the operating system and the virtual machine, kind of isolating them. Each virtual machine has its own unique operating system, so you don't have to worry about crosstalk between them, and virtual machines with different operating systems can run, Windows and Linux on the same physical host running a Linux flavor or whatever.

Then there's the case for microservices and containers, which enable a much higher level of performance from your application development teams; your developers can really start running these things predictably on different environments, regardless of where they're hosted. It provides an isolated way to run all of these systems on a single host.
Sitting on top of the physical host OS, as I said, there's a container engine running, which shares the kernel and usually the binaries and libraries from the OS, plus those that have been compiled into the microservice itself. And because they're microservices, they're extremely lightweight, so they start up almost instantaneously, as opposed to a virtual machine, which takes a long time to boot up before it's useful.

Okay, so we've done that whole thing of reworking from bare metal to virtualized applications, we've decomposed everything, and it's all in containers. Now what do you do? Well, just think: if you had a secured registry, so that only your groups and organizations could access it, you could authorize those containers and place them in there. And if an application actually needed encoding, for example, the container that provided that service would simply be in the catalog, and it would show up and start up and do the processing necessary for encoding. You wouldn't have to make sure there was a server available, or scalability, or networking, or anything else before starting it up, because the infrastructure is all running continually. And we'd say, oh, that's serverless computing. Well, not exactly. Serverless computing really ties you down to the provider: AWS, Google, and Microsoft each have their own library of serverless calls, and each one of them is different, because they won't standardize. Why? Because they want to lock you in.

You want to avoid this, and here's how. What if, instead of relying on their infrastructure only to do the orchestration and cataloging, we all standardized on Kubernetes? Everybody's running Kubernetes, and it's the same kind of Kubernetes straight across the board, with all of their serverless calls in custom resource definitions that they could share between clusters. This would take advantage of people's skill sets that already exist in the marketplace and only make them better.
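The "it's simply in the catalog and shows up when needed" idea above can be sketched as a tiny lookup-and-lazy-start routine. Everything here is hypothetical: the registry hostname, the image name, and the capability key are placeholders for illustration only:

```python
# Toy service catalog: a capability maps to an image in a (hypothetical)
# secured registry, and the service is only "started" on first request.
CATALOG = {
    "encode": "registry.example.local/video-encoder:1.2",
}

running = {}  # capability -> image, for services already brought up

def invoke(capability, payload):
    image = CATALOG.get(capability)
    if image is None:
        raise KeyError(f"no service in catalog for {capability!r}")
    if capability not in running:
        # Stand-in for pulling the image from the registry and starting it;
        # a real platform would schedule a container here.
        running[capability] = image
    return f"{capability} handled by {image}: {payload}"

result = invoke("encode", "clip.mp4")
```

The application never asks whether a server exists; it only names the capability, and the catalog decides what to bring up, which is the serverless-like experience described above.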
And multi-cluster applications are all of a sudden, voila, possible, because everybody's running the same basic orchestration. Some resources can be shared into the private domain and the public domain for this application development to occur, which means the developer is completely flexible as to how to do it. If you engage something like a service mesh, and Count Smash comes into play here, then using something like Istio you could manage and segment the networks, with traffic dedicated only to your clusters versus everybody else's clusters, and you could ensure both quality of service and security.

Okay, let's take it one step further. What if, sitting in that service mesh that is connecting all of your clusters across public and private providers of resources, there were both a neural network model of the network and machine learning algorithms that could predict in advance which containers were needed at any given point in time in a service being executed, and simply bring them up when needed? That way you could scale the containers up when you needed to for that particular event, then remove them and bring them back down, and you could scale across multiple infrastructures. This would be fascinating. It would also minimize the cost of maintaining the infrastructure, because less of it is being used at any given point in time.

And the biggest thing we could do is base all of that on the context of trusted computing, so that a container knows it can run on these trusted computers, because they are in the service catalog that's associated with the discovery mechanism within the service mesh. Imagine that there are stored in the trusted registry these instantiations of servers and devices and containers, able to be applied to your application services. The technology would make it really distributed computing, as we envisioned it when we drew it up on whiteboards ten years ago, maybe.
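Since the predictive scaling above is a "what if," here is the smallest possible stand-in for the idea: forecast the next interval's demand from a moving average of recent request counts, then bring up just enough replicas ahead of time. The per-replica capacity and the window size are assumptions, not figures from any real product:

```python
import math

# Assumed capacity of one container replica (hypothetical figure).
REQUESTS_PER_REPLICA = 100

def forecast(history, window=3):
    """Predict the next interval's request count from a simple moving average."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def desired_replicas(history):
    """Scale to predicted demand ahead of the event, never below one replica."""
    return max(1, math.ceil(forecast(history) / REQUESTS_PER_REPLICA))
```

A real system would feed this kind of predictor with mesh telemetry and let it pre-warm containers before the demand arrives, which is the whole point of predicting rather than reacting.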
And it mitigates the security risks, because of the integration of trusted platform modules. To do this, we're going to need to standardize: the security models will have to all be the same, and the authority models too, so that people recognize your authority with a certificate that's trusted by you to that computer, and all of those kinds of things, to live harmoniously across all of the flavors of infrastructure we're going to be dealing with. And that allows us to do it for containers, virtual machines, and bare metal, which I think is a feat.

Okay, shameless self-promotion in the last few minutes I have here. This is a blog that goes into the details of the neural network, and applying service decomposition to that network in order to be able to distribute computing. And then one I developed myself, a tool I've used, which is now owned by my company. This is a developer's tool for accessing and deploying to multiple flavors of both Kubernetes and the Kubernetes framework, so that you could have instances sitting in AWS and instances sitting in a private cloud, or even on bare metal, that were being accessed and that code was being written to. Anyway, take a look; I hope you enjoy it.

Now, we're going to leave the last three or so minutes of this presentation for questions and answers. Hopefully I've given you at least some food for thought, if not something that you can go out and get excited about. Thank you very much. Have a great day.