Hello, Open Summit Japan. Welcome to this session: Bring Your Own Infrastructure, or Who Needs to Run a Control Plane Anyway? I'm Bruce Basil Matthews and I'm a member of Mirantis. I've been in the industry for forty-some-odd years and worked at a lot of different technology companies, including Sun Microsystems, Oracle, and Hewlett-Packard. When I started out, I was using an abacus and slide rules to do what I do, and fortunately things have advanced a little bit beyond that now. What are we going to talk about today? Well, I'd like to start with the foundation of where we were, are, and will be. That includes starting with where we've been, then going through some of the variations of writing applications for bare metal versus virtual machines versus containers, and then moving on to microservices and serverless computing as an option. I'd like to end that section with an important piece on service decomposition, because as you make the transition from bare metal to VMs and containers, this need for decomposition comes into play, and I want to make sure everybody catches on to that. Then we'll actually go through the physical infrastructure elements of bare metal versus virtual machines versus containers versus a category I call "who cares," which is a somewhat outlandish thought on my part as to where we're headed beyond serverless computing. I'm going to make a case for each, and for using all four of them in different scenarios, and explain why you might want to do that. Then we'll bring in the serverless and microservices environment that involves libraries from public cloud offerings, and maybe even our own as we move forward, and how we'll add more methodologies down the road to accomplish that. And then I want to go into a little bit of the mechanics of this idea I call "who cares."
It's my own flight of fancy, but it involves neural networking, building a neural network of elements within it, using trusted computing as a foundation, and standardization, which I think will be the key to allowing us to get from, as you see on the right, the mainframe all the way down to the public cloud, using application code like Ballerina versus Fortran and Pascal and things like that. OK: where we were, are, and will be. It's a very long journey, spanning those forty-some-odd years I talked about earlier, but we have to take that journey by taking the first steps. In the world where we have been, you started off with punch cards and paper tape, running down the hall to push them into the machine's holding stack without spilling any or getting them out of sequence, and then you had to wait for the stack to process. The minicomputer then came into play and made a huge advancement: I could run things without having to do all that, although I was still in some ways using paper tape and four-millimeter DAT tape to get things done. And when we were finally able to set up cron jobs, it was a lot easier not to have to wait; the job would set itself off and do something when it completed. The biggest advantage at that point in time was that a debugger would finally stop at the line that failed, as opposed to just telling you that the entire program had failed and leaving you to figure out where. When none of this mattered anymore, we thought we had reached nirvana. Writing applications is a little bit different on each of these platforms. On bare metal, application programs focused on business logic only, and you let the machine language take care of all of the interaction. Since computing capacity was at a minimum, the resources were like gold: you had to make sure you used as small a portion of RAM and the like as possible.
The first program I ever wrote was targeted at eight bits, one character. So you can imagine: if special drivers were involved for printing or storage or even connectivity, you had to build those drivers yourself and initialize them when you loaded the program; otherwise they wouldn't be considered part of it. And you had to run your code through hundreds and hundreds of debugging sessions before you ever executed a production run, because God help you if you made it fail and wasted those resources. But if the application did happen to fail, it was always because of the hardware. Moving on to the era of virtual machines: computing resources became a lot more commonplace. Platforms like VMware and OpenStack were pretty prevalent, and machines could be created quickly. We stopped caring about coding efficiency and size and all of those things, although I can attest that size continues to matter. Software libraries developed in C and other languages became commonplace and took the place of hand-coded sections of monolithic programs. The debuggers themselves started to stop on the line that failed, tell you the part of the line that needed to change, and even make recommendations about how to change it. Your code still had to be recompiled for the different platforms: they were all called UNIX, but HP-UX was different from SunOS, which was different from AIX, which was different from IRIX, and so on, so you had to keep different copies compiled for the different platforms. All of a sudden, recoverability became a very important part of things, because the hardware wasn't as redundant and resilient as it could be. But if the code failed, once again you could still blame it on the hardware, which was kind of an important thing. Applications written for containers: now we're starting to move into a whole different type of beast.
In order to give you that kind of sensibility for containers, I have to give you some information about microservices and microservices architecture, a bit about containers themselves, what their formats entail, how they work, and some of the major differences between containers and virtual machines. And once again, I want to talk about service decomposition at the end of that, because that's where you start, and especially if you're going beyond this point with me, you'll have to really pay attention to how you do service decomposition. Recoverability once again becomes even more my part of the bargain as an application developer and writer, but self-healing, shift-left SRE capabilities and policies are now becoming a necessary part of coding practice. Unfortunately, since I have no idea where something is now running among the worker nodes of some cluster, I can't blame the hardware anymore. But that's okay; we'll get through that one. So let's talk about microservices and microservices architectures. Microservices define an architecture that structures an application as a loosely coupled grouping of collaborating services. The services communicate using inter-process protocols: things like HTTP or gRPC if they're synchronous, and messaging systems such as Kafka or AMQP-based brokers like RabbitMQ if they're asynchronous. Services can be developed and deployed independently of each other, so you can have different developers managing and deploying different containers, providing different services within the same application, and it still works. The way that happens is by maintaining a persistent data structure that can easily be coupled and decoupled from each of the services: an input data set to a container gets massaged, and an output data set is presented to the next container in the flow.
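To make that input/output flow concrete, here is a minimal sketch in Python. This is my own toy illustration, not code from any particular framework, and the service names (`validate`, `enrich`, `encode`) are hypothetical: each "service" is an independent function that accepts an input datagram (a plain dict), transforms it, and hands the result to the next service in the chain.

```python
import json

# Each service owns one transformation. Services never share state;
# they communicate only through the datagram passed between them.

def validate(datagram):
    # Reject payloads missing the fields downstream services rely on.
    if "order_id" not in datagram:
        raise ValueError("datagram missing order_id")
    return datagram

def enrich(datagram):
    # Add derived data; the original input dict is not mutated.
    return {**datagram, "region": "apac"}

def encode(datagram):
    # Serialize the final datagram for the next hop (queue, HTTP, etc.).
    return json.dumps(datagram, sort_keys=True)

def pipeline(datagram, services):
    # The "application" is just data handed from service to service.
    for service in services:
        datagram = service(datagram)
    return datagram

result = pipeline({"order_id": 42}, [validate, enrich, encode])
print(result)  # {"order_id": 42, "region": "apac"}
```

Because each function only sees the datagram, any one of them could be replaced, redeployed, or moved to another host without touching the others, which is the loose-coupling property being described.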
All of the data consistency that this requires is totally up to me as the application developer, and in general an event-driven architecture is used to accomplish it; some of the mechanisms needed to ensure data consistency across all services are left up to me. So why do people use microservices as a foundation for their architecture? Because the services are relatively small, it's pretty easy to find out what went wrong quickly, and you can fix one part without having to mess with all the other services that make up the application. You can deploy versions of your containers individually and more frequently, so you can improve things much faster. You can localize those changes, so you can quickly move from something that's talking to folks in Japan to something that's talking to people in Europe. It's pretty easy to isolate faults, because the containers themselves are individually isolated, and there's less need to commit to a particular technology stack, because we are becoming more and more abstracted and distributed across this new microservices architecture. The potential drawbacks, and you'll find these as you start making the transition: developers have to deal with additional complexity, especially in the deployment phases, because services are distributed, and service discovery becomes an important part of what goes on under the covers; that typically involves being able to catalog the new things coming in, or recognize things that suddenly show up in the clusters that run the frameworks that run your microservices. This presents a lot more security concerns than there used to be, so you've got to pay attention to that, and as I mentioned earlier, deployment is more complex than it used to be. People tell me it consumes more memory than its predecessors, but I'm not convinced of that myself.
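An event-driven microservice flow like this can be sketched in a few lines of Python. Again, this is my own toy illustration, not a real broker or any vendor's API: a mediator delivers queued events, in arrival order, to registered consumers; a consumer either declines or claims an event, and if processing produces a changed datagram, that result is republished as a brand-new event. The consumer behaviors here (a "signup" handler and a "welcome" logger) are hypothetical.

```python
from collections import deque

class EventBus:
    """Toy mediator: delivers queued events to registered consumers."""
    def __init__(self):
        self.queue = deque()
        self.consumers = []  # each entry: (predicate, handler)

    def register(self, predicate, handler):
        self.consumers.append((predicate, handler))

    def publish(self, event):
        self.queue.append(event)

    def run(self):
        # Take events one at a time, in the order they arrived.
        while self.queue:
            event = self.queue.popleft()
            for predicate, handler in self.consumers:
                if predicate(event):            # "are you responsible?"
                    new_event = handler(event)
                    if new_event is not None:   # a changed datagram becomes
                        self.publish(new_event)  # a brand-new event
                    break

bus = EventBus()
log = []
# Hypothetical consumers: one transforms the datagram, one is terminal.
bus.register(lambda e: e.get("type") == "signup",
             lambda e: {"type": "welcome", "name": e["name"].upper()})
bus.register(lambda e: e.get("type") == "welcome",
             lambda e: log.append(e["name"]))  # returns None: flow ends
bus.publish({"type": "signup", "name": "bruce"})
bus.run()
print(log)  # ['BRUCE']
```

Notice that scaling here would mean adding more consumers or more bus instances, not scaling the whole application, which is the resource-saving property claimed for this style.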
This new phase of dealing with microservices in an event-driven environment has been dubbed "serverless," and that's a misnomer, of course, because there's still physical hardware and physical application capability involved; it's just only run when it's needed by the application, rather than sitting there burning compute cycles when it isn't in use. There are two different types of these elements: some are called microservices and some are called functions, and I've given a brief definition of the difference. A function is a relatively small bit of code that performs a single act; a typical microservice is a collection of those functions assembled into a more complex service. Of course, we always argue about it, and developers will use the two terms more or less interchangeably even though there are differences, which blurs the lines, but I'd like to keep them separate. If you want to find out more about serverless computing and how it works, I found that this particular article, "What is serverless?", covers it very well, and you can go there and read to your heart's content, but I'll give you a summary of what it describes: an event-driven architecture specifically designed as a cloud-native component within a Kubernetes framework. Events are going on constantly: clicks of mice, entries into forms, copying and pasting, and all of those things. Queues are made available so each of those events can be captured. Each event is then presented to a mediator, which takes them one at a time, in the sequence they came in, and presents each event individually to all of the channels registered with it, asking the consumer of each channel: are you responsible for this event? Generally they'll say no, and one will say yes, and when that happens, that event consumer picks up the event and processes it through the microservice that's been created within it. If the processing changes the datagram that went into it, the consumer passes the result on as a new event and the whole process starts again; if it doesn't, that's the end of the processing, and the mediator moves on to the next event in the queue. Some of the benefits of this kind of approach to application development for cloud-native environments: it's very resilient, since you can quickly fail over to alternates if one parameter isn't met within the application service itself; you can scale the individual parts, the mediation and the event consumers, instead of scaling the whole thing, so you save resources; you can update features much more rapidly, again because the microservices and the containers are somewhat isolated; and it's very flexible for developers to add new things and use different languages, from older languages like C and Pascal all the way up to newer ones like Golang and Ballerina. One of the bad parts of using serverless microservices is that there are no standards: each provider has developed its own set of functions and microservices, and they're different on each of the public cloud platforms. If you deal with AWS versus Microsoft Azure versus GCE or GKE, you've got to deal with different libraries and rewrite your applications to accommodate them. That can be as problematic as it was to recompile my code to run on HP-UX and SunOS. Now, this is an old-school kind of philosophy, but I think it's still very important: in order to attain this, and to move from mainframe to bare metal to virtual machine to containerized microservice to serverless delivery, you need to be able to decompose your application services into these very discrete
parts that can be loosely coupled, once again, as we started with. A lot of people don't have experience doing that, so I'll give you my impressions of the things that have worked for me when I've done it. You can decompose by business capability: you define a corresponding business capability as a separate component, and it ends up being the list of microservices that satisfy that component. Or you can decompose by the subdomains of your domain-driven design, so that application subdomains within the domains are contained within individual microservices and containers. Or you can do it in a more quick-and-dirty way by just taking the verbs in your application, the "do" factors, and defining individual microservices and functions responsible for fulfilling those specific actions; those become the decomposed components. Or you can take the nouns, resources like objects; most object-oriented folks are familiar with that one. I personally like to decompose by verbs and nouns and resources, so that I can apply human language when describing them to people; they don't have to be dealt with as if they're a computer programmer if you take that approach. Effectively decomposing services so that they fit the new cloud-native realm means giving each service a small set of responsibilities, following the single-responsibility principle. You can look that one up in most computer science books for a complete description, but really what you're doing is following the UNIX utilities philosophy: each utility, like grep or awk, does one thing and does it very well; it doesn't do several things at once, and you have to put the utilities together to accomplish more. Make sure that everything is loosely coupled; don't tie anything to a dependency on any other aspect of your application service or microservice, because if you do, you defeat the purpose. And once again, each of the services you're creating has an input datagram and an output datagram, and if the datagram is changed by the service, the published result has to have some impact on the next event that happens; otherwise the flow ends there. Other services that consume that event update their data based on what happened to the datagram as it went through the service ahead of them, and that's what they call the event-driven architecture. A lot of people ask me whether the microservices architecture is any better than any of the other methodologies for architecting application services, and although I'd love to give you an exact answer, I'm going to give you a rather funky one that's not quite appealing to everyone: it depends. If you've broken things down well enough to satisfy your needs in moving from mainframe hosting to bare metal, or from bare metal to virtual machines, then you have to ask yourself whether it's important to move to containerization, which will require another round of decomposition. Everything you do from layer to layer adds complexity, and you need more expertise in the various requirements at each layer: bare metal, virtual machine, and containerized microservice delivery for cloud native. You need to know what you're doing to actually move there. Networking is a whole new layer of complexity added on top, because there's so much more that relies on the networks between the containers. And if you're not adding value at each step, then why take on the additional complexity? That's my question. If you keep asking it and the answer is "yes," keep moving on this course; if not, you can stop. OK, let's talk about the platforms themselves: bare metal versus virtual machines versus containers versus that category of "who cares," and I'm almost getting to the point where I can tell
you what "who cares" means. OK, so here's a depiction of those three different architectures and how they look. You've got the physical infrastructure layer at the bottom. On bare metal, there's a host operating system and the applications run on top of it; it's a single, sort of monolithic thing that the applications sit in. If you move to virtual machines, you've got the same physical infrastructure underlying it, and there is a host operating system, but now we've introduced an interceding hypervisor that can emulate different guest operating systems, each sitting in its own namespace with its own RAM and its own allocations of physical pieces of the hardware at the lower end. Finally, from a containerization standpoint, you have the physical infrastructure and the host operating system as we originally had; layered on top of that is an engine, in this case I'm saying the Docker engine, but k0s is another version of that which Mirantis happens to produce, and within that engine applications can be run in separate containers. They're still using the same operating system; they're just taking advantage of connections to it via kernel facilities like namespaces and control groups. So those are the basic differences between the three. The bare metal world presents some benefits over the others. If you have workloads that demand the full computing capabilities of the physical hardware, put them on bare metal. The same goes if they require specialized hardware: in most cases some of it can be virtualized now, like GPUs and smart cards and things like that, but as specialized physical hardware progresses, some of it won't be able to be sliced up, and you'll have to use the full physical infrastructure to accommodate it on the backplane. If you host on bare metal, there's no noisy-neighbor syndrome, and there are fewer moving parts: if something breaks, you know where to look for it. And networking is much less complex, because if I've plugged in the RJ45 plug and the light comes on, I'm pretty sure I've got connectivity. With virtual machines, this is where that service decomposition stuff starts to take effect, because you can split things up across multiple virtual machines with very different operating systems. When you host these virtual machines on bare metal, you can increase the utilization of those physical resources a lot: on bare metal, typically only about 30 percent of the physical hardware actually gets used at any point in time, while with virtual machines you can push that up to 70 or 80 percent. In the emulation mode of the hypervisor, you can emulate things like NIC cards and GPUs and all kinds of other devices, so you can make more effective use of the physical hardware that you have. The hypervisor usually has a virtual machine monitor built into it that allows you to create and run the virtual machines themselves, and the hypervisor creates a buffer between the operating system and each of the virtual machines, so you can manipulate things in between them. The virtual machines run in their own namespaces and have their own operating systems associated with them, so you can have Windows and Linux and UNIX platforms running on the same physical server without interfering with each other, which is kind of fun. OK, some of the benefits of virtual machines: the hardware is virtualized to run multiple operating system instances, so if you need that to run your applications, it's a pretty good way of doing it. You consolidate multiple applications onto a single system, because you can host all the virtual machines there. There are cost savings, since there's a reduced footprint of physical hardware to host your application services. And you can provision faster, because the physical hypervisor operating system is already running, so you're just putting a definition on top
of that: the definition of the virtual machine. And you get the increased utilization, as I indicated earlier, up to 70 percent or so. OK, if you've stayed with me this far and you want to deal with containerization because you think the complexity is worth it: you can run this sort of software in a cloud-native way, predictably, and it'll run from server to server, or virtual machine to virtual machine, in pretty much the same way as you move it. It provides a way of running isolated systems on a single host, so each of the services can be run in isolation on a single server host. Typically there's a framework associated with that, and I'll get into that in a little bit. As I mentioned earlier, the engine, whether Docker or k0s or whatever engine you happen to be running, is just running on top of the host OS; it's not really a hypervisor, but it has hypervisor-like properties. The difference between that and a virtual machine, obviously, is the operating system kernel, which is shared in a containerized environment and not shared in a virtual machine environment. And because the containers are really lightweight and tiny in terms of the resources they need, they start up in seconds, versus the minutes it takes to boot an operating system sitting in a virtual machine; that's a big plus for recoverability and things like that. These containers have specific benefits over and above the virtualized or bare metal worlds. You can pack a lot more applications onto a host: where you might be able to pack 20 or so virtual machines onto a physical host, you can do 200 to 2,000 containers on that same host. You can share things much more easily across both the public and private cloud worlds, and it'll allow you to share resources across the two together in hyper-cluster worlds. You can also accelerate development, because of the quick packaging, testing, and application development; it's a whole new realm of CI/CD capabilities, the move to agile, basically. And since the containers all share that single operating system, you're only maintaining one, versus maintaining each virtual machine's operating system, so it makes the care and feeding easier. OK, we finally get to "who cares." If you've taken the time to move off the mainframes to bare metal to virtual machines, and the applications have been decomposed and placed into containers, then instead of having a single location where you can run them, within the boundaries of the bare metal or the virtual machine, what if all of the containers were placed into a secured registry that was only accessible by your organization? If, say, encoding is needed, the containerized service for encoding would be drawn from the catalog of discovered services within your application service and would simply be allowed to do the encoding for you, versus having to write it into the application. And then, since you don't need to worry about where things are running or how things are running, you're able to scale much more rapidly across multiple platform types at the same time: you can do it across servers, you can scale more easily, you can increase network bandwidth, all within the application services themselves. So far, what I've described is sort of the promise of what they call "serverless" computing, but in the serverless computing realm, the downfall, as I pointed out earlier, is that it results in vendor lock-in: the offering from AWS is different from the one from Azure, which is different from the one from GKE, so you get locked into one. What if, instead of relying on the vendor-provided orchestration capabilities for serverless, we hosted everything in a Kubernetes infrastructure running in all of those environments? Now we don't really need to deal with vendor lock-in; the framework itself is the one
seeking, through its service discovery, where the containers are and what they're used to do, like encoding or increasing network bandwidth or those kinds of things. People are now becoming familiar with the containerized world, and the Kubernetes framework is now becoming the most popular, so you can reuse your existing skill sets; you don't have to develop a whole lot of new ones. Multi-cluster environments become possible across multiple infrastructures, whether they happen to be on-prem or in the "cloud," as people say. You can even create environments and resources that are shared between private and public, and allow the developers to place things where they think they run best. And here's where I want to introduce the idea of a service mesh, which layers over the top of your multi-cluster environment. The one I like to use to accomplish those kinds of things is Istio, but there are others out there, such as Kong or SuperGloo, and they'll provide quality-of-service and security capabilities across all of the networking involved in interconnecting all of those environments. And if you'll follow me one step further in this thinking about "who cares" and deciding on a standardized platform: what if we built the intelligence for provisioning and deploying containers into a neural network sitting inside the service mesh that's presented to all of these clusters across all of these platforms, seeking in the trusted, secured registry the things it needs? Algorithms could be presented to scale up and down as required across multiple infrastructure types, minimizing the cost of maintaining the infrastructure and maximizing its performance, because now you can take advantage of running things where they run best instead of in individual servers. We introduce the idea of TPM-based trusted computing, and we ensure that only the physical hosts with trusted computing that represent my organization are part of this clustered environment across multiple platforms. Then all of the images in the trusted registry can be instantiated on whatever physical hosts they need to be, under the control of the orchestrator, Kubernetes, using the TPM technology to minimize the risk; with that risk mitigated, the work can be distributed. What will have to happen to accomplish this, obviously, is standardization, and a security model that works for everybody. This is something the industry needs to work on, in my opinion, to get to a harmonious condition where all of the flavors of infrastructure and containers and virtual machines and bare metal can live together in a secured way. All right, a little bit of light reading to finish off, if you like this kind of approach and would like to read a little more on it: there's a blog at the link I've shared that will hopefully give you some additional information. And finally, from my company, Mirantis, I wanted to give you an idea of some freeware that's available for you, called Lens, that will help you manage these orchestration engines across multiple clusters as a developer, and allow you to put things in one place or another depending on how you believe your application service will run best, or cheapest, faster-better-cheaper; it will manage things on-prem, in cloud, and so on, all in the same way. I'm told there will be some kind of question-and-answer session at the end, so I'm putting this slide in to represent that. If you think there's something here that makes sense for you, or didn't make sense for you, please feel free to give me a shout via email; my address is bmatthews@mirantis.com. Please feel free; that's what I'm here for. Thank you all very much, and I hope you have a happy gobble on the 25th of November. Take care, be socially distant, wash your hands, and wear a mask.