All right, I think it's time, so I'm going to get started — better to be early than late. Hi everybody, I'm Christina Lin. A little introduction about myself: my day job is technical marketing manager. What does a technical marketing manager do? We each have a product, and my job is to put lipstick on the pig — to make the product look pretty and presentable in front of you. That's my day job, and it's fun, but as a technical person my passion is really evangelizing technical content. I like to share what I've learned over the years and what I've seen on the customer side. So this talk is a summarized view of what I've seen in the past few years of people building cloud-native applications — sometimes doing things wrong, sometimes doing things right. There won't be a lot of demos today, but there will be a lot of concepts about how to put things together and how everything works as a whole. That's the agenda of this talk.

So, cloud native. To set the baseline: when we say cloud native, we're talking about applications running on top of public, private, and hybrid clouds, and we want to make them scalable and flexible. Those are the goals: speed to get to production, quick access, and the ability to scale. Flexibility and scalability are what we're looking for. The problem is that most of my customers — people using our technology or anybody else's — keep getting bombarded with terminology from the cloud-native world. Okay, what are microservices? Fine, I've learned microservices — so what is this new service mesh thing coming up? What is the event sourcing you talk about with event-driven architecture? What is this CI/CD automation? All of it goes into their heads, but they don't know where each piece fits. So here I am, trying to set things straight — to organize it all in a way that's easy to understand.

I call this part "700 BC" — that's 700 days Before Containers. Before containers I was doing a lot of application development and application integration — I was more on the integration side of the story. Applications need to talk to each other, and back then I had an Excel file listing every application my application talked to. Whenever my bosses asked "who do you talk to?", I would bring up that Excel file and start reading it out: these are the applications I'm talking to. That made things really chaotic and hard to manage — if I needed to make a change I had to notify all the other parties, and everything got complicated. That's why we started the old SOA and ESB approach: a big centralized enterprise service bus moving things along, exposing big services. That was good, but while doing the SOA work I found I had become the bottleneck of the whole company, because everyone was waiting for me to implement the integrations. That's when I started to think: if I could break my integration code down into smaller pieces and deploy them independently, wouldn't
that be better? That's when I started doing lightweight ESB, and that's how I got into the OSGi world. Before that, everything was Java — the big Java EE container where everything lives in one big monolithic application. OSGi also runs on the Java virtual machine, but it lets you isolate modules: you put them into bundles, so when you want to restart one application it doesn't affect the others and you don't have to restart everything. That gives you something much lighter weight and more modular, which is better. But then along came the whole cloud-native, container world, where people started switching to microservices. That was a big deal for us, because we were heading the same way but hadn't arrived yet — we hadn't figured out the best way of deploying our technology or automating it. When the cloud world came along, we saw: this is the way we should go, this is how we should put everything together, this is how we should develop applications.

Here is how the CNCF — the Cloud Native Computing Foundation — defines what a reference architecture should look like. You can see the layer where you do your application development, the layer where you orchestrate everything — container orchestration and storage — and then provisioning, which is the CI/CD automation part. These are the things that define what should be in the architecture. But it's all very vague — what do you actually mean by that? So I drew a diagram that makes more sense to me, and I hope it makes more sense to you. It's a little small, but you'll get the slides anyway — I'll tweet them out.

I wanted to break it down into pieces so it's easier for you to understand how to build a better cloud-native reference architecture. I broke it into four planes. The bottom plane is orchestration and platform, which handles most of the container orchestration — the basics and foundations of how things work together. The top plane, which is where we'll focus today, is how you develop your application in a container-native world: you'll see events coming into this plane, you'll see microservices, and you'll see the data sources. Then there is the service mesh plane, which handles a lot of the control of your application traffic — where the traffic goes; it's the traffic control plane. And then there is the resource optimization plane: how do I optimize my resources, how do I make sure I shut a service down when it's not being used? Sometimes people call that serverless, but I think serverless is more than that — serverless is not just resource optimization. With serverless you still have to think about functions, and who has been defining functions? Lambda has, but nobody else really has, so we're still on our way to defining what functions are. So I wouldn't call this plane serverless — well, it's partially serverless — but I think of it as the resource optimization plane, for now. So
that's how I view the whole architecture and how it fits together. Each plane also has related technologies: for data you have change data capture and data integration — how do I bring data together — and for microservices you have domain-driven design, that kind of thing. That's my overview for today. You've seen the picture, so we're done, right? But now I'm going to go into the details of what these pieces mean, what I mean by them, and how you do them.

First, microservices. I think you've seen this a hundred times, so I'm not going to lecture you about microservices anymore. Breaking your monolith down into microservices is good, but for me the struggle is: what is the right size for a microservice, and why do I need to decide it? Should I put two different things into a microservice, or three? How do I decide what they should be? I think we should take a lot of caution when defining microservices and domains, because that decision affects how we deploy: do I deploy two functions in one instance, and how do those two communicate? It also affects how we do automation, because how you deploy is tied to how you automate. So when you're defining your domains and what goes into each microservice, ask: is this an independent piece of code that can talk to the others independently?

One of the big mistakes I see when people get domain-driven design wrong is a huge, big service in the middle — very similar to what we had back in the SOA days — where all the services talk to this one thing and it becomes the bottleneck. Think about how to break it down: are you putting too many tasks into that service? Can you separate it and make the communication between the pieces more distributed? These are the things you should reiterate on.

Another thing with microservices is the bounded context. You'll have two different domains talking to each other, and what I see people do slightly wrong is this: they define "payment" in one domain and decide that whenever anyone needs payment, they come back to this domain and access it there. But think about it — when you say payment, do you mean payment in the shopping cart domain, or payment in the insurance domain? Sometimes they're very different, and they should be handled differently. When I see too much inter-domain communication, that's usually the sign that you should redefine your bounded contexts. It's not exactly wrong, but it should be refactored a little — and the good thing with microservices is that they're easy to refactor.

That brings me to communication between domains. People ask me: if I have a microservice here and a microservice there, and they belong to different domains, how do they talk to each other? It should always be clear that between domains you set up a contract, a boundary. If a microservice wants to access a particular service in another domain, it has to go all the way out and come back in through that domain's service — no direct communication, because that breaks the boundary, and then you're not doing domains right. The reason we set up contracts is that these will probably be two different teams working together, and every secret passage between domains is a bad thing: you get leaks and too many dependencies on each other. It's always better to go through contracts. When we get to the next part, you'll see why my microservices have all these different colors; for now, just make sure you do your domain-driven design and set up your bounded contexts.

Okay, so now all our microservices are created and our domains are defined — it's small, fast, simple, and easy to maintain. Now the problem comes: I have about 300 different microservices and they all need to talk to each other. Awesome. Even better, I have other services trying to use my microservices, so for one particular function to get something done, it needs to call 300 different microservices. It creates a great big mesh of services all trying to talk to each other, and I have to get out my Excel sheet again, start writing down who connects to what, and tell my boss I've got 300 different microservices connecting to my service. That's not a good way of doing things. That's why I came up with this agile integration concept. It's not something you do physically, but logically: you have to think about the responsibilities of your microservices, and I've broken them into different responsibilities.
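Before moving on, the bounded-context point about "payment" can be made concrete with a small sketch. This is purely illustrative — the class and field names are made up, not from the talk — but it shows the same word meaning two different things in two domains, each keeping its own model and exposing only a plain contract across the boundary:

```python
from dataclasses import dataclass

# Shopping-cart domain: "payment" is a charge against an order.
@dataclass
class CartPayment:
    order_id: str
    amount: float

class CartDomain:
    def pay(self, order_id: str, amount: float) -> dict:
        payment = CartPayment(order_id, amount)
        # Only a plain contract (a dict here) crosses the boundary --
        # never the domain's internal CartPayment model.
        return {"status": "paid", "order": payment.order_id}

# Insurance domain: "payment" is a premium on a policy -- a different model.
@dataclass
class PremiumPayment:
    policy_id: str
    monthly_premium: float

class InsuranceDomain:
    def pay_premium(self, policy_id: str, monthly_premium: float) -> dict:
        payment = PremiumPayment(policy_id, monthly_premium)
        return {"status": "paid", "policy": payment.policy_id}

# Each domain handles "payment" its own way; callers see only the contract.
cart_receipt = CartDomain().pay("order-42", 19.99)
policy_receipt = InsuranceDomain().pay_premium("pol-7", 55.0)
```

If the two domains shared one Payment model, every change in one would leak into the other — exactly the secret passage the contract is there to prevent.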
Here at the bottom are the normal core microservices you write every day: a single piece of business logic with its own data source — each microservice should have its own data source; that's how it was defined. But what about these? These are the composite microservices, the controllers that help you put things together: they hide the complexity from others calling your services, and they help with data transformation. We know not everyone wants to use Java; not everyone wants Python; some people like Node.js, and sometimes their data formats look a little different, so somebody has to come in and normalize them. That's what this layer does.

And what about this one? This one I call the facade — it faces the people calling you. Say Netflix provides a service to a PS4, an iPhone, and maybe your Samsung TV. The data itself is exactly the same, but the way it's served out differs: the PS4 wants XML with some extra metadata, the iPhone wants JSON with other data in JSON format, and maybe the Samsung TV wants plain text — who knows. They all request the same thing in different formats, the formats change all the time, and you have to deal with all those changes. That's the facade layer. I see this layer updated much more often than the core business and composite layers — it's where the quick changes go. That's how you keep your system agile, how you keep it flexible. The core, by contrast, is basically just built and run: it has its data source connected and a lightweight, easy runtime.
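To make the facade layer concrete, here is a minimal sketch — the client names and record fields are just illustrative — of the same canonical record being rendered differently per caller, while the core service stays untouched:

```python
import json
from xml.etree.ElementTree import Element, SubElement, tostring

# One canonical record served by the core service.
movie = {"title": "Example", "rating": 5}

def facade(client: str, data: dict) -> str:
    """Render the same data in the format each client expects."""
    if client == "ps4":
        # XML with a little extra metadata on the root element.
        root = Element("movie", source="facade")
        for key, value in data.items():
            SubElement(root, key).text = str(value)
        return tostring(root, encoding="unicode")
    if client == "iphone":
        # Plain JSON.
        return json.dumps(data)
    # Plain-text fallback for everything else.
    return f'{data["title"]} ({data["rating"]})'

print(facade("iphone", movie))  # {"title": "Example", "rating": 5}
```

When the Samsung TV suddenly wants a new field order or a new format, only this function changes — not the core microservice behind it.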
Then we have the control and dispatch layer — the facade — which helps you deal with all the different needs of different customers and users, and makes things easy and fast for them. It sees two kinds of input: request-and-response, where a request comes in and you give a response back, and streaming, where you just receive a stream of data. That's what we see today at the control and dispatch layer, so you have to be equipped to receive both kinds.

Coming back to this: in order to create a better way of communicating with your vendors and partners, we have to set up contracts — and what's the best way of setting up contracts today? APIs, of course, because there's a standard for API documentation: Swagger, now called the OpenAPI specification. So how do we define the APIs? The old way, the way I used to know: I'm a developer, I go ahead and do my development, and once I've finished my code I tell the other guy, "this is my contract, this is what I do, you stick with what I tell you." That's kind of like WSDL, right? Remember the old SOA days — they give you the WSDL, you load the WSDL, and it generates the code for you, so the code is ready. But what people do today — and you can see this in a YouTube video I have that shows you how to do API-first — is use tools that help you create the contract first. You can build your Swagger document without any coding: you just configure the URLs you expect and the data format, and it generates the Swagger document for you. Then you take that document, hand it to the developer, and say: this is the contract I just drew up with the partners, go ahead and implement it. That saves you a lot of back-and-forth between your partners and your developers, and it's now the more widely adopted way of working: API-first development. Within a company, for internal contracts, I still see more code-first, because that's how people are used to working; with external users, I see more contract-first development.

Once you have the APIs, of course, you need to secure them. I don't want to go deep into API management — that's another big topic I could talk about for three hours: how do I secure my APIs, how do I manage all of that. Just remember: when you have contracts, make sure you have a way to manage them — a place where you put the files so that when people want to see what the contracts are, they can. You need that kind of management for your API contracts too.

Then we have the composite responsibility in the middle — remember that layer. That's where a lot of service orchestration and data transformation happens: collecting data that comes in as streams and handing it back to big data storage; splitting requests up across microservices — it's more scalable now, so I want to split them up and send them to different places — or normalizing data coming from different devices and sending it back to the back end.
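Going back to the contract-first idea for a moment: the artifact both teams code against is just an OpenAPI document. Here is a minimal one, sketched as a Python dict so it's easy to inspect — the `/orders/{id}` endpoint and its fields are made up for illustration, not part of any real service:

```python
# Contract-first: this document exists (written by hand or generated by a
# design tool) before any service code does. The endpoint is hypothetical.
openapi_contract = {
    "openapi": "3.0.0",
    "info": {"title": "Order Service", "version": "1.0.0"},
    "paths": {
        "/orders/{id}": {
            "get": {
                "parameters": [{
                    "name": "id", "in": "path",
                    "required": True, "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {
                        "description": "One order",
                        "content": {"application/json": {"schema": {
                            "type": "object",
                            "properties": {
                                "id": {"type": "string"},
                                "total": {"type": "number"},
                            },
                        }}},
                    }
                },
            }
        }
    },
}

# The provider team implements this; the consumer team builds a client
# or a mock server from it -- no waiting on each other's code.
```

Serialize it to YAML or JSON and it's the same Swagger document the design tools generate.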
That's what this layer is responsible for — that, and talking to external services. And last but not least: anti-corruption. What do I mean by anti-corruption? All the applications we create today in the cloud-native world are greenfield — all new, all shiny. But you still have to talk to the big legacy machines, the big IBM boxes, and they're slow sometimes; their cycles are longer. You can't wait three months to deploy your application — that's not how you do things in greenfield, where you want to publish code every two days, every hour. So the anti-corruption layer creates a middle tier that hides the legacy away: you implement a lot of things between you and the legacy application, it hides all the complexity, and whenever you need an update, it translates whatever you need into something that's easier to communicate with from your side. That's the anti-corruption responsibility.

So that's all the API talk — but then we have events. These two diagrams are meant to be compared side by side, because there are two ways of communicating: synchronous and asynchronous. APIs are synchronous communication — you have a request and a response. With synchronous communication, when a request comes in, there's always a response going back out, and the way you call or trigger the other services is mostly sequential: I call this one, I get my response, then I call this one, and this one. And just to set the background: we're doing this in a distributed environment. This is no longer small internal logic making in-memory calls — everything is distributed, so the communication between microservices matters.

Think about it: all these data sources are independent. If I want to sync data between two services — say they both hold inventory information — then to update both I have to make two API calls to keep them in sync. The way you set up contracts for synchronous calls is with OpenAPI, the Swagger stuff I just talked about, and all of these calls can be monitored and managed through API management tooling.

Then there are transactions. Transactions are super important. Before, we had XA transactions and all of that, but in distributed environments we want to avoid transactions — and sometimes it's not avoidable. So what do we do? There's a pattern we can implement called Saga. The way it works is compensation: every service has a compensating action, and if something goes wrong downstream, those compensations get called. Basically, if you took a thousand dollars out, the compensation adds a thousand dollars back. So if service one calls service two, service two calls service three, and something goes wrong at service four, it comes back to service three — run your compensation — and back down the chain until service one runs its compensation too. That's the Saga pattern; that's how you roll back a distributed transaction.

So with synchronous calls you can do all that — but in a distributed world, making everything synchronous may not be a good idea, because there's a lot of time spent waiting, and since everything is scalable, there's surely a better way. That's why event-driven architecture has become so popular in the distributed world. Instead of data coming in as single requests — you still have those command-type request-and-response calls — you can also have streams of state coming in, say from IoT devices. You collect them in a buffer and send them through your microservices. And instead of creating a service mesh where everyone contacts everyone else, everything is sent into a centralized store, a dispatcher of events; people listen for the events they care about and react to them. You're creating a more reactive system. In front you have a buffering place where incoming events are stored, and inside your bounded context you have whatever reacts to them.

And contracts — you know by now how important contracts are between systems. For REST, for synchronous calls, APIs set the contract. For asynchronous calls, the data is your contract. The data is your contract: you tell the next consumer, this is our contract, and this will be it.
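Returning to the Saga pattern for a moment, the rollback chain can be sketched in a few lines. This is a minimal orchestrated-saga illustration — the step names and amounts are invented — showing each step paired with a compensation, and completed steps being undone in reverse when a later step fails:

```python
# Minimal Saga sketch: each step is (action, compensation). On failure,
# compensations for the steps that already succeeded run in reverse order.
def saga(steps):
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()  # e.g. "add the thousand dollars back"
        return "rolled back"
    return "committed"

balance = {"amount": 1000}

def withdraw():  balance["amount"] -= 1000           # service one
def refund():    balance["amount"] += 1000           # its compensation
def ship():      raise RuntimeError("out of stock")  # downstream failure

result = saga([(withdraw, refund), (ship, lambda: None)])
# result == "rolled back", and balance is back at 1000
```

In a real system the "steps" are remote service calls and the orchestrator has to persist its progress, but the compensate-in-reverse shape is the same.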
There are efforts in the community to create an asynchronous API specification, but I haven't seen a really strong community around it yet — some people are working on it, and I think in a couple of months one will become more dominant in the market, and that will be your asynchronous API contract. For now, what I see people doing is using the data as the contract.

Then there are transactions: how do we do transactions in an event-driven, cloud-native world? People do event sourcing. Instead of a real-time transaction rollback, we roll back eventually — we do eventual consistency of the data. Consumers listen to the stream of state changes and update accordingly, and if something goes wrong, instead of taking the minus-100 entry back out, they just add another entry: minus 100, then plus 100, to balance it out in the event store. With event sourcing you have a store that keeps all the states; instead of canceling a state, you add a compensation on top of the states. It's very similar to Saga, but as a way of storing things.

The other way of synchronizing your data is change data capture. Instead of one service calling every single microservice to synchronize — no: there's a mechanism that listens to all the changes on a data store, and whoever is interested in those changes listens, picks them up, and updates the data inside their own data store. That's what people do to synchronize data with change data capture in the event-driven world.
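The event sourcing idea — state as a log of immutable events, with compensations appended rather than anything deleted — can be sketched like this (the account figures are made up):

```python
# Event sourcing sketch: state is an append-only log of immutable events.
events = []

def append(event_type, amount):
    events.append({"type": event_type, "amount": amount})

def balance():
    # Current state is derived by replaying the whole event log.
    return sum(e["amount"] if e["type"] == "credit" else -e["amount"]
               for e in events)

append("credit", 500)
append("debit", 100)   # take 100 out...
append("credit", 100)  # ...then compensate: add 100 back; both events stay
# balance() == 500, and the log still shows all three events
```

Nothing is ever rewritten — the "undo" is just one more event, which is what makes this play well with eventual consistency.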
There's also the good old event-driven architecture idea: share your state. State is immutable, and the producer doesn't know where its information goes — it doesn't know who's listening to its state. All it does is say: here's my state; anybody who wants my state gets it, and they can react to it. That's how we do it in the cloud-native world: two options, the synchronous way and the asynchronous way — different ways of implementing communication depending on how you want to do it.

So here is one of the common topologies. We connect to the database with a technology called Debezium — I don't know if you've heard of it; at Red Hat we ship it with Kafka Connect. What you do is configure which tables you want to listen to — you can filter, simple SQL-style, to listen to only specific ones — and it detects all the changes. Basically it reads your database's transaction log — you know how every insert, update, or delete gets written to that log — and writes the changes into Kafka. So you have Kafka in the middle, and on the other side another database or another system consumes the changes. That's how you do change data capture.

All right, so that's a summary of agile integration — and that's only the first, top layer of my talk. I know it took a lot of time, but I think you should spend more of your time on that development layer, because the rest of the stack is built for you: it's the tools you use in order to achieve all that greatness of building the architecture.
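As a sketch of that topology, here is roughly what registering a Debezium source connector with Kafka Connect looks like. The property names follow Debezium's MySQL connector as I recall them — verify them against the documentation for your version — and the host names and table list are made up:

```python
import json

# Hypothetical Debezium MySQL connector registration. Check the Debezium
# docs for the exact property names supported by your version.
connector = {
    "name": "inventory-cdc",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.example.internal",  # made up
        "database.port": "3306",
        "database.user": "cdc_user",
        "database.password": "********",
        "database.server.id": "5400",
        "database.server.name": "inventory",  # becomes the topic prefix
        # The filter: only capture changes on these tables.
        "table.include.list": "shop.orders,shop.stock",
    },
}

# You would POST this JSON to Kafka Connect's REST API, e.g.:
#   curl -X POST -H "Content-Type: application/json" \
#        -d @connector.json http://connect:8083/connectors
payload = json.dumps(connector)
```

From then on, every committed insert, update, and delete on those tables flows into Kafka topics, and downstream systems subscribe instead of polling the source database.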
For the container orchestration platform — and here I'm referring to OpenShift or Kubernetes — the reason it exists is that now we have all these beautiful microservices flying around everywhere, and that creates a big headache for your ops people: instead of managing ten big application servers, they now have to manage a thousand smaller microservices. What Kubernetes and OpenShift do is help you rein in your containers — manage them so your microservices don't go everywhere.

You get service discovery: anybody who wants this little microservice asks, and Kubernetes or OpenShift says, you come in here, and you get it here. And because of discovery, you get load balancing: you can scale a microservice out to several instances and the platform balances across them, because it controls where your microservices are allocated.

The other big problem with all these containers and microservices is configuration: I have config for production, config for testing, config for my UAT environment — they all have different configurations, and they all have different secrets and passwords. And you don't write your passwords in the ConfigMap, right? If you're doing that — no. People have access to it, so you have to find a way to secretly hide the ID and password for entering your database and so on. There needs to be a way of managing all of that too. And then: I have all these containers, but what's in a container is the image you need to run — where is that image? You need a place where you can find all the images and pull them down, so the platform has a registry to store your images. Monitoring helps you see what's going on in your platform, and it also collects all the logs. So basically the platform helps you manage all these crazy containers flying around.

But that's not all. When you're doing a cloud-native or container-native system, the first thing that comes to mind — other than doing microservices right — is automation, because there's a lot more you need to do. For that we now have operators — has anybody heard of the operator pattern? An operator helps you manage your application's lifecycle. When you spin up the operator, you define the services or the image you want to stand up; it looks at everything you've configured for your application in your OpenShift system, then goes off and creates all the services, pods, and routes related to that particular application. If you make any changes to the things you've registered in the system, the operator knows, and it updates, patches, or deletes accordingly. It controls the lifecycle of your services, which makes things a lot easier, because it's managing the application for you. Beyond the application management itself, these are just some of the CRDs — I don't want to go into the details.
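The heart of the operator pattern is a reconcile loop: compare the desired state declared in a custom resource against the observed state of the cluster, and act to close the gap. Here is a pure-illustration sketch of that loop — not a real Kubernetes client; the resource names are invented:

```python
# Reconcile-loop sketch: desired state (from a custom resource) vs.
# observed state (what actually runs); emit actions to close the gap.
def reconcile(desired: dict, observed: dict) -> list:
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("patch", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

# The custom resource asks for 3 replicas of "shop"; the cluster currently
# runs 1, plus a leftover "old-app" nobody declared anymore.
desired = {"shop": {"replicas": 3}}
observed = {"shop": {"replicas": 1}, "old-app": {"replicas": 1}}
# reconcile(desired, observed) ->
#   [("patch", "shop", {"replicas": 3}), ("delete", "old-app", None)]
```

A real operator runs this loop continuously, triggered by watch events on its CRD, which is why edits, patches, and deletes all converge back to the declared state.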
You know what OpenShift is, right? At its core it's a big API server, and the way you ask Kubernetes or OpenShift to do something is by calling those APIs. To use the APIs you have to provide a definition, and that's why the developers of Kubernetes and OpenShift operators define all these CRDs, custom resource definitions. When you create resources against those definitions, the operator goes off, implements whatever needs to happen for that configuration, and creates the resources for you. So for operators, you define all of this and the operator spins it all up.

The other big thing for automation is pipelines; pipelines are super important. Operators manage your lifecycle, but a pipeline picks your code up, takes it through a process of building it, letting users see if it works, and then promoting it into production. Today I think we're still figuring things out, because operators and pipelines do somewhat overlapping work, so I think there will be an effort to look at what's going on here and make the two work together: maybe I can put a pipeline into my operator and the operator will run it, or there's a way to build my operator into part of the pipeline. I'm not sure, but I think that's where the next phase of the technology will go, because we are still figuring this out in the cloud native world. That's why the OpenShift people decided to create these operators to manage the lifecycle, but you still have people who want to go through the traditional CI/CD pipeline, so we're still trying to see how those two things fit together. My question to the engineers is: how do we get the best of both? The operator makes my lifecycle easy, but I still need to move these services around
into different environments, so how do I do that? Can I embed an operator into my pipeline, things like that? So that's kind of what we have. I have Jenkins in there for the pipeline, but there's also another project, kicked off with Red Hat involvement, called Tekton. Tekton is a lighter-weight way of building applications. People always associate Tekton with Knative, but you actually don't need Knative to run Tekton: basically you just define your pipeline in a YAML file, similar to what you normally do in OpenShift, and OpenShift goes off and kicks off the pipeline for you. There are also efforts in the Jenkins community to integrate Jenkins with Tekton, but they're still working on that right now. Okay, how long do we have? Ten more minutes, and I'm only halfway there.

Anyway, the next layer is the service mesh. I think everybody has heard of a service mesh, right? So why do we need one, and why do people like Istio so much? Well, remember the problem: people say creating all these microservices is great, but then they ask how we control them. If service A wants to talk to service B, how do I know it has the privilege, that they're okay to talk to each other? And what about versioning: when I have version two and version three of a microservice, which one should I talk to, and how do I promote all of that? That's a lot of things we want to do. Before the service mesh, before Istio came out, what we normally did was take separate libraries and embed them into our microservices, in our code, and there we would define the circuit breaking, the routing, reading the headers, and redirecting requests to different places, all in our own code.
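Going back to Tekton for a second: the pipeline definition really is just YAML. A minimal sketch, with invented task names and a placeholder step that doesn't do a real build, might look like this:

```yaml
# Minimal Tekton sketch: one Task and a Pipeline that references it.
# The task name, image, and script are placeholders for illustration.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-app
spec:
  steps:
    - name: build
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "building the application..."
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: build-app
```

You apply those like any other resource, and the Tekton controller runs the pipeline inside the cluster, which is what makes it feel so native compared to an external Jenkins server.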
But doing it in my code is not the best way, because I want a centralized place of control. I want to control the timeout; I don't want one person deciding it's 10 seconds and someone else deciding 20 seconds, scattered everywhere. I want one centralized place to control everything and to know where everything goes. That's why Istio came along. Instead of writing all that logic in the code, we don't do that anymore: Istio spins up a sidecar, an Envoy proxy, that runs right next to your application. Then you have a centralized place, with a beautiful GUI called Kiali, which is the management layer for the whole of Istio, and that's where you configure your policies: this service calling that service has a 10 second timeout, these requests get redirected to that version of the service, the production deployment policy, and so on. All of that gets pushed down to the sidecars, and the sidecar decides whether and where to call the services.

And I want you to see this part. Remember the planes I showed? A request comes in, and normally, in a plain application, your service would just go and call the other service directly; that's what you think happens. But after you apply the service mesh, Istio, to your system, the traffic instead goes down to the Istio sidecar proxy first; the proxy decides where to send it, the request executes your code, and then the proxy on the other side decides where it goes next. So the direct call is the logical view of how things move, but physically all the traffic flows through the sidecars.
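That centralized policy, the 10 second timeout and the version routing, ends up as plain YAML too. Here's a hedged sketch, using Istio's `VirtualService` resource with an invented service name and a made-up 90/10 split between two versions:

```yaml
# Hypothetical Istio VirtualService: a 10s timeout plus a weighted
# split between two versions of a "reviews" service. Names and weights
# are illustrative only.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - timeout: 10s
      route:
        - destination:
            host: reviews
            subset: v2
          weight: 90
        - destination:
            host: reviews
            subset: v3
          weight: 10
```

In a real mesh you would also define a `DestinationRule` that declares the `v2` and `v3` subsets; the point is that the timeout and routing live in one central resource instead of being hard-coded in every microservice.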
The sidecar becomes this gateway that helps you redirect traffic and decides how the policies are applied. All the sidecars together are the data plane of Istio, and then you have the control plane, which feeds the policies down to the sidecars; the sidecars report metrics back up to the control plane, those get sent to Prometheus, and then you can use Grafana to make pretty pictures and things like that. I have a lot more slides, but that's basically the service mesh.

Then we have Knative. I want to talk a little bit about Knative; I'm going to skip Knative Build because people are changing things there and we have Tekton now. The biggest thing about Knative is the way it can optimize your resources. When we spin up a pod and it just sits there running, it's taking up CPU and memory; wouldn't it be best if, when there's no traffic coming in, it shut itself down, and when the traffic comes back it spun up again? And guess what the best architecture for doing that is: event driven. So Knative is an event driven architecture. That's why you have Knative Eventing, where CloudEvents come into Knative and wake up your services, and then there's Knative Serving, which scales the service up; when there's no more traffic coming in, Serving says okay, nothing is coming in, I'm shutting myself down. So there are two pieces: Serving brings your service up and down, and Eventing triggers it on and off. That's the basics of it, and that kind of concludes my architecture today. Sorry I sped up a little on Knative, but I can do another video on YouTube, so come subscribe to my YouTube channel.
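For the Serving side, scale to zero is mostly declarative. A sketch of a Knative Service that scales down when idle, with a placeholder image name, could look like this:

```yaml
# Hypothetical Knative Service: Serving scales it up when requests
# arrive and back down to zero when traffic stops. Image is a placeholder.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # minScale "0" permits scale to zero (this is also the default)
        autoscaling.knative.dev/minScale: "0"
    spec:
      containers:
        - image: quay.io/example/hello:latest
```

Serving watches the request volume through its activator and autoscaler, which is how the pod can disappear entirely between bursts of traffic and still come back when the next event or request arrives.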
On the channel I can do a little bit more on Knative, maybe with a demo on top of it. But just remember what you went through today: the platform and what's going on in the cloud native world, the networking and service mesh layer, the Knative resource optimization layer, and then this great big piece of how to build your cloud native application with all these microservices, how to synchronize all that data, and how asynchronous and synchronous calls are done differently. So thank you; I have three more minutes for questions. Any questions? Okay, so the tickets to the party are at the registration desk, so don't forget to get them from registration. They're limited, so get one today, and if you don't get one today I think they're holding some back for tomorrow, so try again then. That's it, thank you.