Welcome to my talk on building and operating cloud native applications. A little bit about myself: I'm currently a developer advocate at Red Hat, focusing on Go — sorry, Kubernetes and OpenShift — mainly working upstream in Kubernetes. Before that I was an advocate at Mesosphere, also container space, pretty much the same role. Before that I was chief data engineer at MapR, another startup in the Hadoop space, back when Hadoop was cool, and before that I did applied research in Austria and in Ireland. Nowadays, as I said, I'm mainly a gopher. I did do PHP around 2009, 2010, so — you know, I know it, but I'm certainly not an expert.

I'm what I would call a developer turned ops person. I started out 1997-ish, when Java was the cool kid, earning my money developing stuff, and then in the last four or five years, essentially when I got into containers, I moved more to the operational side of things. So this is essentially this dev-and-ops thing: you are aware of what the other side is doing. That doesn't mean that I'm actually operating stuff; I'm just, you know, interested in, and understand, the language of operations folks.

Quick show of hands: who's an admin? Okay. SRE, site reliability engineer? Okay. Developer? That's the majority, yeah. QA? Good, good for you. Architect? Architects, yeah — don't be shy, that's fine. Product or project management-ish? Now comes the hard part: pointy-haired boss? No? Okay, so we are amongst technicians. Cool.

All right. Obviously I'll start with the why, because the why is the main thing. If you get that, if you get the underlying motivation, then the rest follows — the rest is really, really simple, and you will see that at the end of the day the technology might be overwhelming at first, but it's not really the hard part. So why are we doing this? Why are we bothering with cloud native and containers and all that jazz at the end of the day? Well, we want to outperform the competition.
We want to ship features faster. We live in this 24/7, everything-at-our-fingertips world, right? We get nervous if we can't buy something immediately or can't like that cat picture or whatever. And we — and the business — demand that we actually ship fast, and ship faster. And if we can ship faster than the competition — at least that's the theory — then we are better off. To me that's not the most important thing, but I do understand that the business needs drive a lot of this.

Related to that — and, as I said, it can be independent in terms of money — is shipping around the clock. This old, traditional way of throwing things over the fence, where twice a year you're rolling out something new, turns into many, many small batches, small feature updates. If you reload your Facebook or LinkedIn or whatever app or web page you have there, pretty much every time you have a new version. Compare that with once a year. Do people still know what a CD is? Yeah? Who knows? Okay, right — you might have had that: wow, look at that, a new CD, I get to install a new version. Or discs before that, floppy disks. So that has changed, and again, it might be business that drives certain demands, or it might be some other entity, but the tendency is really to ship around the clock: whenever there is a new version available, you potentially also want to ship it. And the internet obviously makes it possible to actually distribute the software.

Last but not least — to me, at least, the most important one — is this togetherness. Sometimes, if you're paying a consultant, this is called DevOps. I always ask them: how much DevOps do you want? Is it a kilogram, or, you know, more? In reality it's really all about that, and this is really the hard part, as you will see at the end of the talk, I hope. The technology — that's simple. Most of it is open source.
You can just grab it for free and use it. This togetherness is the hard part. Unfortunately, I don't really have good suggestions for that, other than empathy and learning the language of the other side. In my experience the ops folks are a little bit better at this than developers: over the years they learn languages and they understand stuff from the development side. We developers are kind of like, yeah, why do I need to know about monitoring and this and that? So maybe take this as encouragement to learn a bit from the ops side.

So these three things — outperforming the competition, shipping around the clock, and the togetherness — these are the underlying whys, why we are doing what we're doing here in terms of cloud native. Now we're moving on to something more tangible. So far this might have been some Gartner or whatever high-level pitch to CIOs, but now we're talking about the actual tenets of cloud native computing, and I came up with a moniker for them: automation, immutability, and APIs.

What do I mean by that? Automation essentially means that processes that have been manual steps are automated, typically in software. It means that we replace something manual, and manual typically means it's brittle, it's error-prone. If I get paged and need to fix something, I might be hangry-ish, might not have had the best time, and I fat-finger something, and whoops. There was a case a couple of years ago where someone at Google did that, and whoops, half of the internet was gone. And there's actually a Facebook study — pretty much every one of the big ones has proper automation systems everywhere — that showed that during the week the error rate is at, I don't know, 5% or whatever, a certain level, and on the weekend —
sorry, other way around: on weekends the error level is relatively low, and during the week it goes up. Why? Because people are there, and people are doing stuff, so people are making these mistakes. I did that myself, so I know what I'm talking about.

The original approach to this was playbooks and fire drills. You would have: these are the steps, make sure to check against this, this is the expected outcome. And every now and then you would have a fire drill; you would say, assume this rack goes down, and now let's see what we can do about that. It's a good step, it's the first step, and I had many discussions with people who say: playbooks, that's all we need, right? We only need these instructions and then all is good. I would argue: let's go a step further — and although I don't like the term agility, it helps a lot to automate things, all the things.

Last but not least, consider the bus factor. That might not be a problem if you're in a big company or whatever, but in smaller environments — you might be a startup, you might be a contractor or whatever — what if that one person who knows — not only in terms of having access to something, but actually knows, oh yeah, this database, I first need to start that and then that — what if that person is not available anymore? Moves on, a bus hits that person, or whatever. Can you actually still keep things up? Can you somehow continue the operation? And there, obviously, automation helps a lot.

If you have any questions, don't be shy. I think we have microphones here; you can ask at any point in time, or we will have ten minutes at the end. Just raise your hand and a microphone will fly in your direction.

Immutable infrastructure. Who does not know about pets versus cattle, has not heard that term? Everyone? A few? No? Okay. So obviously on the left-hand side you have the pets — exactly — and on the other hand you have cattle, right?
So the basic idea there is: we used to treat our infrastructure, our servers or whatever, very much like pets, right? And you kind of still see it sometimes: you enter www dot, I think, ibm dot com, and then you get redirected to www2 dot whatever. So you have this static partitioning; you have a front end that then redirects to a certain server, or for databases or whatever. So you actually treat these machines as pets, as very specific things, and if they get sick — if there's a virus in them or whatever — then you take care of them and try to nurse them back to health. Versus cattle: well, they might not even have names; it's cattle one, two, three, four, five, and if one gets sick — well, bad luck for you. Next.

I'll come back to this later in the context of stateful versus stateless, in the sense that for stateless stuff — if you have a web server or application server that doesn't have state — that's pretty easy to achieve. If you have stateful stuff, then you sometimes actually have to resort to the pets approach, and databases are a prime example of that.

Generally, if someone is not familiar with immutable infrastructure: compare it with casting molds, where you essentially have a form, you pour something in, and boom, that's it. You don't change it; you don't go there and carve something into it. That's not what you do, right? The main question, in terms of operations against that infrastructure, is: does it support idempotence? That essentially means I can do a certain operation over and over again and always get the same result. My bank account is the counter-example, right? I can certainly not do that, or at some point in time I will get a nice letter from my bank. Together with that —
so, if you have immutable infrastructure, if you have idempotence, then you typically have increased reproducibility. And these three things together — pets versus cattle wherever possible, idempotence, and increased reproducibility (horrible word) — essentially, together, more or less make up this immutable infrastructure.

Moving on to APIs. I don't know if this is self-explanatory, but the shift over the past 10-15 years really moved towards APIs, and not implementations, being the important bit. Some examples: I used to work at MapR, and HDFS was essentially — based on the Google paper — this open-source, publicly agreed-upon interface, the HDFS interface, for a distributed file system. And the company I worked for created a proprietary, closed-source version of a distributed file system that was able to talk HDFS, and with that could essentially say: it's a drop-in replacement for the open-source thing. GraphQL is another example, or the Kubernetes API, which we will look at later on in greater detail. All of these examples have one thing in common: you don't necessarily care about the implementation. You might pick one over the other because one is more performant or more resource-efficient or whatever, but you care about the API, about the stability of the API — is it an open standard, and so on.
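To make the idempotence point from a moment ago concrete, here is a minimal Go sketch — the `Cluster` type and its methods are made up purely for illustration. An "ensure"-style operation can be repeated any number of times safely, while an increment-style operation, like a bank withdrawal, cannot:

```go
package main

import "fmt"

// Cluster tracks how many replicas of some service are running.
type Cluster struct{ replicas int }

// EnsureReplicas is idempotent: calling it any number of times
// with the same desired count leaves the cluster in the same state.
func (c *Cluster) EnsureReplicas(desired int) { c.replicas = desired }

// AddReplica is not idempotent: every call changes the state,
// like a withdrawal from a bank account.
func (c *Cluster) AddReplica() { c.replicas++ }

func main() {
	c := &Cluster{}
	c.EnsureReplicas(3)
	c.EnsureReplicas(3) // repeating is safe: still 3
	fmt.Println(c.replicas)

	c.AddReplica()
	c.AddReplica() // repeating changes the result: now 5
	fmt.Println(c.replicas)
}
```

Automation built from idempotent operations can simply be re-run after a failure, which is exactly what the declarative, "ensure this state" style buys you.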
Any questions so far? Makes sense?

So APIs obviously help to decouple things. If you have a nicely defined API, you can decouple things and just use the API to integrate smaller bits. As already mentioned — to me, at least, rather important — they don't necessarily always have to be community-defined, but they should at least be open, so people can suggest additions, or there is some governance around them. That could be something more formal, like the IETF or W3C, or one company that has enough power to suggest something and push it through. I personally like declarative APIs: I'm not telling you, do this and do that, but rather, this is the expected end result — go off, do whatever you like to achieve that. And we'll come back to this in Kubernetes, to see that this declarative style — essentially just saying, this is the state I want to have — is pretty central to Kubernetes and others.

Okay, so we had the tenets, which is already a step further towards the actual meat of this talk, and now we're really talking about the set of technologies, tools, and methods that make up cloud native. There's this term, cloud native computing, and I'm going to attempt to define it in a moment. We're talking about containers and container orchestration, service meshes, and, last but not least — a little bit more on the ops side of things — observability.

So, big question: what is cloud native? Any takers? What is cloud native? No? Food coma? Okay. So there is a Linux Foundation offshoot called the Cloud Native Computing Foundation.
It was born initially, essentially, to host Kubernetes. Kubernetes was — or is — a Google-started open-source project, and they wanted to find a home for it, a neutral home. Over time — we are now at around, I think, 14 or 15 projects, depending; it changes pretty much every week, new projects join — but initially, as I said, it was really Kubernetes, and then over time others came, most of them more or less on the infrastructure and ops side of things: monitoring, distributed tracing, Fluentd — like Logstash — for routing metrics and events, gRPC, an RPC framework, and so on. You have many, many more here, with Vitess — MySQL on Kubernetes — service meshes, and so on. So you have many, many projects that essentially have their home in the CNCF.

In contrast: do people know the Apache Software Foundation, the ASF? Yeah. The CNCF is not necessarily an engineering community; it's more about the marketing side of things — events, conferences, being able to exchange thoughts and so on. It has working groups and SIGs and so on, but it's not like the Apache Software Foundation, where people come together and say, we're going to code on this, and then it will release something. Each and every one of these projects is essentially required to come up with its own governance, and most take that model from the ASF, but you don't have to; you can come up with whatever. Roughly speaking.

If you have any questions, by the way, or want to say how awful the talk was, hit me up on Twitter — the handle is on each slide at the bottom.

Moving on: that is the current cloud native landscape that the CNCF put together, and as you can see — well, you probably can't see it, but — there are many, many things going on, and these blue-boxed things are already part of the CNCF, and some of them are kind of earmarked.
They might become — might be invited. And that is version 1.1, and that again rapidly changes and gets updated. The point here, or the goal, more or less, is to come up with a kind of toolbox, where for each of the layers here — for the cloud part, for provisioning, runtime, and so on and so forth — you have at least one, sometimes even two, projects in the CNCF that you can use for a certain use case. You'd say: oh, I need a service mesh, so I might choose Linkerd and/or Envoy, for example. For some of them, like container orchestration, there can only be one — "there is only one", that's cool as it is — but there are examples where you have two or more.

You know the old saying, "it works on my desktop", or my laptop? The cloud native version might be: it works on my Kubernetes cluster, or whatever cluster you have. Although I tweeted that, I don't necessarily think it is the greatest of definitions. So let's have a look at where it came from, and at a more formal definition. People might be aware of things like what Sun did back in the 2000s, and VMware; then AWS came up with EC2; Heroku — I heard that a lot in the last couple of days when working at the booth, Heroku seems to be quite well known; then came OpenStack; you had Cloud Foundry; then in 2013 Docker, essentially reinventing the container and making it actually usable — the UX really well done; and then in 2015 the Cloud Native Computing Foundation, with Kubernetes in it. So it inherits, more or less, from all of those previous projects, products, ideas — taking the best of all of them and combining them.

Who knows about twelve-factor apps? Okay, a few. So maybe a little background.
If you're interested in it — it's a little dated; it's still valid, but not as important as it was when it came out — the Heroku folks essentially put together a number of — well, twelve, that's why it's called twelve factors — best practices, essentially describing how they run their stuff. Things like: you have one codebase, a version control system where everything lives and you take stuff from there; you have explicit dependencies; configuration is explicit and separated from the code. A number of things — some of them are kind of like, well, nowadays that's not a big deal anymore, and for some of them, well, in modern setups there are a few more, which I want to discuss with you in a moment. But it's a good starting point; if you're not familiar with twelve-factor, it certainly makes sense to read up on it. Just keep in mind it's six, seven, eight years old, so it's a starting point.

Beyond that — and I'm pointing out here a work in progress that the CNCF currently has, to properly, formally define cloud native — it's just a Google Doc, and you're invited to go there and comment on it.
It's open to everyone, and I'm basing my definition, my explanation, on it, but as it is work in progress, we might need to refine or update what's here on my slide.

So, one of the very defining characteristics — and that was certainly not the case for the twelve factors, because those were obviously written by the Heroku folks, who did not really have portability between their environment and someone else's in mind; they obviously wanted to have people in their environment — is portability. That is essentially what things like Kubernetes give you: you start out on premises and you want to move to AWS, or you might have different clouds there, and you want this portability. You want to be able to move your workload from one environment to the other without being dependent on the concrete APIs of that cloud provider, or OpenStack, or whatever you're using.

The question then is: what is the unit of deployment? So, for example, the things that you package up and ship, and that get launched, are VMs or containers or functions.

Very often — but this is not a hard requirement — we're dealing with distributed systems. If you think about microservices that potentially run on different nodes, you end up with a distributed system, right? There are cloud native things that simply run on one machine, so this is not a hard requirement, but typically, eventually, you do end up with a distributed system.

And one of the most interesting things for many people is elasticity. Depending on the workload — so you need metrics; you need to know what the traffic is, or how high the utilization is — it can scale, it can automatically scale. That can be on the application level, having more instances of the same thing running, which is pretty easy to achieve when it's stateless, or you're adding nodes. So you might start out with three nodes.
So, three VMs or whatever. And then you're provisioning, you're adding a new VM — which obviously takes longer than spinning up a container — but you're extending your infrastructure as the workload goes up.

Any questions so far regarding these four things? Because now we're going to go deeper, even deeper. Portability, one of the main things — and again, Kubernetes gives you that — is about avoiding platform lock-in. Rather than coding against a specific AWS or Azure or Google Compute API, you're — well, you're locked into Kubernetes, if you want, but that at least allows you to choose your underlying platform. It also enables hybrid cloud deployments, as I mentioned earlier on. That's what I see quite often with customers: they start off in a test or evaluation phase on premises and then move, for the real workload, into AWS, for example. And you can have both; you can have some global load balancer that routes the traffic depending on the type of user. Essentially, real multi-cloud or hybrid cloud deployments are possible with it.

The unit of deployment: as I said, traditionally, many years ago — and sometimes still the case — you actually think and work in terms of physical servers. Nowadays, especially in cloud environments, you typically have the virtual machine, increasingly containers, and some say that the future belongs to serverless, or functions-as-a-service. So the unit is the thing that you as a developer care about, or have to be bothered with. If it's functions-as-a-service — or serverless, as the old term was — then you just say: okay, this is my function, and I upload it somewhere, and magically this function will be executed. We'll get to that in a moment, but that's all you care about. If your unit of deployment is a container, however, then you will be bothered with: what is the base image?
How do I get my PHP source code into it and build my container? You're dealing with container registries; you need a container orchestrator, and so on. And with VMs, you're thinking and deploying in terms of this VM and that VM, and you somehow need to get the code there, and so on. So VMs and containers are, in a sense, rather similar in terms of this workflow and how you work with them. Functions have totally different characteristics, especially in the implications for developers — because there are no ops folks. I mean, there are always some ops folks, just probably not the ones that are in your team or in your company. So who gets paged? Who will fix some broken function? That's probably you, if you're the developer.

Another remark regarding distributed systems: the typical assumption is that whatever you're building there, whatever distributed system, can scale out on commodity hardware. Nowadays that's kind of like, yeah, what else? Some 15-20 years ago, that was a big thing. It actually led Google to build things like Borg and the Google File System and many other things, because they said: well, we're not going to pay HPE or whoever big money for these big boxes; we're going to make our stuff run on commodity hardware. That doesn't mean cheap hardware; it just means commodity — you can buy it everywhere, you can put it in, and we just put many, many of the same boxes in there and take care of the failover, the reliability, and so on, on the software layer. We are not trying to build, and pay for, hardware that is fault-tolerant and whatever; we are doing that on the software layer. Nowadays it's kind of like, yeah — does anyone of you still have dedicated, special hardware, which is not commodity that you can buy everywhere? Anyone?
Yeah, you — you seem to be very special. There are the good old fallacies of distributed computing, from around 1994 — I can't remember, a guy from Sun put them together; seven or eight fallacies, things like "the network is reliable" and so on and so forth. That is still true, and even more so in these setups where you have public cloud, private cloud, and so on. So read up on the fallacies if you don't know them.

Together with this commodity hardware there is one thing — you might remember this wonderful term, NoSQL, which was never really about SQL, but about the fact that relational databases were not inherently able to shard the data. Along came the MongoDBs and Cassandras and whatnot, which essentially said: well, I can just shard, I can just chop up my data and distribute it over different nodes, and still present a kind of logical, unified view towards the end user. Nowadays, with Galera or whatever, you can do the same thing for relational databases; back then, 10-12 years ago, it was just not available in open source.

Finally, we move on to containers and container orchestration. Container 101: what is a container? Any takers, any brave people? What is a container — not the ship thing, the thing that runs on your computer? Yes — I'll take "processes". Yes, I'll take the processes, and slightly rephrase that: it's really just a process group, technically a process on steroids, taking a few of these built-in Linux kernel concepts — namespaces, cgroups, and copy-on-write file systems. And the big innovation — the big "thank you for doing that, Docker" — was essentially to make that usable. Containers have existed for a long time: we had chroot, I don't know, 20 years ago; we had Solaris Zones; we had LXC; many, many things. But they were mainly operations tools, for operators, administrators.
They would know their way around; they would happily create a cgroup and a namespace directly, enter that namespace, and whatnot. And Docker made it "docker run" — boom, even my dad can do that. So, really — I mean, don't bother with all the details here, but if you are interested, I maintain a containers info and advocacy site. If you're really interested in what exactly the PID namespace is, or what exactly cgroup X is: typically, as a developer, you might have a reason why you want to control a certain cgroup or a namespace or whatever; typically you don't — it's kind of hidden away, but it's nice to know. And that's the cgroups version 1 hierarchy; there is now a new version 2 coming out, but again, that's kind of hidden away, wrapped away by Docker and others, so you don't need to bother with it. Just remember: a container is really nothing else than a process, or process group, that leverages a few kernel features — namespaces, cgroups, and copy-on-write file systems — to give you this nice experience.

And what's the goal of a container? Why do we use containers? What problem does it solve? Who has done "docker run"? Who has ever done "docker run"? Okay. Why did you do that? Because it's cool, right? No? Because your boss told you? Why did you do that? What problem does it solve? Okay, you again? Yes, yes — excellent. I will again rephrase it slightly, but you hit the nail on the head. What containers really solve is this application-level dependency management. I don't know if that exists in PHP — please educate me. I just learned that in Python it's called virtualenv: you essentially say, well, I need this specific version, so I'm going to create a virtual environment and I can install whatever I want there; I don't pollute the global system. Does that exist in PHP? Is that a problem in PHP?
No, PHP doesn't use versions? Okay, but that's a problem, right? Because on my machine here I might have version — help me out — five or whatever, and in production I use — or the other way around: I have PHP seven here, and in production it's five, whatever. And, you know, it works on my machine; then I deploy it there and it doesn't work, and you're sad, and the administrator is sad and yells at you, and you go like, shrug. So that's what containers do: they make sure that all the application-level dependencies are packaged up. So you know that the thing that works on your machine is exactly the same thing that works — or breaks — in production. At least you know what it is, right? You don't need to go, oh, can I quickly SSH into that box? You don't get SSH access there, right? No, no, no.

So it's reproducible, right? It's about environments — language-specific environments — and it's also this packaging problem, at a generic level, not at the per-language level. So you say: I want to base this on this version of PHP, let's say 5.6 or whatever it is. And by creating this container image — which at the end of the day uses these copy-on-write file systems, layers them, packages them up, slaps some metadata on top — you can run it locally, you can say "docker run", boom, and it works. And then you put it into a registry, and someone pulls it from there and can deploy it, and has the guarantee that it's exactly the same environment. That's the kind of dream that we always had, right? Development, QA, whatever, production always look the same. Voilà, we've solved that problem. Almost. Container orchestration. Container orchestration is like: okay,
I've solved the problem of running a single container on a machine — that's "docker run". But what if I have microservices, or I have multiple things that look identical — replicas — or I want to shard something, or whatever? So: more containers on more than one node. Well, I need something that does this orchestration thing. Orchestration is really a very fluffy term that typically spans these things. You're talking about scheduling: I need to decide, well, this container here, I'm going to launch it on this node. And then, if I want to connect to it, I somehow need to remember: ah, this container is on that node — so if traffic comes in, I'm going to route it to this node.

Who did that so far without containers and container orchestrators — and how did we do it? If you're not in operations you probably don't know, but you maybe had a spreadsheet, right? This application runs on this node, so if someone asks: well, you have to connect to this node on this port in order to talk to that application. So there was a container, or an application or whatever, orchestrated — it just was not very automated. Later on there were other approaches, using Chef and Puppet; in the first generation there was fleet, from CoreOS. You can do shell scripts, but whenever you find yourself reinventing the wheel and there's something else that actually does the job, maybe don't do it — maybe use the thing that was actually written for that, by people who know what they're doing.

You have some kind of organizational primitives.
I used to work at Mesosphere, with Mesos. There you had these kind of static constructs — in Marathon, really — called groups; later on, labels were introduced, and in Kubernetes we're organizing things with labels. So we're labeling stuff, saying: this thing here, this pod or whatever, is something that runs in production, or in dev, or it belongs to this organization, or it has been created by this author, or whatever. We just slap labels on things and then we can filter by them; we can do set-based operations. We can say: I want to see all pods that Michael created, in this namespace, in this environment, that are five days old.

Scaling — that's one of the things we already mentioned earlier on. Upgrades: you can do rolling upgrades, you can do blue-green deployments, you can do A/B deployments. Service discovery, which is kind of the price you're paying for this automatic scheduling: that's this lookup thing where you say, well, I don't really know where this container runs, so I need some system that tells me which actual VM, or whatever host, I need to connect to for that container. Health checks — or probes, as we call them in Kubernetes: essentially, if you provide a certain mechanism so that the container orchestrator can check how your application is doing, then certain things can be automated. For example, you could say: well, I'm going to hit the root of your application via HTTP, and if I get a 200, I consider this application to be running, right?
Or healthy. Or you could have some database application where I need to connect via TCP, and if I can connect — yeah, I consider it healthy. And based on that information, either some kind of thing that looks after services and routes traffic, or some local supervisor that decides when to restart a container, can automatically do things that typically used to be done by humans, who would look at and poke the application: oh, that looks bad, I think I'm going to restart that.

Any questions here? Quite a lot of information, but just at a very high level, that's what a container orchestrator is. If you're buying into a "container orchestrator" and it only does one of these things, then it's probably not a container orchestrator but a scheduler, or whatever else.

So what do we use? What's the standard? That's very simple: as of 2018, it's Kubernetes. The container orchestration wars are over; Kubernetes has won. Essentially, it takes care of this container lifecycle management. You define — you say: here I have a stateless, long-running workload — and Kubernetes takes care of all the rest. On the right-hand side you see a typical setup, with all the details — you don't really see it now, but once you get the slides... The components up there are the control plane. There are a few things in there: the API server, which is kind of the brain — everything talks to the API server — and which is stateless. Pretty much all of it is stateless, besides etcd, which is a distributed key-value store where everything, the entire cluster state, is captured. So if you launch a pod, there will be an entry in etcd that says: this pod is on this node.
There's a pod running that belongs to this deployment, for example. And then you have all these nodes, in our case three; those are the worker nodes that actually carry out the work. There are again a few things there that Kubernetes needs: the kubelet, which is kind of the supervisor for the runtime; you have kube-proxy; and then the container runtime, which as of now is still Docker by default, but there are alternatives like CRI-O that are starting to replace Docker there. So at the end of the day, on your own machine you would say docker run or docker ps or whatever. As an admin (as a developer you typically don't get there), you could SSH into node one and actually say docker ps, and you would see all the containers that run there. Kubernetes just adds another abstraction on top of containers, so-called pods, which are mainly useful for locality, for strong coupling. As I mentioned early on, a very important characteristic of the API is that the whole thing in Kubernetes is declarative. You just say: this is the state that I want to have. For example: I want to have three instances, or replicas, of nginx running; take care of it. I don't care how you do it, just make sure three are running. And then something might happen to node three, a power outage in that rack or whatever, and then something called a controller, which runs in a loop (that's the state-driven part), looks at it and says: oh, the user wanted three, and this node is gone, so that pod is gone; I need to spin up another pod somewhere else. Right.
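To make that declarative idea concrete, here is a minimal sketch of what such a manifest might look like; the label values and image tag are made up for illustration:

```yaml
# Declarative desired state: "I want three replicas of nginx,
# with an HTTP health check - make it so."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
    env: production        # labels like these drive filtering and set-based selection
spec:
  replicas: 3              # desired state; a controller reconciles toward it
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        env: production
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
        livenessProbe:     # the probe mechanism mentioned earlier:
          httpGet:         # hit the root over HTTP, a 200 means "healthy"
            path: /
            port: 80
```

If a node dies, the deployment's controller notices the actual state (two replicas) no longer matches the desired state (three) and schedules a replacement pod elsewhere.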
So you as a user don't care how Kubernetes manages that; you just say: I want three, go ahead, make it so. Kubernetes is super extensible, and that's also very scary, because there are so many moving parts. Even I, or others who work upstream in Kubernetes, don't typically know all of the extension points, because depending on where you look, you can write plugins for kubectl, you can exchange the runtime, you can exchange different types of storage, the networking layer; you can extend the objects or resources that the API server understands by defining so-called custom resources. You can extend it in every direction. That's a good thing, but it's scary. So typically, and that's the reason why people like myself have a job, rather than rolling their own Kubernetes distribution, people take an existing Kubernetes distribution like OpenShift, the stuff that I'm going to show you later on. And last but not least, all of that is built with this idea of being very robust and scalable. Each of the parts can just die, come up again, and continue to work, and we typically see one-thousand, two-thousand-node clusters with tens of thousands of services; that's still doable without bending over backwards, you can do that with a vanilla setup. Any questions so far? Because now we are getting into slightly more aspirational things: things that exist, but are not necessarily production-ready yet; let's put it that way. Service meshes. Who has heard about service meshes, or a service mesh? One, two, of course, you. Two, three.
Okay, so three people out of, I don't know, 40, 50, 60. The basic idea is essentially the same as with Kubernetes: if you find yourself doing certain things ad hoc, with shell scripts or whatever, maybe it's a good time to use something that was actually designed for that. In the case of service meshes, it's really about the communication between different entities, in this case pods within Kubernetes. Istio is generic, or you should be able to use it with other platforms as well, but for now it focuses on Kubernetes. So forget about all these labels for a moment; at the end of the day, what you want to say is: if I have one application running here in Kubernetes and something running there, this one is allowed to talk to that one, but not the other way around. Or you want to inject some failure, or you want to make sure the connection is secured via TLS, for example. So it's about traffic management. And you get monitoring and tracing for free, rather than pushing that onto developers. Has anyone here ever been asked to instrument their code? Do people actually know what instrumenting their code means? No? Lucky you. So really it's about: well, we need some insight into what's going on in your application, so please provide an endpoint that gives us certain metrics. The service meshes essentially solve that problem.
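As a sketch of what that traffic management looks like in practice, here is a hypothetical Istio manifest (the v1alpha3 schema from the Istio docs of this era; field names may differ between Istio versions, and the service name "reviews" is invented) that injects a delay into a share of requests, with no application code changes:

```yaml
# Hypothetical fault injection: delay 10% of requests to the
# "reviews" service by 5 seconds to test how callers cope.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - fault:
      delay:
        percent: 10        # share of requests to delay
        fixedDelay: 5s     # how long to hold them
    route:
    - destination:
        host: reviews
```

The same kind of manifest can express routing rules, retries, or mutual TLS; the point is that it all lives outside the application.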
They take care of that outside of the application. You can have policy enforcement, as I said early on, and a mesh provides each of these players with an identity and enables security there. And the most important thing, as I said: it just runs alongside the code. You don't need to change the code; all of these things are automatically injected through sidecars, the sidecar pattern. Data meshes: essentially the same idea applied to data, and the problem being solved is essentially this one, which is not unknown or uncommon. The product here launched very recently, actually, from a UK-based company; the company is named differently, but the product is called dotmesh. It essentially allows you to capture the state (think databases, data stores) across different microservices. It operates on the file system level, and it kind of externalizes the snapshotting in the same way that service meshes externalize the metrics issue for you. So you don't need to do that in your application; you just say: oh, I'm running Elasticsearch, MongoDB, a MySQL database, and you can snapshot that, and then compare and aggregate those snapshots as well. It really helps you in terms of: what happened there? You can go through all the logs trying to figure out what happened, or you can actually look at that snapshot and say: ah, okay, I see, that was the state of, let's say, the database table at that point in time, and that's the diff here. Okay? So if you're familiar with git and the git interface, git commit and so on, dotmesh essentially has that interface with slightly different semantics, and as I said it's targeted at, focused on, data stores and databases. If you're interested in that, I interviewed Luke Marsden, the CEO of the company, last week, and you can watch that video on YouTube. Last but not least, and then we're finally getting to the demo, is observability. That is a very, very ops-y topic, but you should at least be aware of what it means in the context
of containers. You want to be able to monitor not only the host, the box where containers run, but actually each and every individual container or pod. And that means that the traditional way of doing monitoring does not really work, because containers come and go, right? They might only run for 30 seconds, maybe a minute; some of them run longer, but it's not like a VM that lives for weeks and potentially months. It could be seconds. So you need to be fast, you need to be immediately able to grab the metrics and aggregate them across different nodes; the same goes for monitoring and logging. And distributed tracing: think about when you open up developer mode in your browser, that thing you see down there, this call graph. That's the same thing for a cluster, or for microservices. So you see: oh, it first went through this microservice, and then it spent 500 milliseconds in that one, and so on. You get an idea where the bottleneck is, you see what you can optimize, and you can use it for troubleshooting. Essentially the same idea that you have in your browser. Do people know what I mean, this waterfall thing there? Yeah, the same thing for a distributed system; that's distributed tracing, and this little fellow here is the Jaeger project. There's a standard behind that, OpenTracing, with Jaeger being one of the products. Prometheus is the monitoring standard in Kubernetes nowadays. And one very popular example for log aggregation is the EFK or ELK stack: Elasticsearch, Logstash or Fluentd, and Kibana as the front end, where you can actually query logs across different containers. Now, I was lying a bit; there's one last section before we get to the demo, time-wise. What I believe most people currently do looks pretty much like that.
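A quick aside on the metrics piece before we move on: what Prometheus actually scrapes from an application's /metrics endpoint is just a plain-text format, one sample per line. A toy Python sketch (the metric and label names are invented for illustration):

```python
# Toy sketch of the Prometheus text exposition format that a
# /metrics endpoint serves: name{key="value",...} value

def format_sample(name: str, labels: dict, value: float) -> str:
    """Render one metric sample as a Prometheus exposition-format line."""
    # Prometheus treats label order as insignificant; sort for stable output.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = format_sample("http_requests_total", {"method": "get", "code": "200"}, 1027)
print(line)  # http_requests_total{code="200",method="get"} 1027
```

This is the kind of endpoint a sidecar or a client library exposes on your behalf, so the monitoring system can scrape every pod without the application pushing anything.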
You take your code, configuration, and credentials, put that into a code repo, have, hopefully, a CI/CD pipeline, and at the end of the day produce a binary or scripts or whatever that you deploy on either bare metal or a VM. Who is doing something like that? Some. And the others, you ship it via floppy disks, or how do you get your stuff out? Okay, cool. So this is what I would then call the cloud-native way. You still have code, configuration, and credentials, that's the same, but then you also have this container image manifest, your Dockerfile. You have the container runtime manifest, for example in Kubernetes a YAML manifest where you say: there's a deployment, it has three replicas, use this container image, expose that port, and so on; or Docker Compose, or whatever you have there. And a service mesh manifest that defines policies and so on. Then you take all of that, put it again in the repository, run it through a CI/CD pipeline, and now you have this new part: blueish being more on the dev side, greenish more on the ops side. You have this container registry where at least these artifacts, the application container images, are stored, and from which they then get pulled by the container orchestrator and/or service mesh to actually deploy a container and run it. This piece here might resemble something you used before, like Artifactory or whatever, but without a container registry you cannot use containers; you need some container registry. People typically start with Docker Hub, and as you move towards production you run your own container registry. And if you look at the artifacts that we are dealing with and/or producing, and the respective tooling, it looks a bit like this. Again, we always start with the code; then the configuration, which, already per the twelve-factor best practices, is separate from the code.
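Speaking of the container image manifest: it is just a Dockerfile. A minimal, hypothetical one for a small Go service might look like this (the paths and names are invented for illustration):

```dockerfile
# Hypothetical container image manifest for a small Go service.
# Stage 1: build the binary inside a full Go toolchain image.
FROM golang:1.11 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: ship only the static binary in a minimal image.
FROM scratch
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```

The CI/CD pipeline builds this into an image, pushes it to the container registry, and the runtime manifest then references that image by name and tag.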
It's not hard-coded in there. And then the credentials you potentially have, things like: this is the database password, or this is an API key for some AWS API or whatever. The corresponding tooling for code is the code repository and the CI/CD pipeline. The same way, the container image manifest maps to the container registry, the runtime manifest maps to the container orchestrator, and the service mesh manifest, if you're there, is consumed and acted on by the service mesh. As you can see, it grows from the inside out. So the tip here is: if you are here, meaning you have a Dockerfile and have done docker run on your machine, go from the inside out; don't start with the service mesh. You might be laughing, but many people come along like: oh my god, I want to benefit from containers, and I think I should also do some service mesh. No. The thing is, if you don't have your act together and haven't sorted out the CI/CD pipeline, there is no point in a service mesh. You need to do your homework: have your source code in git or whatever you use, have a CI/CD pipeline that works, and then you can move on to worrying about, okay, this should be in a container registry, which should be secured and scanned. And then you worry about: well, actually, I might need a container orchestrator, and rather than having a bunch of shell scripts, you probably want to use Kubernetes or any other container orchestrator.
I use Kubernetes. And again, if you have more than five or ten microservices and you find yourself implementing these policies and all these things by hand, you're probably ripe for a service mesh, to say: actually, I'm not going to reinvent that; I'll let the service mesh handle it. But it's really this maturity model, from the inside out, not the other way around; don't start with service meshes or container orchestrators. And one last thing before we finally get to the demo, promise. If you have a cluster, then you have something that is local, which might be your machine, and something that is, for example, in the cloud or somewhere else. So you have local and remote. One way would be pure offline, which means that both the development environment, your IDE or whatever, and the cluster are on your local machine. Another way would be proxying, which means you somehow get the network traffic off the cluster onto your local machine; that means if you're calling out to another microservice from your machine, it looks like it's running on your machine, but it's actually running in the cluster, proxied. Then there is the live approach: you actually keep the two separated, and the only way to get stuff in there is going through the CI/CD pipeline and putting something in the registry. Or pure online environments; there are a couple of them, where everything, even the development environment, essentially lives in your browser. Everything is in the cloud. That's great if you're always online. And just to give you an idea, there are plenty of tools you can use. You might have installed the community edition of Docker for Mac or Windows, but there are many, many others, all with their limitations, or pros and cons. Some of them are more for proxying.
Some of them are more for local development. But there are many, many tools that help you develop in and with containers, in a microservices, distributed setup. There is still one more thing: Functions as a Service; we already talked about it. The short name for Functions as a Service is FaaS. You have some triggers, which could be, say, something has been uploaded to S3, or a timer, or whatever the trigger is, and as a developer you just provide that function, which does something. It's stateless and short-lived, and the main point is: because it's stateless, you need to manage the state outside, so you have integration with a database, a message queue, or whatever. And again, the CNCF provides some guidance on where we currently stand; this is really new, I think it was released yesterday or so. Demo. So how much more time do I have? Five minutes? Oh, I've been talking too long, I'm sorry. All right, very quick demo. I'm going to deploy a production-ready containerized microservice (did I miss any buzzword here?) in less than five minutes. Who thinks that's doable? Yes? Okay, good. Let's see. I'm going to leave everything here like it is; I'm not going to change anything. Let's see what happens. Oh, I need to provide a name there, right, blah blah blah. What's missing here? Anything missing? No, fine, fine, fine. Oh, I see, that's a new project, so we create a new project; whatever, "test", I don't care. Okay, now I'm just using the defaults everywhere. Oh, really? It's global; I'm using OpenShift Online, which is a global system. It's like Google and many others with their buckets, or S3 buckets: names need to be unique, and "test", well, probably someone else had that idea already. So what it does now, and we can have a look at that test PHP app: the first thing to remember about these steps is that it creates a build pipeline that could be taken somewhere else, but here we're just using the built-in stuff.
So we say: here is our source code, please pull it from this GitHub repo. Here, I don't know if you can see that, right, that's where the source code lives. It will then build it, and everything here is done in that pod. It will build it with all the dependencies, whatever it takes, testing, blah blah blah. At some point it will be done, and then it will create a container image based on this base image that I used here, PHP 7. You should see these layers very soon being built, one, two; that's the same thing as when you do docker build on your local machine. So your layers are built, and it will push the image to the built-in container registry. You can use an external one if you want, but in our case, with the defaults, it just pushes it to the OpenShift internal registry. And then it will say: well, I'm done building that, how about we deploy it? Almost there. Then it will kick off the deployment, and at the end of the day you will have a deployment, which by default is one replica of that stateless stuff running, and an endpoint that I can directly use. Once you see that, you can go there directly and try it out. So "push successful" means this build is done. Just checking here: complete, cool. So I expect to see a deployment here. Yes; in this case, it's both the stateless part and the stateful MySQL running, great. So I should now see that, and that's the URL. Not ready yet? Okay, okay. Okay, it still says "recreate deployment". Okay, let's view the events, see what's going on: mounted everything, okay, almost there, started, yay! So currently, let's see. Okay, it takes a little bit until everything is there; health checks kicking in, very nice.
Okay, and it's running, boom. So that was how hard it was to build an application from scratch and deploy it, production-ready. And I mean it: this really has everything built in. I can look at metrics here; I can look at the logs if I want to, of a single pod, or here, boom, you have Kibana; if you want, you can have log aggregation across all your pods. So this is really end-to-end, production-ready container orchestration, based on the things that we discussed early on, build tools and so on, at your fingertips. It's open source; you can just download it, install it, and enjoy it. So it looks like the question section will be very, very short, but I'm almost done here. Challenges: it's really fast-moving, so you'll want to ask people like myself, or my colleagues, or folks from other companies, for a bit of guidance. Observability is pretty key. We haven't talked much about security; in our case it's built in, but sometimes you need to take care of that yourself. But most importantly, and that's something where I can't directly help, I'm not a psychologist: it's a lot about organizations. The tooling is the easy part, right? You can just grab it. But it's really about your organization. Here are a few resources you can study in your own time if you want to, and if you want to try out OpenShift, go to learn.openshift.com, a free environment, and you can try it out.