Perfect, I promise the coffee is coming. All right, hopefully everybody's awake now; we're waiting for the next round of coffee, and I'm waiting for that myself. I don't need to introduce myself again, but maybe a quick introduction from my colleague: nice to meet you, I'm Giuseppe. Great. And one of the reasons I'm looking for the next round of coffee is that this just happened around 40 minutes ago. I'm pretty sure those should be shared and not individual pizza-sized, but we do what we can, right? It was really good, though.

But let's dive in and start talking about service mesh. I'm sure a lot of you are very curious about this topic; I've already heard from customers that they are looking forward to running a service mesh on 4.0 and on 3.11 and so on, and we'll dive into that. But for those of you who are just learning about this topic now, I'll do a quick introduction to what a service mesh actually is.

We heard, for example, in the previous talk about all those different microservices being implemented, and you pretty much end up with an architecture that looks like this, where all the services somehow start to talk to each other. You are building a complex distributed system on top of another distributed system, which is Kubernetes, and there are complexities to that. When we started this journey into microservices, you would think about those services as "I'm just going to write my business logic." But very quickly you say, well, maybe I want some way to configure all those different services in a concise way, so I'm going to add configuration as another trait of that particular service. Then, maybe I need something to do service discovery, because now I have so many services, so I add that capability to the microservice as well. You keep adding these capabilities, these features, and your microservice becomes not that micro anymore; it starts to grow in complexity. If you're doing it in Java, you end up with five or ten other JAR files you have to add, and all the different frameworks you have to pull in. That was very 2014, I'd say, and it was also very programming-language specific.

As things evolved, we started to look into service meshes and how we could delegate some of these concerns, the ones that are really infrastructure related, to the infrastructure, which is where they actually belong. That's pretty much how the industry came up with the idea of service meshes.

But how does that magic actually work? There is no magic; it's inside of your pod. A quick one-on-one here: you have a pod, and inside that pod you can have one or more containers. We have the idea of a special kind of container called a sidecar, and this sidecar is something that can be injected automatically at runtime, added to the pod. We then implement some of those capabilities from the microservice slide in that sidecar, in a way that is managed by the infrastructure. And if something happens, or you need to add another feature to that sidecar, it can be added there without changing the source code of your service.
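To make the sidecar part a bit more concrete, here is a minimal sketch of how injection is typically requested with OpenShift Service Mesh, where injection is opt-in per workload via an annotation on the pod template. The app name and image are hypothetical placeholders, borrowed from the Bookinfo-style example that comes up later in the talk:

```yaml
# Minimal sketch: opting a workload into sidecar injection.
# Assumes the OpenShift Service Mesh / Istio injection webhook is installed;
# the app name and image are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
  template:
    metadata:
      labels:
        app: reviews
      annotations:
        # Ask the mesh to inject the Envoy proxy sidecar at pod creation
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: reviews
        image: quay.io/example/reviews:v1  # hypothetical image
        ports:
        - containerPort: 8080
```

With that annotation in place, the webhook rewrites the pod at admission time, so the proxy can be upgraded or reconfigured by the platform without touching the application image.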
When you look at the whole architecture of all these different pieces, you get the control plane, which is where a lot of the infrastructure for a service mesh like Istio lives: Jaeger, Pilot, Mixer, and of course the authentication component itself. And you have the sidecar proxies that are injected into every single pod and start receiving their policies and configuration values from that control plane. So now you can say, hey, maybe I want to add mutual TLS to all my microservices; great, you change something in the control plane, and the control plane propagates that configuration to all your microservices. Maybe you want to apply a specific routing policy, or some kind of retry configuration, to all your microservices as well; again, you add it to the control plane, and the control plane propagates it. You don't have to change each microservice yourself to get that particular capability.

This takes us to what we are shipping as a product: OpenShift Service Mesh. Istio is one of the components, and it's at the core, but with OpenShift Service Mesh we are packaging other technologies alongside it and distributing them all as a single operator that you can install on the platform. Those technologies are Kiali, which is a visualization tool (I have more slides on that later); Grafana and Prometheus for monitoring and for visualizing that monitoring data; and Jaeger, of course, for tracing. This whole package is what you get when you install OpenShift Service Mesh; it's not only Istio.

This picture, for example, is what Kiali looks like. It's a representation of, in this case, three microservices: one called product page, which is a web app; a reviews service; and a ratings service. This data is captured from the actual traffic happening between those applications, so Kiali generates the graph from live network data, and even the lines, the colors, and the status codes you see are live. For example, if the communication between the product page and the reviews microservice is bad for some reason, that line turns red and you start to see the status codes changing. It's a nice way to visualize all the microservices that make up your application.

One nice thing you can also see here is the latency of the communication between those services. Say you have three different versions of your reviews service: you just shipped a new version, v3, and as you roll it out you start to notice some latency. From this same view you can add weights to the lines of the graph, and you can see that for that particular call the latency is higher than it was for the previous version. Now you have a regression, and you have to decide how to address it. Because you are using the service mesh, you can pretty much reshape the traffic and send everything to v2 and v1 while you're fixing v3, and you'll probably release a v4 with the actual fix to improve the latency of that service. This is a very canonical example of what you can do with a service mesh, and of how Kiali helps with that.
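To give a feel for what "reshaping the traffic" means in Istio terms, here is a minimal sketch of the kind of objects Kiali edits for you under the hood: a DestinationRule that defines the version subsets, and a VirtualService that splits traffic by weight. The service name and the exact weights are hypothetical, loosely matching the 80/15/5 split discussed in this talk:

```yaml
# Minimal sketch of version-weighted routing in Istio.
# Service name, subsets, and weights are hypothetical placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 5     # very old version kept for a legacy consumer
    - destination:
        host: reviews
        subset: v2
      weight: 80    # current stable version
    - destination:
        host: reviews
        subset: v3
      weight: 15    # new version being rolled out
```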
Another thing you get is a convenient way to see the details of a specific service. In this case you get information about the IP addresses, the internal IPs, and what the inbound and outbound metrics for that particular service look like; you also get the status of all the different endpoints hitting this particular app. This addresses a very common problem: when you have multiple microservices, you are usually not aware of who is actually consuming your service. You think, I'll just roll out a new version here, surely I'm not impacting that other app over there; and here you can actually see all the different endpoints that are either consuming this particular service or being consumed by it.

Another very interesting feature is configuring traffic using weights. It's very common to do some kind of traffic balancing based on, for example, a header, or round-robin, or something like that. Here you can specify weights and say, for example, that v2 is the most stable version of this particular microservice, so I want to send 80% of the traffic to it. While I'm doing that, I'm still keeping a very old version around, because I may have to keep it for an old legacy application that needs to consume it. And I'm starting to roll out a new version, and that new version is already receiving 15% of the traffic. If you want to change that, it's a configuration you can do in Kiali, or you can apply some YAML if you want to; either way, you don't have to change your source code or anything like that to roll out the change. You also have some other capabilities, like adding TLS and adding a gateway, also from Kiali; this is a new feature that is now available in the service mesh, and there's a sketch of what such a gateway looks like below.
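As a rough sketch of that gateway piece, this is the kind of Istio object that gets created when you expose a service with TLS at the edge of the mesh; the hostname and the certificate secret name here are hypothetical:

```yaml
# Minimal sketch of an Istio ingress Gateway terminating TLS.
# Hostname and credential name are hypothetical placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: storefront-gateway
spec:
  selector:
    istio: ingressgateway       # bind to the mesh's default ingress pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE              # terminate TLS at the gateway
      credentialName: storefront-cert  # secret holding the cert and key
    hosts:
    - "shop.example.com"
```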
When you add all these different components to an architecture diagram, you get something like this high-level picture: you have the infrastructure, you have OpenShift itself, and on top of that you have the service mesh handling the traffic for all the different services in your application. This is very, very important, because quite often the solutions you embed in your microservices are specific to a given programming language; there are a lot of frameworks that can do certain things only for Java, or only for Go, or only for Python. The nice thing about the mesh is that it's pretty much language agnostic: you can apply it to any application that is running.

That's pretty much the summary of the service mesh side, but there's one thing I forgot to mention that is critical and super important: service mesh is GA in 4.2. We announced the GA last week. We have had many, many customers asking us for this GA for quite a while, so I'm really happy that we're making it available now. Please give it a try; it's available in OperatorHub, so if you have an OpenShift 4 cluster running, you can pretty much go there, click install, and you get a complete service mesh with all these different pieces connected and pre-installed for you.

Now we'll transition to serverless, and Giuseppe will do the introduction for that.

Thank you, William. Very interesting, because as we are going to see, the service mesh is actually one of the ingredients, one of the components under the hood, of the whole serverless strategy of the OpenShift product. So, first things first: for many of us, when we think about serverless and AWS Lambda, this is what comes to mind. Is it just glorified CGI-bin? Well, it's actually not, but if you think about it, the thinking behind it is more or less the same: you spin up a process, you take some event in, and then you produce some output. Of course we did better than CGI-bin, because we are working on security, on scalability, and on visibility, but the concept is pretty much the same; it's not a new concept at all.

And indeed, this is the conceptual model behind serverless, which many of you already know: there is an event flowing in, then there is a function (though we'll see it's not just a function) processing that event, processing the payload, and then you get the result. The advantage of this model is pretty obvious: it spins up your computational power only when you need it, so you save resources and optimize your workload. That makes it interesting for many kinds of use cases. I'm not going to go through the full list, but if you have some kind of variable workload, like batch-processing files, or basically any application that does nothing for 80% of the time and then has a peak of work, that is something that fits very well into the serverless scenario.

As I was saying, there is a bit of confusion between serverless and function-as-a-service. Serverless as a concept is broader than function-as-a-service, or FaaS. Saying that FaaS is serverless is more like saying that a square is a rectangle: yes, there is a relationship, but it's not the complete picture. And this is also true for microservices and containers, because as we will see, Knative provides a broad ecosystem of building blocks with which you can run many different kinds of workloads in a serverless mode; not necessarily a function, but a container, a microservice, pretty much anything that runs on Kubernetes.

This is the serverless map from the CNCF, which maps out all the things needed for doing serverless. Many of you may recognize the AWS Lambda or the Azure Functions logos; it's a broad picture of tools and products, on the cloud or on premises, for doing serverless, or for doing specific things like function-as-a-service, as we were saying.

But let's have a look at the broader picture. To build a modern serverless application you need a lot of different things, topics, and concepts. Starting from the very bottom, you need an infrastructure: you need to provision the computational power and schedule your workload. Going up to the next step, you need some kind of traffic routing and network resiliency, and as many of you can guess, this is what we're going to do with Istio. Then you need support in terms of a DevOps toolchain, as many of us call it: continuous integration, continuous delivery, GitOps. You need some kind of event orchestration to complete your building blocks, and on top of that you have your own development patterns, the application itself.

Zooming out to the full CNCF landscape, so not just the serverless pieces but all the Cloud Native Computing Foundation components, if you try to map those concepts onto implementations, you will see that there is more or less a one-to-one relationship.
And so this is what Red Hat sees as a full-stack implementation of all those concepts. For the provisioning, for the infrastructure running behind the scenes, Kubernetes is of course the underlying foundation for everything. For traffic routing, security, network resiliency, circuit breaking, and all that kind of thing, it's Istio, as we were saying; the service mesh is a very important piece here. For the DevOps toolchain part, and in particular pipelines, we are going to see some more details about this project; you'll spot the cool logo with the cat. The project is called Tekton, and it used to be part of Knative, but we'll come back to it in a bit. On top of that sit the building blocks for the typical features of a serverless application, the scale-down to zero and the spin-up of new containers, and Knative provides all the building blocks for doing that. And on top of that, you choose your own pattern, language, and container to run. In our view, an important role will be played by Quarkus (many of you will recognize the Quarkus logo) and by Camel. There is no time today to talk about Camel and Quarkus, but we have very interesting projects going on in the community, like Camel K, that fit very well into a serverless architecture.

This is the whole picture. From an infrastructure point of view there is of course Kubernetes and OpenShift Service Mesh, and an interesting logo to take into account is KEDA, a project for running Azure Functions on premises: you write your Azure Function, and you can run it on the cloud or run it on prem if you want to, because it's mediated by OpenShift. On top of that, as you see, you may choose your own language: you'll see the Azure Functions logo there, but it may be Java, it may be Go, and we are also working on cloud functions, our own open specification for functions.

Another important point, which we were looking at in the first slide, is the eventing part. A very important topic of Knative, and of serverless architecture in general, is having events that trigger actions in the rest of the platform. An important piece here is OperatorHub, because by using operators you can plug your own things into the eventing infrastructure of Knative.
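To make the eventing idea a bit more concrete, here is a minimal sketch, assuming recent Knative Eventing APIs, of a Trigger that subscribes a service to one type of event coming through a broker. The CloudEvent type and the service name are hypothetical placeholders:

```yaml
# Minimal sketch of Knative Eventing: a Trigger filters events from a
# broker and delivers them to a Knative Service. The CloudEvent type
# and subscriber name are hypothetical placeholders.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: payment-received
spec:
  broker: default
  filter:
    attributes:
      type: com.example.payment.received  # only route this event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: payment-processor
```

An event source installed from OperatorHub would publish into the broker, and the trigger fans the matching events out to whatever workload you point it at.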
Those are the principles of the architecture: it has to be distributed, API-centric, born to be multi-cloud (meaning public clouds and private clouds), scalable by design, secure, event-driven, disposable, and polyglot. So, a very long way beyond CGI-bin; we are doing better. With that, I will hand it over to William, who is probably the most important person in the Knative community.

Definitely not, but thanks; that was very nicely delivered, by the way, thank you for doing that.

Looking a little bit at the Knative project: if you've never heard about Knative, I'll talk more about it in a bit, but the first thing we like to highlight is the members of that community and how they stack up in terms of contributions. There's a link there where you can get a more up-to-date chart, but today you can see that most of the interesting companies doing serverless are looking at Knative as a way to concentrate and centralize their efforts, especially if they're targeting Kubernetes. It's very interesting to see some names here that you would not expect to be investing resources in building a serverless framework, but you can see by the number of contributions that they are.

Diving into Knative and explaining what it is: as was said already, Knative started with, I'd say, three modules, but one of them, build, evolved into pipelines, and pipelines evolved into its own project, which is Tekton; it now lives under its own foundation, separate from Knative. That left serving and eventing. Serving is the module responsible for the autoscaling part, and it's also where we plug in the service mesh, Istio; that's how you get some of the same benefits of the service mesh in your serverless applications as well. The other module is eventing, because once you have those applications and can serve them, the most important thing is how you're going to receive events and get those events delivered to those applications. I'll dive into more detail on both modules today.

In an OpenShift 4 cluster you have the community operators for these: the Knative Eventing and Knative Serving operators, which pretty much package the upstream bits and ship them on OpenShift. And we have the OpenShift Serverless operator, which is the productized version of those same operators and includes eventing and serving as things you can install through a single operator, following a similar model to what you saw with service mesh.

Looking at the user experience, there is also a CLI coming from upstream, called kn. Using the CLI, deploying a serverless application is very straightforward: kn service create, you pass an image, and there you go. As params to that command you can specify, say, the number of instances you want running (maybe limit it to 10, or to 100), and you can change the concurrency settings of the application as well. This is very different from the more traditional FaaS model you see in other providers, where it's usually a one-to-one relationship: you have one request, you have one instance of that thing running; two requests, two instances. Here you have a bit more flexibility: for a completely stateless application that is just serving something, let's say a web app, you can tweak that and say, I want to handle 10 requests, or a hundred requests, per instance of this container, and only when I go beyond that concurrency level do I start a new container. That can save a lot of resources and be very efficient depending on the workload you have, and I think that's something really powerful. You can of course also provide limits for resource consumption, CPU or memory. And doing all of those things from this one command is, for someone who might be just getting started with Kubernetes, a very intuitive, easy-to-use experience.
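As a rough sketch of what that one command produces behind the scenes, assuming the serving.knative.dev/v1 API, the generated service might look something like this; the name, image, and the specific numbers are hypothetical:

```yaml
# Minimal sketch of a Knative Service with concurrency and scale settings.
# Name, image, and numbers are hypothetical placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: guestbook
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # keep one warm instance
        autoscaling.knative.dev/maxScale: "10"   # never exceed ten pods
    spec:
      containerConcurrency: 100   # up to 100 in-flight requests per pod
      containers:
      - image: quay.io/example/guestbook:latest
        resources:
          limits:
            memory: 100Mi
```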
To achieve something similar in vanilla Kubernetes, you'd probably have to be changing three or four different YAML files and learning more about constructs that, I'd say, have their own learning curve. This streamlines that experience a little and puts things together in a way that makes more sense from the perspective of a developer who is just starting with this project.

If you put side by side a comparison of a Kubernetes deployment and a Knative deployment, with the YAML generated from the CLI, for example, you'd get something like this. On the left you have Kubernetes, where you have the Deployment description, then your Route, then your Service, and you're specifying certain things in each, and you end up with about 53 lines of YAML. On the other hand, you have the Knative description of the same service in almost half the lines, and with more functionality, because with that one you are also consuming the bits from Istio if you want to, and you're getting autoscaling capabilities that you don't get on the Kubernetes side in the same way. So it's actually delivering more: you're writing less and getting much more.

The other interesting thing is how you can take your applications that are deployed in Kubernetes today and migrate them to this model without changing anything in your code. This application here, specifically, is a container that was built long ago; we joke when we do this demo that it's an application from the 2000s, just a front end, a guestbook app built in PHP. We migrate that app to a serverless app without changing a single line of code, or even rebuilding the container, just by changing the way it's deployed.

Looking a little at the roadmap: we are about here. We just announced our tech preview this week, actually; it's available in OpenShift 4.1 and 4.2. We had shipped many developer previews before that, for select customers who were already working with us and interested in this technology and had access to them, and now we're going for the tech preview. We intend to ship another tech preview still this year, and then we have plans to take Knative, at least the serving bits, to a GA state either by the end of the year or next year. We are working in those communities upstream, and as you saw, there are a lot of companies collaborating on that project; as you might imagine, sometimes we get into disagreements about how an API is going to look, or what a signature should be, and we spend months and months debating what to call some object, so that can cause delays. But we're pretty confident, at least for serving, that we're in a good state: we all agree this is solid and stable enough to consider for a GA product.

We have prepared a demo, and we have a video of it, because we were a little concerned about the connectivity. I'll play it in a bit, but just to set the context: I'm not sure how popular this is here, but I've seen increased use of QR codes for many things. You've probably seen them in airports and whatnot, but they're used for shopping as well. You pretty much go to a kiosk, you scan your product, you press a button, you get a QR code, you pull out your phone, you scan the QR code, you press pay, done, paid; you're done. I joke that it's a serverless, cashless payment system.
And this kind of thing is increasing in popularity, and I'd say it's a very interesting use case for serverless, because what's the scale for a system like this? Imagine you have multiple stores and you're running the back end for this system: you have no idea whether 10 people are going to the supermarket or 100, when the spike is going to happen, or how much you actually need from the infrastructure perspective to run this kind of application. So what we did is break this into three different microservices, Knative services in this case, running as serverless workloads. We deploy the QR code generator, representing the kiosk; a mobile app that is going to read the QR code; and a payment service that, in this case, reaches out to a third-party system called Stripe, a payment provider that is very popular in the US, which actually takes your information and effectively processes the payment. Those are the different apps; let me see if I can play the demo here. This is running on an OpenShift 4.2 cluster.

Let me pause right there. The first thing here is that kn service create I mentioned before. I'm already setting some memory limits; for example, I'm requesting only 100 megabytes of memory for this pod, because it's small enough. If you think about the FaaS model, this almost behaves like a function, even though it's not a function; it's a full-fledged microservice, a full-fledged app. As the service is created and deployed, behind the scenes you see the developer console in OpenShift 4.2 spinning up that service. You click on the link, you go to the route, the URL for that service, and you see the QR code being generated. I'm going to generate a new one live, because this is a video, right? You hit enter and you get a different QR code. If I change it, you can see... well, I guess you can't read QR codes very easily, but it is a different code, trust me. I'm going to save that for now, because I'm going to use it later with the mobile app, and now I will deploy that mobile app. Let's see. Great.

Now I'm creating the payment service, the one responsible for actually talking to Stripe, the third-party company offering the payment system. It's a very similar process; nothing special about that one. And now I'm creating the store application, which in this case has a connection to the payment service. I'm just passing that in as an environment variable, because that allows me to change the application, or the endpoint, whenever I need to, without changing my source code.
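As a sketch of that environment-variable wiring, the store service's spec might carry something like the following; the variable name, image, and URL are hypothetical, but the point is that the payment endpoint lives in configuration rather than in the source code:

```yaml
# Minimal sketch: wiring one Knative Service to another via an env var.
# The variable name, image, and URL are hypothetical placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: store-frontend
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/store-frontend:latest
        env:
        - name: PAYMENT_SERVICE_URL   # read by the app at startup
          value: http://payment.default.svc.cluster.local
```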
You see the other service coming up behind the scenes in the developer console. Fast-forward a bit: the route is being provisioned, and there is my beautiful web app. Again, very straightforward: I pick up a QR code, press pay, and the payment is being processed. It's actually making a connection to the payment system, and the payment system is reaching out to the QR code service; here you have an order number and the amount that was processed, and you pretty much get the idea. I'll repeat the same flow with a different QR code, just to illustrate that this is an actual live application running. I'm going to skip forward here so we stay on time; there you go, it processed a different value.

Now what I will do, just to illustrate another feature of the developer console: this is a way for you to import a project from git. It can be any project; in this case I'm picking a Heroku example, and there's nothing special about the application, it's a Node.js app. As part of the developer console, I can specify that this application is going to be a serverless application, and just by ticking that checkbox, behind the scenes the system does all the heavy lifting: instead of a plain Kubernetes deployment, you actually deploy a Knative service. In this case, because I'm starting from git, I'm also going to build the application, all through the web interface. The same params from the CLI are available here, so you can tweak how the application is going to scale using the UI as well: memory settings, CPU consumption, and so on. When you hit create, a new build is triggered, and behind the scenes you see the build running: containers being created, source code being cloned. Meanwhile, you can see that the previous application I deployed, because it's been idle for a while, has already scaled down to zero. The build is still running, so let's let it run.

Now I'm going to fire a performance test, just to show you how that service scales just from hitting the URL. I'm sending 10 concurrent requests, with 10 threads, to that service, and you see the pods automatically scaling to react to that concurrency, to the number of requests I'm sending. Even though I'm sending 10 concurrent requests with 10 threads each, the system was smart enough to understand that it could respond by using nine containers, nine pods, instead of going straight to 10. And while that's happening, you also see it automatically scale back down behind the scenes, and you see that the build I triggered just finished and a new application is being provisioned, all at the same time. Let me fast-forward a bit: the test ran, it scaled back down to zero again, and last but not least, the build completed. You see the status of the build here, the Node.js app is created, and I just hit the URL and there you go: your Node.js app is deployed as well. Nothing too fancy, but that was one more example.
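For those curious why 10 concurrent requests ended up on nine pods rather than exactly ten, Knative's default autoscaler sizes the deployment from observed concurrency against a per-pod target, rather than mapping one request to one instance. A minimal sketch of tuning that target, with a hypothetical name, image, and numbers:

```yaml
# Minimal sketch: tuning the concurrency target the autoscaler aims for.
# Unlike containerConcurrency (a hard cap), the target is a soft goal the
# autoscaler uses when deciding how many pods to run. Name, image, and
# numbers are hypothetical placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: nodejs-sample
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "1"     # aim for ~1 request per pod
        autoscaling.knative.dev/maxScale: "10"  # cap the burst
    spec:
      containers:
      - image: quay.io/example/nodejs-sample:latest
```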
Let me just pause here very quickly. Using the dev console, you can see the representation of a live application: you see that blue circle around what we call a donut (not my idea to call it that), and we have the other donuts empty, because those services have already scaled down to zero, so there's no donut running for them. And that particular one in dark blue represents that something is happening right now: it's either scaling down or scaling up at this exact moment.

All right. The same user experience I demonstrated with import-from-git to create a serverless app is available in other flows as well: you can use it to deploy an image you may have already built some way, or to create a new app. Those workflows are all embedded in the developer console. How are we doing on time? Okay, perfect.

So, to summarize: when we talk about OpenShift, of course the first thing we think of is Kubernetes, and that's one very important core component of what we have. But there are many other services, many other add-ons, that we are putting on top of the platform, and they deliver the full, complete picture. In this particular talk I explained OpenShift Serverless, but you have other things such as OpenShift Pipelines, which is based on Tekton; OpenShift Service Mesh, which is based on Istio, and SMI as well; and the OpenShift console. You have all these different services that are part of what we call the core platform, specifically targeting Kubernetes, and then these things on top that you can add as operators. In my previous talk I also mentioned a little bit about OCM, the OpenShift Cluster Manager; using that, you can take this whole picture and deploy it on any cloud provider.

Let's also talk briefly about Azure Functions and KEDA. This is a very interesting project that we did in partnership with Microsoft, and it's a way for you to run Azure Functions on top of Kubernetes or OpenShift. The source code is available there, and there are a bunch of tutorials that let you get started very quickly. The idea is to use KEDA as a complement to the things we're doing with Knative. For example, maybe you want to consume an event source that is available in Azure, like Azure Queues; that's one event source that is not available today in Knative, and you can get it through KEDA. And maybe you don't want to just deploy a microservice; you may want to deploy a function, and you can create one using Azure Functions.

Last but not least, to summarize OpenShift Serverless: it's very familiar to a Kubernetes developer, to a Kubernetes user. It feels very native, because these are just CRDs extending Kubernetes. It allows you to scale up and down, as you saw in the demo, and it can run pretty much any containerized workload. It's not only for functions, because, as Giuseppe said, serverless is more than that; it's a trait you can apply to a variety of workloads you might have. And I would not be happy to go to all the customers I advocated microservices to for many, many years and tell them, you know what, now you have to rewrite everything as a function in order to leverage serverless. That's nonsense, and this is something that, I would say, proves it and works much better.

I guess that's it. If you want to learn more, go to the product page; the documentation is already there for our tech preview, so please give it a try and provide us feedback. There is also a link to the tutorial, which walks you through all the steps to get started: serving, eventing, creating an application, doing revisions and versioning, and so on.

And shamelessly, the plug, basically shameless self-promotion for us: we have Open Source Days coming at the end of November and the beginning of December. Please join us if you want to know more about Knative, with live demos and so on; we're waiting for you.

Nice, thank you very much. Thank you. Awesome, thank you very much, that was great. Thank you.