All right, we're going to start right now. Thanks for joining. This is the first session on the first day of the Open Source Summit, and welcome to this session on distributed tracing integration with OpenTelemetry, Knative, and Quarkus. Just a quick question: how many people actually have some experience with tracing for your applications? And have you already heard about the OpenTelemetry project? That's great. My name is Daniel Oh. I work for Red Hat as a developer advocate, and I'm also a CNCF ambassador. I've spent a lot of time evangelizing cloud-native runtimes, specifically in the Java world, for more than 20 years, along with JavaScript. Recently I've spent a lot of time integrating cloud-native runtimes and enterprise applications with serverless, service mesh, and GitOps, Argo CD, Tekton, et cetera, on top of Kubernetes. One of my responsibilities as an ambassador is to bring new technologies into the Kubernetes ecosystem, as well as CNCF graduated projects. Here's my contact information, my Twitter and my YouTube channel. Feel free to follow and subscribe, and if you have any questions about this session, about the CNCF, or about new technologies, feel free to reach out to me directly. First, let's take one step back and try to understand why we need to think about distributed tracing at this moment. A couple of decades ago we had just a few applications running on machines, on VMs or bare metal. At that time we didn't have tracing tools; we had, and I'm going to talk a little bit more about this later, APM tools, application performance management or monitoring tools, with some nice graphical dashboards. At that time we just kept monitoring errors and information in the production environment.
Then microservices were born, almost nine years ago, around 2013 with Spring Boot. More and more applications were designed and architected based on the microservices architecture, and then we moved forward to immutable infrastructure, aka Kubernetes. We had to think more about scalability when running and deploying microservices on top of that; you might deploy and scale maybe 100K applications as pods on Kubernetes. In that case you have to keep tracing, monitoring, and observing those applications, not only single logs, but also the communication across the microservices. Here's a quick example. A user could be an SRE, a system admin, or an application developer. How do you define that your application is healthy? If you already have experience with Kubernetes, Kubernetes provides health capabilities, the liveness and readiness checks. Liveness just says your application is running as a pod: it's not in CrashLoopBackOff, it hasn't failed. Readiness means your application is actually ready to receive network traffic, for example it has opened port 8080, like a Tomcat or a Node.js application. That is the default definition of how your application is ready and healthy. However, you still get problems. Sometimes your application isn't working: you go to the application URL and you get an error, like a 404 or a 500. In that case you can say, my application isn't healthy any longer. Then how do you find the root cause or the defect? As a developer, or as a team, you have to go to some kind of dashboard and find out where the root cause is. And sometimes you find out it's not just a single error, it's more like a performance issue.
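The liveness and readiness checks described above can be sketched as a pod spec fragment. This is a hypothetical example, assuming a Quarkus app exposing the SmallRye Health endpoints; the pod name, image, and paths are placeholders, not from the demo.

```yaml
# Illustrative pod spec fragment: name, image, and paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: hello-app
spec:
  containers:
    - name: hello-app
      image: quay.io/example/hello-app:latest   # placeholder image
      ports:
        - containerPort: 8080
      livenessProbe:             # "is the process alive?" -- failure restarts the pod
        httpGet:
          path: /q/health/live   # Quarkus SmallRye Health default liveness path
          port: 8080
        initialDelaySeconds: 5
      readinessProbe:            # "can it serve traffic?" -- failure removes it from routing
        httpGet:
          path: /q/health/ready
          port: 8080
        periodSeconds: 10
```

As the speaker notes, these probes only cover "running" and "accepting traffic"; they say nothing about 500 errors or latency, which is where tracing comes in.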
If you have just a couple of end users, your application functionality works fine. However, then you get some big seasonal event, for example we have QCon and we run a promotion, like a 30% discount on a Kubernetes certification. Then ten times as many end users reach your website, your network traffic spikes, and that causes a performance issue. So there are potentially a lot of issues that impact your application's health. This is where observability comes in, how to detect your application status. Some people use the three pillars, metrics, logs, and traces, interchangeably, but there are actually different definitions for metrics, logs, and traces. Metrics are literally numbers describing a particular process, actively measured over a specific period of time. Prometheus is one popular tool to collect metrics data; it can gather metrics from the application layer, from the platform layer like Kubernetes, and from the operating system. It gathers all the metrics data, but that's not easy to interpret on its own, which is why you integrate graphical dashboards such as Grafana on top of Prometheus, making the metrics easier to understand. Logs are the immutable records you get from the application, or even from the kernel, the system level, or the file I/O system. Traces are more related to your microservices. As I mentioned earlier, you have a bunch of microservices on top of Kubernetes, and each microservice communicates across the others, like a supply chain. Sometimes one of your microservices fails, and it can impact the entire set of microservices.
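What makes a trace different from a log is the identifier that follows a request across service boundaries. A minimal sketch of that idea, not from the talk: the W3C Trace Context `traceparent` HTTP header that OpenTelemetry propagates between microservices is just a version, a 16-byte trace ID, an 8-byte span ID, and a sampled flag. The class and method names here are illustrative.

```java
import java.util.Random;

// Sketch: build a W3C Trace Context "traceparent" header value, the piece of
// state that lets a tracing backend stitch spans from different services
// into one trace. Real code would use the OpenTelemetry SDK's propagators.
class TraceParent {
    // Format: "00-<32 hex trace-id>-<16 hex span-id>-01" (01 = sampled).
    static String newTraceParent(Random rng) {
        StringBuilder traceId = new StringBuilder();
        for (int i = 0; i < 16; i++) traceId.append(String.format("%02x", rng.nextInt(256)));
        StringBuilder spanId = new StringBuilder();
        for (int i = 0; i < 8; i++) spanId.append(String.format("%02x", rng.nextInt(256)));
        return "00-" + traceId + "-" + spanId + "-01";
    }

    public static void main(String[] args) {
        System.out.println(newTraceParent(new Random()));
    }
}
```

Every downstream call carries this header, so when one service in the "supply chain" fails, the backend can show exactly which hop broke.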
That's why you have to have circuit-breaking capability: even if one microservice fails, you don't want the entire set of microservices to fail. That's why you have circuit breakers and fault tolerance. And in order to find out which microservice is failing right now, and which other microservices are impacted by that failure, you have to trace the application, not just use Log4j or some logging library in your application. So how do you deal with these three observability components, or pillars, or whatever you call them? You could use a nice vendor APM tool like Dynatrace; there are so many tracing tools under the APM, application performance management, umbrella. There are four steps: instrument, using an API, SDK, or some protocol; with that, your application collects the data; then processing; then visualization, integrating a dashboard like Grafana to visualize the metrics for SREs and developers. That makes your metrics and logs meaningful, instead of just a bunch of JSON files. But this is a vendor lock-in approach. Whenever you gather metrics and want to export that telemetry data to another platform, from this Kubernetes cluster to that one, or from this virtual machine to another cloud provider, you can't just import the data, because it's already locked into some vendor's format or SDK. So things changed with the open source way. With open source projects and tools you don't have lock-in, but sometimes there are different formats: this one is JSON, this one is protobuf, this one is binary. How do you exchange that data between collection, processing, and even visualization?
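The circuit-breaking idea mentioned above can be sketched as a small state machine. This is an illustrative toy, not the talk's code; in a real Quarkus service you would use a library such as SmallRye Fault Tolerance or Resilience4j rather than rolling your own.

```java
import java.time.Duration;
import java.time.Instant;

// Minimal circuit-breaker sketch: CLOSED passes calls through, OPEN rejects
// them so a failing downstream service can't cascade, HALF_OPEN probes it
// again after a cool-down period.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final Duration openDelay;
    private State state = State.CLOSED;
    private int failures = 0;
    private Instant openedAt;

    CircuitBreaker(int failureThreshold, Duration openDelay) {
        this.failureThreshold = failureThreshold;
        this.openDelay = openDelay;
    }

    // Ask before calling the downstream microservice.
    synchronized boolean allowRequest() {
        if (state == State.OPEN
                && Duration.between(openedAt, Instant.now()).compareTo(openDelay) >= 0) {
            state = State.HALF_OPEN;   // cool-down elapsed: probe downstream again
        }
        return state != State.OPEN;
    }

    synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    synchronized void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;        // trip: stop sending traffic downstream
            openedAt = Instant.now();
        }
    }

    synchronized State state() { return state; }
}
```

Tracing complements this: the breaker stops the cascade, while the trace tells you which service tripped it and who was affected.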
The question that always arises whenever I talk about this topic, and a lot of people ask me: hey Dan, okay, the open source way is really cool, and that's what we prefer, but where can we start? You might look at the CNCF landscape; under the Linux Foundation there are so many projects, and you can filter by monitoring, logging, and tracing. As you can see, there are more than 20 open source projects, like Jaeger, OpenTracing, OpenTelemetry, and Logstash, and some vendors have productized those open source projects as cloud services or on-prem offerings. This is cool, but you still have a problem: too many choices. If you're an individual software engineer, or you just want to experiment, it doesn't matter; you can pick one of them and try it on your local machine, your macOS laptop, Windows, or Linux. However, if you are responsible for selecting the observability technology stack in production, there are too many choices: which tools and frameworks would be optimal for my production environment? So today I'm going to narrow it down to two popular projects. One is OpenTracing. This was the first standard open source tracing project, and it allows developers to keep tracing their microservices, the traffic and the telemetry data, and export it to a backend like a Jaeger server. The next popular project was actually invented by Google: OpenCensus. It's not only for the application side; it's also focused on collecting telemetry signals and data from IoT edge devices and hardware, not just software applications.
These two open source projects were so popular that I saw a lot of people adopt them and stand up their own tracing and observability infrastructure. As you probably know, a common pattern when there are multiple open source projects in the same area is a big competition where, in the end, maybe one of them survives and the other dies. However, here is the super interesting part: the two projects actually combined into one super cool project, OpenTelemetry. So you don't need to abandon either of them; if you already have experience with OpenTracing or OpenCensus, you can just adopt OpenTelemetry, because OpenTelemetry combines the two existing projects and provides even more benefits. Here's a quick explanation of the components OpenTelemetry provides. First of all, the specification, which describes the behavior across multiple programming languages, not just Java, but also PHP, Python, and even Golang. Developers implement the tracing capability with the API and SDK, using the wire protocol, aka the OpenTelemetry Protocol or OTLP; that's not online transaction processing, by the way. Then instrumentation: it provides libraries for application developers as well as SREs to actually capture the telemetry signals and export them to whatever you want, like Jaeger or Zipkin or any backend server. And the most important and most powerful piece is the collector: you can collect data, process it, and export it to whatever backend service you need. It's not tied to any specific vendor product; it's 100% vendor agnostic. That's the beauty of OpenTelemetry. So, for example, I'm going to showcase Quarkus, a new Java framework, just like Spring Boot, but 100% focused on the Kubernetes environment.
When we at Red Hat designed that Java project three and a half years ago, we adopted OpenTracing for the tracing capability. Now we have switched that OpenTracing functionality to OpenTelemetry, and it was pretty easy to do. It targets Kubernetes and cloud environments, not just a single cloud but multi-cloud and hybrid cloud. A lot of enterprise companies are trying to evolve their existing microservices into serverless, because after analyzing the existing workloads in production, they find out that maybe less than 20% of the application workload actually needs to be running all the time, 24/7, not the entire application. So how do you reduce your public cloud utilization on Amazon, Google, Microsoft? It's a pay-as-you-go strategy; if you run applications all the time, it just eats all your money. That's why people are really interested in serverless. Back in 2014, AWS Lambda was the first serverless frontier, and now many public cloud companies, as well as the Knative project on Kubernetes, provide serverless capabilities. However, serverless scales down to zero; it hibernates if you don't have any network traffic. This is a totally different situation. When your application runs all the time, as a pod or even just a single process on a machine, you can keep monitoring and observing that application with OpenTracing, OpenTelemetry, whatever you use. However, a serverless application scales down to zero and can come back up at any time. How do you keep monitoring that kind of thing? And I don't want to add some specific, Frankenstein-like logic with a bad developer experience to my application or my system. So this is one of the challenges: observing your serverless application as it scales up and down with a tracing tool, specifically OpenTelemetry.
So today I'm going to use three open source projects: on the application side, Java with Quarkus, and on the serverless side, Kubernetes and Knative. Just in case you've never heard of it before, Knative is a project that allows developers and SRE teams to run regular microservices as serverless workloads. Once you deploy an application with a Knative specification, a YAML file, it will scale down to zero after a default period of 30 seconds without any network traffic, just like AWS Lambda. Knative has its own autoscaler, not the HPA built into Kubernetes, which automatically scales up when you have network traffic via RESTful APIs or CloudEvents. And I'm going to use OpenTelemetry for tracing the application. About Quarkus: everybody says supersonic, subatomic, because Quarkus is built for Kubernetes. Java was born 27 years ago, and at that time Java's dynamic behavior meant that when you created a Java application running on a virtual machine, you could run it on any virtual machine, any middleware, from any vendor, which was cool at the time. But things changed as we moved forward to immutable infrastructure, aka Kubernetes, where you just need to scale out the same application from one to a thousand instances, and Java is pretty slow and heavyweight inside a Linux container on Kubernetes. So we optimized as much as possible at build time, so that at run time it's super fast. For example, if you have a single RESTful API, it takes about half a second to start up on the JVM, while a native executable, like an .exe file on Windows but in Linux format, takes around 10 milliseconds to start up. One of the big challenges when you use serverless is the cold start: it can take two or three seconds, which is pretty annoying to end users.
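The scale-to-zero behavior described above is configured on a Knative Service. This is a hedged sketch: the service name, image path, and namespace are placeholders, not the demo's actual values.

```yaml
# Illustrative Knative Service: name, image, and namespace are assumptions.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: quarkus-hello
spec:
  template:
    metadata:
      annotations:
        # Knative Pod Autoscaler (not the Kubernetes HPA): scales on traffic.
        autoscaling.knative.dev/min-scale: "0"    # allow scale-to-zero
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/otel-knative-java/quarkus-hello:latest
          ports:
            - containerPort: 8080
```

With `min-scale: "0"`, the pod disappears after the idle window, which is exactly why a fast-starting runtime matters for the cold-start problem the speaker describes.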
However, if the application starts in 10 milliseconds, end users will never notice that it had already scaled down to zero beforehand. There's a lot more going on under the hood, and it was hard work, but I'm going to skip that. So: you build the application once, and it can produce two different output types, the native executable I mentioned, or a jar file running on the JVM, the traditional way. I'm going to stop the slide deck and get right into the demo to show how it works. Okay, here's my terminal window. I created a sample application with Quarkus; the Quarkus community project releases a new version every two weeks, which is super fast for an open source project, and we actually released a new version last night. This is a simple application, just a hello world example: a RESTful URL, /hello, that returns the message 'Hello from RESTEasy Reactive'. And this is the traditional way to emit log information, using something like Log4j and printing the log to your console for the developer experience. In order to showcase something interesting, I'm going to run the Quarkus application in dev mode. It provides a bunch of developer experience features; this isn't 100% related to OpenTelemetry, but it's a pretty good experience while Quarkus is running. As you can see: live coding activated. Here we go. This is pretty interesting for developers. I'm going to open a new terminal window and access the endpoint, and I get 'Hello from RESTEasy Reactive'. Also, when I press 'w' in dev mode, it opens up the landing page; go to the Dev UI, and it shows graphically which dependencies and capabilities you currently have. Then back to my IDE, and here we go.
Dev mode is also running continuous testing, which is cool. Most developers are expected to follow test-driven development, but it's pretty annoying to set up that capability; normally you need to add a third-party library or some kind of tool. This Java framework provides that feature out of the box. As you can see, one test scenario is being tested. Back in the IDE, let's change the message: I'll delete it here and type 'Hello Open Source Summit EU', and I also need to change the message I log locally. Save the file and go back to the terminal, and you can see my test case just failed. I only saved the file; I didn't need to recompile, rebuild, restart, or redeploy. It shows the failure because when I go to my application code here, the test still expects the old result. However, when I go to a new terminal and access my application, I get the new message. So the functionality still works, but the test case fails. This is hugely important, because when you follow test-driven development, whether you're developing serverless or just general microservices, sometimes your application works on your local machine, but you commit it to GitHub and that code gets deployed to production in the next 30 minutes thanks to your fantastic CI/CD pipeline, and sometimes it ruins the entire system with a cascading error. That's why we need tracing, but we also need to make sure the application fully works, not just by checking the business logic but with test cases too. So I update the expected value here, save the file, and back in the terminal it succeeds. Now I'm going to add something new: a new method with a new path, because it will be great to trace two different RESTful APIs.
Let's have it return 'Welcome to Quarkus, Knative and OTel', plus a username. Okay, that's it. And I need to read the username from my configuration. That's it, done. Back in my terminal, I already get an error: it failed to load the config value 'username', because I haven't defined it yet. It shows me the problem immediately; that is literally the live coding capability. Now I go to the properties file, add the username property with my name, save the file, and it succeeds. Back in the terminal, I try the new RESTful API and get 'Welcome to Quarkus, Knative and OTel, Daniel'. That's it. One more thing: back in the Dev UI, there's a configuration editor. A lot of people actually prefer to use a GUI rather than a CLI or Vi. You can see my name here; I'll change it to my full name and save. Back to the terminal, I test it and get the new return value. And when you go to my IDE, the local properties file also changed automatically, which is quite handy for that kind of thing. Then going back here, I notice I missed the log statement from the previous endpoint, so let's change that here too; just copy and paste. Okay, back to my terminal: when I invoke the greeting endpoint, I get the output logged here, the welcome message with the trace. So this is the traditional way to store your log information, and when you deploy this application to Kubernetes, you can go to the pod, find the logs, and use them to troubleshoot. Now I'm going to run it with OpenTelemetry. To do that, I need to add the OpenTelemetry extension with the OTLP exporter, which allows my application to emit OpenTelemetry data.
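The configuration being edited here can be sketched as an application.properties fragment. This is a hedged sketch: the property names follow the Quarkus 2.x OpenTelemetry extension and may differ between versions, and the values are placeholders.

```properties
# Sketch of application.properties for the demo (values are assumptions).
quarkus.application.name=quarkus-hello

# The config value read by the greeting endpoint via @ConfigProperty.
username=Daniel

# Where the OTLP gRPC exporter sends spans: the local OpenTelemetry
# Collector started in the next step (4317 is the conventional OTLP port).
quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:4317
```

As the speaker notes later, Quarkus dev mode picks up sensible defaults even without the endpoint property, which is why the demo works with an almost empty file.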
I also need another extension to deploy this application to Kubernetes in the end. Okay, I've added the two extensions. Now I need to run the OpenTelemetry Collector and the backend server, Jaeger. I have a Docker Compose file: I'm going to run Jaeger with these exposed ports, and here's the OpenTelemetry Collector. And here's my collector configuration file: as you can see, I'm going to use the OTLP protocol over gRPC, and I'm going to export the OpenTelemetry signals to the Jaeger backend using Jaeger's default port. This is a simple configuration for collecting data from the application with the OpenTelemetry Collector and exporting it to the Jaeger backend. So I'm going to use Docker Compose to run the two containers, one for the OpenTelemetry Collector and the other for Jaeger: docker compose up. It starts up pretty quickly, and I make sure the two processes are running, the OpenTelemetry Collector and Jaeger. Then back in the browser, I open the Jaeger dashboard; you can see just the default jaeger-query service. Now I run my application once again, and it automatically connects to the existing OpenTelemetry Collector, because I already set that up. Even though I didn't add any collector configuration to my application.properties here, it's set up automatically by the Java framework: when you go to the Dev UI at localhost:8080, you can find all the OpenTelemetry settings configured automatically, like enabled=true and so on. Okay, back here: let's access /hello on port 8080. Before that, note there's no actual application tracing data yet. When I hit it and come back here, I get the log the traditional way.
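The two files described above, the Compose file and the collector configuration, can be sketched roughly as follows. Image tags, ports, and file names are assumptions, not the demo's exact files.

```yaml
# docker-compose.yaml sketch: one Jaeger all-in-one container plus one
# OpenTelemetry Collector that receives OTLP/gRPC and forwards to Jaeger.
version: "3"
services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"   # Jaeger query UI
      - "14250:14250"   # Jaeger collector gRPC port
  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-config.yaml"]
    volumes:
      - ./otel-config.yaml:/etc/otel-config.yaml
    ports:
      - "4317:4317"     # OTLP gRPC receiver exposed to the application
    depends_on:
      - jaeger
```

And the collector pipeline itself, OTLP in, Jaeger out:

```yaml
# otel-config.yaml sketch (assumed file name).
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  jaeger:
    endpoint: jaeger:14250
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger]
```

Because the application only ever talks OTLP to the collector, swapping Jaeger for another backend later means editing the exporter section, not the application, which is the vendor-agnostic point made earlier.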
However, when I reload the Jaeger UI, I have a new service here, automatically detected and named after the project. There's one operation, hello, and when I go to the traces, I get one Jaeger trace. When I call it once again, I automatically get two. And if I access the other API, you can see the new API: reload the Jaeger UI, a new operation shows up, and you can find new tracing data for it. In the meantime, the Java application still automatically prints the log output to the terminal, the traditional way, while the OpenTelemetry Collector keeps collecting the telemetry data from the application automatically and sends it to the backend Jaeger server. That's the dev environment, from the application developer's standpoint. But this isn't yet a serverless application, and it's not a Kubernetes environment. So I'm going to stop, and deploy this application to Kubernetes right now. This is my Kubernetes cluster, the Red Hat Developer Sandbox, which is free for any user: once you sign up, we provide a free Kubernetes cluster in the cloud for 30 days. I've already installed a bunch of operators: here's the Jaeger operator, this is the OpenTelemetry operator, and the last one is OpenShift Serverless, which is built on Knative. So I already have the Knative, OpenTelemetry, and Jaeger operators installed. First, I go back to my application and add a few configuration properties: deploy to Kubernetes set to true, and a deployment target, which is knative; it could also be kubernetes or openshift. Then the container image group, which matches my actual namespace here, otel-knative-java. So I set that: otel-knative-java.
One more thing: the container image registry. The OpenShift cluster actually includes an integrated container registry, like Docker Hub or Google Container Registry; it has its own registry, so you don't need an external one, although you can use one if you want. I'm going to use the integrated image registry, with its service name and port 5000. I'll also expose the route to access the application URL. And one thing I need to add: the OpenShift cluster by default uses service serving certificates for TLS termination, so I set that up as well. Okay, done. Now I'm going to deploy this application to Kubernetes, but first I need to create the Jaeger instance. I create a new Jaeger pod in my project, then go to the graphical console, which shows me the topology view. I also need to configure the OpenTelemetry Collector; in this case I'm going to use the Jaeger backend service name. One thing different from my local setup is that in the Kubernetes environment I'm going to use the secured endpoint, because it makes the application connection more secure. So I copy that, and in the meantime let's create the OpenTelemetry Collector here; I paste the configuration and create a new one. Now I have two pods: one is Jaeger, the other is the OpenTelemetry Collector. When you click on Jaeger, you can find the collector and query services backing Jaeger, and all the services exposed for your OpenTelemetry Collector. One more thing, the last thing: go to the Knative Serving project; I need a new KnativeServing instance, which allows me to deploy applications as Knative services. Then back to my IDE: here is my Knative configuration, and I'm going to use the endpoint that is actually exposed as part of the Jaeger server's services.
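The deployment properties walked through above can be sketched as an application.properties fragment. This is a hedged sketch: exact property names vary across Quarkus versions, and the group and registry values are placeholders for the demo namespace.

```properties
# Sketch of the Kubernetes/Knative deployment properties (assumed values).
quarkus.kubernetes.deploy=true
quarkus.kubernetes.deployment-target=knative   # could be kubernetes or openshift

# Push the built image to the cluster's integrated registry.
quarkus.container-image.group=otel-knative-java
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000

# Trust the cluster's service serving certificates for TLS termination.
quarkus.kubernetes-client.trust-certs=true
```

With these in place, a single build both produces the container image and applies the generated Knative Service manifest, which is what the next step shows.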
So I copy that and create it, and it deploys a bunch of pods, like the autoscaler, the collector, and so on. Back in my terminal, I run the deployment of my application. In the meantime, it goes through several steps. For example: packaging the application as a jar file, then creating the container image using a Dockerfile. Once the container image is built, it is pushed to the integrated container registry. And finally, the Kubernetes worker node runs the container image as a serverless workload. Back in the UI, it's almost done, so I reload the topology view. We have all the Knative components, like the activator, the autoscaler, domain mapping, the webhook, and the HPA autoscaler, all deployed. Back in our working project, we now need to go to the new Jaeger dashboard, not my local one. It provides single sign-on; my username is Daniel Oh, and here's my Jaeger UI, actually running on Kubernetes. As on my local machine, there's the one default jaeger-query service. Once our application is almost deployed, back here: okay, the build succeeded, so it deploys. Here's my new Quarkus application, here is my URL, and it will be up in a second. Maybe I'll reload the topology view; it's still spinning up here. As you can see, there's the pod, and you can view its logs. Then I access the URL, the /hello endpoint, and I get the same result, and back here I get the output the traditional logging way. However, back in the Jaeger UI, after a reload, I now have a new entry, the serverless deployment, with a new operation, hello, and when I find traces, we have one trace. And this application will automatically scale down to zero in the next 30 seconds.
So let's give it a moment, and it automatically scales down to zero. Then when you invoke one of the RESTful APIs, it automatically comes back up, just like AWS Lambda, typical serverless behavior. After that, the OpenTelemetry Collector gets the signal from the application and sends that telemetry data to the Jaeger server under the service name. It will go down to zero pretty soon. After it terminates automatically, I'm going to call it one more time with the other API, for example the greeting one, and then it will go down again. In the meantime, since we're almost running out of time, I have one last slide here. I already created a demo video on my YouTube channel; you can scan the QR code and find all the existing tutorials, not only on OpenTelemetry but also on Kubernetes, Quarkus, serverless functions, GitOps, Argo CD, and pipelines. You're more than welcome to subscribe, and please give me some inspiration: if there's something new you'd like to see about Kubernetes, cloud-native application development, or GitOps practices, that's very helpful for me when creating new content, like technical demos and insights. Okay, let's go back: the application went down, there's no pod, it's at zero right now. When I hit the new endpoint, the container automatically starts; there are new logs here, and if you go to the pod you can find the log output. Then go back to the Jaeger UI, and you can see the new service has been added here, and a new operation; this is the old one, this is the new one. Go to the hello and greeting operations and you can find them here; each time you access one, it adds a new trace. So, to summarize.
I didn't have to add anything application-specific to enable the OpenTelemetry Collector on my application side. It automatically collects the telemetry data and pushes it to the backend server, which is Jaeger. So whether you deploy your application as a regular pod or as serverless on top of Kubernetes, this is a pretty easy way to keep observing your application, for troubleshooting or for tracing your data in your production environment. Thanks for coming. I'll be around, so if you have any questions, feel free to reach out to me directly; I'm more than happy to answer them. Thanks a lot.