Hello everybody, thanks for joining today. My name is Daniel Oh, I'm from the United States, the Boston area. And here's my colleague James Falkner. Yes, my name is James Falkner. I'm also with Red Hat. I'm from Orlando, Florida. So today we're going to be talking about distributed tracing with OpenTelemetry, Knative, and a project called Quarkus. How many of you have heard of Quarkus? Just out of curiosity — you've got a few hands raised. Oh, good. Yeah, good. So Quarkus is a Java framework. It's not the central piece of this discussion today. We want to talk about observability in general, how it applies in the Java community, and how you can take advantage of it to enhance how you develop and, more importantly, how you deliver software to production and how you recover from failures when things go wrong. So I'm going to start — we'll spend about 20 minutes talking about some basic concepts in this space and some recent changes in it. Then Daniel will go a little more in depth, and we actually have a demo prepared to showcase how OpenTelemetry is an important part of what we do at Red Hat and hopefully something to consider when you're building the observability stacks in your own projects. So we'll start with an introduction. This is Daniel. Yeah, so I'm a developer advocate at Red Hat, a bunch of stuff, and a CNCF ambassador. And here's my Twitter — you can follow me or reach out to me if you have any questions around these topics and CNCF open source community stuff. Yeah, and that's me over there two decades ago in Kamakura, near Yokohama. I had an opportunity to visit for the first time when I used to work at Sun Microsystems. We were doing Solaris training, and we stopped in Yokohama to do that training and then took a side trip. So it's awesome to be back in Japan. Same guy. Same guy, a few years younger. Twenty years ago. A few pounds lighter — a few kilos lighter.
A little bit more hair back then. But yeah, happy to be back here. So let's talk about telemetry and tracing and observability. I want to start by describing: what is observability? What does that mean? Obviously, from the English word, it means you can see something — but more importantly, you can understand it. You can process it, think about it, and rationalize what you're actually looking at. So it's more than just being able to see something. It's being able to draw conclusions, to perceive changes, and potentially take actions to change what you're seeing. One of the early advances in this space was in application performance monitoring: having an agent sitting on a system, watching a process, taking some metrics from it, and maybe visualizing what that looks like. We'll talk about how that industry has evolved and what we're doing in the open source community — CNCF in particular — to improve the state of the art here. But one important aspect of observability is that you can't, or you should not, make assumptions about what you're trying to observe. If you have one angle, one viewpoint on a particular system, you're only going to get data from that one viewpoint, and the only conclusions you can draw are those that relate to that one simple viewpoint. Taking into consideration a much larger set of viewpoints allows you to answer questions you didn't even know you needed to ask. Being able to have enough observability that you can answer questions you're not even sure you might need to know in the future is really important when building systems. This also applies outside of the technology space. This may be familiar to some of you. This is one viewpoint, one metric, right? One screen capture, one video angle — as you all know from the game earlier last week.
And from this picture, you can conclude, well, it's way out of bounds, right? Look at how much space is there. So clearly it's out, and Japan should not have gotten that goal. But if you consider a system where you have multiple viewpoints — you never knew this might come up, no one could have predicted this — if you have a different view, you can understand that, well, yeah, it actually was in, because we have these other views. And it's a good thing that we had multiple views of this particular space, because we were able to answer the question more accurately — and obviously to the benefit of Japan, which is awesome. So it applies here, and it also applies in our world. When thinking about observability, one place you might want to start is: as a developer, what kinds of questions might you potentially ask in the future? What is the health of my application? Is my application up and running? Is it starting but not running? Is it starting but unable to talk to a database or something? If something goes wrong, what happened? Why did it happen? How can I fix it? And if my application is up and running, is it operating as expected — or is it operating too slowly and affecting the customer experience? These are the kinds of questions that you might want to ask in the future, and observability can help you answer them. So this next model is a little bit controversial, but it's an easy model, especially for those who are new to observability — it's a great place to start. These are the observability pillars: the types of information that you would want to collect if you're doing more than just observing one specific metric. Metrics are obviously an important part of that. Metrics are things like the amount of memory that a process is taking over time — you can graph that. CPU load, you can graph that. Number of HTTP failures over time, you can graph that.
These are important to understand the state of the system at a given time or over a period of time. Log files are also very important. A log is an immutable time series of things that occurred in the past that you can go back and look at — potentially even replay, depending on the sophistication of your logging system. With a sophisticated log you can replay and understand the state of the system at any particular time; or you may just have a simple log, like an HTTP access log. Either way, logs are really critical to understanding what could have led up to a given problem. And then tracing — we'll focus on tracing a little bit more in this talk. A trace is a record of a single service invocation along with all of its downstream service invocations. So if you make a call to a RESTful service, and it makes a call to another service, and another service, and then a database, then maybe a message queue — all of these things are related. And so having a trace of that, with enough metadata as part of the trace, can help you understand how your system is behaving, which system components are being called and used, and potentially identify where there are bottlenecks or severe problems. So these three pillars are important to have when you're building an observability stack for your particular system. If you have two out of these three, that's better than none. But having all three is a really great start at being able to answer any type of question you may have in the future about the state of your system at any particular time. So traditional solutions to this came in the form of the APM — application performance monitoring — vendors that have been around for a number of years. And there are four steps that they considered important in building observability systems. The first is instrumentation.
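To make that definition of a trace concrete, here's a minimal sketch in plain Java — not any real tracing API, and the class and field names are purely illustrative — of how spans sharing one trace ID form a tree that records a request and all of its downstream calls:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal, illustrative model of a distributed trace:
// a trace is a tree of spans that all share one trace ID.
public class TraceSketch {
    static class Span {
        final String traceId;       // shared by every span in the trace
        final String name;          // e.g. "GET /checkout" or "SELECT accounts"
        final long startMillis;
        final long durationMillis;
        final List<Span> children = new ArrayList<>();

        Span(String traceId, String name, long startMillis, long durationMillis) {
            this.traceId = traceId;
            this.name = name;
            this.startMillis = startMillis;
            this.durationMillis = durationMillis;
        }

        // A downstream call recorded as a child span of this one.
        Span child(String name, long start, long duration) {
            Span s = new Span(traceId, name, start, duration);
            children.add(s);
            return s;
        }

        // Count every span in this subtree, including this one.
        int spanCount() {
            int n = 1;
            for (Span c : children) n += c.spanCount();
            return n;
        }
    }

    public static void main(String[] args) {
        // One request fanning out to a service, a database, and a queue.
        Span root = new Span("abc123", "GET /checkout", 0, 120);
        Span svc = root.child("POST /payments", 10, 80);
        svc.child("SELECT accounts", 15, 20);
        root.child("publish order-events", 95, 5);
        System.out.println(root.spanCount()); // prints 4
    }
}
```

A real backend like Jaeger stores exactly this kind of tree per request, plus the metadata on each span, which is what lets you spot the slow hop.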
Being able to add pieces — either add code to your particular application, or to a particular library that you happen to be using — and instrument those libraries, frameworks, and business logic so that they can report important aspects of the system. Sometimes that's in your application: you can add a counter to a business function so you can count how many customers you have. Or you're using a particular library, like a cache such as Redis, or a database — those vendors or open source projects can add instrumentation to their libraries so that when you use them, they can report. The second step after instrumentation is to be able to collect this data. Now, the problem here — and this is kind of where I'm going with this — is that there are a lot of different vendors with a lot of different proprietary solutions, and open source has made really great progress over the last three or four years; we'll get to that in a moment. But this data collection is really important if you want to do something with the data coming out of your applications. Collection, storage, analyzing and processing, and then finally visualization — because as humans it's really hard to look at a huge trace or a huge set of metric data, draw conclusions, and change the way you're doing things based on that data. If you have visualizations and other ways to simplify and represent that data in a way that you can understand it, then you can get actionable things out of it: you can go off and change something, or go off and fix a bug, because you can understand where that issue is happening and what it is, based on either a metric, a log file, or a distributed trace. So traditional APM tools cropped up to solve this over the last two decades. There are many, many of them — you'll see their logos in a moment.
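The "add a counter to a business function" step looks roughly like this — a hand-rolled sketch in plain Java (no APM SDK; the names are just for illustration), which is exactly the kind of boilerplate an instrumentation library writes for you:

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative only: manual counter instrumentation of a business function,
// the kind of thing an instrumentation library normally does for you.
public class CounterSketch {
    // Thread-safe counter; an agent or scraper would read it periodically.
    static final LongAdder customersRegistered = new LongAdder();

    static void registerCustomer(String name) {
        // ... real business logic would go here ...
        customersRegistered.increment(); // the instrumentation point
    }

    public static void main(String[] args) {
        registerCustomer("alice");
        registerCustomer("bob");
        // Reporting step: expose the metric in some collectible form.
        System.out.println("customers_registered " + customersRegistered.sum());
    }
}
```

Doing this by hand for every function, library, and framework is what doesn't scale — which is why the rest of this talk is about standardizing it.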
But the challenge here is that they're all different, right? And this is the challenge that we face as software developers and systems engineers: vendor lock-in is real in this space. If you have a solution from one vendor for instrumentation, it's probably only going to work with the same vendor's solution for data collection or data processing. So you're essentially locking yourself in. You're instrumenting your code and building out your data processing pipelines based on this one vendor's solution. That's a real problem, because when technology starts to move faster — which is exactly what's happened in the last decade, with containers and Kubernetes and microservices and serverless and AI and ML all changing — it's really hard for these APM vendors to keep up, and it's very expensive for them to keep up as well. They have to hire more people. They're oftentimes duplicating work across different teams to meet the different needs across these different solutions. And of course that means their prices go up as well. So that's a real challenge for the APM industry as a whole. The other real challenge for the APM industry is that they are, literally and figuratively, always playing catch-up. They're looking at systems from the outside — at libraries and frameworks and virtual machines and container technologies. When those technologies change, the APM vendors have to change their solutions to match. They're never in at the ground floor. They're not part of building these technologies; they're solving a problem after the technology's been built. So they're constantly playing catch-up, both figuratively and literally, because they're not part of the projects that they're instrumenting and monitoring. So new features might come out.
It's gonna take that APM vendor six months to a year to develop the necessary changes in their solution to be able to monitor some new change or some new paradigm. Serverless is a great example, right? When serverless came around, suddenly you don't have VMs running for nine months at a time — you have processes running for milliseconds at a time. What's an APM vendor gonna do if they rely on having a large agent installed? If they need to install that agent on every single serverless invocation? That simply is not a tenable solution. So they have to build something different, and they've spent a lot of time trying to play catch-up. Meanwhile, the community is moving the technology forward at a very, very rapid pace. So this is why open source is really critical to a solution here: understanding the different steps in an observability system and building them into your open source solutions — which, by the way, is increasingly how software is being built. Kubernetes is a fantastic example of this, along with all of the projects that sit on top of Kubernetes and in CNCF; these are the projects that are increasingly replacing proprietary solutions. So if we have observability solutions that match the way those projects are both developed and delivered, we can have much higher fidelity in the observability solution that we apply to our projects, as opposed to trying to apply something that might be six months behind the projects that we're using. So it's a fantastic situation that we're in at the moment, and it's a testament to the power of open source and open communities that we can all come together and solve this problem as a whole. And the good news is that those APM vendors that I was talking about and bad-mouthing are also part of this solution, which is fantastic to see. So, but as a developer, how do you get started, right?
If you search the CNCF projects website for metrics, monitoring, logging, and tracing, you'll get a screen that looks like this. There are somewhere around 20 or 25 different solutions, right? How do you get started? Which one are you gonna choose? They all have very similar things in common, but there are too many choices, and that's the problem. They all have different strengths and weaknesses and kind of work together, but even if you choose an open source solution, it might not be compatible with other open source solutions. Just because it's open source doesn't mean that it's based on an open standard. So that's sort of what's missing here. And so projects like OpenCensus or OpenTracing came along. Before OpenTracing, there were things like — you've probably heard of Jaeger, and I think, Daniel, you're gonna show Jaeger in your demo. Before that, it was Zipkin. Everybody remember Zipkin, from like 2015, 2016? Prior to that, it was Dapper, coming out of Google. And some of the folks that had started Dapper also recognized this problem. There was Zipkin, there was Jaeger, there were all these incompatible formats — there were tons. And there's this proliferation of projects, which is great, but much like anything in open source, sometimes you can have too much of a good thing. So the idea was: there are a bunch of different standards, quote unquote, and different projects that are popular. Zipkin and Jaeger were really popular because they came around the same time that microservices started to be a big deal, so they got very popular. We used them all the time at Red Hat; we did all of our demos with Zipkin, it was pretty exciting. But it was one of many solutions, and increasingly more and more appeared. So OpenTracing was an effort to sort of stop that and come up with a single standard. So how many of you recognize this cartoon?
Right — it's like, there are 14 different observability standards. That's nonsense. We only need one to rule them all. So let's make a new one, called OpenTracing. And pretty soon, now we have 15. So it's not a bad idea to do this, right? But to get the critical mass and the adoption, it needs to solve the problem in a good way. And OpenTracing was a partial solution. They brought distributed tracing: they defined what a trace was and what a distributed trace was — a trace is a set of spans with some metadata. And they also had a standard API for developers, or vendors, to instrument their code and produce traces. So that was pretty useful, but an observability solution is more than just an API. You gotta have collectors, you gotta be able to recognize different formats and different on-the-wire protocols and things like that. So OpenTracing sort of didn't go far enough. At the same time, Google also started another project called OpenCensus, because they also recognized this — that more than just an API is needed. So OpenCensus added, in addition to tracing, metrics — that second observability pillar that I talked about earlier. They also did more than just an API: they provided language SDKs, and they provided what was called a collector, which is a generic running service that you can install on your systems that can take telemetry in various different formats — Jaeger or OpenTracing, or any of the other popular observability libraries that were in use at the time — ingest it, and then output it to various backends and visualizations like Jaeger or Prometheus. And so it was pretty powerful. It solved more than what OpenTracing solved, in that it gave you sort of this full solution that you could deploy. You weren't dependent on another vendor to supply that.
So that was pretty good. They did a little bit more than what OpenTracing did, but they still didn't have things like a logs solution. And with OpenTracing, there were no language-level SDKs that you could just download — you had to go get them from either a vendor or from another open source project. So it was still not a fully complete solution. So the idea was to merge those. The founders of OpenTracing and OpenCensus came together, I think three years ago now, and decided that this is nonsense — we have too many competing, or complementing, solutions. Let's just have one. That one is called OpenTelemetry, and that's where we are today. So the state of the art in observability in open source, especially in CNCF and Kubernetes-based projects, is OpenTelemetry, and it provides essentially the best of both OpenTracing and OpenCensus. Before I show you the details, I wanna show you that it's the number two project in CNCF in terms of activity — I think this is as of August. You can see that the red bubble up there is the OpenTelemetry project: lots of commits, lots of PRs, lots of activity in the project, way more than many of the other projects in CNCF, and almost as many as Kubernetes itself, which is a whole different topic. So OpenTelemetry is gaining a lot of traction. It has, I think, over 800 different contributors from 150 different companies, and many of those companies are the APM vendors I talked about early on — you can see a list of them on the OpenTelemetry website, which I was gonna get a screenshot of, but I didn't. What OpenTelemetry brings, and what you should consider when you're building out your solutions here, falls into three areas. First, it's a specification, which is important for any standard. There's an API for instrumentation and emitting of telemetry, and that API is language agnostic.
There's an SDK for different languages — Java, Python, Go, PHP, Ruby, Rust, and several others. And that SDK is something you can download; you don't have to depend on another project to provide that, it's part of the OpenTelemetry project. It also brings a protocol: OTLP, the OpenTelemetry protocol, which is the wire protocol used to transfer telemetry, because metrics in particular can grow. As you add more microservices to your application, you're gonna get linear growth of metrics, and log files as well. So having high performance on the network is really important, especially if you're trying to collect very accurate, very high-granularity metrics. So it brings a protocol, it brings SDKs for languages, and it brings a standard API, as well as a specification that says what different languages must support. If you wanna support a new language in OpenTelemetry, there is a specification for that which defines the native types in each language and how they map to the different fields in metrics or logs or distributed traces. So it's a really great system that defines end-to-end what an observability system looks like: how to collect that data, how to export the data out of your applications, how to put it into your storage systems. And this allows APM vendors and other observability vendors to compete higher up the stack, so they can compete with things like AIOps, where you can process and analyze the data and infer what's happening in your system and what might happen in the future. These are areas that OpenTelemetry does not implement, but vendors can implement on top, and there are many vendors doing exactly that. So: best of both worlds, a fantastic solution for observability. So the next question is, what about Java? Java is particularly close to our hearts at Red Hat. We acquired JBoss back in 2006.
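One small but important piece of that end-to-end standard is context propagation — how the trace ID travels from service to service so their spans join up into one trace. OpenTelemetry's default format here is the W3C Trace Context `traceparent` HTTP header. Below is a simplified, stdlib-only Java sketch of parsing it (a real SDK does this for you; this only checks the fixed shape of version 00 headers):

```java
// The traceparent header has a fixed shape in version 00:
//   "00" - 32 hex chars trace-id - 16 hex chars parent span-id - 2 hex flags
// e.g. 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
public class TraceParentSketch {
    static String[] parse(String traceparent) {
        String[] parts = traceparent.split("-");
        if (parts.length != 4 || !parts[0].equals("00")
                || parts[1].length() != 32 || parts[2].length() != 16) {
            throw new IllegalArgumentException("malformed traceparent");
        }
        return parts; // [version, traceId, parentSpanId, flags]
    }

    public static void main(String[] args) {
        String header = "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01";
        String[] p = parse(header);
        // The trace ID is what every downstream service reuses, so all
        // their spans land in the same trace in the backend.
        System.out.println(p[1]);
    }
}
```

When service A calls service B, A's instrumentation injects this header and B's extracts it — that's the whole trick behind "distributed" tracing.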
We have a number of Java-based solutions. And Java has historically had a problem in the cloud, right? Let's face it: it's slow, it's fat, it takes up too much memory, and it takes too long to start. For Java to really compete in these spaces, those issues need to be addressed, and there are a number of projects addressing that, both in Java itself and OpenJDK, and in projects like Quarkus — Daniel and I work near that team at Red Hat — building out solutions to eliminate the slow startup and heavy memory usage of Java, and make Java something you would experience like a Node developer has for a number of years: fast startup times and low memory. Quarkus actually beats Node, by the way, in a number of use cases. So it's a fantastic solution. But Java is one space where the technology is moving forward; serverless is another, and containers and Kubernetes in general. So how do these things all come together to solve the challenges that we have in building enterprise-grade solutions today, many of which are built on Java? So I'm gonna hand it over to Daniel to talk about how we go forward with serverless Java. Daniel. Thank you so much, James. All right, so here's the thing: how do we actually enable observability with OpenTelemetry for Java applications — specifically serverless applications? As James mentioned earlier, it's not good practice to embed a heavyweight agent in a serverless application. It's pretty awkward. That's why we need a different approach to monitor and collect telemetry data from serverless applications, regardless of whether they're Java, .NET, or something else. But we're gonna focus pretty much on the Knative side. So, quickly, in case some people have never heard of it before: Quarkus. Quarkus is a total game changer for Java applications — not only serverless, but also general Java applications for business workloads.
It makes Java super easy, fast, and lightweight — not only for serverless, but also for reactive applications, and even general Java applications on top of that. To do that, Quarkus shifts a lot of work from runtime back to build time. That's why a Quarkus application starts much faster compared to a traditional stack like Spring Boot, or even JBoss EAP. As you can see in the bottom line, all kinds of activity — for example, annotation scanning, parsing descriptors, enabling server features — is shifted to build time. So in the end, when you run the application as a process on the JVM, it actually runs your business services, rather than processing a bunch of stuff behind the scenes. And Quarkus also enables Java developers to build a native executable — like an EXE file on the Windows operating system. You don't need a JVM anymore; you can just run the application right away, like Node.js, and it's much faster. All right, let's go into the demo — I have only 15 minutes left. Okay, pretty cool. So this is my local environment, and here's the Quarkus project I already created this morning. This is the latest Quarkus version, and I've already added a bunch of Quarkus extensions, which you can see in the Maven dependencies. One of the good things here is the Quarkus OpenTelemetry exporter OTLP extension, which allows developers to integrate the OpenTelemetry stuff. As James mentioned earlier, you'd normally need to wire up an SDK, an API, and some wire protocol to instrument and collect telemetry data — one of the big challenges for developers is figuring that out. However, Quarkus provides this extension, which makes developers comfortable using this kind of thing. And then I just created a simple RESTful API here.
So there's the endpoint hello, and we have three RESTful APIs: hello from RESTEasy; a greeting, like "welcome to Open Source Summit Japan '22"; and the last REST API, just like our session title: distributed tracing integration with Quarkus, Knative, and OTLP. Let's run it in my local environment to make sure the application is totally working before I deploy it to Kubernetes as a serverless app. So first, start up the Quarkus demo. You can use Maven or Gradle, whatever packaging tool you need, but one of the good things is the Quarkus CLI, which is much easier for running and verifying your application. Now you can see I have the Quarkus application here. Oh, I just forgot one thing — I need to go back to my application. I already set up a Docker Compose file to run Jaeger and an OpenTelemetry collector as Docker containers. This is a well-known practice for developers: rather than installing things, you just pull down the relevant container images and run them. So here's the Jaeger side of things. And then, back in my terminal, let's run docker compose up. Okay — compose up. It starts automatically, and back here, let's make sure: I have two Docker processes running. One is otel, the OpenTelemetry collector; the other one is Jaeger. And then, so here is the endpoint. I just copy it, go back to the browser, and try localhost. And now I have the Jaeger UI running totally locally, as you can see — this is my localhost. Then back to my terminal, switch to another tab, and run quarkus dev. That runs my local environment and connects to the backend Jaeger and collector, because when I go back to my application directory, I've already set up my application properties. And you can see one of the beauties of Quarkus: it provides unified configuration.
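For reference, the Compose file looked roughly like this — a sketch, not the exact file from the demo. The image names and ports are the commonly used defaults (Jaeger UI on 16686, OTLP gRPC in on 4317), and the collector config filename is a placeholder:

```yaml
# Sketch of a local tracing backend: an OpenTelemetry Collector
# receiving OTLP from the app and forwarding traces to Jaeger.
services:
  otel-collector:
    image: otel/opentelemetry-collector
    command: ["--config=/etc/otel-config.yaml"]   # placeholder config path
    volumes:
      - ./otel-config.yaml:/etc/otel-config.yaml
    ports:
      - "4317:4317"     # OTLP gRPC in from the Quarkus app
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686"   # Jaeger UI in the browser
```

With this up, the app only ever talks OTLP to the collector; swapping Jaeger for another backend is a collector config change, not an application change.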
So if you have experience developing Java applications across multiple environments — pre-production, staging, production, developer testing, et cetera — maybe you need to create separate YAML files or properties files to maintain that kind of stuff, and sometimes that causes human error. Quarkus provides a profile prefix feature, which picks up the relevant configuration for that environment: production, packaging, or testing. So here's my service name, and I've simply enabled OpenTelemetry — this is the only thing I need to do. And here's my local OpenTelemetry endpoint, which I set up in my Docker Compose file. But this is all default configuration — you can actually skip all of it. I just wanted to show explicitly how I connect from my Java application to my backend OpenTelemetry collector. So you could delete all of this, because as long as you enable the OpenTelemetry extension in a Quarkus Java application, it automatically incorporates your application into OpenTelemetry and sends your telemetry data to the backend Jaeger server. All right, back to the application. Here we go — press D to go to the Dev UI, which is one of the great features for developers. It's a unified graphical interface showing what extensions and capabilities you already have in your application. As you can see, here's the OpenTelemetry stuff, and there are a bunch of reactive and configuration capabilities you already have. And then let's reload my Jaeger server, and now my service is already here. But when I just click on find traces, it's all default data, like the Dev UI, not the actual application. When I go to operations, you don't see the RESTful APIs. So I go back to the terminal window, open this one, and try to access one of the RESTful APIs, like hello. And now you can see "hello from" — I'm gonna make it bigger for you.
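The relevant part of my application.properties looks roughly like this — property names here are from the Quarkus 2.x OpenTelemetry extension and may differ in other Quarkus versions, so treat it as a sketch rather than the exact demo file:

```properties
# Service name that shows up in Jaeger
quarkus.application.name=quarkus-demo

# Local dev: ship OTLP traces to the collector from docker compose.
# These match the extension defaults, so they are optional.
quarkus.opentelemetry.enabled=true
quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:4317
```

The profile prefix feature mentioned above means a `%prod.` in front of any key makes it apply only in the production profile, so one file can carry dev and prod settings side by side.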
"Hello from RESTEasy." Go back to the Jaeger UI and reload Jaeger. And now you can see the operation — hello — I got one, and then go to find traces: now you have one trace in Jaeger. So this is how it works: OpenTelemetry collects the data and sends it back to the Jaeger server, which is cool. Then let me call a different API, like greeting. Now you can see "welcome to Open Source Summit Japan," and my name is Dan. And one more thing: otel. Now you can see this span and that kind of stuff here. Then back to Jaeger — I just reload this page, and you can see hello, greeting, hello otel. So now I can instantly grab the REST API trace data. If I just call it one more time and search again, the second trace shows up. This is totally working. So my challenge is how to get the same capability in a Kubernetes cluster, like a production environment — specifically Knative services, as serverless functions. Because my application functionality is totally the same; the only difference is that serverless is a deployment model. Same business logic, but the application scales down and back up based on your traffic — that's not related to your business logic. One of the good things is that Quarkus provides an OpenShift extension. If I go back to the pom.xml, you can see the Quarkus OpenShift extension here, which allows me to deploy this application to an OpenShift cluster, which is based on Kubernetes. I can deploy my application to vanilla Kubernetes or to OpenShift, the enterprise-ready version of Kubernetes, but I can also deploy it as a Knative service. Behind the scenes it automatically generates the YAML files for Knative, or for OpenShift and Kubernetes. And one good thing is I've already defined the production configuration here: deploy to Kubernetes is true, the deployment target is knative, and here is my namespace name.
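That production configuration is just a few more lines in the same application.properties, gated behind the `%prod.` profile prefix. This is a sketch — the group and namespace values are hypothetical, and exact property names can vary by Quarkus version:

```properties
# Production profile only: containerize and deploy as a Knative service.
%prod.quarkus.kubernetes.deploy=true
%prod.quarkus.kubernetes.deployment-target=knative

# Hypothetical values for the image and target namespace
%prod.quarkus.container-image.group=doh-dev
%prod.quarkus.kubernetes.namespace=doh-dev
```

A regular dev-mode run ignores all of this; only a prod-profile build triggers the generate/containerize/deploy pipeline.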
Today I'm gonna use the Red Hat Developer Sandbox, which is built on Kubernetes. When you go to developers.redhat.com, it allows you to sign up for free. You get a Kubernetes cluster for the next thirty days, and you can keep extending it as long as you wanna keep using it. It also provides a bunch of tutorials on how to get started with your application — not only Java, but .NET and a bunch of other stacks. So I'm gonna use that. And I've already deployed Jaeger — let me go to the administrator view and make it bigger. I've installed Jaeger as an operator, and I've installed OpenTelemetry as an operator as well. And here's OpenShift Serverless, which is built on Knative — that's why you can see the Knative pieces here. And then back to the application. So, as James mentioned, it's bad practice to set up an agent in your application for serverless. So in my Knative Serving configuration, I've set up a Zipkin tracing backend — Zipkin is one of the popular trace formats — and it points to my backend collector in my otel Knative namespace. I just copy it from here and import the YAML, which is one of the great things about the Developer Sandbox, and it gets applied for the knative-serving namespace. That lets those pieces spin up in the end. In the meantime, back here, here's another otel piece — this is the OpenTelemetry collector configuration. To create the collector, as you can see, there are exporters and receivers: I receive the trace data the Knative service emits in Zipkin format — that's why there's a Zipkin receiver — and then we export that telemetry data to the Jaeger server I already installed using the operator on Kubernetes. So this is how to make it happen in the Kubernetes cluster. And now you can see we have a bunch of those pieces.
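The receivers/exporters wiring I'm describing lives in the OpenTelemetryCollector custom resource that the operator consumes. A sketch of its shape — the metadata name and the Jaeger service endpoint here are hypothetical, and the API version may differ by operator release:

```yaml
# Sketch: a collector that receives spans in Zipkin format (what Knative
# Serving emits) and exports them to the operator-installed Jaeger.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      zipkin: {}            # listen for Zipkin-format spans from Knative
    exporters:
      jaeger:
        endpoint: jaeger-collector-headless.otel.svc:14250  # hypothetical
        tls:
          insecure: true
    service:
      pipelines:
        traces:
          receivers: [zipkin]
          exporters: [jaeger]
```

The point of the pipeline is the translation step: Knative speaks Zipkin, Jaeger wants its own protocol, and the collector bridges the two without touching the application.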
And then when you go back to the Developer perspective, the Topology view, you can see there are a bunch of pods running for your Knative services, though it's loading over some pretty slow conference wifi. Okay, so now you can see there are a bunch of pods already running on Knative. Then I'm going to create a new OpenTelemetry Collector in my namespace. I just create that, and here is my OpenTelemetry Collector coming up. Okay, let's give it a moment and go back to my application. Now I'm going to build my application; I'm going to skip the unit tests to save time. This one command line allows me to build my application as a jar file (you can also do a native executable), containerize the application based on that container image, and push it to a container registry. Today I'm going to use the internal registry inside the OpenShift cluster; however, you could also use Docker Hub, Azure Container Registry, Google's registry, and so on. And the last step is that Kubernetes pulls down that container image and runs it on Knative as a serverless pod. So a lot of stuff is happening behind the scenes, but I just need to run one single command and it all happens automatically. Because when I go back to my application, under the target directory a kubernetes directory was automatically created, and you can see the Knative service YAML was automatically generated; there's also a Kubernetes YAML generated with the Kubernetes manifests. And when you go back to the cluster, here is the Jaeger UI. One of the good things is that it provides authentication based on the OpenShift cluster automatically, like single sign-on, so you don't need to build your own user authentication and authorization.
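The single build command mentioned above is essentially `./mvnw clean package -DskipTests`, and one of its outputs is the generated Knative manifest under `target/kubernetes/`. A trimmed sketch of what that generated file typically looks like (application name, namespace, and tag are placeholders):

```yaml
# target/kubernetes/knative.yml -- generated by the Quarkus Kubernetes extension
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: rest-demo            # derived from the artifact name
spec:
  template:
    spec:
      containers:
        # image reference in the cluster-internal OpenShift registry
        - image: image-registry.openshift-image-registry.svc:5000/my-namespace/rest-demo:1.0.0
          ports:
            - containerPort: 8080
```

Because it is a Knative `Service` rather than a plain `Deployment`, the platform handles routing, revisioning, and scale-to-zero without any extra YAML.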
So at this moment there's no service showing yet. Here is my OpenTelemetry Collector, and a new serverless application will come up here. Then we're just going to invoke the same RESTful APIs, hello, greeting, and otel, and OpenTelemetry automatically picks up the telemetry data from my serverless application. And even when the serverless application goes down, it automatically spins back up when I invoke the RESTful API, and OpenTelemetry picks up the telemetry data without any kind of code modification. So now we have a new application coming up; the application container is being created. Can you click on view logs? And now you can see the Quarkus application is running. Back to the Topology view, here is my Quarkus application, and here is the RESTful API endpoint deployed; back to the application. I'm going to call, for example, hello, then go to the Jaeger UI and reload. It will come up shortly from the serverless application. Now you can see the service, and this is actually the serverless service name; when I go back to the application, it's the same name. Then go to find traces, and we have the hello RESTful API trace. That's the thing. Then one more time, we're almost running over time, just the otel endpoint; I'll call it two times and go back to the UI. And now you can see a new operation was just created here, otel, and find traces shows the two traces. And the serverless application scales down after the default 30 seconds for Knative services, and when you invoke the RESTful API once again, OpenTelemetry picks it up automatically. So yeah, thanks for joining today. If you have any questions, we're more than happy to address them, and you can find us in the aisle or hallway. Is there any question from the virtual audience, by the way? Okay, the silence is good. Oh yeah, okay. Yeah, the OpenTelemetry Collector is always running.
The Collector is not serverless; serverless is only for the application side. Yeah, this is part of the infrastructure. Yes, it collects automatically, that's correct. So this OpenTelemetry Collector is just infrastructure. I showcased it in the same namespace, but in a real production environment you would install it in a separate namespace for infrastructure components. And when you set it up, it works based on labels: with the Zipkin backend configured, it automatically detects your serverless services. So in the meantime, our serverless application has scaled down to zero, like AWS Lambda. And when I go back and invoke it once, it automatically spins up, just like a Lambda cold start. And when I go back and reload the Jaeger UI, now you can see the other service came up, which means that service is being traced here. I didn't even set up an agent; it's detected automatically, because Knative Serving automatically sets up the integration between OpenTelemetry and your Knative service based on your application's labels. Yeah, one thing to add: on the application side, if you want to trace business logic, like the number of customers or number of orders, you can add that with custom instrumentation. Yep. Does that answer your question? Yeah. Yeah, we can talk afterwards. Okay. Is there any other question back there? Yeah, just one last question. Actually two questions, but since we don't have much time I'm only going to ask one. So OpenTelemetry consists of metrics, tracing, and logging, right? What are you guys doing about logging? For example, are you stitching the tracing data and the logging data together? Because here in Jaeger we can only see how the tracing flows, but if you want to see the actual logs, what are you actually doing there? So yeah, the question is what are we doing about logging. That third pillar is not completely done yet; it's in beta form. The plan is to have a GA for that, I think, sometime early next year.
And in that case you can of course still do traditional standalone logging, to stdout or wherever. You can also include logging alongside the traces and metrics, so those logs get exported as part of a particular trace, and log statements and log metadata can be associated with a particular span. Okay, thank you. All right, thanks for joining today. Yeah.
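For context on that answer: once log support is generally available, stitching logs and traces together would likely be done with an additional logs pipeline in the same Collector configuration. This is a hypothetical sketch only; the `loki` exporter and its endpoint are illustrative assumptions, not what the demo used:

```yaml
# Hypothetical Collector config fragment: a logs pipeline alongside traces,
# so logs carrying trace/span IDs can be correlated with traces in the backend
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  loki:                                  # assumed log backend (contrib exporter)
    endpoint: http://loki.my-namespace.svc:3100/loki/api/v1/push
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [loki]
```

The correlation itself comes from the trace context (trace ID and span ID) that instrumented loggers attach to each log record.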