Hi everybody, welcome back to another technical demo around Quarkus. In this demo, I'm going to showcase how to do distributed tracing with OpenTelemetry for serverless applications based on Knative and Quarkus. But before we get started with the demo, let me talk a little bit about why we need OpenTelemetry and why it matters. In a cloud-native architecture you have a bunch of applications deployed on Kubernetes or across a hybrid cloud, which brings broad operational challenges, for example how to solve availability and performance issues quickly. To solve those challenges, telemetry plays a key role in providing observability. Traditionally, telemetry data has been provided by a lot of open source projects and also commercial vendors, which creates a standardization problem. OpenTelemetry gives you a single, vendor-agnostic solution, and the project has broad industry support and adoption from cloud providers, vendors, and users. And for Java developers, Quarkus integrates this OpenTelemetry capability as one of its extensions for your cloud application deployments.

Moving on to serverless architecture and the advantages it brings to the application flow: Knative is one of the most popular projects for making your existing microservices serverless. So this video teaches you how to trace your distributed microservices, specifically serverless applications, with a Java framework. Let's get right into the demo and see how it works.

Okay, this is my sample application based on Quarkus. Quarkus actually provides an OpenTelemetry exporter extension. To add it, you can use the Maven command line or the Quarkus CLI, or you can just select it and download a zip file from the code.quarkus.io web page. We're also going to add the Quarkus OpenShift extension, because in the end we're going to deploy this application to an OpenShift cluster as a Knative service, a serverless application. The sample application is just a GreetingResource Java file that handles the RESTful API. As you can see, there's a simple hello endpoint that prints a log message and returns the text "Hello from RESTEasy Reactive". One of the beauties of Quarkus is that you get reactive programming by default, based on the RESTEasy Reactive extension.

Right, I'm going to add a few more RESTful APIs in this example so we can trace the application across multiple endpoints. I'll add a new path, ola, which returns "Ola Daniel" (my name) as the same kind of text response, and prints a log message the same way. Let's add one more RESTful API: the new path is greeting, and the method name is greeting as well. Oh, there was maybe a typo there, but I'll go with greeting, and it returns "Welcome, Quarkus OpenTelemetry" as the same kind of text result.

So here's the application.properties file, where we define OpenTelemetry for tracing. As you can see, I set the application name to myservice, enable the OpenTelemetry feature, and configure the exporter endpoint. The OpenTelemetry collector gathers telemetry data from your sources, for example a Quarkus application or even an IoT edge device, and then sends it on to a tracing server, for example Jaeger. We're going to use Jaeger in this example, in the local environment as well.
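Just to make that concrete, here's a minimal sketch of what the GreetingResource with the three endpoints described above might look like. The package name, logger usage, and exact strings are my assumptions from the narration rather than the actual demo source:

```java
package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

import org.jboss.logging.Logger;

@Path("/")
public class GreetingResource {

    private static final Logger LOG = Logger.getLogger(GreetingResource.class);

    // Default endpoint generated by the Quarkus project
    @GET
    @Path("/hello")
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        LOG.info("hello() invoked");
        return "Hello from RESTEasy Reactive";
    }

    // First extra endpoint added in the demo
    @GET
    @Path("/ola")
    @Produces(MediaType.TEXT_PLAIN)
    public String ola() {
        LOG.info("ola() invoked");
        return "Ola Daniel";
    }

    // Second extra endpoint added in the demo
    @GET
    @Path("/greeting")
    @Produces(MediaType.TEXT_PLAIN)
    public String greeting() {
        LOG.info("greeting() invoked");
        return "Welcome, Quarkus OpenTelemetry";
    }
}
```

Note that none of these methods contain any tracing code; the OpenTelemetry extension instruments the JAX-RS endpoints automatically. (This sketch uses the javax.ws.rs APIs from the Quarkus 2.x era; newer Quarkus releases use jakarta.ws.rs.)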
So the exporter endpoint is http://localhost:4317, which is where the OpenTelemetry collector will be listening. We're going to run the OpenTelemetry collector as a Docker container locally in a minute. Here's the docker-compose file that stands up the Jaeger server and the OpenTelemetry collector. As you can see, the docker-compose file has two container specifications. The first container is the Jaeger tracing server, with its exposed ports here. The other one is the OpenTelemetry collector: it also exposes a few ports, including the gRPC receiver on the 4317 port we just specified in the Quarkus application, and it mounts the OpenTelemetry collector configuration, where you specify receivers, processors, exporters, and so on.

Let's take a look at how we define the OpenTelemetry collector configuration. You can see we use the default OTLP receiver over gRPC, and the other one is OTLP over HTTP, which is a similar concept. The last thing we specify is the service pipeline, which wires up the receivers, processors, exporters, and so on. You can choose the receivers, extensions, and exporters based on your backend aggregation server and tracing server; you could use a Zipkin receiver, for example.

Okay, here's my local terminal window. Let's run docker-compose first to start the Jaeger server and the OpenTelemetry collector. I'll use the docker-compose command line, and it takes a little time to come up, just like that; you can see the Jaeger server and then the OpenTelemetry collector. Just to make sure they're running, docker ps shows the OpenTelemetry collector as well as the Jaeger server running on my local machine. The next thing I'm going to do is access the Jaeger web console locally. When you go to the Jaeger UI, there's no service at the moment; when you reload it, it shows just one service, which is jaeger-query by default.

Now I'm going to run my Quarkus application using the Quarkus CLI; you can also use the Maven command line if that's how you prefer to run the application. Okay, so the Quarkus demo is running. If you press 'w' in the Quarkus dev mode terminal, it automatically opens the landing page, as you can see, and we'll visit the Dev UI just for fun. When you go to the extensions here, OpenTelemetry is still an experimental feature, but it's worth showcasing in this demo. If you click on Configuration under the OpenTelemetry extension, you can find all the keys and values you can set on the application side to configure the OpenTelemetry collector, which is really nice.

Then I reload my Jaeger console UI, and you can see my application, myservice, the name defined in the application properties, has shown up automatically. If you look at the operations for myservice, though, there are no operations yet for the RESTful APIs I defined. So let's go back to the terminal, switch to another terminal window, use the HTTPie tool to access hello, and it returns "Hello from RESTEasy Reactive". Go back to the Jaeger UI, refresh it, and go to the operations: you can see the new RESTful API was automatically detected. That means that behind the scenes OpenTelemetry automatically detected the new trace telemetry data from the application and already sent it to the tracing server, Jaeger. So when you look into myservice, you can see that OpenTelemetry is actually aggregating this telemetry data from my application.
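Before moving on, here's a rough sketch of the otel-collector-config.yaml described a moment ago, wiring the OTLP receivers to a Jaeger exporter. The container hostname, ports, and batch processor are assumptions for a typical local docker-compose setup:

```yaml
# otel-collector-config.yaml (local demo sketch; hostname and ports are assumptions)
receivers:
  otlp:
    protocols:
      grpc:           # OTLP over gRPC on 4317, the endpoint the Quarkus app points at
      http:           # OTLP over HTTP, same idea on a different port

processors:
  batch:              # batch spans before exporting

exporters:
  jaeger:
    endpoint: jaeger-all-in-one:14250   # Jaeger collector gRPC port inside the compose network
    tls:
      insecure: true  # fine for a local demo; use TLS in production

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
```

The exporter section is the part you would swap out for a different tracing backend; note that newer collector releases deprecate the dedicated Jaeger exporter because Jaeger can now ingest OTLP directly.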
So let me call the RESTful APIs and go back to the Jaeger UI to see what happens next. I'll invoke the new RESTful API, ola, and you can see "Ola Daniel" come back as text, and the welcome one as well. When you reload the Jaeger UI, you can find the other two RESTful API operations here, each with one trace, because I only invoked them one time. Let me call the ola API one more time; then go back to the Jaeger UI, reload it, and you can immediately find another trace gathered into Jaeger, which is really cool.

So now I'm going to deploy this application to Kubernetes; I'm going to use an OpenShift cluster for that. Here, I've already created a namespace, which is a project in OpenShift, as you can see, for the Quarkus OpenTelemetry demo. One of the good things about an OpenShift cluster is that it allows you to install the OpenTelemetry collector using an operator. I already installed that operator, and I also installed the distributed tracing platform operator, which includes the Jaeger server. Okay, since the OpenTelemetry operator is already installed, you can see one pod is already up. If you go to the admin perspective and look at the installed operators, there are a bunch of operators, but you just need to focus on two here: the distributed tracing platform, which allows me to install and create a Jaeger server, and the other one, OpenShift distributed tracing data collection.

In order to deploy Knative services, we also need to install the Knative Serving module, which comes with the OpenShift Serverless operator. The one thing I need to add in the configuration here is tracing to a Zipkin endpoint; with that in place, tracing data is captured automatically whenever a new Knative service is created. Okay, looks good. That Zipkin endpoint points at the OpenTelemetry collector that I'm going to create in a moment. So go to the knative-serving namespace and create the KnativeServing resource from the demo file, and if you go back to the developer console you'll see a bunch of pods start. As you can see, there are a bunch of pods started: one of them is the Knative autoscaler pod, plus the webhook, the HPA, and so on.

Okay, go back to our demo project. Then I'm going to create a Jaeger server, and I'll just go with all the default configuration, which is really comfortable for a Java developer, and it will be up and running in a minute. Let's add a new OpenTelemetry collector as well. Here's the OpenTelemetry collector configuration; it's a little bit different between the Kubernetes version and the local one. As I mentioned earlier, I'm going to use the Knative traffic for the receiver, so the receiver is zipkin, and the exporter is Jaeger, the same as before. The only other difference is that here I set up the actual service name rather than just localhost. On localhost you can actually skip TLS and leave it insecure, which is good enough for a local demo environment and for moving forward, but in production I strongly recommend that you set it up.
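Here's a sketch of the two cluster-side pieces just described: the KnativeServing tracing configuration pointing at a Zipkin-compatible endpoint, and the OpenTelemetryCollector custom resource that receives those Zipkin spans and exports them to Jaeger. The namespaces, service names, and API versions are assumptions and will vary with your operator versions:

```yaml
# KnativeServing with tracing enabled (sketch; endpoint and namespace are assumptions)
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    tracing:
      backend: zipkin
      # Knative sends Zipkin-format spans to the OpenTelemetry collector service
      zipkin-endpoint: "http://cluster-collector-collector.quarkus-otel-demo.svc:9411/api/v2/spans"
      sample-rate: "1.0"
---
# OpenTelemetryCollector receiving Zipkin spans and exporting to Jaeger (sketch)
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: cluster-collector
  namespace: quarkus-otel-demo
spec:
  config: |
    receivers:
      zipkin: {}            # Knative emits traces in Zipkin format
    exporters:
      jaeger:
        endpoint: jaeger-all-in-one-inmemory-collector.quarkus-otel-demo.svc:14250
        tls:
          insecure: true    # demo only; configure certificates in production
    service:
      pipelines:
        traces:
          receivers: [zipkin]
          exporters: [jaeger]
```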
TLS will keep your traffic secure; that's why I included the TLS option here. Previously I set it up as insecure, but in a production environment I would set up TLS with a certificate file here. Then I go back to the OpenShift developer console to create the OpenTelemetry collector: click the create button, paste the whole file into this YAML view, and everything looks good. Once again, it has the right service name with the namespace, the zipkin receiver, and the Jaeger exporter. Once you click the create button, it starts in a second. Okay, looks good. When you click on the cluster-collector OpenTelemetry pod, you can see the logs inside the pod.

Now I'm going to add a few more properties to deploy this application to the OpenShift cluster as a Knative service. First of all, I specify the container image group, which is the same as my project (namespace) name, and I'm going to push this container image to the integrated OpenShift container registry. Then I set deploy to true, which means that when you package the application using the Quarkus CLI or the Maven command line, it automatically deploys to the remote Kubernetes cluster. The deployment target is knative; there are three options developers can use for the deployment target: knative, or a normal application deployment based on kubernetes or openshift. I'm also going to expose a route so the endpoint is accessible to external users, and I'm going to trust the self-signed certificates for the HTTPS protocol.

Okay, so I'm going to kick off my demo and deploy this application, including the build, using quarkus build. I'm going to skip the tests, and it takes a minute to package the application. In the meantime, let's look at the Jaeger server in my OpenShift environment. When you click on the route URL, it automatically integrates single sign-on with your OpenShift user account, which is really awesome. When you go to the Jaeger UI, you can see there's no service at the very beginning, just like in the local environment; then when you reload the Jaeger UI, it automatically shows the one default service, jaeger-query, the same as locally.

Okay, give it a moment, and back in the terminal window: behind the scenes it's actually packaging a fast-jar Java application artifact, and after that it builds the application image using the OpenShift S2I process (you can use the Docker build strategy as well). In the end, it pushes the container image into the integrated OpenShift container registry (you can also push it to an external container registry if you need to), and then an available OpenShift worker node pulls that image. All of that happened with just one single Quarkus build command line, which is really good.

Go back to the OpenShift Topology view and you can see the new application has just started; this is the Quarkus application. Let's dress up this icon for Quarkus: first of all, you can change the icon of the Knative service to a function icon, which more explicitly showcases a serverless application, and then you can add an application runtime label to show that this application is based on Quarkus. When you go to the Quarkus runtime logs, you can see the Quarkus application running on the JVM as a jar file; here's the Quarkus project starting up, and here are the OpenTelemetry extension and RESTEasy Reactive, which is really good for a serverless application as well.
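For reference, the extra deployment configuration that the build above relies on could look roughly like this in application.properties. The property names come from the Quarkus container-image, OpenShift, and Kubernetes extensions as of the Quarkus 2.x era (newer versions use the quarkus.otel.* namespace for the exporter endpoint); the group and collector service names are assumptions for this demo:

```properties
# Container image group; typically matches the OpenShift project name (assumed value)
quarkus.container-image.group=quarkus-otel-demo

# Build the image on the cluster with S2I (a Docker build strategy also works)
quarkus.openshift.build-strategy=s2i

# Deploy to the remote cluster automatically when the application is packaged
quarkus.kubernetes.deploy=true

# Deployment target: knative (the other options are kubernetes and openshift)
quarkus.kubernetes.deployment-target=knative

# Expose an external route (mainly relevant for the plain openshift target; Knative exposes its own URL)
quarkus.openshift.route.expose=true

# Trust the cluster's self-signed certificates when deploying over HTTPS
quarkus.kubernetes-client.trust-certs=true

# Point the OTLP exporter at the in-cluster collector service instead of localhost (assumed name)
quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://cluster-collector-collector.quarkus-otel-demo.svc:4317
```

With properties like these in place, a single quarkus build (or mvn package) triggers the S2I build, the image push, and the Knative deployment in one go, which is what the demo shows.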
Now take a look at the pod name in those logs. When you go back to the Jaeger UI and refresh, it automatically detects a new service named after the pod. The reason you see two different pod names is that when you change the revision, Knative automatically starts another container; that's why there are two pod names here. This means OpenTelemetry automatically gathers the telemetry data from the Knative service as soon as the Knative service starts, which is really good: you don't need to add any tracing-specific code on the application side, because all the telemetry from the pods is traced already. When you go back here, you can see the application pod, the serverless application, has automatically scaled down to zero, just like the default serverless behavior.

I'm going to copy the route URL, go back to the terminal, and access the endpoint to invoke the REST API, just to make sure the relevant trace is detected by OpenTelemetry and sent to the Jaeger server. Okay, first of all I'll access hello, and you can see the Quarkus application automatically starts again, scaling up from zero just like you'd expect from a serverless application. Then I access the other REST APIs, like ola and greeting, and the returned results are exactly the same values we saw locally. Then go back to the Jaeger UI, reload it, and open the operations: you can see three different operations. Click on hello and you see one trace, because we only called it one time, and here's the detailed tracing and telemetry data. Then go to ola; same thing. So let's call the ola REST API one more time, go back to Jaeger, and you find that you immediately get two traces; the data is aggregated by OpenTelemetry and sent back to the Jaeger server. Call it one more time and you get three traces here. Let's go to greeting: call greeting one more time, then look at the traces, and you have two traces that happened almost simultaneously. When you click on the greeting RESTful service, you can find more detailed tracing telemetry data, which came from multiple deployments, because Knative services keep scaling up and down based on demand traffic.

To find more trace logs, go into the OpenTelemetry collector pod: go back to the developer console, click on the cluster-collector pod, and view the logs; you can find a bunch of logs being aggregated by the OpenTelemetry collector. Thanks for watching, and have a good rest of the day!