And it doesn't matter if you're a developer or a member of management: anyone who wants to gain insights into their application, I think OpenTelemetry is the way to go, and it's free too. So to our agenda: I will tell us what OpenTelemetry is and the players involved in the ecosystem. I will also give a broad review of the different data sources you can use to collect data, if you're interested in getting data from your application using OpenTelemetry. I will also mention a simple application and give an overview of how we could instrument it using a PHP library like the OpenTelemetry PHP package. And lastly, I will talk about how anyone can get involved contributing to OpenTelemetry. So, about me: I am a computer science student at Imose University. I am also a software engineer with Patricia. Currently I am a CNCF and Linux Foundation mentee with OpenTelemetry, and I am a cloud native enthusiast. You can check me out on Twitter, and I'm looking forward to interacting with you after this session about any questions or ideas you have. So, for starters, what is OpenTelemetry? As a general overview, OpenTelemetry helps us get insights into our applications. It is a collection of tools, APIs, and SDKs that we can use to instrument, generate, collect, and export data from within our application. This data we can use for analysis, for problem finding, for tracing, for whatever insights we want to gain from our application. And beyond being an entry point for getting information from your application, the OpenTelemetry project also seeks to be a vendor-agnostic standard for everything telemetry. So we have the specification, worked on by the biggest players in the telemetry industry, trying to have one standard for telemetry.
OpenTelemetry is also a CNCF Sandbox project, and it came to be a few years ago when the OpenTracing and OpenCensus projects were merged together. So what can you see? Can you see two parts of my screen, or just one part? Just one part basically, and it is getting smaller and smaller. Wow, let me see again. Probably I could try to share it differently, do you think that could make it better? Yes, maybe you could try to share. Is it better now or is it the same? It is much better now. Oh, I see. Let me see how it is now. Awesome, awesome. Perfect. Yeah, so sorry for the inconvenience, but I hope you have been able to follow it before now. Yes, yes, the audio is good. Okay, so in terms of the OpenTelemetry ecosystem: the OpenTelemetry project is actually one of the most contributed-to projects in the CNCF, second after Kubernetes, and it has language support for Go, Ruby, PHP, and lots of other languages. As regards community support, we have support out of the box from cloud providers like Azure, GCP, and Amazon Web Services. So if you are on any of these platforms, you can easily get OpenTelemetry out of the box. We also have vendors that allow you to instrument and export your telemetry data to their backends: Datadog, Dynatrace, Lightstep, and others. And we also have companies that use OpenTelemetry as part of their application toolkits, like Mailchimp and Shopify. OpenTelemetry also collaborates with other projects both within and outside of the CNCF ecosystem, projects like Jaeger, Prometheus, and Kubernetes. All of these projects are around cloud technology, instrumentation, visualization, and so on. So now let's look at a sample architecture for OpenTelemetry. OpenTelemetry basically looks at three signals: traces, metrics, and logs. So someone might be asking: what are traces?
So if you have a distributed system, a trace simply tells you the life cycle of a request from its start to its end point. Let's say you have a microservice architecture for an e-commerce website. You might have services around payments, services around stock, services around delivery. Those represent the different services you have, and the traces give you information around those individual services: a single trace gives us the information and the context around those services. We also have metrics. Metrics refer to things we can measure, for example how much traffic is getting to our system. This can help us to proactively scale our servers or to automate some of those processes. And aside from that, we also have logs. A log is anything that is not covered by either your traces or your metrics. So the sample architecture is: if you have an application you want to instrument, you have an entry point. The OpenTelemetry package provides two things for us. One, it provides an API, an application programming interface, which you can use to interact with the SDK. The SDK basically performs all the operations we need, while the API is in charge of interacting with the SDK to carry out transactions underneath the hood. You interact with the API, which gets data from the SDK, and then you go ahead and export that data to any of your visualizers. We could use Grafana, we could use Zipkin, or we could use Jaeger, depending on which one you prefer. And since OpenTelemetry is vendor-agnostic and backend-agnostic, we can export to any of them without having any worries; we have support for all of that. So, instrumentation. How can you get insights from your application? If you have a distributed system, how can you get insights? Basically two ways: you could go the manual route or you could go the automatic route. For the manual route, you have to get one of the packages that we have for the different languages.
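As a rough sketch of what that manual route looks like with the PHP package: the class names below follow the opentelemetry-php SDK, but that SDK is still in beta, so names and signatures may differ between versions; the span name and tracer name are made up for illustration.

```php
<?php
// Hedged sketch only: assumes the open-telemetry/opentelemetry Composer
// package; class names may vary between (beta) releases.
use OpenTelemetry\SDK\Trace\TracerProvider;
use OpenTelemetry\SDK\Trace\SpanProcessor\SimpleSpanProcessor;
use OpenTelemetry\SDK\Trace\SpanExporter\ConsoleSpanExporter;

// The SDK does the actual work; the API is what your code talks to.
$tracerProvider = new TracerProvider(
    new SimpleSpanProcessor(new ConsoleSpanExporter())
);
$tracer = $tracerProvider->getTracer('demo-app');

// Wrap an operation you care about in a span.
$span = $tracer->spanBuilder('process-payment')->startSpan();
try {
    // ... your business logic here ...
} finally {
    // Ending the span hands it to the exporter (console here, but it
    // could be Zipkin or Jaeger instead).
    $span->end();
}
```

Swapping `ConsoleSpanExporter` for a Zipkin or Jaeger exporter is what makes the same instrumentation show up in those visualizers instead.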
Then you install the package and go ahead to instrument the behavior you want using the APIs. If you have lots of endpoints, lots of behavior, this can be a little bit tedious. So there is also automatic instrumentation, where you use an agent. What the agent does is simply go to the core of your application, sit there, retrieve every request and response, and expose them to any of our visualizers. From there we are able to compare, to make analyses, and see what we can do. Sometimes, too, we can combine both manual and automatic instrumentation. Currently I think it is only the Java project that has automatic instrumentation; for the others we just have manual instrumentation, but all of them are moving towards having something automatic, so you can get insights from your application without doing much of the coding, just by plugging in the agent. So let's talk about a manual instrumentation example. Let's say you have a Laravel application which you want to instrument with the OpenTelemetry PHP library. How would you go about that? First things first, you create a Laravel application using the necessary commands. Then you require the OpenTelemetry PHP package. When you instrument your application using the APIs and the SDKs which the package gives you, you need a way to see the data you are exporting, so you can bundle Zipkin, Jaeger, Grafana, or any other tool that you like, or you can export to any other backend. Then you go ahead and instrument the application with your preferred APIs. And since you've bundled the visualizers, you can go ahead and see your exported data, whether it's your logs, whether it's your metrics, or whether it's your traces. There is a guide for this; we don't have time, so there won't be any time for a demo.
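Those setup steps might look something like the following commands. The project name is hypothetical; the Composer package and Docker image names are the commonly published ones, so double-check them against the guide before running anything.

```shell
# Create a fresh Laravel application (any project name works)
composer create-project laravel/laravel otel-demo
cd otel-demo

# Pull in the OpenTelemetry PHP package
composer require open-telemetry/opentelemetry

# Run a local Zipkin instance to visualize exported traces
# (the Zipkin UI becomes available at http://localhost:9411)
docker run -d -p 9411:9411 openzipkin/zipkin
```

With Zipkin running locally, the spans your instrumented endpoints produce can be exported to it and inspected in its UI.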
But if you follow this link, you'll be able to see how we instrumented a sample Laravel application and were able to get data about it. This is how visualizing data from your application looks if you instrument it with the OpenTelemetry PHP package, and with the other packages too. You are able to see your spans, your traces, your logs, your timestamps. Let's say a request is taking one minute: you can go and see which request that is, and from which of your microservices, and then you can fix that bottleneck and make the whole process better. So finally, I also want to tell us about contributing to OpenTelemetry, because the project features different languages, different frameworks, and different experience levels. No matter the language you write, there is something for you in OpenTelemetry, no matter the framework and no matter the experience level. Currently most of the projects in OpenTelemetry are in beta; only a few have reached general availability, so there is still a lot of work to be done. Also, as the last speaker mentioned, the different projects within OpenTelemetry offer mentorship, so if you are looking to get mentored, you can check out the CNCF mentorship repo on GitHub. Frankly, I think the OpenTelemetry project is very, very important, and we can see this by the fact that almost every cloud vendor supports it and they are actually working to have standard specs. So I think persons from Africa should come around: we have stuff for docs, we have stuff for code, for design. And if you are looking to get started, you can always reach out to me on Twitter; I'll be very happy to show you the way around so you can start contributing. So, some resources, if you want to look at them; good things you shouldn't miss.
You can look at the resources we have for OpenTelemetry, or see the stats around OpenTelemetry in the CNCF ecosystem. So that is just an overview of how you can get insights into your application using the OpenTelemetry project. Yeah, Obasi, did you hear me? Yeah, I could hear you. Nice. Thanks a lot for the presentation. Do you plan on... No, no, no, there is nothing more to add to this. Oh, nice. Cool. Thanks a lot for the awesome presentation. Of course, if you can also share your slides with the attendees, that will be very much appreciated as well. Yeah, this is the Q&A session; this is the time for Q&A. Do we have any questions from the audience? Please feel free to drop them in the chat, and Obasi will be happy to answer any questions you may have in the few minutes that we have. Let's see if anyone drops some questions. And if you are streaming on YouTube, please feel free to also ask your questions in the YouTube live chat, and we will be happy to answer those as well.