Welcome everyone to "Selenium has a new trick up its sleeve to track failures" by Pooja Chiman Jagani. We are glad Pooja Jagani can join us today. So without any further delay, over to you, Pooja.

Thank you so much, Krishna, for the introduction. Much appreciated. Hello everyone, welcome to today's chat. I know everyone's very curious to understand what the new trick is that Selenium has up its sleeve, so let's begin. First, a little bit about myself. Like I already said, I'm Pooja Jagani. I'm one of the Selenium committers. I've been working with Selenium for roughly two years, and coincidentally, my journey with Selenium started with observability itself. I'm also a team lead at BrowserStack's open source office. It has been a fun journey so far. Here's a quick outline of what we intend to cover today. We'll talk about what observability is and why we need it. Once we understand why we need it, we also need to know how to do it, right? So then we'll go deep into the three pillars of observability; two of those pillars are very strongly embedded in Selenium, and we'll see how to use them with demos. Then we move on to a typical observability pipeline. Just having observability in your code is not enough; it has to be extended a little further with some components, especially if you're running at scale. Most of these principles apply even if you have a standalone application in which you want to instrument observability. And then we quickly wrap up with a small case study, because everyone has a different setup, right? Sometimes you're running software as a service; sometimes it's your own application running on premise. Considering one such case study, we'll talk about how you can extend observability as a feature to your own users. We'll wrap up with that.
So let's begin. What is observability? It's a famous buzzword, right? I'll give everyone a couple of seconds to go through this Wikipedia definition and then we'll break it down. Okay, I hope everyone's gone through it, so I'll read it out once again: in control theory, observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. It might sound weird, but let's break it down a little. What it says is that observability is a measure of how well we can know the internal states of a system. In our context, the system is any software application you're developing; for us, it's also Selenium. It could be an application you're developing, or any software you're using. What the definition is trying to say is: how well is that system functioning, and what is its current state, based on the data it generates or the output it produces? We already do this on a day-to-day basis, maybe without realizing it. The simplest thing we do is logs, right? If you're running an application somewhere, or using someone else's application, then to know how well it's doing and what it's doing internally, you use logs, which are nothing but the output it generates, and the information you get depends on what was chosen to be exposed. That's exactly what observability is. It gives us a peek at what's happening inside, based purely on the data we choose to expose as developers, or that fellow developers have exposed in the products we're using. Now, okay, we know the definition, we understand what it does, and maybe we're already doing it in part, but why do we need it? Let me backtrack a little: why do we develop products at all? There are two goals, right? Since I come from a mix of the open source world and working at a company, for me it's a mixed goal.
In the open source world, you do it for the greater good: to help your community, to support fellow developers, to give them functionality. And as a product in any company, corporate or startup, the goal is to generate revenue. How do you do that? You do that by having a good product and customers who are really, really happy. But now you have happy customers, a lot of them are using your product, and it's time for your product to scale. It could scale in different ways: with a different architecture, by deploying on cloud, or with any other scaling mechanism you need to handle the multiple requests and load that come with happy customers, right? When all this happens, you slowly start asking: hey, is my system scalable? Is it available? Is it reliable? These properties creep in, but to get them you need observability, and I'll tell you exactly why. The moment your system scales, let's be honest, our systems are not flawless. There are production engineers, on-call engineers, support engineers who do get woken up or called in the middle of the night because there's an outage in the system, a bug, a potential downtime. All these things do happen when one is operating at a certain scale. At that point, what do you need? You need to know how well your system is doing. Assume you're an on-call engineer woken up at three in the morning. You will want answers, possibly answers your software has not been prepared for. You'll want to know what is not working, why it's not working, how long it has not been working, whether we gave a wrong output, or whether we got a wrong input that stopped us from doing something. There will be tons of questions, and you need answers to all of them. This is what observability gives you.
It gives you a gateway to unknown problems, provided the development team has tried to put on the user hat, or even the support engineer hat, to understand what will be useful at that point. So: to help your system be scalable, reliable, and available, you need observability; your system then also becomes observable. But let's look at one more perspective on why you need it. If, using observability on a day-to-day basis, you're able to identify potential bottlenecks and problem areas, or even existing bottlenecks, and do a root cause analysis at an early stage just by analyzing trends in the data being generated, then in turn you will have a productive product. If you have a productive product, your engineers are not going to be woken up as much; they're going to be happy and productive. So you get happy humans and a happy product. It's a win-win, and that's where observability fits in. Now the main question, and the topic for today: why do we need observability in Selenium? This comes from the perspective of Selenium Grid. I think everyone must be aware of the Selenium client bindings in different languages, which hopefully most people in the audience have used. Those language bindings just point to a browser and talk to that browser. But what if I want to run things at scale? That's when Selenium Grid comes into the picture. Selenium Grid is a server that helps you run things at scale. If you want to do parallel testing, cross-browser testing, or manage the load of different browsers running on different machines with different OSes, think of it as a small server farm or device farm that you want to run. That's where Grid helps you.
There are added benefits, and we do have dedicated talks and workshops that deep dive into them, but I'll just quickly run through the different components and why observability makes sense. The Grid 4 architecture has been designed with today's microservice architecture in mind, where monoliths are broken down into smaller services that run independently, on the cloud or on different machines, and communicate with each other. The same thing is emulated in Selenium Grid. In this architecture diagram, on the left-hand side you see a client, which is nothing but your machine where you write your tests in your favorite test framework using Selenium. When you instantiate a WebDriver instance, it sends a request, either directly to a browser or to a Selenium Grid. We'll be talking about things from the Selenium Grid perspective, since it's the server here, so the client will point to the Selenium Grid. Now let's quickly walk through the different components. The first component the request hits is the Router. When you create a RemoteWebDriver, you give it a URL to point to, and that is the Router's address. Think of the Selenium Router as a simple forwarding proxy: it can only identify whether a request is new or belongs to an existing session, and act accordingly. If it's a new request, it puts it in the New Session Queue, a queue where all new requests sit happily. Then we have a component called the Distributor, which knows: hey, I have these browsers with me; which browsers are free, which are up and running, which nodes have less load so I can delegate more to them. The Distributor takes care of the load balancing. It will pick a request from the session queue and delegate it to one of the Nodes. As you see in the diagram, you could have one node, ten, or a hundred; that is where your browser runs.
Those are separate machines with the operating system and browser combinations you'd like; that's what gives you that server farm or device lab feeling. So the Distributor decides: this is the correct machine, it has less load, it has the browser that's needed; please go ahead, create a session here, and start talking to the browser. The node also has the driver binaries. Once that's done, we maintain internal state using something as simple as a Session Map, and the entire communication happens over a pub-sub model via the Event Bus. So now you see we have a lot of components talking to each other. If something goes wrong, knowing exactly what is happening is why we need observability in Selenium, and that's why we provide it to the user. At least in Java, what we've tried to do is end-to-end observability: for every Selenium command you issue, you're able to see its entire journey, from the time it's sent on the client side, through the whole architecture, until it circles back and you get the response. Now, we've spoken about observability and the need for it, but there has to be a mechanism to achieve it. These are the three popular pillars. They form the foundation of observability, and the key is correlating the data from each of them. Let's deep dive into one pillar at a time. First comes tracing. This is my favorite, and we'll also have a demo after this to see how it's done in Selenium. So let's think about tracing. As a kid, you might have done the join-the-dots activity, where you trace a picture to see what the entire cartoon looked like; it used to come as a newspaper activity. A similar analogy applies here: tracing gives you the whole picture of the system. But let me go back a little.
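For reference, standing up those components in a fully distributed mode looks roughly like this. This is a sketch only: the jar name, version placeholder, and port numbers are assumptions based on the Selenium Grid documentation, so check the docs for your release.

```shell
# Each Grid 4 component runs as its own process (event bus first).
java -jar selenium-server-<version>.jar event-bus &
java -jar selenium-server-<version>.jar sessions &       # session map
java -jar selenium-server-<version>.jar sessionqueue &   # new session queue
java -jar selenium-server-<version>.jar distributor \
     --sessions http://localhost:5556 \
     --sessionqueue http://localhost:5559 --bind-bus false &
java -jar selenium-server-<version>.jar router \
     --sessions http://localhost:5556 \
     --distributor http://localhost:5553 \
     --sessionqueue http://localhost:5559 &
java -jar selenium-server-<version>.jar node &           # where the browser runs
```

The demo later in the talk runs essentially this script, one jar invocation per component.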
In our day-to-day lives we already use a simple form of tracing, though most of us don't realize it: a stack trace. When there's an error, you see a stack trace. It pinpoints exactly where the error happened; it lets you trace through the different code steps that were taken and where the error occurred. Now let's extend the same idea to a distributed system. You have multiple microservices talking over the network, and you want to know what is happening. To fulfill one request, it might go through ten services, or if you have a network of hundreds of services, it might go through hundreds. Maybe for one request that's manageable, but what if there's 100x load on your system? Now you have hundreds of requests, your system is under stress, and you're not able to track what's going on. That's where tracing comes in. It allows you to track the journey of a request from start to end, from its initial step to its final destination. What did it hit? What went right? What went wrong? What took long? Tracing gives you answers to all those questions. Let's quickly go through the anatomy of a trace. A trace, like we said, represents a single request. All of us are aware that HTTP requests typically have something like a request ID; think of the trace ID as a unique identifier serving the same purpose. Now, it's not sufficient to know just that my trace has touched this service and that service. What am I getting out of it? You need to know the important work your request has done as it's gone through, right? Because some of that work will have succeeded and some will have failed, which is what we're more interested in. To represent these small units of work, we have the span. Again, a span has a simple span ID.
And to do that work, you might branch out and do further steps, other units of work, so you can have child spans. Think of it as a tree: you have a parent span and child spans that show what it takes to get that work done. Okay, now I know what a trace is, I understand the span, and I know it has touched this service and done this work. But I need more data; I need a lot of context. How do you add that context? Using attributes. Attributes let you power-pack your spans: what the inputs were, what the outputs were, what went wrong, whether the DB we connected to was the correct DB, what its address was, what response we got when connecting to a database or a third-party system or some other part of the code. Put all that information in attributes and you should be good to go. And throughout this process, there might be certain events that are important at a particular time. Say you're talking to a database and it's down at that moment; you want to know at what timestamp that happened so you can correlate the information. Timed events are nothing but logs. So within your spans, you mark certain important times at which important events have happened; again, more context. Events are the part we'll discuss in depth in the second section, but let's continue. Now, map this back to the architecture we just spoke about. Think of a trace as a single request going from the client to the router, the session queue, the distributor, the node, and all the way back; that's one trace. So what is a span? The client talking to the router and doing a unit of work is a span. The router adding the request to the session queue and doing a few steps is another unit of work, another span. Within a span, we spoke about attributes, and attributes are nothing but the router or the session queue telling us things.
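The anatomy above — a trace ID shared across spans, parent/child links forming a tree, attributes for context, and timed events inside a span — can be sketched with nothing but the standard library. This is an illustrative model of the concepts only, not the OpenTelemetry API that Selenium actually uses:

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    """One unit of work inside a trace."""
    name: str                                       # e.g. "distributor.assign_node"
    trace_id: str                                   # shared by every span in one request
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    parent_id: Optional[str] = None                 # links child spans into a tree
    attributes: dict = field(default_factory=dict)  # inputs, outputs, errors...
    events: list = field(default_factory=list)      # timed events ("logs")

    def add_event(self, message: str) -> None:
        # A timed event: timestamp plus what happened at that moment.
        self.events.append((time.time(), message))

# One request == one trace: every span it produces carries the same trace ID.
trace_id = uuid.uuid4().hex
root = Span("new_session", trace_id, attributes={"browserName": "firefox"})
child = Span("distributor.assign_node", trace_id, parent_id=root.span_id)
child.add_event("session created on node")

assert child.trace_id == root.trace_id   # same request
assert child.parent_id == root.span_id   # parent/child tree
```

Walking such a tree from the root span is exactly what a tool like Jaeger visualizes for you.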
Hey, the client requested this browser session, okay. And then the distributor tells you: for this browser, we created this request, this was the response we gave, and this is the session we created. It gives you all the details, and those become your attributes. Again, you add logs to it to timestamp important events. That's how it fits in, that's how it looks in the architecture, and this can be mapped onto any architecture in your own system as well. Let's go to the fun part, the demo, which will show a little of how we do it. In the Selenium repo, we have a tracing text file, per version, which tells you exactly what steps to follow if you want to enable tracing. By default, tracing is enabled, and there's an extension flag in Selenium to add anything to the classpath; it's important, and I'll let you know why. If you're running the Selenium Java jar, you're supposed to pass a few variables to export this information out, because I know I have these traces, but I need to put them somewhere so I can have a look. It's a typical Java jar command, and you use the extension flag to add some more dependencies that export, or report, this information out. It's as simple as that. The simple tool we suggest if you want to observe this is Jaeger. It has a simple Docker image you can use, and it gives very good trace visualizations. There are other tools like Zipkin as well, but Jaeger is just simple to stand up and run to see this in action, and then it can be extended. I start Jaeger on the right; it's just running a Docker image. On the left, you see we're starting a grid in fully distributed mode, which means it will have all the components we spoke about.
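As a rough sketch of what that setup involves — the Jaeger image tag, port numbers, jar name, exact property names, and exporter jar path below are all assumptions; the exact flags for your Selenium release are in its tracing docs:

```shell
# Start Jaeger's all-in-one image (UI on 16686, collector on 14250).
docker run --rm -d -p 16686:16686 -p 14250:14250 jaegertracing/all-in-one

# Start the Selenium server with the OpenTelemetry exporter dependencies on
# the classpath (--ext) and system properties saying where to send traces.
java -Dotel.traces.exporter=jaeger \
     -Dotel.exporter.jaeger.endpoint=http://localhost:14250 \
     -Dotel.resource.attributes=service.name=selenium-standalone \
     -jar selenium-server-<version>.jar \
     --ext /path/to/otel-and-jaeger-exporter-deps.jar standalone
```

Once both are up, the Jaeger UI at http://localhost:16686 is where the traces in the demo are viewed.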
We fire those up: the session map has started, the session queue has started, logs are coming in, the distributor has started and is identifying what's available. Again, we're printing some logs for the details we have. And a node has been added; the grid is beautifully set up. This is the script: as we said, we're standing up the different components, and this is just a quick walkthrough of which components we're starting and the environment variables we're passing. It's a simple thing, just running the different jars and passing the variables. Now I quickly check whether my Jaeger is up and running, because I've started a few services, and we already see that a few traces have come in. We'll go into detail later, but you can see some components already have traces listed. Now let's run a test; that's the fun part, because you want to see what happens at each point. The client-side mechanism is also very simple. There's documentation that explains how to do it on the client side: you just need to add a few dependencies, and set certain properties, the same kind of environment variables as before. I'm just showing that what we're doing maps to exactly that. Now let's quickly go over the test we have. We create a URL that points to the grid; we want to run the test in Firefox; we go to a URL, find an element, wait for an element, and that's pretty much it; then we quit the driver. It's a simple test; I think most folks will have written something like it. We now run the test. The test is running, and we see a bunch of logs coming in on the Selenium side of things, showing: okay, session is created, all done.
The session is also deleted, because the test has gone through. Now we look at Jaeger; that's the fun part, because now you can visualize it. All services have been touched. Before we jump into the traces, Jaeger is also showing me what the components are and how they talk to each other, just a simple architecture view. And now we dig into one of my favorite traces. This is the last trace, from when we ran a quick command. It shows exactly the steps that happened: we ran a driver command, it sent a request, and you can drill down through it. Now let's go to the biggest one, the first one. What is the first trace? It's creating a session. You're seeing the new session command sent from the client side, and some parameters showing what we sent. Then we go through the different steps: at some point we added things to the session queue, with the different logs and steps recorded. This is what tracing lets you do; it lets you walk through everything. Okay, it's reached the distributor, so here's the distributor's part of the new session. This one has event logs: it's telling us that within this unit of work, this span, these are the timed events, this is when it got the session request and this is when it created the actual session. And yes, there is more you can inspect: this was the input, this was the output we got. So this is what tracing lets you do. So far we've got an idea of how you can track everything end to end. Now let's talk a little about the next pillar. This first pillar we've seen, and we've seen how Selenium does it; you can keep digging deep, which is what the rest of the video does, by the way.
It just goes through traces in depth, seeing what has happened. I think we get a good picture of what's happening from start to end, with the steps in between, and how you can use that information. Let's move on to the next pillar: logs. What are logs? All of us know; we record this information daily in our code. If something goes wrong, we ask: how are the logs looking? What does the log say? Do we have an error log? It's a common question, right? Any time something fails, logs tell us what happened at that point in time. They don't necessarily tell us what happened just before that, but we'll get to that a little later. Let's look at a simple log statement. This one is from Selenium. It's simple: it gives you a timestamp, the INFO level, tells you a session was created by the distributor, and you can see which session. It feels like a big dump of a string, though. How can you make this better? Structure the log. So the next log is a structured log. It's just a contract between your teams so everyone can understand the different key-value pairs; you can think of it as a dictionary, a key-value map, or even JSON — it's all the same. It gives you a little structure: it tells you the class, log level, log message, name, time, method. But if you compare the first log statement and the second, apart from the structure, what new information are you getting? None. A structured log can be used for something like indexing, but apart from that there's no added benefit; the information you get is the same. But what if I want more information? Like we said, observability is there to help you answer unknown questions. Enter event logs.
You've seen a glimpse of them in the earlier section, where we spoke about timed events within spans. Event logs are also structured logs, because structure makes them easier to search, index, and machine-read. But the main thing is: don't think of a log as a single statement. Think of it as something that represents an entire unit of work, with all the information needed to describe that unit of work: your inputs, your outputs, your failures, your calls to dependency services. Everything is encapsulated in one big event; that's why we call them event logs. And the more context you have, the better you can answer questions you did not expect. If something goes wrong, instead of correlating a bunch of scattered logs for a particular unit of work, it's all added in one place, so you'll have answers to all the questions. Here's an event log example from Selenium. It has a similar structure to the previous one, but where the structure in the previous log stayed constant, here a part of it changes; I'll tell you how. There's a trace ID, saying this is a log for one unique request; an event time; an event name; and then the attributes, which is where you add as much context as you like. In our earlier logs we were just saying: we got a session in the distributor, this is what we got. What did we do with it? Did it succeed or fail? We did not have that information at all, right? But here we're saying: this is the request payload that came in, this is the browser that was requested, this is the information I sent on, this is the session ID I created, and this is where the session is running. If something goes wrong with a session, all the information is in one place. That is the power of event logs. Let's have a quick look.
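To make the contrast concrete, here is a small sketch. The field names and values are illustrative, loosely modeled on the Selenium event log just described rather than its exact schema: the structured log carries the same one-liner as before, while the event log bundles the whole unit of work.

```python
import json

# Plain structured log: same information as the one-line log, just keyed.
structured_log = {
    "log-level": "INFO",
    "log-message": "Session created by the distributor",
    "logged-time": 1700000000000,
}

# Event log: one record per unit of work, with all the context in attributes.
event_log = {
    "traceId": "8a3f0c...",            # ties the record to one request's trace
    "eventTime": 1700000000000,
    "eventName": "Session created",
    "attributes": {
        "request.payload": {"browserName": "firefox"},   # what came in
        "session.id": "c1d2e3...",                       # what we produced
        "session.uri": "http://localhost:5555",          # where it's running
    },
}

# Both are machine-readable (easy to serialize, index, and search)...
serialized = json.dumps(event_log)
# ...but only the event log answers "what happened to *this* request?" on its own.
assert "traceId" in event_log and "attributes" in event_log
```

Notice that the structured log adds form but no new facts; the event log is where the extra context lives.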
The setup remains very similar to the previous tracing demo; the first two minutes there were just a mini tutorial for anyone who wants to set up tracing. I have a distributed grid running, and I'm creating an error scenario; let me go back, I think I skipped through that a little faster than I would like. I had geckodriver on my machine and I just removed it, so it errors out when I request it: it won't be able to find a driver and will throw an error. You'll see the beauty of this more in error situations. I'm running the same Firefox test we saw earlier, but like I called out, I don't have a driver on my machine; I removed it from the path. The test is running and, yes, it fails — not "yay, it fails", but you know what I mean. We see an error message; there's an exception giving us some more information. Now what? The fun part is checking out your trace. I'm running this in a simple standalone mode here, but you get the idea. It tells you that the Java client command has failed; you see all these red error marks, and you can see which command and what happened. The session queue received it, but it comes back with a 500 status code. Something has gone amiss, but what? We tried to create a new session, and the node is telling me it was unable to create one. What went wrong? Where are the details? The distributor has the details, in the events we spoke about. Here, when I'm trying to create a new session, this span has these logs — event logs — and they give me all the information: what the event was, that it errored out, what the exception message was, what the stack trace was. Think of it as a trace within a trace, a mini-trace showing where it went wrong in my code. You can encapsulate as much information as you like, just to know what happened.
I understand we might not draw much meaning out of just one request, but when you have hundreds and thousands of requests, you can do trend analysis on the whole thing: identify patterns, identify requests that are taking longer, and look at the statistics available. Typically tracing data is emitted in a simple raw JSON format; the tool here is just displaying it. What you do with that powerful data is totally up to you, but if you have the data, you can make decisions; that's what it's about. Again, as a quick overview of what we covered: the trace is also showing me what the payload was. So, all the information: what did we send, why did it not work, where did it not work, exactly which point in the code did the exception come from? I have everything I need. That is your log event, and that's our second pillar. We've seen tracing, we've seen logs; let's talk about metrics. What are metrics? A lot of us have worked with metrics, but metrics are nothing but measurements in your system. You can measure everything: the number of requests, the number of errors that have happened, the time taken to run a particular function. You can measure as much as possible, but that kind of data overload is not needed; you might want to measure only the important parts. It's up to you as a developer to identify what is worth measuring, and that's why we call metrics the golden signals of the system. It's up to the developer to identify the critical golden signals that will have meaning when observed from outside. When you go through your metrics dashboard, the readings should let you interpret whether the system is running fine: is it healthy, is it under load? So ensure that golden-signal mentality is etched in when planning for metrics. Now, common types of metrics.
They're divided into a few types. I'm not going to deep dive; I'll just give a one-liner on each, because honestly, with metrics, a lot of mechanisms are available and the terms are not well standardized, but the underlying mechanisms are pretty much constant across most systems. The simplest measurement is a counter. Like the name says, you count something that is constantly increasing. Think of a counter as an upward or flat graph: the value will only increase or stay at a constant level; it will not dip. It's a bit of a vanity metric: the total number of users we've had so far, the total number of requests, something that only goes up. A gauge, on the other hand, helps you measure a value at a particular instant. Think of it as a temperature: the value at one point in time is one thing, and at the next point it changes. You'd use a gauge for things like the number of threads in my system right now, or the current CPU percentage; the gauge evaluates that instant. Next, distribution. Distribution is trickier because there are more flavors of it. A distribution, especially a histogram, tells you the distribution of data over a period of time. It gives you percentile values and average values — but just a tip: averages are not very useful. An average is again a vanity metric; it hides all the peaks and dips. Percentiles, especially the 90th or 99th percentile, give you a better value, because you know that 99% of your values are less than this, so what remains is the outlier. When things are failing, you're interested in the outlier that caused a problem, not the average. And the last one is a meter: the number of events per unit time.
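A quick stdlib sketch of those measurement types, including why a high percentile exposes the outlier that an average hides — the numbers are made up for illustration:

```python
from statistics import mean, quantiles

# Counter: only ever goes up (e.g. total requests handled so far).
requests_total = 0
for _ in range(100):
    requests_total += 1

# Gauge: a value sampled at an instant; it can move in either direction.
active_threads = 42
active_threads = 37   # next sample -- gauges dip freely, counters never do

# Distribution/histogram: 99 fast requests and one very slow outlier (ms).
latencies = [10] * 99 + [2000]
avg = mean(latencies)                  # 29.9 ms -- the spike is averaged away
p99 = quantiles(latencies, n=100)[98]  # 99th percentile -- the spike shows up

assert requests_total == 100
assert p99 > avg    # the percentile, not the average, is what alerts you
```

A meter would be the same counter divided by a time window (requests per hour), so it isn't sketched separately here.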
The number of requests per hour we've handled, the number of users that have visited us in the last hour — those kinds of metrics. Now, I'm sure I would have gotten this question if I hadn't covered it: why are we not collecting metrics in Selenium? It's tricky. It's not a problem we haven't tried to solve, but it is tricky. Why would we collect metrics? From a Selenium or project perspective, I would collect metrics to know what is failing, how often it fails, which feature is used the most and how often — that kind of information. But to instrument those metrics, I would be gathering data from users, because only then would it make sense. And that's tricky because, first, the data has to be anonymous: if my code is running on someone else's machine, we don't need the user's details. Second, we need permission from the user to do it at all. And even if all that is well and good, in an open source project we want to keep everything open, so the process of storing the data for analysis also has to be open: where do you store the data, what are the API keys, what are the credentials to talk to the database? Keeping those open is again a risk. It's a tricky problem to make the whole process totally open and still get accurate metrics. The data is only as good as its source: say someone learning Selenium runs a test in a loop, or deliberately makes something fail because they're doing test-driven development the other way around — we don't know whether those are proper metrics we can actually use. That's why it becomes tricky. But if you have your own application that you fully control, not a service or library you provide to others, then it's good to have metrics, because they are pretty useful.

Sorry to interrupt, Pooja, you have eight minutes remaining. — Okay, thank you.
So now comes the observability pipeline. Again, this is very similar to what we've already seen. We saw instrumentation in Selenium: we've added the traces, we've added the event logs. If you were to add metrics in your application, that's instrumentation too. Then we added some external dependencies; if you remember, that becomes your reporter. Data is collected, and it has to be pushed or written somewhere, in batches or line by line, however you do it. In a large-scale system you would do batch processing, because writing line by line is not efficient. And it's up to you: you might want to write the data directly to a storage layer and then use it, or have something in between as a buffer in case any component is down. So you could have a pipeline, a caching area, a staging area, whatever you like to call it, where you hold this information until it's written successfully over the network to the storage layer. Then the storage layer itself. Earlier, Jaeger was doing everything: it was doing the reporting part, and you could visualize in it. But if you're running at scale, these components have to be separated out, so that the volume of data being generated almost every second can be handled. So you'll have a storage layer that understands this kind of data, or at least stores it in the required format. And to reap the benefits of observability, you need to visualize the data, which can then be used for alerting, for your on-call workflows, or for figuring out potential bottlenecks, that sort of thing. So this is the typical pipeline. It fits Selenium, and it fits any other application that has this data stream flowing through it. Now, one last quick demo: observability as a feature. Let's think of a use case, and again Selenium fits into this. The reason we have observability in Selenium is that when you run the Selenium jar, it's a black box.
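The buffer-then-batch step of that pipeline can be sketched as follows. This is a minimal illustration, not any library's real exporter; `storage_write` stands in for the network call to your storage layer, and all names are my own.

```python
class BatchReporter:
    """Buffers telemetry records and writes them to storage in batches.

    Records stay in the staging buffer until a write succeeds, so a
    temporarily unavailable storage layer doesn't lose data.
    """
    def __init__(self, storage_write, batch_size=100):
        self.storage_write = storage_write  # callable taking a list of records
        self.batch_size = batch_size
        self._buffer = []                   # staging area until a write succeeds

    def report(self, record):
        self._buffer.append(record)
        if len(self._buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self._buffer:
            return
        try:
            self.storage_write(list(self._buffer))
        except IOError:
            return                          # storage down: keep buffering, retry later
        self._buffer.clear()
```

A real pipeline would also flush on a timer and cap the buffer's size, but the shape is the same: instrument, stage, batch, store, visualize.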
So now, with observability, you're getting the insights, and you have more control when you're running it on your own premises. But what if some other company provides the tool as a service? For example, BrowserStack provides the infrastructure to run tests, and we can do that using Selenium. What if the user still wants some insight? In that situation it's difficult. So what they have done is provide observability as a feature. Let's have a quick look. Again, we are running a similar test; in this case, I'm just pointing to the cloud vendor that I have. I'm running a simple test. The session is created, things are going okay, and the test finishes successfully. This is a dashboard showing the test I just ran. Apart from providing the different kinds of logs that I need, it's also giving us the additional telemetry for the session, which I can go ahead and download. So what I've done is download those logs, and now I'm trying to visualize them. It's very similar to what we've seen earlier. This is my Jaeger; you can also run it in a non-Docker way by downloading the Jaeger binaries. Let me speed up, apologies. So this badger.tar.gz is the downloaded logs. Like I said earlier, the tracing information along with the event logs is provided as JSON, so if that is downloadable, it can be used by any tool to do what we just did. Now we are doing exactly the same thing: we started Jaeger just to read those logs, we have it up and running, and we go ahead and visualize it. So the same power is in our hands. If you think about it, running something on the cloud was a black box; you didn't know what was happening. Now you get some insight into that.
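Roughly, that "start Jaeger just to read those logs" step could look like the sketch below. The environment variables are Jaeger's Badger storage settings for the all-in-one image; the local `./badger` path and its key/data layout are assumptions about what the downloaded archive contains, so adjust to match yours.

```shell
# Sketch: point a local Jaeger all-in-one at downloaded Badger trace data.
mkdir -p ./badger
tar -xzf badger.tar.gz -C ./badger

docker run --rm \
  -e SPAN_STORAGE_TYPE=badger \
  -e BADGER_EPHEMERAL=false \
  -e BADGER_DIRECTORY_KEY=/badger/key \
  -e BADGER_DIRECTORY_VALUE=/badger/data \
  -v "$(pwd)/badger:/badger" \
  -p 16686:16686 \
  jaegertracing/all-in-one:latest
# Then open http://localhost:16686 to browse the downloaded traces.
```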
But what it's running internally, and what is of interest, like Selenium itself, we're now getting more insight into: what went correctly, what didn't work out, was everything okay, what did we send in? The same questions get answered. So just to conclude: if you have your own application, you have a lot more control. But if you're providing an application as a service, you can provide observability as a feature so your users can leverage it, the way Selenium is already doing it. And if you have a service that other services build on, the way cloud vendors build on Selenium, you can abstract the data out and provide it to the user. As long as the observability information is available, it can be surfaced in different ways, and there are plenty of uses for it, as we discussed. So that's it from me today. I hope I have not overloaded everyone with too much information. Thank you so much, everyone; I really appreciate everyone joining in. I think we have a few minutes for questions. Krishna, could you help me out? Yeah, so we have one question currently, from one of the attendees: can you tell us about the port numbers on the left and right? Okay. The port numbers on the left and right, I think you saw that in the Docker bit; it's a Docker concept. When you have ports inside the Docker container that you want to expose on your host system, the port on the left is your host system's port, and the one on the right is the port inside the container. There don't seem to be any more questions yet. If anyone has a question, we still have time; you can post it in the Q&A section. Ah, here's one: can we add the Jaeger UI with Selenoid? I think you should be able to, as long as it's using Selenium underneath. If it's using Selenium and traces are on, you should be able to do that.
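The left/right port answer in one line, using Jaeger's UI port as the example (the host-side 8080 here is an arbitrary choice):

```shell
# -p HOST:CONTAINER — the number on the left is the port on your host
# machine, the number on the right is the port inside the container.
# The Jaeger UI listens on 16686 inside the container; here it is
# exposed on the host as port 8080 instead.
docker run --rm -p 8080:16686 jaegertracing/all-in-one:latest
# The UI is then reachable at http://localhost:8080
```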
No further questions yet; let's wait a few seconds in case anybody has one. Yeah, here's one: can we have this kind of structured logging for JavaScript? It can be done. Currently we have this for Java, and we have seen a few people use it. But if there are people in the audience who are interested in having this sort of structured logging available for JavaScript, the JavaScript bindings can be instrumented. We don't have that right now, but we will definitely consider it for future releases. Feel free to make a feature request on the Selenium repo and we'll be more than happy to discuss it and take it further. Next: can this be used with Appium? Not yet. I think we got this question yesterday during Manoj's and my workshop as well. Appium is going through some restructuring, but it's a conversation we can definitely have with them for future releases. It could be extended, but honestly it depends on Appium's implementation and how they use the underlying Selenium. Next question: what are some practical cases where tracing and logs can be helpful as a user? Yeah. Like we said, if you have a grid that you're running at scale, imagine a grid with 100 nodes and a test suite of 1,000 tests. You have so many requests going in, you don't know what is happening where, but you see a set of failures. Maybe those failures are not rooted in your tests; something has happened on the browser side, or something within Selenium or the grid has not worked out correctly. Maybe a node crashed, or the distributor was not able to find a particular node; it could be a problem with the infrastructure you set up. Getting that information is where you can use tracing. Like I showed, it was not able to find geckodriver on my machine and the session crashed; maybe you set up a node where the correct drivers are not there for that session.
And you'll be able to dig deep and see exactly what went wrong. It could also be that when you're sending requests over the network there are latencies, so you'd be able to study things like: oh, requests at this particular unit of work are slowing down; maybe there is a problem with the network, or maybe it's a transient error, because I was seeing those latencies yesterday but I'm not seeing them today. That makes this data very useful. Obviously, a single data point is not enough to identify a trend or a pattern, but when done at scale, that's when you get the deeper insights. We have run out of time, so thank you so much, Pooja, for sharing your experience with us today.