All right, welcome everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, a senior developer advocate at HashiCorp, where I focus on all things infrastructure, application delivery, and developer experience. It's always a good time. Every week, we bring in a new set of presenters to showcase how we work with cloud native technologies. They will build things. They will break things. And they will answer your questions. Join us Wednesdays at 11 AM Eastern Time. This week, we have Michael Haberman here to talk with us about trace-based testing with OpenTelemetry. Also, join us for KubeCon + CloudNativeCon North America Virtual, October 11th to 15th, to hear the latest from the cloud native community. Some housekeeping: this is an official live stream of the CNCF and, as such, is subject to the CNCF code of conduct, which simplifies down to: please be excellent to one another. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all your fellow participants and presenters. And with that, howdy, Michael. I'd love to hand it over to you to kick off today's presentation. OK, thank you. So hello, everybody. My name is Michael Haberman. I'm the co-founder and CTO at Aspecto. And today, I would like to speak with you about an open source project that we created. This project is based on OpenTelemetry, and it's about how to use OpenTelemetry in your testing phases, rather than only for the production issues that you're having. So most of the time that you use OpenTelemetry, you do that to collect traces, metrics, and logs. And you use it basically when you have a production issue, when you're trying to debug, to understand, to troubleshoot what's happening with your system. And that is super important and super great, and we do it ourselves. But this kind of raised a question for me.
We are paying a lot of money to have this data, to collect it, to store it, to analyze it. It's really expensive. And we're using it only when we have issues, when something broke. So then I asked myself: OK, I'm paying a lot for that. What else can I do with it? Maybe I can use it in other places during the development lifecycle. And looking at OpenTelemetry, we are collecting data at runtime. So when do we have more runtime scenarios? We have more runtime scenarios when we are working in our local environment and when we are running tests. So then it got me thinking: when I have a production issue, I'm relying on OpenTelemetry data to understand how my application performs, how it behaves, whether it does what it is supposed to do. And when I'm running tests, I'm also trying to use runtime behavior to validate: is my application doing what it's supposed to do? So I thought, OK, how can I integrate this OpenTelemetry data into my tests? And with that, not only use it when something doesn't work, but also use it to validate that it's working the way it should. So let me share my screen and show you a diagram, to make sure we are on the same page about what type of tests I'm referring to, the different approaches you have to testing microservices, what the benefits are, and what you should look out for. So I have here an imaginary architecture, but quite a common one for a distributed application. You have a service that you're about to deploy in your CI, and you want to run some tests against it; that's the service under test. It's a process that you spin up. It's communicating with some third-party API to do something. It also has a downstream service that it's relying on. This downstream service gets HTTP calls from the service under test, takes those calls, and communicates with AWS S3 or some other cloud service out there. And this is your application.
And you want to validate that the application works. Specifically, I'm referring to integration tests, which for me means API testing, UI testing, and end-to-end testing. So you have a test runner that is going to invoke some activities, going to send some network traffic to that service in order to activate it in some way. Could be HTTP, could be UI, could be through Kafka messaging or whatever. So I'm basically referring to any type of network-level testing at the end of the day. So let's review my options when I'm going to test this service. The first and probably the most common one would be to use mocks, either to mock only the things that I don't control, the external stuff, or to go all the way and completely isolate my service. Then I have process number one, the test runner, sending API calls to process number two, the service under test. And this thing is completely isolated from the world. This is super useful. It's so easy to spin up. You just run the service, mock whatever dependency it has, and you're good to go. You can test it. From an operations point of view, that is as simple as it can be. From an application perspective, when we decided to mock, we gained a very significant amount of stability, because my test will always get the same response, the same data structure, from the mocked third-party API. However, I'm not actually testing how the service performs. I'm testing how the service performs in one specific scenario of how the third-party API would respond: the same data structure, most likely the same values, and also the same amount of time to get a response from the third-party API. The same goes for the downstream service and the AWS SDK. So that's what's good and bad about it. And from an assertions perspective, what tests can I run? The test runner is going to, let's say, send an API call and then get a response.
I can only validate the response. And I'm not trying to say there is anything wrong with this approach; it has its benefits and it has its drawbacks. If you opened my CI in different services, you would find tests like that. Those are great. But we can have another approach. And this approach says: I want to test the whole thing. I want my test runner to send an API call to the service under test, and then real HTTP calls are going to run between all of those components. From an operations point of view, that's a nightmare. You need to spin up so many things. You need to make sure all the configuration works. When it doesn't work, there is always the argument: is it the application, is it the DevOps? So you have a lot of things to do. However, when the test runs and when the test is stable, you get a real use case. You can see the whole view of your application, how it's performing, and hopefully how it's going to perform in a production environment. And because what I have here is not mocked, it's really real: if the downstream service is going to upload a file or an object to S3, then I can test not only the response from the service under test. I can also write dedicated code in my test runner to test the side effect. I can go to AWS S3 and ask: hey, is it there? Is it in the right format? Does it have the right permissions, or whatever I want to test. And that's super important, because it gives me the whole view. Just to give you an example: if somebody purchases something in my system and I want to send an email with an invoice, and the invoice should be stored in S3, you want to validate that the invoice is there. So now you can really make sure it's really, really there. The drawback in this scenario would be two things. First of all, I had to write some code that goes to S3 and does all this validation.
So if I had a bug in the code putting the object in S3, I may as well have a bug in the code checking if it's there. It's code like any other. It has its own bugs. Second, we are relying on being able to check it in the third party that we're talking to. Maybe we can't check it, either because we don't have an API for that or because it's something that is not persistent. In S3, you can go and validate: hey, is it there? But if you send an API call to a third party, I can't go to the third party and ask it: hey, did you get an API call from me in the last second or so? So this is the problem with this approach. So now I want to introduce you to what we did. Again, we're talking about distributed applications that have OpenTelemetry in them. We have a test runner. It's creating real HTTP calls. And those HTTP calls generate traces. Those traces represent the activities that happen in the service and between the services. And if I give you access from the test runner to get the actual trace, and to validate based on that trace, then I'll be able to take the trace and make sure that everything worked. It's the same thing you would do if you had a production issue in this system: you would go and look at the traces and ask, OK, what's wrong here? Rather than doing it manually, we can do it automatically in our testing. So this is the theory behind it. And I think we talked enough, and now we can jump in to see some code. The code that I'm going to show you would be the example that you see here. We will see two services communicating with one another. You will also see the test and the test runner. And if time permits, we will take a look at how the open source project, which, by the way, I never said its name, it's called Malabi, is actually implemented, which is kind of simple, to be honest. OK, so diving into our code. If we take a look at our code, we have three files here.
The service under test, the downstream service, and our service-under-test spec, which is our test file. The service under test has a few API endpoints. This is very much a demo application, so look at it as a demo. We have a /todo endpoint, and the /todo endpoint sends an API call to some third-party API and then returns the title. We also have an /invoice endpoint. The /invoice endpoint sends an API call to our localhost /data route; that would be the downstream service. The downstream service, on /data, is going to take an object, put it in S3, and then respond with a good status. And then we have /users to fetch all the users. We have a bit more complicated scenario where we're trying to fetch a specific user with a specific first name, which we will review in more detail a bit later, and we can also create a new user. So that is what the services are doing. And in each service, you can see right here that we have Malabi imported, and we'll dive into what it's doing in a second. So let's take a test, for example. Here you can see the test for /todo. And /todo sends an API call to some endpoint. So I started by calling /todo and validating the response. So far, this is a typical API integration test. You would probably go right here and start to validate the response data and make sure that it's in the right structure, with the right values, or whatever you're trying to validate. And then, on top of that, you get the ability to use Malabi. And this is the open source part. Maybe one of the most important things for me in this project: I'm not trying to tell you that you should start writing your tests in a different way, or throw away your tests and move to a new test framework. Not at all. I'm trying to tell you: take what you have today and extend it. It's not going to replace anything. So the first thing that we are doing is getting the telemetry repository. This is where the magic happens.
This is where the test process communicates with the service under test, collects the telemetry data, and serves it in your test. So in the test, we have access to our telemetry data. And here we are running our first assertion: go to the telemetry repository, take the spans, take the outgoing HTTP calls, take the first one, and assert that it sent an API call to this specific route and that the status code was 200. Again, these are the internals. We have two API calls: the test runner's call to the service under test, and the outgoing call that the service sent. So we are now testing the internals of that service. And this tries to be as convenient as possible. You have access to spans, and a span is basically an event; every interaction between services or between dependencies is an event. And here you can see a whole, very long list of the things that you can get, whether it's AWS, database operations, messaging systems like Kafka, SQS, RabbitMQ. So basically you get a big list of things that you can validate on. Once you've chosen which type of spans you want, so any AWS span is now accessible, then I may want to get specifically an S3 interaction, then I want to get the first one, and now I can validate any kind of attribute it might have. So this is the simplest test. And maybe before jumping into the rest of the tests, I'll review a bit what it means to set this up, because the setup should be fairly simple. So first things first, we assume that OpenTelemetry is installed in the service under test, or any other service that is running. To make your life easier, we chose to wrap OpenTelemetry using Malabi. But this is purely OpenTelemetry. If you already have OpenTelemetry, you don't need to run Malabi's instrumentation. You can tweak the OpenTelemetry setup you already have a bit, and then it would work.
So the way it works is that Malabi collects the traces, collects the spans, and allows them to be fetched via an HTTP call. If we go back to the test that we just looked at, we have here the get-telemetry-repository call. It's basically a function that fetches the remote telemetry. Malabi gives you the ability to fetch the remote telemetry with a specific port or base URL. So we are fetching all the telemetry data. And then, before we start any new test, we clean the telemetry repository so that traces won't leak between different test runs. We call it a telemetry repository and not a span repository because OpenTelemetry is not only about tracing and spans; it's also about logging and metrics. So someday we may extend it to collect not only traces but also metrics and logs and so on. OK, so just to go through the process: we sent an API call, which caused the service under test to collect the traces and keep them in memory; then we fetch them from memory and assert based on them. And before running a new test, we just clean it, so we won't have traces leaking between tests. OK, so let's go through more types of tests that we may write. You will see that the pattern is very similar. If we look at /users again: we're sending an API call to the users endpoint, fetching the telemetry, and this is where we start to have things that are not HTTP-based. Sequelize, if you're not familiar with it, is an ORM, a JavaScript/TypeScript ORM, for communicating with your database. So basically, we are validating that we are grabbing a Sequelize activity. We assert that there must be only one, so if you have a bug and suddenly, instead of one query, there are going to be 10 queries, your test is going to fail. And then you assert that it's a SELECT, and you assert that the response is an array. Imagine what you would need to go through without having the ability to look at your traces.
And here we go to an even more complex scenario, a scenario where we're calling /invoice. When you call /invoice, you have two hops: the test framework calls the service under test, the service under test calls our downstream service, and the downstream service is going to call the AWS SDK. So here, what we are doing is, first of all, validating that we are in fact calling /data, and also validating that it was the only call and that it was successful. We are able to make sure that we had exactly one interaction with the AWS SDK, and that it was specifically an S3 interaction. We can take the payload that we sent to S3 and validate that the key is the right one. So that is being able to go all the way through. The use case I'm showing right now is specifically API calls, but it may be interesting to remind you that this could also work with end-to-end tests. Let's assume that you're doing some UI testing, you fill in some form in your UI, and now you want to make sure that, I don't know, an email was sent; you can do that. And this is really, really powerful. Yeah, cool. So I do want to show one more interesting test before diving into how Malabi is implemented. This is a use case that we encountered several times, with people having problems with it. And that is when you have a database and a cache, Redis, kind of protecting it, and you want to make sure that not all the requests end up in your database, but rather are served by the cache. So what we're doing here is sending an API call to /user, basically creating a new user and validating it. Cool. Then we're going to fetch the new user that was just created. We're fetching it by the first name and validating that that was successful.
Now, those two steps, from a contract point of view, assure you that everything works. But now we want to make sure that the internals work. Nobody guarantees you, in this portion of the code, that the data is going to be found in your cache. And I saw companies having downtime because the thing that was supposed to be in the cache wasn't present in the cache. And it's hard to test. It really is. So let's see how we can do that. Again, we call get telemetry repository. We now have all the activities available. We fetch both the Sequelize activities, the database ones, and the Redis ones. And first things first, I want to make sure that the first Sequelize interaction, the first DB operation, was an INSERT, because our first API call was inserting into the database. Then the next thing that should happen is that we go to Redis and try to fetch Jerry from our cache. We validate that we are requesting Jerry in the right format, the right query. And we also expect the result to be empty: we expect it to not be present in our cache yet. Then we again query the database and run a SELECT statement, because we want to fetch the user from the database, and then we expect to push it to Redis. Let me show you; for me, this is a very good use case for Malabi, because it really shows the power of what you can do with it. And maybe just to show you again how the code looks. So this is how the code looks. We called /user with Jerry. Jerry was present, so we proceed to this portion of the code. We first check if it's present in our cache; if it is, we just respond with it. It's not present, so we need to fetch it from our database. And once we fetch it from the database, we push it back to Redis.
So this ensures that the whole process, checking if it's available in the cache, if not pulling it from the database and then pushing it back to Redis, is working, and everybody is happy and we're good to go. I think that at this point, I'll jump into showing a bit of how Malabi itself is implemented. Again, it's kind of easier than expected, I would say, because there isn't a lot happening there. We have a few modules here. The first one is basically our ability to start the instrumentation. If you remember, in our service, the first thing that we did was call instrument. The instrument function basically spins up OpenTelemetry with very few changes. Change number one is that we actually set up a sampler. Because Malabi communicates using HTTP, it's going to generate spans by itself, and you don't want to see Malabi's spans in your testing. So basically, what we are doing is: if the HTTP target of the trace starts with /malabi, we don't record it. So we won't put stuff you're not interested in into your tests. The second thing is that we use an in-memory exporter. Our in-memory exporter collects the spans, stores them, and waits for them to be fetched. And I'll show you how it looks. Our memory exporter is a very simple OpenTelemetry exporter, a plain, pure OpenTelemetry exporter, with two functions, get spans and reset spans, which call the in-memory exporter's functions. And then we enable all the auto-instrumentations available, so we'll catch anything possible. And we do that with two important settings. The first one is to collect all payloads. That gives you the ability, when you're sending an API call, writing to a database, or uploading a file to S3, to look at the payload itself and assert on it. So it's not only giving you the ability to validate the interaction, but the actual data being transferred.
So this is why we set collect-payloads to true. And the second thing we do is suppress the internal instrumentation. This is kind of a funny thing that maybe some of you won't be aware of, but when you're doing something like an AWS putObject, that creates an HTTP call underneath, and that HTTP call is going to be caught by your instrumentation by default. And again, you don't want that; you're not going to try to make sure that the structure of the AWS SDK's internal API call is correct. We don't want stuff that we're not going to test, so we suppress that instrumentation. So that is the part where we collect the data. Then, in order to serve it, we have an HTTP server, and whenever you call /malabi, it goes to this router. This router has two simple endpoints: GET /spans, which returns the spans we collected, and DELETE /spans, which of course deletes them from the memory exporter. And you can see here that we are using Protobuf. The reason we use Protobuf to transfer the traces from the service is that we want to support different programming languages. We don't want it to have to be Node.js all the way. If you have Node.js in your test framework, but you're testing a Python or Java service, that should be doable. Protobuf makes sure that the data remains in the right structure all the time. And yeah, so that's the HTTP service. The function that we saw earlier, fetching the remote telemetry, is very simple: it calls /malabi/spans and does this transformation, and the same goes for clearing the remote telemetry. So basically, that's how Malabi collects the data and transfers it from place to place. The other thing that we did is make your life easier when it comes to finding what you're looking for.
To filter all the spans for HTTP only, for instance, is something that you need to know how to do. You need to know OpenTelemetry, and sometimes know it quite well, to find the right span. So we wanted to make your life a bit easier. These are the functions that we use in order to find the right thing you're looking for. So if you're looking for a message received, say your service receives messages through Kafka, you would do spans.messagingReceive, and we would filter only the right spans. We also wrap the spans themselves to make things easier here as well. For instance, if you gather the headers, that's an annoying object to work with; we wanted to simplify it. And also, if you're trying to find a specific attribute within the span, again, just to make your life a bit easier. You do still have access to the raw data itself. Let me show you quickly how it looks. If you go to the Redis activities and take the first one, you can access the raw attributes and then do whatever you're looking for; so if we missed something, if we have a bug, or if you have manual instrumentation, you'll have access to it. So that's the open source project. I'd just like to give you a bit of a roadmap of the main things we are going to work on. Currently, the test runner communicates directly with every service, which could be rather annoying. So we want to have a kind of backend. You're probably already shipping the traces somewhere today, so we want to spin up some tracing backend such as Jaeger or Zipkin, and then Malabi will communicate with Jaeger. The setup would be even easier: you just point the test runner to Jaeger, you point your services to Jaeger, and everybody is happy. So that's one thing we're going to add. Also support for metrics and logs, and support for more languages. Right now we support only JavaScript, as you saw.
And lastly, we very much want to add instrumentation to the test frameworks themselves, so that you will have the test name attached to the spans that are generated. Then, when you look at a test, you'll be able to see which spans it generated, and you could have all of that together. And I think I didn't show you the repository. The open source project's name is Malabi, and you can go here and find whatever you're looking for. Cool. So that was my talk about OpenTelemetry and testing. Thank you so much, Michael. With that, I do have a few questions. But first, if you're just tuning in, or if you've been with us, thank you for viewing. If you have any questions for Michael, if you want to talk about traces or anything like that, please feel free to throw that into the chat and we'll get those questions asked. Thank you for sharing that repository as well, Michael. I know one question that I saw was about the name of the open source project, and that was Malabi, and where you get it is on GitHub. Fantastic. So if you do have any questions, please feel free to throw those in the chat. Otherwise, I have a few here myself. My first question is: how do we use tracing data today, and what are some things that can be done with tracing overall? Yeah, I think, when we're looking specifically at things like OpenTelemetry, you mostly use it when you have a production issue and you need to fix it, and you need to fix it fast. If you asked a manager in an R&D organization how they measure whether OpenTelemetry works, they would probably say something like MTTR: mean time to resolve, recover, repair. So that's how you're using it today. And I think every time that you're making this investment to collect data about your application, you should always look for more ways to use it. And the main thing that interests me is what we can do in pre-production.
What we can do with it in tests, what we can do with it in our CI, in a local environment. I really liked that focus that you had on testing, and really showing examples of how you could get that implemented. I do recall seeing one tweet; if I can find it, I'll share it a little bit later on my handle, but it kind of goes into that: one library that was released for Go, in this case, and talked about how now you can include this for your testing use cases and things like that. I really like that you pointed out that this is something that you can really factor in, or refactor in, and get a sense of what's going on with your code and with your overall stack. And you don't have to necessarily push to production to get some of those insights. Now granted, it's nice to have that instrumented in production so you can see what's going on too, of course, but I think that that's really fantastic. Have you seen any specific issues solved around implementing tracing, or some success stories on this front that you might be able to talk to? Well, with OpenTelemetry, yeah, a lot. And specifically with testing, I think the people who use it are mostly using it in UI testing, because when you're doing UI testing, you want to be able to understand what happened at the other end of your system, the other end from the UI. So a use case that I know of: somebody filled in a form using some UI testing tool, and then the form sent an API call to a service, then the service sent a Kafka message, and another service would consume that message and send an email to the customer. They wanted to make sure that the email really got there. They tried all kinds of things, they had these flaky solutions, and then they just used tracing, which they already had. It was really simple and straightforward. Interesting, interesting.
I think, have you also seen, like, I know when it comes to Kubernetes and some other things, you're able to take those metrics, that data, those traces, and use that to, say, scale your workload horizontally or vertically or in some fashion. Are there any use cases that you've seen like that, where people are using OpenTelemetry to then make modifications either to their infrastructure, or maybe run their code a little bit differently? Have we hit that point yet? So I never saw or heard of something like that, but I can definitely see it happening. I did meet a company who does gradual rollouts, and they use the tracing to determine whether to proceed with the gradual phases. Actually, I think the ecosystem is not quite there; they had all sorts of problems getting it done, but I think it was a very interesting use case. That is interesting. And that's cool to hear about. Yeah, I'm excited to see more of what evolves on that front and what we see come from the community and all these different use cases. I feel like there's no limit to what we can potentially see. Awesome. One question I had was: why is it possible, or preferable, to solve these issues with tracing as opposed to other tools or architectures or methods? So I think it's the amount of work that you need to put in to solve those things. You have an application, and this application is already telling you what it's doing. So when you grab this story of what it's telling you, the story told through traces, the ability to validate it is already there; it's simple. If you need to start developing dedicated code to validate tests, so in the example that I gave, you write dedicated code to fetch whatever was uploaded to S3, you need to fetch it and then validate it, then you now have more code to maintain. The next developer needs to work harder to understand how it works, as opposed to traces, which are just already output by the application.
So it's just making your life easier. I will say that if you put in enough work, you can do everything without it. It's not like it's doing something that couldn't be done. It's just already there, so just use it. I like that: it's already there, just use it. I absolutely agree with that. And I imagine it helps out team members as people move into new projects; this is just yet another tool you can use to get that introspection into your code or your stack when you might not have familiarity with it. Like you said, if you are mindful and you have that full understanding of your stack, what those requests look like, what's going on, and why, then you've got good context. But if you don't, tracing can really help out and hand the baton off to the next person working on a project. That's cool. One question I got was: what are some solutions that people use for storing data obtained with OpenTelemetry and these types of tracing tools? So you mean what database people would use to store traces? Yeah, either different solutions, basically just repositories storing this so they could go back and look at historical builds, or compare; kind of, what does that look like? Yeah, so eventually, either you use a vendor that stores the data for you, or you store it yourself in some database. I know most people would use Elasticsearch for that, which is super convenient because you already have Grafana or Kibana on top of it, and it allows you to do dashboarding, alerts, whatever you want to do with it. So that's a great approach; that's what I chose to use. I know some other people are using Cassandra, but I think the recommended thing is Elasticsearch. Gotcha, that makes sense, and it makes it easy to search after the fact too, I'm sure. Yeah, yeah.
Awesome. I do have a few more questions here, but if anyone watching has questions, please feel free to throw them into the chat and I'll be more than happy to ask them. My next question was: as people start working with OpenTelemetry and tracing, what are some common pitfalls, frequently asked questions, or common misconceptions that come up?

So I think there are three things. The first, the "why do I need it?" question, would be: how is that different from logs? In some sense you can get almost the same thing done using logs, so that would be number one. Number two would be: how is it going to affect my performance? And number three would be: I implemented OpenTelemetry, but I don't see all of my data the way I wanted it to look.

Looking at logs versus traces, there is a lot of stuff to read around that, but I would say that logs are great at telling you what a single process is doing, the story within that process. OpenTelemetry is about the context, about the path between services; it tells the story across services, not within one service. About performance: yes, it's going to affect your performance, like any other library that instruments your service. If you add an APM, it affects your performance, but I think the performance impact is definitely worth it, and you can control it by controlling the sampling rate. You don't have to collect 100% of what's happening; you can take a portion of it. And about how the data looks at the end of the day: first of all, OpenTelemetry is quite a new project. It's not a very mature project yet, so it has bugs and issues that you may need to put some work into fixing.
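The sampling-rate point can be sketched in a few lines. This is a simplified, self-contained version of ratio-based head sampling, loosely modeled on the idea behind OpenTelemetry's trace-ID-ratio sampler (the real SDK's implementation details differ): deriving the keep/drop decision from the trace ID itself means every service on the call path makes the same choice for a given trace.

```python
import random

# Simplified sketch of ratio-based head sampling: keep a trace iff the
# low 64 bits of its trace ID fall below ratio * 2**64. Because the
# decision is a pure function of the trace ID, all services in the call
# path agree on it without coordination.
MAX_ID = 2**64

def should_sample(trace_id_low64: int, ratio: float) -> bool:
    """Keep the trace iff its ID falls below ratio * MAX_ID."""
    return trace_id_low64 < int(ratio * MAX_ID)

assert should_sample(0, 0.1) is True            # smallest possible ID
assert should_sample(MAX_ID - 1, 0.1) is False  # largest possible ID

# With ratio=0.1, roughly 10% of random trace IDs are kept.
random.seed(42)
sampled = sum(should_sample(random.getrandbits(64), 0.1)
              for _ in range(10_000))
print(sampled)  # close to 1,000
```

Lowering the ratio directly lowers the instrumentation cost, which is the control knob Michael mentions for keeping the performance impact acceptable.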
And in some specific cases, you need to set expectations about what you're going to get from OpenTelemetry.

Awesome, awesome. I'm kind of excited to see what comes out of that too, because looking at OpenTracing and OpenCensus, I was really excited to see those communities come together and converge on what's important, what are things we can measure, and how do we help elevate others within the same space. So in that vein, what do you think are some of the good next problems the tracing community should focus on that might help out the community as a whole?

So first of all, everything should be released as stable. Right now, for instance, tracing is stable, metrics is in beta, and I think logs is in alpha. So OpenTelemetry isn't fully released yet; I think that's what needs to get organized first. Then I would say we need to make sure the data we are collecting is as high quality as we can get, because if the data you're collecting isn't good enough, the value that OpenTelemetry offers is capped by that.

Gotcha, that makes sense to me. And it can be a hard space to solve these problems in as well, because context is always key when trying to troubleshoot or find out some of these things. So it's interesting and exciting. With that, what are some good ways to get started? Say people want to contribute or get active within the community: where are some places they can meet up? Meetings, repositories, where can people get more information on this?

So, very funnily, my next session in ten minutes or so is about getting started with OpenTelemetry; we're doing an OpenTelemetry bootcamp. I think that if you're starting out, start by reading the docs and just get familiar with it.
Yeah, get familiar with the docs and the terms, follow the getting-started guide, and look for a good guide on YouTube for getting started. I think that would give you everything you need.

Awesome, awesome. Well, thank you so much, Michael. This was just incredibly fascinating, and it was great to see you walk through the code. Thank you for taking the time to show all of us and talk more about OpenTelemetry and tracing, and really how to get started with it in your stack. Really, really appreciate it.

Thank you, thank you very much. Thank you for having me.

Awesome. Well, thank you so much, everyone, for joining the latest episode of Cloud Native Live. It was great to hear from Michael about trace-based testing with OpenTelemetry. Thank you all for jumping in and attending; we really liked the interaction and questions from the audience. And again, we will bring you the latest Cloud Native code and presentations every Wednesday at 11 a.m. Eastern time. Next week, we will have Scott Fulton presenting Next Generation Observability with Open Source Monitoring. Thank you so much for joining us today. We'll see you next week. Have a good one, everybody. Bye-bye, thank you.