Thank you, Christina. Welcome to everyone. Welcome to the webinar. As Christina mentioned, my name is Noah. And I'm going to talk to you today about our new open source tool, which hopefully will also be your new open source tool, for testing Kubernetes applications. So if my clicker works. The agenda we're going to go through today is we'll start with who we are at StormForge, why we created this product, why we care about testing, why it's important to us, and what the problem is that we're trying to solve. We're going to talk a little bit about current testing models and how current testing models are flawed. And yes, they are all flawed in some way. We're going to talk about what we can do about it. Then we'll go on to the project itself. We'll talk about how it attempts to solve the problems and then how folks can get involved, either by starting to use the project or by helping out with it. So first of all, who are we? We are StormForge. You may have known us previously as CarbonRelay up until last November. We are a market leader in performance testing and Kubernetes application optimization, because those have to go hand in hand. If you don't have good testing, it is difficult to optimize. And if you are doing testing without acting upon it, then why are you doing that testing? So we've been doing this for a while now. We are a company that is backed by a strong machine learning presence, because optimization is difficult to do by hand. So letting machines make the decisions, as opposed to having to try and juggle 10 or 12 different parameters at the same time, only makes sense. We are all in on the cloud native ecosystem. As you can see, we are focused on application optimization of Kubernetes applications and all the related products that go along with that. And we're also all in on open source, as you might have guessed by my job title of open source advocate and the fact that we are presenting an open source program and project to you today.
If you want to know more about us, you can find us on our website, stormforge.io. We have a public Slack. And we also host bi-weekly office hours if you want to just come in and ask us questions about any or all of these topics, or none whatsoever. Anything that's tangential we'll also happily take. So as a group that is focused on testing and optimization, we want to first establish what optimization is and how it got us to this particular question that we wanted to solve. So optimization, at least as I'm defining it, is getting your application to behave how you want it to. It is however you want your application to act once you have finished the act of optimization. Whether you're trying to normalize for cost, whether you're trying to increase your performance, whether you're trying to get better resiliency, whatever parameter, whatever property it is that you're trying to get into your application, that is part of optimization. It is your desired behavior. So how do you measure desired behavior? That's an interesting one, because in order to know what that desired behavior is, you have to have some form of measurement. You have to understand the tool. You have to understand the application and understand what actual properties are the desired properties in the first place, which in the world of Kubernetes applications, in distributed systems, in this cloud-native ecosystem, keeps getting harder. Applications are more spread out. We've gone from a world of monoliths to a world of microservices. Every individual piece is behaving in its own unique way. And the idea of how to optimize is more and more complicated as we make the apps more and more abstracted. So in order to understand how those all fit together, we need to have some form of yardstick to do a measurement against. But in order to see that behavior, we have to stimulate the behavior in the first place.
If you don't have any good load, if you don't have a good way to stimulate the behavior that you want to see, you can't see what the response is when you are stimulating it, obviously. If you want to see what your production apps look like, then you have to know what a production behavior looks like. If you want to know what isolated behavior looks like, then you are going to generate load in isolation. It's pretty straightforward. So we are interested in optimizing applications, which means we need to stimulate behavior that will provide us with a response that shows the properties and the qualities that we're trying to optimize for. Which gets us to load testing. As we said early on, load testing is flawed, but load testing is also a spectrum. At one end of the spectrum, you don't really know what you're testing for. Most load tests start with, I'm just going to throw as many requests as I possibly can at the front end of my application, and I'm going to see what the results are. What's the failure rate? What happens when I hit the root of my application? What happens when I hit a particular API endpoint? And you're really not looking at responses. You're looking at more of an overall status, whether or not it's even functioning at all. But when you're looking at trying to optimize for any sort of particular behavior, that's not really relevant. It's definitely not accurate. It's not giving you anything useful. If you were to put an application out in the world, you're not just going to get a large number of people hitting the front page of your website all the time. It's not what's going to happen. On the other end of the spectrum, you've got some understanding about your application. You're making some quality guesses. You have some understanding about what the traffic looks like. You have some understanding what the page architecture looks like or application architecture. 
You're analyzing the flow and the patterns that come into that application, and then you recreate the patterns. And this is going to be a recurring theme throughout this: the fact that you're taking existing data, stepping away from it, and then trying to recreate that existing data, that existing traffic, in the first place. It requires a lot of work. It certainly has a lot of uses. But it does require a lot of engineering time and effort. And you end up with these handcrafted load tests, which require understanding of the application in the first place, which, frankly, a lot of teams are not suited to do. So that brings us on to, as we talk about that spectrum, what are we trying to accomplish with these load tests? And why is understanding of your application an issue? So one of the big problems that we run into is a problem of accuracy. And if you asked your average person on the street what accuracy is, they might fumble through a definition. And one of the things that I think is interesting here is the difference between accuracy and precision. And we're focused on trying to get more accurate tests. If you asked an average person, they might tell you that accuracy and precision are synonyms, that they mean approximately the same thing. But in the world of measurement, they don't mean the same thing. Accuracy is about how on target you are, and precision is about the spread of your measurements, how tightly they cluster together, regardless of whether they're on target. Now, when you are using early tests without any understanding of your application, you are missing accuracy significantly. You're just, as we call it, throwing it at the wall and seeing what sticks. That's not accurate at all, because you don't actually know what you're aiming for. And honestly, in the wild, that's a lot of what load testing is. It's not a matter of whether or not you understand the application's behavior. It's a matter of whether or not, when you perform a test, did it fall over? Is it on fire?
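A quick numerical sketch of that accuracy-versus-precision distinction, using made-up latency samples and a made-up target (none of these numbers come from the talk):

```python
import statistics

# Hypothetical latency measurements (ms) from two load tests, aiming
# at a "true" production latency of 100 ms.
TARGET = 100.0
test_a = [130, 131, 129, 130, 132]   # precise (tight spread) but inaccurate
test_b = [95, 104, 98, 106, 97]      # accurate (centered on target) but less precise

def accuracy_and_precision(samples, target):
    """Accuracy = how far the mean sits from the target (bias).
    Precision = how tightly the samples cluster (standard deviation)."""
    bias = statistics.mean(samples) - target
    spread = statistics.stdev(samples)
    return bias, spread

bias_a, spread_a = accuracy_and_precision(test_a, TARGET)
bias_b, spread_b = accuracy_and_precision(test_b, TARGET)

print(f"test_a: bias={bias_a:+.1f} ms, spread={spread_a:.2f} ms")
print(f"test_b: bias={bias_b:+.1f} ms, spread={spread_b:.2f} ms")
```

The first test is very repeatable but consistently 30 ms off target; the second scatters more but is centered on the target, which is the property the talk is after.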
And if it's not, people just ship it out. And they say, well, it stood up. We assume that it behaved the way that we wanted it to. Looks good to me. And if you are putting in that effort, well, then you're getting better tests. You're getting a better understanding of your application, if you're putting in the effort to do so, in that you know what the results should be. But as we said previously, that takes a lot of work. And since you're stepping away from the initial traffic in the first place in order to analyze it and get back to recreating tests, this is where we make the statement that all tests are flawed, because you've already stepped away from reality. You are not in a world where you're actually working with the real data. You're already working with an abstract set of analysis. So across the entire spectrum, all of your tests are flawed, whether you're doing them in an inaccurate sense or whether you're putting in the work to get as close back to what you started with in the first place. So how do you get past not having that reality in the first place? Well, one of the best quotes that I've seen is simply about testing in prod. It very simply states that when you test in prod, you're testing in reality. And if you're not testing in prod, you're not testing in reality. It's a pretty straightforward binary. And there's a lot of value to be said for both sides. You want to test before you get to prod, but testing in prod is the only way to see what the actual behavior is going to look like. However, why does it not reflect reality when you're testing outside of prod? Well, this is actually a lot more complicated a question than what I've positioned here. It's not the differences in the systems. It's not infrastructure. If it was a matter of infrastructure, then you could just change the instance types that you're requesting. That's not a big deal. That's pretty easily solved.
Is it a matter of how your operational teams are taking care of and managing the systems? Are your dev systems being managed by hand, but your production systems are all infrastructure as code and everything is checked in? Well, we know that that happens in reality, but really that shouldn't be a difference. You can easily solve that by adapting those operational practices to your development environments and your pre-prod environments in addition to your production environments. Now, almost all of the differences are based in usage. And I say almost because there are some other stragglers; the concept of state in a production environment is one of them. But for the most part, it is the usage patterns that are coming into your production environments that are preventing your non-production environments from actually having testing that reflects reality. So all of your testing is inherently flawed if you cannot get production to be simulated in your non-production environments. And there is always a bit of danger in potentially testing in production: maybe you're doing destructive testing and you don't wanna point that at your live production environments, or maybe you can't test against payment card data, or any number of things. So all of that testing is inherently flawed. And we wanted to try and solve that problem, because we wanted to have tests that could optimize an application for how it's gonna function in prod. And we wanted those tests to be able to be run pretty much anywhere, so that we could simulate the prod environments outside of prod. So we've presented this project, which is currently called "VHS", and that's in quotes for a reason. We'll talk about it in a little bit. So what is VHS? VHS is a tool that will take your production traffic and allow you to use it in non-production. So how does it work? What is VHS today? First of all, since we said it is focused on Kubernetes applications, it deploys as a sidecar.
It shows up inside your Kubernetes application and allows you to understand and make use of the traffic that's going into that app. It records and replays any and all traffic that you tell it to. And it does so by storing that traffic in some form of file or manifest, wherever you tell it to. Now, this is in a sort of any-to-any format. So we've taken the ability to record traffic and we've made all of these plugins, whether it is recording the inbound web traffic and storing it on local disk or in an S3 bucket or whatever it happens to be. And we've also taken the inverse, where it is reading that manifest, where it's reading that stored traffic and it's playing it back into your application for the purposes of testing. And really that's the majority of what it is, which is interesting because we saw that there was a problem in a testing environment, that there was a need for getting these tests to work correctly, but we didn't really solve it with a testing application, we solved it with a traffic application, because we found that that was gonna be the most useful way for stimulating the behavior that we wanted to see in the application: by completely bypassing the idea of creating tests and having to understand what the application looks like in the first place. You still have to understand it, that's still very important. If you just throw production traffic at an application and you don't understand what it's doing, you're not gonna get a lot of benefit, but it does take it out of the idea of creating the test. So, where would something like VHS fit into your lifecycle today? You've got this record and replay tool, so you throw it in as a sidecar on your Kubernetes application. If it's receiving web traffic, then it would take that HTTP traffic and store it as a HAR file, a standard HTTP Archive, in the storage of your choice, wherever you wanna put it. And then, as we go back and approach your CI/CD pipeline, it can play back that traffic into any number of stages.
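As a rough illustration of what that stored traffic can look like, here is a minimal sketch that wraps captured request/response pairs in the standard HAR 1.2 envelope. This is a simplification for illustration, not VHS's actual implementation; the field set is trimmed down from what a real capture would record:

```python
import json
from datetime import datetime, timezone

def record_entry(method, url, status, time_ms):
    """Build one HAR entry for a captured request/response pair.
    (Simplified: a real capture also records headers, bodies, and timings.)"""
    return {
        "startedDateTime": datetime.now(timezone.utc).isoformat(),
        "time": time_ms,
        "request": {"method": method, "url": url,
                    "headers": [], "httpVersion": "HTTP/1.1"},
        "response": {"status": status,
                     "headers": [], "httpVersion": "HTTP/1.1"},
    }

def write_har(entries, path):
    """Wrap captured entries in the standard HAR 1.2 envelope and save."""
    har = {"log": {"version": "1.2",
                   "creator": {"name": "capture-sketch", "version": "0.1"},
                   "entries": entries}}
    with open(path, "w") as f:
        json.dump(har, f, indent=2)

# Two hypothetical captured requests.
entries = [record_entry("GET", "https://shop.example.com/", 200, 12.5),
           record_entry("POST", "https://shop.example.com/cart", 201, 40.2)]
write_har(entries, "capture.har")
```

Because HAR is plain JSON, the resulting file can be inspected, diffed, or handed to any replay stage of the pipeline.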
Now, the ones where VHS appears here, this is a typical CI/CD pipeline that I've seen many times. Check in your code, do your build, run your unit tests, integration tests, additional end-to-end tests, optimize the application, and then finally deploy into your production environments. And the areas where VHS is here in this pipeline, this is about where StormForge is mostly concerned in the environments in the pipeline. We wanna help with the optimization, we were focused on what the testing looks like. However, this turns it a little bit on its ear, because while this is a typical CI/CD pipeline, and while this is where it would fit, the very concept of these being stages in your pipeline might change, because you have the ability to replay that production traffic into your test cycle. So this entire test cycle might change by virtue of you being able to throw live traffic at it. And like I said, this is just a typical one that I've seen before. I'm sure that any number of people on this call can come up with new ideas; use your imagination on interesting ways that this could be placed for anything that needs to replay traffic. Maybe you wanna use it for testing, maybe you wanna use it for security, maybe you wanna... come up with your own ideas. So that's what we've got. We've got a framework, really, that does record and replay of any traffic to and from any scenario, ostensibly, but it's a new project. It's a community project and there's a lot of potential directions that it could go. For the core of the framework itself, one of the places that it could go could be to provide additional metrics. Maybe your application isn't running Prometheus. Maybe, for your application, you don't really know how to get that visibility back out. Well, we'd like to throw some metrics into the core of the product so that when you run the test, when you provide that traffic, you can understand what the reactions are to that traffic internal to VHS.
We're also looking at potentially testing things outside of the realm of just the Kubernetes applications themselves. Maybe you wanna test the Kubernetes platform. Maybe this is a sidecar that could be applied to the API server in the kube-system namespace, and you want to test not what the workload of a particular application looks like on Black Friday, but what it looks like when your multi-tenant cluster is being used by thousands of users at the same time, and will your API server stand up to that much kubectl action? There's a lot of potential there for testing things outside the realm of just the app, and that leads us into versions of the tool that are not just a single application sidecar. We've had some discussions around maybe the VHS implementation could be a standalone application on the cluster, or maybe it's not even a containerized version. We've had some people ask us about how would I apply this to a bare metal system, because I wanna test what a bare metal response looks like and I'm not interested in the application. I haven't gotten that far yet. So there's a few different changes that could potentially be on the horizon based on what the community needs because, once again, this is an open source project. This is a community project, and we're looking for a lot of people to get involved to help out with these types of directions. Let's talk about the plugins a little bit. We said that this is an any-to-any scenario. So what if we want other storage mediums? Well, how many different storage types do you wanna have? Today, we don't have a plugin for Azure. If anyone on the call is well-versed in writing Azure storage plugins, pop on by. We'd love to have you. What about the output that is being stored in the traffic manifests? What about encrypting that data? Right now, it's dumping it as a flat file. What about sanitizing that data? What about parsing it in some other way?
These are all plugins, because they're all part of just getting into the workflow. We would like people to come jump into our sessions and tell us: are these things that you find useful, or maybe they're not? Maybe everyone is gonna say, hey, I'm just gonna store this locally and I don't care about encryption. Unlikely, but possible. What about session management? So right now we're doing mostly HTTP traffic. How do you handle HTTP sessions? How are you handling that across multiple instances, multiple replicas within your web tier? What does that look like and how are we handling that at the plugin level? And since I said we're managing mostly HTTP traffic today, because that's what we started with, you need to have a starting point: what does it look like to handle other traffic types? Maybe we wanna be able to capture SQL traffic. Maybe we wanna capture gRPC traffic. Maybe we wanna capture any number of other things. And as a corollary to that, if we're capturing and playing back other traffic types, what if it's not just on the front end of the application? What if, when we're testing our application, we're actually using this as something closer to service virtualization, where we're simulating a database playback on the back end of the application? Because it's an any-to-any configuration, this is 100% possible. We're also looking at, once the data is stored, once that traffic is somewhere in a file, what can you do with it? Do we have additional tools that we wanna apply to that stored traffic? What if you wanna do some manual traffic shaping? Those handcrafted artisanal load tests that I talked about. Well, we said they were a lot of work, but what if we could make them a lot less work by giving you a good starting point and allowing you to do simple shaping? Like, I wanna add some delays in between some transactions to see if my e-commerce site still works. I wanna increase the volume of a particular type of stored transaction.
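A sketch of what that kind of simple shaping could look like over recorded entries. The helper functions and the `_replayDelayMs` field are hypothetical, invented for illustration; they are not part of the project today:

```python
import copy

def add_delay(entries, ms):
    """Attach a fixed think-time to each entry (as hypothetical replay metadata)."""
    for e in entries:
        e["_replayDelayMs"] = ms
    return entries

def amplify(entries, predicate, factor):
    """Duplicate every entry matching `predicate` so it appears `factor` times,
    increasing the volume of one particular transaction type."""
    shaped = []
    for e in entries:
        copies = factor if predicate(e) else 1
        shaped.extend(copy.deepcopy(e) for _ in range(copies))
    return shaped

# Hypothetical recorded entries, trimmed to just the fields we shape on.
recorded = [
    {"request": {"method": "GET", "url": "/"}},
    {"request": {"method": "POST", "url": "/checkout"}},
]

# Add a 250 ms delay between transactions, then triple the checkout traffic.
shaped = amplify(add_delay(recorded, 250),
                 lambda e: e["request"]["url"] == "/checkout",
                 factor=3)
print(len(shaped))  # 1 GET + 3 POSTs = 4
```

The point is that starting from real recorded traffic, the shaping itself is a few lines, rather than handcrafting a whole load test from scratch.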
All of these things, and this goes into our ML backing, our machine learning backing. What if we wanna analyze that traffic and be able to either narrow or broaden the traffic types and the request types that are coming into your particular application? What if we wanna increase the dial on just one particular type and say, give me more of the incorrectly authorized admin traffic? See what happens when the application hits that. And one that I am particularly concerned about: once you have that traffic stored, what do you do with any sort of personal information? Any sort of sensitive data? And this goes for a couple of different reasons. You've got things like GDPR, where you don't wanna store any sensitive or personal data if you can avoid it. But also, what if you're playing back something that has payment card data and you're now replaying a credit card transaction? You don't want that to actually process and go through, because you're running it as part of a test simulation. You wanna be able to change that to, say, a test string. So all of these are ways that we see the project evolving, but we run into the fact that, in this any-to-any scenario, we've made a bit of a Swiss Army knife. So where we're starting today, we've got HTTP traffic, the ability to gzip and tar it, the ability to store it in a handful of locations. That's a great framework. And what we wanna get to is something that a lot of people would find useful, and we want the community to be involved, to tell us and to help drive this product forward toward what the most useful use cases will be. How are people gonna interact with this project? Maybe it's in ways we haven't thought of yet. I'm great with that. Please come help us drive those forward. But if we attempt everything that is on that list and do all of the possible plugins, we're gonna end up with something that's sort of untenable.
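As an illustration of the payment-card case, here is a minimal sketch of the kind of sanitization such a plugin might do: swapping anything that looks like a card number for a well-known test number before the traffic is ever stored. The pattern is deliberately rough (a real sanitizer would also Luhn-check candidates), and none of this is VHS functionality today:

```python
import re

# Very rough PAN pattern: 13-16 digits, optionally separated by spaces
# or dashes. Assumption for illustration only; a production sanitizer
# would be far stricter (Luhn check, issuer prefixes, etc.).
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
TEST_PAN = "4111111111111111"  # a widely used test card number

def sanitize_body(body):
    """Replace anything that looks like a card number with a test PAN,
    so replays never carry (or accidentally process) a real payment."""
    return CARD_RE.sub(TEST_PAN, body)

raw = '{"card": "4929 1234 5678 9012", "amount": "19.99"}'
print(sanitize_body(raw))
# → {"card": "4111111111111111", "amount": "19.99"}
```

Doing this at capture time, rather than as an after-the-fact tool, means the sensitive value never reaches the stored manifest at all, which is the preference stated in the Q&A.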
So part of the conversation is around what people are gonna find useful. In your organization, when you're doing testing, or maybe you're not doing it today because you find it too difficult and you wanted a way to start, what would you find to be most useful in a traffic replay tool? So that brings us to the project itself, what we have and what we don't have. We have weekly meetings to talk about these things. What do people want? What is the community gonna bring in? We have a mailing list and a contributor's guide and a code of conduct and we have a GitHub org and all of the things that you need in order to manage a project. What we don't have yet is significant contribution outside of StormForge, and we wanna change that. We don't have a formal maintainer process because we don't have, quite frankly, you, all of you who are here on this call. We want you to get involved in this project. We're trying to make this a community project that is not owned by any one company, and we want everyone to help us drive it forward. The other thing we don't have is a new name. From earlier, you'll notice that VHS was in quotes. Well, there's a couple of reasons. We are intending to rename this project. VHS, as should surprise no one, is somewhat difficult to go do a Google search on, and in almost every context that we were interested in, the name was already taken. You're not gonna get a GitHub org or a domain named VHS. It's not gonna happen. The other issue that's not written here is that it is someone's registered trademark. VHS belongs to a company, and so while it's quippy and is evocative of the record and playback functionality that this project has been built with, really we want something that, as a name, belongs more to the community. We want it to be a little more unique, and we also want it to be evocative of the cloud native ecosystem that we're making this a part of.
So if you say, hey, I've got a good name for that, or I wanna see what suggestions other people are coming up with, you can come join our meetings. We're going to be opening a section for suggestions during our meetings, along with storing things in a document. We're expecting that sometime around the end of this quarter we are going to have a poll based on the suggestions that everyone brings forth to us, and we're gonna formally rename the project probably in the beginning of April, depending on the amount of activity we get around the suggestions. So if you wanted to get started with this project, how would you get involved? Well, first of all, you would come join us on GitHub in the appropriately named rename-this org, for obvious reasons. We have a mailing list over on Google Groups, which is gonna be our central point of contact. It is vhs-pre-rename-launch. That should be pretty easy for everyone to just remember and type, right? Yeah, that's not gonna happen. So we also have a public Slack. Unfortunately, our public Slack registration page is currently down. So it's a good thing the link is here, but we're gonna be announcing when that's fixed on the mailing list. So please come join us on the mailing list. I'm gonna leave this up for a second or two for anyone who's interested to go join the mailing list. And if you're on the mailing list, then you will get the invite to our weekly Zoom meetings. They are 9 a.m. Monday mornings Pacific time, noon Eastern time, 5 p.m. GMT. There is a public document with all of our meeting minutes and agendas, and that same mailing list will also be used for control of the renaming documents. So that's it, that's all I've got. I hope that everyone found this to be interesting. We've got what we think is the framework for a really great solution to testing in prod, to being able to provide your production traffic in a cloud native way.
And now we're going to, maybe, maybe I'm gonna move on to questions. There we go. So we've got some questions here in the Q&A panel, and let's see what we've got. What part of StormForge is open source and what part is proprietary? Can the service aggregation and results be stored on StormForge servers, or do we install all the dependent services on our own servers? Okay, so one of the lines that we've drawn as a company is that anything that a customer is going to run, and I say the word customer even though it's open source, anything that an end user is going to run on their own systems, absolutely should be open source. So for StormForge, we've got this particular application. This is an open source application running in your cluster. If it reaches back to other things, that is a matter of discussion; like we talked about, the machine learning piece wouldn't necessarily be open source, but anything that you're running for the optimization, for local testing, anything that you're running locally is going to be open source. We do have some service offerings, and those service offerings aren't necessarily open source. We're still looking at continuing to grow that. I'm a big proponent of pretty much everything should be open source. So I'm gonna be pushing that forward. But yeah, I hope that answers your question. All of the stuff that you're going to install, which would include the ability to run experiments and whatnot, is all open source. Storage overhead required: it is primarily text, so it's not a lot. If you've ever seen the output of something like Wireshark, it's just a big blob of text. It doesn't take a whole lot, and it goes from there. The VHS object itself is an image, so that's not taking a lot of space. I can't give you a direct answer, because that's sort of like asking how much water is there in a swimming pool or how long is a piece of rope. But I can tell you that it's text files and doesn't take up a whole lot of space. Okay, next question.
What happens if a node's clock has become skewed? For example, very active nodes with a lot of CPU may go off-kilter in their NTP if not set up properly, so you could find a five minute or more mismatch. I'm not sure I understand what the context of the question is. Node clocks are a problem. I've seen that in production many times, but if we could get some more clarification on what you're asking with respect to a particular product, or if you're just asking in general about node clock skew, I'd appreciate some more clarification on that question. And in the meantime, what's the next one? What about GDPR, from the point of view of capturing real traffic? That is, I mean, like I said in one of the previous slides, that is one of our primary interests on how we handle traffic, and we want there to be some form of plugin to handle sanitization of personal data. Now, whether that happens during the capture or it's something that has to happen as a tool after the fact, I would prefer it to happen during the capture, so that it never actually makes it to the stored data in the first place, as opposed to sanitizing the data after the fact, which is, well, you're not really adhering to GDPR if you're doing that. And it's a great question, and it's something that we really need to be concerned about, and we would love people to come help us with that sort of endeavor around data filtering, because right now it's not doing it, and it needs to. We'll give it a minute or two to see if any other questions pop up. But other than that, barring any other questions, I hope that folks found this interesting. Ha, replay is the issue. This is regarding the clock skew earlier. Replay is the issue. I assume you need to align event records with regard to timestamps. Imagine a reply logged before a query. This could affect Kubernetes also in frustrating ways. Yeah, so that's gonna come down to being able to replay events as they came in.
If there are timestamps in there and the timestamps need to be updated then that would have to become part of, realistically that would be part of data sanitization, I think to say I'm replaying this in a scenario where I want to update any clock signatures to now and that would have to be covered at the plugin level because otherwise you're playing back traffic that already has a timestamp built into it and there's not really a reasonable way to associate that and that might be part of the test. Can your system even handle a playback that has a timestamp other than now? That's frankly something that I hadn't really thought about until just now and I would encourage the person that asked this question to come on by our meetings and present it as a formal issue, something that we need to be concerned about and something that we need to manage in the plugin management. Thank you for that question. That's really interesting. Barring any other questions, we'll give it a minute. I hope folks saw this as interesting and want to come join our meetings. The mailing list is the place to start which I will put back up here and looks like we're clear on questions. Thank you all for coming out today and I hope this was interesting and educational. All right, thank you so much, Noah, for your time today and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today and we hope you're able to join us for future webinars. Have a wonderful day.