Welcome back to OpenGovCon, and now a really interesting brief: we've got Will from Defense Unicorns and Kingdon from Weaveworks. They're going to give us their journey with WebAssembly and microservices. Excited for this one. Over to you guys.

Hi everyone, I'm Kingdon Barrett and this is my co-presenter. Welcome to the Wasm and microservices talk: are we there yet? I'm going to save you a little time up front — if you want to go to another talk, the conclusion is: not really, with some caveats. A lot of things do work, but not enough that I'd bet the farm on it. If you actually want to implement this, though, you're going to get a lot of value from this talk. We've already been introduced, but a little more about myself: I'm a Flux maintainer and an open-source support engineer at Weaveworks on the developer experience team.

My name is Will Christensen, a unicorn engineer working at Defense Unicorns, obviously, and a serial mentor for people getting started in the open-source world — hopefully everyone here can follow in those footsteps too.

So, Wasm, if you're new to this whole concept: WebAssembly is a compiled bytecode format that runs in a virtual machine designed to sit very close to JavaScript. It has shown itself to be significantly faster than JavaScript running under a JIT. And when you go to compile for it, you essentially treat it like another target: instead of x86 or ARM, you compile to a Wasm target. Right off the bat, what I discovered when I did my research and testing was a list of things that you cannot do in WebAssembly.
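Before that list, here's the happy path made concrete. This is a sketch, not from the talk: a tiny Wasm module, hand-assembled into its binary encoding per the WebAssembly core spec, exporting an `add` function. It runs in plain Node.js with no extra tooling — and notice that only numbers cross the boundary, which foreshadows the string caveats.

```javascript
// A minimal, hand-assembled Wasm module exporting add(a, b).
// The bytes below are the binary encoding of:
//   (module (func (export "add") (param i32 i32) (result i32)
//     local.get 0 local.get 1 i32.add))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // func body
]);

// Node's V8 gives us the Wasm VM for free; compile and instantiate synchronously.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // → 5
```

The same bytes run unmodified on ARM or x86 hosts — that's the portability claim in miniature.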
I'm going to call these limitations up front, but I'll call them design constraints later, because I learned my lesson. You cannot access the network without permission. You cannot easily pass a string as an argument to a function — there are caveats. You cannot access the file system unless you've been permitted. All of these are things you can be granted permission for, except strings: there is no string type. As far as I can tell, you have to manage memory yourself, count how many bytes you're going to pass, and make sure you don't lose that number. That's a little awkward, but there's a way around it too. One more caveat on that: not only do you need to know how to pass the memory references, you need to make sure the host language and whatever language is compiled into the Wasm module handle them the same way — and that is very poorly documented.

Anyway, we came up with this idea while talking through what kind of talk we could get accepted for the Open Source Summit. Coming from the government space, I thought this would be really interesting from an ATO perspective: how do you enable continuous development and delivery while still maintaining a consistent environment? Very similar to what Amazon does with some of their platform engineering, but can we get that down to the level of a Kubernetes operator? So, the idea: a Kubernetes operator, written in Go, because the Kubernetes API is natively designed and shared in Go — so obviously, why not be in the same language? The core code would be in Go, and all your functions — any hooks you need — would come in through it.
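To make the "count how many bytes you're going to pass" dance concrete, here's a host-side sketch (our addition, runnable under Node.js) of the usual pointer-plus-length convention. The allocator is elided: in a real module you'd get the pointer from an exported allocation function, a hypothetical `alloc`; offset 0 here is purely for illustration.

```javascript
// Core Wasm only exchanges numbers, so the host writes a string's UTF-8
// bytes into the module's linear memory and passes (pointer, length) instead.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

function writeString(mem, str, ptr) {
  const bytes = new TextEncoder().encode(str);          // string -> UTF-8 bytes
  new Uint8Array(mem.buffer).set(bytes, ptr);           // copy into linear memory
  return { ptr, len: bytes.length };                    // what you'd hand the guest
}

function readString(mem, ptr, len) {
  // The guest returns (ptr, len); the host decodes the bytes back to a string.
  return new TextDecoder().decode(new Uint8Array(mem.buffer, ptr, len));
}

const { ptr, len } = writeString(memory, "hello wasm", 0);
console.log(readString(memory, ptr, len)); // → "hello wasm"
```

Lose the length and the bytes are unrecoverable — that's the awkwardness being described, and it's exactly what runtimes like Deno's Go glue or wit-bindgen paper over for you.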
Instead of the event hooks calling a function or a class inside a monolithic operator, the idea is that they would reference Wasm modules. Those could live in a key-value store or, if you really want to get creative, be shoved into Kubernetes as a ConfigMap and pulled up as a byte stream — whatever you want. The biggest thing is that a Wasm module gets pulled in and executed almost like a function call, and each execution is a sandbox where you control the exposure and security across the entire operator. You could statically compile the whole operator and control it, while anyone working in the module sandbox has freedom within the sandbox's limits. This was the dream. Well, it didn't work — and that's the whole reason we have a talk. We literally tried this, talked about it a bit, submitted it, and much to our shock, it got accepted. So, next slide.

By the way, I'm new here — what is ATO? Oh, that stands for Authority to Operate. Basically you have to go through an authorizing official and a bunch of process. Austin Bryant is back there if you have any questions; he's great at answering how that works because he's been through it once or twice — Austin Bryant, Defense Unicorns, look him up on LinkedIn. This is part of why OpenGovCon cares and why we're in this track: with the ATO process, can we mix in fast development cycles while still having security and governance over how things are done? And inside the Wasm modules you get polyglot support, meaning you can write your code in any language you want.
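Here's the dream design sketched as a toy in JavaScript — our illustration, not the talk's code. A registry maps event hooks to compiled module bytes (which, per the design above, might come from a ConfigMap or key-value store), and every invocation gets a fresh instance, so one call can't leak state into the next. The hook name is made up, and the embedded bytes are a minimal `add` module standing in for real business logic.

```javascript
// Toy version of the operator idea: event hooks dispatch into sandboxed
// Wasm modules instead of into functions compiled into the operator.
const registry = new Map(); // hook name -> compiled WebAssembly.Module

function register(hook, wasmBytes) {
  registry.set(hook, new WebAssembly.Module(wasmBytes));
}

function invoke(hook, fn, ...args) {
  const mod = registry.get(hook);
  if (!mod) throw new Error(`no module registered for hook "${hook}"`);
  // Fresh instance per call: each execution is its own sandbox, and the
  // import object (empty here) is where the host would grant capabilities.
  const instance = new WebAssembly.Instance(mod, {});
  return instance.exports[fn](...args);
}

// Stand-in module exporting add(a, b); in the real design these bytes
// would be fetched from a store rather than inlined.
const addModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  0x03, 0x02, 0x01, 0x00,
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);
register("onReconcile", addModule);
console.log(invoke("onReconcile", "add", 40, 2)); // → 42
```

The host deciding what goes into that import object is exactly the governance hook the ATO angle cares about: logging, tracing, and compliance shims live on the host side, outside the implementer's sandbox.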
The idea is that you can use existing talent rather than throwing out all that knowledge and making people retool, or re-hiring. You can take what people already know, tweak it a little, and get more life out of it. You may have some performance losses, and there may be nuances, but largely you retain the domain knowledge and carry it over as things adapt in the future. Sandboxing is definitely something we want — we'll go into that a little later. And the idea is that you can also split the development effort: the people creating the operator itself go through the more stringent controls, much as the maintainers of a runtime would, and they control what gets exposed to which Wasm module. That way everyone knows only what they need to know. The security controls are put in place in the host, and around the Wasm module itself you can add logging, whatever traceability you need, and compliance pieces that you may not want the end implementers handling inside their sandboxes — you watch what goes in and out and control what happens. Next slide.

So anyway, that's the dream, and those points are covered on the slide. One more note: a lot of this is very similar to what unikernels were going for, but without their lack of a polyglot way of adding different modules and code for functional call-outs. Next we'll do a brief rundown of the languages we were looking at and investigating — we played with a lot of them, some more than others.
If you were to go and implement this, I'm pretty sure this will summarize what your journey would look like. You first. Okay, so Spin — go to the next slide — yeah, and RuntimeClass. This is a mix of two things. The first one I found was called kwasm. My goal up front was to run Wasm on Kubernetes, which I found at the time was very difficult, and kwasm made it very easy. There's a field on a Deployment spec called runtimeClassName, and you can set it to whatever you want as long as containerd knows what it means. The kwasm operator breaks into the host node, sets up some containerd configuration, and imports a binary — none of this is production-ready, the way I'm describing it. Hopefully that doesn't sound good. But it says so right on the tin, so don't worry. Anyway, this was very easy: you could get your Wasm modules running directly on Kubernetes this way. But it does require privileged access to the nodes, and it's definitely not getting an ATO. Here's that warning I just mentioned: "only for development or evaluation purposes" — and there's a company that would like to sell you a better solution.

Then WASI and Wagi, which I found as I progressed through the examples I learned from Spin, are a great way in if you're struggling with the limitations I mentioned before. What you see here is a Ruby Wagi program where you basically don't have to handle connections — the runtime handles that for you. That's how I would summarize Wagi. And WASI is the system interface that makes it possible: you have standard input and standard output, the ability to pass in a bit of file system or a bit of memory, and functions you can export or import and have called — but only the ones that you permit. We met Michael from WasmEdge yesterday, and he clued us into a bunch of things we hadn't thought about.
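To pin down what "you don't have to handle connections" means: Wagi's model is essentially CGI. The runtime hands the module the HTTP request on stdin plus environment variables, and whatever the module writes to stdout — headers, a blank line, then the body — becomes the response, so the guest never opens a socket. Here's a hedged JavaScript sketch of that shape (CGI conventions, not an official Wagi API; the handler and its body are ours):

```javascript
// CGI-shaped handler: headers first, then a blank line, then the body.
// Under Wagi the runtime would pipe the real request in on stdin; here we
// call the handler directly with a made-up body to show the shape.
function handle(body) {
  const reply = `you sent ${body.length} bytes\n`;
  return `Content-Type: text/plain\n\n${reply}`;
}

process.stdout.write(handle("GET /hello"));
```

Because all I/O goes through stdin/stdout and granted file handles, the host stays in full control of what the module can touch — which is the same capability story WASI tells generally.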
I suggest you check out his talk if you can find it — it was very informative. One quick caveat: the majority of the Wasm runtime work we touched is mostly in Rust, and WasmEdge is interesting in that it's written in C++. It's closely related to Wasmer and Wasmtime, which are in Rust and which we did play with — want to talk about that a little more?

Yeah, I found the documentation very good, with examples for my favorite language, and I think they have parallel examples across a lot of languages. I went through examples one through ten, and that's when things really started to click for me. I didn't really understand the why of WASI at first, but going through those examples made it pretty clear — WASI is example number ten, by the way; they build up to it. And we can go to the next. Oh — the constraints in these environments help scope your problem down smaller and smaller. That was my experience, and eventually I felt that was the point of the design.

Speaking of Rust integration: Wasmer and Wasmtime and most of the integrations are, first of all, primarily written in Rust, and I believe this is how a lot of Cloudflare's and Fastly's edge computing is done. If you are a Rust developer — since Rust is a first-class citizen and the main tool for a lot of this Wasm work — and you have Rust domain knowledge already, I believe you can start exploring right now how to use Wasm in your production workflow, or at least start doing some research on it.
There are still some nuances, but the one key thing I do like: if you're evaluating the technology and you don't want to worry about implementation outside of other tools, and you want early adoption of, say, a draft spec, Rust is one of the first places I would look. Next slide.

A lot of Wasm was originally for the browser — the V8 engine running code that wasn't just JavaScript, something more native to the browser. So with Node.js running your JavaScript on the backend, and Deno, a lot of the Wasm integration with the V8 engine already exists, and we found that from the command line, from a microservices perspective, it was really easy to implement. Remember the whole issue of passing strings with a pointer? If you're running Node.js or Deno, you can pass strings natively and you won't even notice anything is different. So if someone says, "hey, I looked at Wasm," and they're using Node.js or Deno on the backend — there will be a demo at the end if we have enough time. Just using Deno, it was really simple to get to the one thing a lot of the examples we found skip: Hello World actually works. I could compile it so it actually runs, pass a string in, and get a string out of a Wasm module with Deno. Personally I don't want to do a lot of backend development with Deno or Node.js, but if someone does, I believe they're in one of the most production-ready and decent developer environments for adopting Wasm right now.

A little bit of warning, though: when you go to compile Wasm, I discovered it's not all the same. Also, please note that I'm starting to use a little AI art, so I had no idea what was going to come up — this is supposed to be a bumpy road, so hopefully you can see it.
We'll have a few more examples to have fun with. So: there are three main compiler backends in the Wasm world that I found — I believe this came from the Wasmer documentation after diving in really deeply. Singlepass doesn't have the fastest runtime, but it has the fastest compilation, so if you want fast dev cycles, it's an option. Cranelift is the main engine used in Wasmer and Wasmtime; its output is much better, but compilation is somewhat slower. And then we have LLVM: the slowest compile time — no one who's ever used LLVM is surprised there — but it does produce the fastest runtime. So just because you have Wasm doesn't mean it's all implemented the same; very much like C and C++, your choice of compiler and optimizations may impact your runtime experience.

And finally, as someone who came at this with that original golden idea of "I just want this operator to work, in Go": if you try to embed Wasmtime or Wasmer and run one of their examples, a lot of the examples that work use what they call WAT, the WebAssembly text format. The closest I could get to compiling Go and dropping it in where the WAT used to be was with wazero. So if you're embedding Wasm modules and it just doesn't seem to be working, please look at wazero — the examples are really nice, and they saved days off what I was trying to figure out for the implementation. All right, next.

All right, so now all about the problems. If you're familiar with Jimmy Buffett's song "Cheeseburger in Paradise" — this is not a complete, happy paradise. A little bit closer.
So as Kingdon was talking about, handling strings means doing some pointer arithmetic. This is not fun, and it's not ideal. If you're going Rust to Wasm, this is actually where you'll take your largest performance penalty — I've heard of cases with something like a 20x performance loss, mostly due to the time spent passing pointers and handling strings. So if you have anyone doing web development who wants to parse a long JSON response, please note that it may impact your runtime performance — and it's not a Rust problem. That may also have been an implementation detail at the time of our research, so maybe it's magically better now; that's open source for you.

Compiled languages may treat each other differently. And specifically, if you're compiling Ruby or Python to Wasm, you do need to compile the entire interpreter into the module. So if you're expecting Wasm to improve your boot times, know that with an interpreted language you're basically shoving the entire language runtime into the module and then running your code on top. Please take note: it's not a uniform experience yet. Yeah, and note that an interpreted language is still interpreted inside Wasm: you pass the script itself into the module, the interpreter is compiled to Wasm, but the script is still interpreted. And you're restricted to the runtime restrictions of Wasm itself, which means sometimes it may be single-threaded. Good, bad — it depends; just be aware. Ruby's already single-threaded, so that doesn't scare me. Well, so is Python, but hopefully Python changes — 3.12, everyone, yay.

All right, each runtime: when you run Wasm from a host module, you are essentially launching a separate VM that consumes that Wasm and runs it.
Now, there are some threading issues too. With Node.js and Deno, I noticed one thing: in Go, if I use the HTTP package to make a request from a Go-compiled Wasm module running under Deno, I found no way to keep it from breaking the threaded nature of Deno — the V8 engine's single event loop gets broken. Now, my implementation was poor, and I was going to call this a poor developer experience; maybe there's an answer, but I didn't find it. So if you're just getting started and trying to figure out what's happening, just know you may spend some time there. The syscall/js package is what you use to reference JavaScript from Go: you're essentially loading up a function, calling it through a streaming interface, and bringing the response back. And when that ends up doing an HTTP call, the syscall/js documentation even has a special note saying that a blocking function call can deadlock the event loop. So note that this is known — I'm hoping they're working on a fix; we'll find out.

And the biggest thing — you discovered this with Ruby: what happens when your Ruby gem has a C dependency? Oh yeah, I have no idea how you handle that, and all the documentation I found was to the same effect. Thank you, Spin, for saving me a bunch of time — I didn't try it at all. If you're not aware, most Ruby dependencies are pure Ruby rather than native extensions, but a native extension means compiling C code, and then you have to deal with C code. Now, C compiles to Wasm, so I'm sure there's a solution for this, but I haven't found anyone who has solved it yet. Well, I also think about it this way.
When you package a gem — and some Python packages are the same — they ship a prebuilt binary, and there's definitely no way to do a binary-to-binary translation into Wasm. If you need it, you have to get your hands dirty: compile the library itself to Wasm, then compile whatever gem or package includes it, and make sure the function calls line up. Just talking about it — I don't have the attention span for that, personally. Yeah, and I also found that dynamic linking doesn't appear to be a thing in Wasm, so I'm not sure there's a solution there at all. Which does make it really fast, though, and helps the startup time — there are some benefits.

All right, part two. When I was doing this, I was learning how to use ChatGPT — some people at the company I work at were all about ChatGPT, and I thought, cool, maybe it'll help with my talk. Nope. And in the online examples, everyone says, oh look, it's great, check out Wasm. Specifically for Go, they'll embed the WebAssembly text format, compile it to Wasm, and run it within Go itself via Wasmtime — and it works great, cool. Well, two things you're going to notice. One, "hello world" is suddenly adding two numbers in a function — okay, whatever — and then it's computing a greatest common divisor — okay, cool, that could be a little faster, it's assembly. And when you go to the next progression, you realize all the simple stuff avoids strings — that's when you discover what they've been hiding. We'll come back to this in the readiness indicators and give the solution there. Debugging in Wasm can be harsh.
In Go, you may get an entire stack trace, but when it fails it's somehow even harder to read. Imagine the equivalent in your language. Python people: if you've ever had to debug a runtime failure after someone wrapped the whole thing in a Click script thinking it would make your life better — it's equal to that.

All right, you've been hearing problems. Should you be excited? Yes — there are plenty of reasons to be. It may not be ready yet, but I definitely think it's enough to start playing with now, because when it is ready, I think it's going to be great to adopt, especially in the government space. As I mentioned earlier, any time you can keep your existing workforce, you don't have to re-hire, and you get longevity out of all that wonderful domain knowledge instead of re-solving the same problems with a new tool and who knows what other unknown pieces. Those people are already pretty good at runtime work and debugging in their stacks, and hopefully Wasm will get there. If you have a lot of JavaScript — guess what, you get better control over it, and it surprisingly runs faster, which is the whole reason Wasm is interesting. And finally: I'm sure a lot of you have an ARM MacBook, then you deploy something to the cloud and realize, oh look, my entire stack is x86. Wasm genuinely takes care of this. I tested on a Mac mini M2 and ran the same module on a brand-new AMD64 system, and you couldn't tell the difference. So yay — next.

So, conclusion: Wasm in the browser has been doing amazing things, and that's why there's a lot of hype and why adoption is there. Full-on gaming engines, GPU rendering, hardware extensions — it's there. The core tech is there; there is no doubt about it.
In microservices, though, we have an implementation and developer-experience problem — speaking for the whole ecosystem, Spin excepted. Wasm has proven itself solid, and I do see a large future for it, so now is the time to start adopting. I have to say, we're not talking much about .NET Core, but Bill Evans and Tom Dupool of Liberty Fox, friends of mine, were telling me how they accidentally compiled their desktop application to Wasm and it magically worked in the browser. They did it by accident — no optimizations, nothing. So based on that single experience alone, and as someone who doesn't really want to talk about Microsoft microservices, Microsoft is doing some fantastic things under the hood. Maybe their containers are a little weird, but there's nothing we couldn't implement ourselves to solve that. Just know that Wasm is definitely a first-class citizen in the Microsoft ecosystem, so if Wasm is a target for you, for whatever reason, do not hesitate to take a solid look.

And finally, Node.js and Deno are ready for testing. You can do a lot of basic functions now, outside of a couple of weird threading issues. I had no prior knowledge — I had never touched Deno, I am not a JavaScript person, I do not want to touch TypeScript more than I have to — and it just worked out of the box, and that made me excited. I hope that explains why you should definitely give it a look.

Yeah, and there's a giant asterisk on any of the claims we've made here: we are both beginners at this. There's a decent chance we've dipped below a 50% rating for accuracy somewhere; if you can correct that, we'd be very happy. So I think we're about ready for questions at this point.
Just one last caveat: given our level, and how many languages we spread ourselves thin across trying to get this to work, I think we're an accurate representation of cold-assigning someone on your team to investigate this for about three weeks and then report these conclusions. Hopefully we've cast a wider net than that — we've had a lot of support from friends, coworkers, and anyone else who couldn't run away fast enough — but that's kind of where we're at.

So: a conclusion, and then a call to action. The biggest thing is that we think it's ready to be tested with, and the only way we, as an open-source community, will make this technology better is to get our hands on it now. Let the maintainers know, start talking, start filing issues. The technology, like I said, is solid; it's the implementation details. If you ever wanted to start, say, a Medium blog — or a Medium career — now is the time, because we need more working examples. They're missing. We can't even get ChatGPT to give us anything decent, so we need those blog articles to help us. I think Spin actually did publish a ChatGPT demo, if that's what you're trying to do.

All right, so, the call to action — and more AI art. I just put in something like "what do a unicorn, WebAssembly, and technology look like?" and it was too good not to include. This is no official representation of anything inside Defense Unicorns, but it looks pretty. So: definitely start using it now. If you want to really help out, write more hello-worlds — not just with Spin, but with Wasmtime, Wasmer, anything where it runs on the edge or gets embedded inside a program. We need that, like, yesterday. And really, the whole conclusion is: the tech is great.
The implementations and the documentation need some more love. And there's one more slide, I think — some quick key indicators for readiness. (By the way, another Midjourney prompt — "what is a readiness indicator?" — gave me an eagle, so that is now the readiness-indicator eagle from now on, for me.) The indicators: more and better string examples; patterns for getting around the threading issues, known and potentially unknown; and, as Kingdon and I keep looking at this, I still want to get that operator done at some point — so hopefully I'll have an operator done, and hopefully we'll have more Spin demos in the future to show how easy it is. Matt, we're looking for your help there. And that's it — we can take questions now. Or, let's see if you want a demo. There is a demo — it's a really lame demo, but it works. Hands up for a demo? Okay, the demo gods want to see a blood sacrifice, all right.

So I'm going to be fancy and use my phone to kick off a pipeline. And because it's open source, if you want to check this out, go to gitlab.com/buzzcrate/apps/go-deno, then go to CI/CD. I have not kicked it off yet — and this is only going to be good for five days, because I'm trying to be nice to GitLab's caches — but I'm hitting go right now, and the pipeline is kicking off in the background. All right, click on the pipeline as soon as it pops up, and remember the text on the left. That works, cool — scroll down a little bit. Cool, so we have three containers running in parallel, each one building the Wasm binary from Go. We can look at the code in a brief moment, but the basics: the first job is a hello world — Go compiled to Wasm that literally just prints hello world.
The second says hello and passes in a string — because that's a sticking point of this talk — as the Go argument: it takes it, passes the string back, it gets caught in Deno, and it's printed via console.log. And then finally, I decided to be really adventurous and do a quick search for "Arch Linux" on Urban Dictionary, filtered so I only show results containing "Linux" — and I did confirm there's nothing in the output I'm going to be terribly embarrassed about. So we're waiting here and — oh, here we go. We have to wait for this to get picked up. This is running right now on a Kubernetes runner that I use for my own testing, in Linode, spun up today; I made the containers myself, so you can trace all of this back, and assuming you have the tags right, you can probably run the same test anywhere you want. You can also start asking questions now if anyone has any, or we can look at code afterwards too.

Okay, cool — scroll down, click the little follow button on the right. All right, cool, I'll highlight. Here we go: this is the basic hello one, "hello world from Deno" — cool, really simple, right? Then there's the Go Wasm argument: I said hello and passed in "OSS 2023 Vancouver," and guess what — it came back, no problems, only because I'm using Deno. And then finally, Urban Dictionary for Arch Linux: as you can see, nothing terribly embarrassing in that result — maybe the RTFMs get close to the edge there, but that's about it. And there we go, it worked. So yay, we have a demo that actually didn't blow up; if anyone came here for a blood sport like NASCAR, we do not have that today. And just a quick bit, if you want to see how simple it is — let's go to the Deno side. The Deno code is really simple, and all the build scripts are in shell.
So if you're a Windows user, I can say with strong confidence it will work, as long as you run everything from Git Bash and have Go set up properly. For Deno, it's really basic: you just have to import this little wasm_exec shim, the same thing you'd use in any Node.js example for running Go Wasm in the browser. When I found that this worked exactly the same for Go, I was impressed — that wasm_exec file actually comes with the Go toolchain when you go to compile; there's a way to grab it, and build.sh does that for you. So there's no smoke and mirrors here. And there's a run script showing exactly how it has to run with Deno, including the security protections: when you go to run Wasm, you have to grant Deno the permission to read the .wasm from the file system, and then to actually execute. What I would like in the future is more granular control — especially for government, and for that dream operator idea. Statically compiled governance controls per Wasm module would be ideal: you could sandbox the developers to what they need to focus on and let them run as free as they can to innovate and adapt, without going through an entire ATO process, segmenting both the work and the exposure. But yeah, basically it's: read a file, load it into a buffer, start an instance. Now, it gets a little more interesting with arguments. You set up the instance, the instance runs and registers itself in the namespace, and you end up with a function you can call — on line 17, the hello function goes and grabs that module. The module name is defined from inside the Go code, which is fun to document; so if you're writing this in TypeScript, you do need to decide how you want that referenced.
And then you can take the result as a string and output it. I was also trying to do something similar where I passed a function in from the next chunk of the script, and the reason that didn't work is the threading issue with HTTP: when Deno tried to call up and run that function, we hit the deadlock error, and after hours — maybe a day or two — of trying to figure out why it wasn't working, that's where it failed. Anyway, that's it for the implementation; the rest of it, anywhere it says Wasm, you could implement in your own language. So, any questions? Anything else? You have a question? Oh, sure, yeah.

Audience: I think I know the tool you didn't know about when you were doing this, which is wit-bindgen. Did you play with wit-bindgen at all?

We weren't in Rust. But I think it now has plugins for other languages — it's the tool that generates the glue to convert back and forth between all those 32-bit and 64-bit integers. Yes, there's a lot of good work done in wit-bindgen, that's true. It's built around Wasmtime, it's built around Rust, and at the time I did not see an easy implementation path. And a lot has happened in the last three weeks — even watching the repositories, new things were being added. wit-bindgen is definitely part of why I say Rust is a first-class citizen: it addresses a lot of this. It's a little nuanced, and it seems like for how much people lean on it, there aren't many people developing it — but that's what happens when you're an early adopter. So yes, if you're a Rust person, definitely look at wit-bindgen; it's the secret sauce for why Rust is first-class in this space. Any more questions? Thanks, everyone.