So you'll give me a thumbs up back there when it's time, or do I just start talking and continue talking like this? All right. Hi everybody, my name is Colin Murphy. You can wave back. I'm a nobody at Adobe. I work on the Creative Cloud web team on some upcoming products. Before that, I was responsible for the infrastructure and developer experience for Document Cloud's microservices, Adobe Sign, Acrobat, that stuff. I've given this talk, or a version of this talk, many different times, and I don't have any idea how much you all know about WebAssembly. So, is anybody totally new to WebAssembly, doesn't know what it is? How many people know it very well? Some of you, I can see. And how many people know about WASI, running it outside the browser? A handful of people. So you can probably just go to sleep for five minutes. Also, about the title: it's kind of a provocative title. These are some use cases in which you can replace Docker with WASI. This is not "forget about Docker." We still have mainframes; nothing ever dies. Last time I gave a talk, somebody came up to me afterwards from Docker: "I heard some of the things you said." So it's not adversarial. Now, if you're a Java person, you might get offended, so I apologize in advance. Also, apologies if you saw my talk a month ago in Valencia; you're going to see some slides again. But it's been a month and I have a regular day job, so there's only so much I can do. Okay, so the challenge in talking about WebAssembly is that it's a really big topic. You know, Amazon Prime, how do they deliver their software updates to edge devices? They use WebAssembly. I'll talk about what we do at Adobe, which is very different. And then WASI, completely different from that.
You know, Fastly, Cloudflare, they've got a lot of products built on WebAssembly. So it's a really big topic, and it's hard to explain, especially in a half-hour talk. The point of this talk is server-side WebAssembly. Is my mic working? Okay. At Adobe right now we use it in the browser, extensively. It's all C++ code, the pride of Adobe, our C++ code in our creative products, and we run that in WebAssembly in the browser. Without WebAssembly, we wouldn't be able to do it at all. And we've been really, really involved with the W3C and the Chrome team and the Mozilla team for a long time around WebAssembly; we're one of the main drivers. I have nothing to do with that, I don't know C++ at all. But that use case is really, really tied to the browser. The good thing is that within Adobe people have a really positive view of WebAssembly. So as I've tried to push server-side WebAssembly at the company, it's been a positive experience, because people already have a really good impression of the technology. Okay, I'm going to be throwing out some terms here, and I'll try to define them for you. We start with WebAssembly, which started out running in the browser; asm.js was kind of the beginning, I think. It's a computer language, the fourth language of the web: a binary instruction format and a text format for a stack-based virtual machine, and it's a W3C standard. Then we have the WebAssembly System Interface, WASI: the idea that we're going to run WebAssembly outside of the browser, in something almost like an operating system.
Well, WebAssembly and WASI together are kind of an operating system unto themselves. WASI stands for WebAssembly System Interface, and it's managed by the Bytecode Alliance; Microsoft, Amazon, lots of companies are members of that. Then we have an implementation of WASI, also managed by the Bytecode Alliance, called Wasmtime. That is one implementation of WASI, of server-side WebAssembly. I'll be listing some companies and products, and they don't all technically use Wasmtime, but it's the same idea. The other thing to keep in mind is that with WASI and Wasm it's almost like we're starting back at the beginning of an operating system. Think of the very beginning of Linux, which is kind of famous. We're really starting right from the beginning: how do we do networking? How do we do file system access? How do we do memory access, that kind of stuff? So it's still early days in a lot of ways, but as you'll see, there are use cases you can actually use today. Then, the big limitations right now: there's no multi-threading. There's no outbound networking within WASI; you can't make a call out. So if you just try to use the S3 API or the DynamoDB API, it's not going to work. And garbage collection has to be completely re-implemented: if you have a language with garbage collection, you're going to have to completely re-implement that for WASI. So it's a challenge; it's not like Docker. So really this is a talk comparing Docker and WASI, and when I say WASI, just think server-side WebAssembly; it's not 100% accurate. So, Docker. I hope everybody is fairly familiar with how Docker works.
It's built on a bunch of separate Linux technologies to give you what looks like a virtual OS. But it's not, because it's got the host OS. And if you can run it on Linux, provided you give it enough memory and CPU and you have enough disk space, you can run it in Docker. People can do some horribly inefficient things with that, but you can lift and shift: I have my thing, and now it runs in Kubernetes, right? And it takes forever to spin up, and it's impossible to figure out what went wrong, and the logs are difficult sometimes, but you could do that and call it done. It also has some security drawbacks, at least relative to WASI. Because you're building for Linux, it assumes access to file systems, memory, the networking stack, all these kinds of things, which, if you're trying to run in a highly compliant environment or run untrusted code, is going to give you issues, and you really need to go back to host-level virtualization. WASI, as I said, you can think about like an operating system. It's funny: if you just Google the definitions for these things, you're going to see them defined completely differently on Wikipedia versus the Bytecode Alliance. Technically it's an API, or an ABI, but really think of it like a virtual machine or an operating system unto itself that is completely segregated. Kind of like a browser tab, right? If you have your banking app and some other app in your browser, you don't worry about one escaping and getting into the other. It's that kind of isolation. Okay. Sorry, we're at eight minutes and I'm still doing the intro, trying to explain things. Hopefully this is useful.
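That deny-by-default, capability-style isolation can be sketched in plain Rust. This is a toy model, not the actual WASI API: the host pre-opens resources, and the guest can only reach what it was explicitly granted.

```rust
use std::collections::HashMap;

// Toy model of WASI-style capabilities: the "host" pre-opens resources
// and the "guest" can only reach what it was explicitly handed.
struct Host {
    files: HashMap<String, Vec<u8>>,
}

// The capability handed to the guest: a view over one pre-opened prefix only.
struct PreopenedDir<'a> {
    prefix: String,
    host: &'a Host,
}

impl<'a> PreopenedDir<'a> {
    fn read(&self, path: &str) -> Result<&'a [u8], String> {
        if !path.starts_with(&self.prefix) {
            return Err(format!("capability violation: {path}"));
        }
        self.host
            .files
            .get(path)
            .map(|v| v.as_slice())
            .ok_or_else(|| format!("not found: {path}"))
    }
}

// The "guest" never sees Host, only the capability it was granted,
// so reads outside the pre-opened prefix are refused by construction.
fn guest(dir: &PreopenedDir) -> (bool, bool) {
    let allowed = dir.read("/sandbox/input.txt").is_ok();
    let denied = dir.read("/etc/passwd").is_err(); // outside the pre-open
    (allowed, denied)
}

fn main() {
    let mut files = HashMap::new();
    files.insert("/sandbox/input.txt".to_string(), b"hello".to_vec());
    files.insert("/etc/passwd".to_string(), b"secret".to_vec());
    let host = Host { files };
    let dir = PreopenedDir { prefix: "/sandbox".to_string(), host: &host };
    let (allowed, denied) = guest(&dir);
    println!("{allowed} {denied}"); // true true
}
```

The point is structural: unlike a Linux process in a container, which starts with ambient access and gets walled in, the guest here starts with nothing and can only use handles it was given.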
If it's not, just, you know, okay. All right, so Rust is the universal language of WebAssembly. There are a bunch of open source projects, I'll show them on the next slide, and every single one of them you can get up and running with Rust. From there it's kind of a mixed bag. C++, I made the top row big on purpose. As I said, at Adobe we already use C++, and there are already WASI C libraries for Wasmtime. But you're not going to see every single project have a C library you can bring in. Then there's TinyGo, which has really come a long way recently; it can do pretty much everything, I think, except reflection. If you're a Go person, which I was because I used to do a lot of Kubernetes, you can't do reflection, but I think they're working on that. Otherwise, if you have a Go library, you can compile it to Wasm. And then Python, Swift, Ruby. .NET is actually really cool because they had to do garbage collection, so your WebAssembly module is going to carry an extra, I think, 30 megabytes, but you can run .NET, and they've done a lot of work; it's been really awesome to see Microsoft doing that. Also, Fermyon has a really nice page listing all the languages and what they support and what they don't. So, really cool. Okay, here are the platforms. In my last talk, all of the demos were done with wasmCloud, which, as of last year, was the easiest one to get started with. And I meet with that team roughly every week and we talk about their roadmap, and they're kind of the farthest along. So I would recommend them if you're really trying to run these examples in Kubernetes; they've done a lot of work getting that running in Kubernetes.
The other one I've used a fair amount recently, because I wanted to change up the demos a little, is Fermyon Spin. They don't have a Spin logo yet, so I just used the Fermyon logo; Radu's got to get somebody on that. And then there are others, and these are great products; I just haven't used them extensively. They all have their niches; they're not all trying to compete to do the same thing, so depending on your use case you should read up on them. So yeah, this time I'm going to do my examples with Spin. Last time it was wasmCloud, so sorry if Taylor's here, or Lee. But I'll mention wasmCloud a few more times. Okay, first example, and I think this is what you're all here for: just replace Docker. And I did. I don't want to repeat myself too much, because I went through this in depth in my talk last month at KubeCon Europe, but I wanted to come at it from a really high level: why Docker can be bad, especially with Java. Java, it's unfair, right? This is like bear baiting or something; you're going to compare WebAssembly to Java? That's an easy target. So anyway, yes I am, and you know why? Because that's what we run at Adobe. So it's incredibly unfair and incredibly realistic. If you pull down the latest Java 11 image from Oracle on Docker and scan it with one of the scanning tools, which I did, with JFrog Artifactory: 672 vulnerabilities. And if you have FedRAMP, every single one of those you're going to have to track and talk about.
That's just for one Docker image, and if you're actually running at scale, running a real product in FedRAMP, you're going to have a lot more than 672 CVEs, because you're going to be running hundreds of images. So that's bad, right? What else? My example uses the very smallest actual Adobe Sign microservice, and the smallest heap we can get to is three gigabytes. That's as low as we go. The artifact itself comes in at 296 megabytes, which is actually pretty small; again, we have ones that are way bigger than that. And it takes 100 seconds to start up. So basically it's not suitable for a functions-as-a-service application. You're just going to have this thing running, in all your stage environments, all your production environments, your dev environments, all the time. Compare that with the rewrite, which I did: I rewrote it in Rust and compiled it for Wasmtime. You're still going to have library vulnerabilities, so if you use a library vulnerability tool you'll still see those, but you're not going to have any OS vulnerabilities, because there's not really an OS, or there's a pseudo-OS. It's just going to use enough heap to do the processing. The example is background removal from an image, so it uses just enough memory to load that image plus whatever the code needs. So it's as small as you can theoretically get on heap, and it's written in Rust, so you have a very fine distinction between what's on the heap and what's on the stack. The artifact itself is three megabytes, and I don't think I even optimized it, and it starts up in, I think, microseconds; the official line is milliseconds, but it's really hundreds of microseconds to single-digit milliseconds. So it's a functions-as-a-service-capable thing.
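For reference, a Spin application like this is described by a small manifest. Something along these lines; the names and paths here are hypothetical, and the manifest format has changed across Spin versions, so treat this as a sketch rather than the exact file from the demo:

```toml
spin_manifest_version = "1"
name = "remove-background"
version = "0.1.0"
trigger = { type = "http", base = "/" }

[[component]]
id = "remove-bg"
# the compiled WASI module; ~3 MB after a release build
source = "target/wasm32-wasi/release/remove_bg.wasm"
[component.trigger]
route = "/..."
[component.build]
command = "cargo build --target wasm32-wasi --release"
```

`spin up` reads this manifest, instantiates the module, and routes HTTP requests on the configured route to it; there is no base image, init system, or OS layer to scan.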
So now it'll come up, it'll execute, and then it'll finish. Okay, now I'm going to exit and we'll do a demo. Last time I chickened out and did a video; this time I'll do it live. But first off, I'll show the code. Can anybody see in the back? I can make it bigger. All right, bigger, bigger. How are we doing? Okay, so this is the Spin example. There are just a few little idiosyncrasies between the implementations. On the second line here, I rewrote the core business logic of the Adobe Sign microservice that removes a background. Really, this is the most micro of microservices. I pulled in some Rust libraries, and then, this one's for Spin, so I pulled in the Spin SDK. In the wasmCloud version I pull in some wasmCloud libraries, and they don't use the anyhow Result, but that's pretty much the only difference. I load in the image, it's really basic; the whole point is that it's easy. I take an image, run it through the background removal, and then basically just send back some HTML. So it's dumb, but it's easy, and this would actually be the application. So here we go, to Postman. Oh, sorry, let me make sure I'm in the right directory. I'm not in the right directory. Good, better than just falling on its face. Okay, spin up. Even with the most dead-simple of demos, I still screw it up. Okay, so I'm going to port 3000, and I'm going to pick an image. This is the one we always use, so don't read into it: this signature, and we remove the background from it. It's good because it's got an O, so your contour tracing has to work. So I'm going to pull that in, send it to port 3000, and get back the image with the background removed.
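The handler he describes has roughly this shape. This is a self-contained sketch, not the real code: the Spin SDK's HTTP types are stood in by plain structs, and `remove_background` is a trivial placeholder for the actual contour-tracing logic.

```rust
// Stand-ins for the HTTP types the Spin SDK would normally provide.
struct Request { body: Vec<u8> }
struct Response { status: u16, body: Vec<u8> }

// Placeholder for the real background-removal algorithm: here we just
// clear every "light" byte, treating values above a threshold as background.
fn remove_background(image: &[u8]) -> Vec<u8> {
    image.iter().map(|&px| if px > 200 { 0 } else { px }).collect()
}

// The whole microservice: take the request body, process it, wrap it in HTML.
fn handle(req: Request) -> Response {
    let cleaned = remove_background(&req.body);
    let mut body = b"<img src='data:image/png;base64,".to_vec();
    body.extend_from_slice(&cleaned); // real code would base64-encode here
    body.extend_from_slice(b"'>");
    Response { status: 200, body }
}

fn main() {
    let resp = handle(Request { body: vec![10, 250, 30] });
    println!("{} {}", resp.status, resp.body.len());
}
```

The real version differs mainly in the attribute macro that registers `handle` as the Spin HTTP entry point and in the image-decoding crates it pulls in; the request-in, bytes-out shape is the same.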
And so, oh yeah, nobody clapped last time. I think it's because I kept doing it; I did it on Fastly, and I meant to do it in the browser but didn't, and I did it with wasmCloud, so people were just seeing it over and over again. But anyway, there it is: 25 milliseconds, which is roughly the same as Java, right? If Java is already hot and running, it's pretty good; it's just expensive and wasteful. I think this is actually sometimes a little faster. But in that time it started up, pulled in the image, ran a fairly computationally intensive series of algorithms, and gave me back the image, all in 25 milliseconds. So, really powerful stuff, and that's a great stateless microservice: if something doesn't have to talk to anything else, it just runs a job; you give it something and it gives you something back. The future is now. Okay, so that's the stateless server, the big replacing-Docker case. Yes? So you mean in terms of performance and artifact size and that kind of stuff? Yeah, it would be close. It's not as good, and it would have all those vulnerability issues, and you couldn't run it on edge compute or on little devices and things like that, because Docker just needs a lot. It needs a Linux OS. Oh, okay, so you're saying not Docker. This isn't machine code versus Wasm; it's a container versus a container. So yeah, if you had an optimized system and you built Rust for your MIPS or whatever and ran it natively on a real-time OS, then yes, nothing's going to beat that.
But this is a container-versus-container comparison, and if it weren't for the fact that there's something alluring and capable about a container, Docker wouldn't be what it is; you can do things with containers that you can't do just running machine code natively. So yeah, you're right. Yes? Yeah, so there are limits depending on the system you're running it on. Can you... I don't know, I'll have to get back to you. It depends on the orchestration system you're using, and they have to implement it. This was Spin, which would typically be orchestrated with HashiCorp Nomad, and I think Nomad can set those kinds of limits, but that's a Nomad question, and I don't know anything about Nomad beyond the basics of running stuff. Great questions. All right, so it gets a little tricky once you want to start calling a database or pushing to S3. If we have time, I could go over some of the newer stuff that's happening, but it's not really in the spirit of the talk, because it's fairly technical, and I'm just a user. But with wasmCloud, we have the ability, and I'm using "we" loosely, I'm not part of wasmCloud, to push to S3, and they have MySQL and Redis. They have kind of a different model: that stuff is not implemented in Wasm, but it connects nicely through NATS to your Wasm module, and you can talk to those things. I think everyone's got that in the works, because it's clearly a problem, right? If we're actually going to use this for real, we have to be able to talk to things, to make outbound calls, and right now this is implemented on a platform-by-platform basis, although there's some cool stuff in common that they can all use. Okay, and this one is really particular to Adobe: C++. How many people here would want to run C++ server side?
Okay, so at Adobe there'd be hands raised, and here's why. I almost hesitated putting this in, but Adobe needs it. The way we got here: we started out thinking we'd just run C++, get some sort of C++ web server, and run it. And it didn't work at all, and we had some really genius C++ programmers. You'd have memory leaks, you'd have crashes. The multi-threading and multi-processing were very difficult to debug, especially just keeping things up and running. So what they said was: we'll run a Java web server and use JNI, the Java Native Interface, to spin up C++ workers. And this is just not great; you get all the worst of Java, kept around just to make C++ work. But now we compile C++ directly to WebAssembly, use one of these orchestration platforms, and we can just run C++. For whatever that's worth. All right, this one's actually cool, and if you're in the web products world this is actually really important. It's really important to us: collaborative editing. And notice I took Docker off the slide, because Docker doesn't run in people's browsers, and it doesn't run, at least not yet, on edge compute. Right now that's Fastly and Cloudflare, but I think we're going to get there. This is an open source talk, but Fastly does use Wasmtime, so excluding them would be like saying "don't talk about AWS to run your code." Fastly's using open source products, so I kind of have to include them. Also, "edge" is tricky because it means a thousand different things. By edge I mean edge compute, content delivery network edge compute. And then data center, just whatever:
you know, hey, it's in AWS, it's your private data center, wherever your VMs run. Okay, the big idea here is that we now have a new tool in how we do web applications, and it fundamentally transforms how we think about things. The idea that we can have one thing and run it in all three places is now more compelling, because we have three different places to run it. And by the way, the whole distinction is eventually going to go away. But for now we want to take advantage of the strengths of the various platforms. So the problem statement: we want to coordinate people's browser sessions so they can see what other people are doing on a canvas, editing a document, something like that. Well, if every change has to go all the way back to a data center, sometimes halfway around the world, that's not a great experience, right? Especially if it has to spin up a worker, and if that worker's written in Java, it's going to be expensive for us, and it's going to be hard to isolate everybody's sessions in a way that guarantees nobody can get into somebody else's session. And we're going to have to do a lot of heavy computation to decide whose edit shows up when, because of this high latency. So this is maybe the best use case for edge compute: we have all these changes to the canvas, we call them deltas, all happening at the edge. And there are products around this, offered by these CDNs, that you can look up. It's cheap, because networking is cheap at the edge, and it's low latency, so the customer experience is really good. And we don't have to have a pipe of megabytes of changes from every single user all the way to the data center, which gets really expensive once you get into tens of millions of users.
So we batch it up at the edge and just send occasional snapshots, occasional batches of deltas, back to the data center. The customer experience is better, we save money, it's great. This is something that's out there, and I'm really excited about the possibility of it. I think I covered everything here. Oh, and storage is inexpensive in the data center relative to the edge. Okay. All right, so this is the other one that's really compelling for me: machine learning. You may not realize it, but if you use Adobe products, there's a lot of machine learning. We've made a heavy investment in machine learning over the last six, seven years. In Photoshop, a filter, edge detection, the lasso tool, all these kinds of things, or in Acrobat: those are all machine learning models. We have lots of little machine learning models. The classic one for me, because I came from Document Cloud: if you have the Acrobat phone app, Android or iOS, there's this teardrop kind of icon, and if you tap it, it rearranges the PDF so it shows up nicely on your phone. It isn't a great example, but it's nice and generic, so that's what I used. You can see: oh, this image that was up on the right, I put it in the center, I make the font bigger. It can do a lot more than that; it's actually pretty compelling. But the problem is that many people around the world, and most people in certain places around the world, have phones that can't do it. So you're always going to have a significant percentage of the total addressable market that can't do what your app wants to do. And the answer right now would be: well, we'll run that on the server side, so it goes halfway around the world and comes back. And that's not great.
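To make the edge-side delta batching from the collaborative-editing example concrete, here is a toy sketch in Rust. The types and the flush threshold are invented for illustration: deltas fan in at the edge node, and only periodic snapshots cross the expensive link to the data center.

```rust
// Toy model of edge-side delta batching: per-document edits accumulate
// at the edge, and only occasional snapshots go back to the origin.
struct EdgeBatcher {
    deltas: Vec<String>,  // pending edits from nearby users
    flush_every: usize,   // snapshot threshold (invented number)
    snapshots_sent: usize, // round trips the data center actually saw
}

impl EdgeBatcher {
    fn new(flush_every: usize) -> Self {
        EdgeBatcher { deltas: Vec::new(), flush_every, snapshots_sent: 0 }
    }

    // Every user edit lands here at low latency; most of the time
    // nothing leaves the edge at all.
    fn apply(&mut self, delta: &str) {
        self.deltas.push(delta.to_string());
        if self.deltas.len() >= self.flush_every {
            self.flush();
        }
    }

    // One batched snapshot replaces what would have been N origin round trips.
    fn flush(&mut self) {
        if !self.deltas.is_empty() {
            self.snapshots_sent += 1;
            self.deltas.clear();
        }
    }
}

fn main() {
    let mut edge = EdgeBatcher::new(100);
    for i in 0..1000 {
        edge.apply(&format!("delta-{i}"));
    }
    edge.flush();
    // 1000 edits, but only 10 snapshots crossed the expensive link
    println!("{}", edge.snapshots_sent);
}
```

The real systems also merge concurrent deltas before snapshotting; the sketch only shows the traffic-shape argument: edits stay cheap and local, and the origin sees two orders of magnitude fewer messages.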
But if we can run this stuff on the edge, then we can have a consistent experience for everybody, a consistent low-latency experience that doesn't demand much from the user's device. Okay, so this slide is similar to the last one, and some of the words are the same, but it's different because we're talking about machine learning. In the browser, all things being equal, we want stuff to run in the browser. We don't pay anything to run something in somebody's browser, other than CDN costs for them to download it. But that download time is significant; people don't like to open a webpage and have it take five minutes for something to show up. So there's a limit to how much we want to pack into the browser: it's high compute, but total space is limited. Edge compute is where it fits in when you have a lot of models that are fairly small and run single-threaded nicely; if you combined them all, you couldn't download them into somebody's browser, but you can run them in a low-latency environment, and it's a great place to do that. And then you still have the data center for the really heavy things, the things that need GPUs for model inference. But once again, that's an expensive network hop, and not a great user experience, depending on where the user is. Okay, so now, demo again. If anyone's still with me, I'm going to demo again, and I'm going to pick the correct tab this time, and I'm going to put Grace Hopper in here. Oh, sorry, I didn't even explain what this model does. It's part of the Content Authenticity Initiative, so it's going to fingerprint the image, to see if anything changed. So if somebody made Grace Hopper look different, we'd know.
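The fingerprinting service has roughly this shape. This is a self-contained sketch: the tract model-loading and inference calls are replaced by a toy embedding function, since the real model is a 100 MB network, and the feature computation here is invented for illustration.

```rust
// Toy stand-in for loading an ONNX model with tract: in the real demo this
// step reads ~100 MB and dominates the request time; here it's a closure.
fn load_model() -> impl Fn(&[u8]) -> Vec<f32> {
    |image: &[u8]| {
        // Fake "embedding": bucket byte values into a small feature vector.
        let mut features = vec![0.0f32; 4];
        for &px in image {
            features[(px / 64) as usize] += 1.0;
        }
        features
    }
}

// The handler: receive image bytes over HTTP, run inference,
// return the fingerprint vector.
fn fingerprint(image: &[u8]) -> Vec<f32> {
    let model = load_model(); // production code would cache this across requests
    model(image)
}

fn main() {
    let fp = fingerprint(&[0, 10, 70, 130, 200, 255]);
    println!("{fp:?}");
}
```

The shape matches what he shows next: load the model, make it runnable, pull the image out of the request, run inference, and return the vector; only the middle step is faked here.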
So here's the code. This uses the tract Rust library. Do I have to make this bigger? Let me just make it bigger. Okay, it uses the tract Rust library, and it's really just using tract; there's nothing super special here. I load up the model, and that's actually significant: the model load time is not awesome, especially on edge compute. I mean, we're talking milliseconds, but it's like two thirds of the total time. So we load in the model, we optimize it and make it runnable, and then from the HTTP request we get an image and run the inference on it. Really basic. Oh, and by the way, I did this with wasmCloud last time; both examples were wasmCloud. This time it's Spin. Done. All right, and I'm running it already; I loaded the image ahead of time. Okay, and this is going to spit out a bunch of numbers, hopefully. Yeah, there we go. So here's a big vector, and that's the content fingerprint: a machine learning model spun up and ran, and it took about a second. That's good, depending on how it fits in the workflow, and it's definitely better than sending it halfway around the world and back. Sorry, the question was: is the model just too big to run in a browser? Yes, the machine learning model is too large; it's 100 megabytes. So that's that demo. Okay, so that's where we're at: half an hour in, we have six minutes left. We can do Q&A, or I can do a brief overview of the component model. Well, yes: so you use an orchestrator, one of these platforms. Actually, that's a great topic. I don't know if anybody is familiar with Krustlet. There was this thing, Krustlet, from DeisLabs at Microsoft; some people here were part of that team. That was something they tried: we're just going to run WebAssembly inside Kubernetes.
It's just another container; we'll have Kubernetes orchestrate it. And it didn't work out. It's a fine project, you can still find it and run it, but it's just too different. WebAssembly and WASI containers are just too different from Docker, and there are a lot of assumptions in Kubernetes around running a Docker container, some sort of runc runtime. Part of it is that this thing starts up in microseconds while a Docker container spins up in milliseconds; one of these WASI modules could start up, do its thing, and shut down before Kubernetes even knows it ran. That's part of it, but there are just a lot of assumptions, and it doesn't really work. So, you can run wasmCloud within Kubernetes; in my demo last time I actually did. wasmCloud has some tooling that will publish pseudo-services to Kubernetes and route the traffic to the actors, and I did that in the demo last time. So you can play with Kubernetes. Actually, Nomad has been found to be much more useful with WASI, for reasons I've heard spoken but haven't dug into. Yes? Well, no, there are efforts. I don't know, maybe I'm wrong, but I can't think of how you would get it into the JVM. You could take Java and do what .NET did: break out the garbage collection and rewrite it for WASI, because the Java VM is really kind of an OS in its own right, and then have Java, with something like GraalVM, compile to WASI. They are doing work on Java-to-browser Wasm; Oracle is not working on Java-to-WASI at this time. Does that answer your question? Got three minutes left. More questions? Yes. Yeah, sorry, I have to repeat the questions so people can hear. The first question was: why not Kubernetes?
The second question was: can we put it in the JVM? The third question is: is there an open source standardization effort trying to solve these problems communally? And yes, there is. If you build your own platform you can fill in the gaps, but I think all of these platforms are hoping for standards; at least most of them attend the standards committees and are trying to get the standards going. And there has been progress; it's just always going to be slower, so if you're going to make a commercial product now, you fill in the gaps. Yes? Yeah, so there's something called Wasmer. Oh, sorry, the question was: if you had a C++ server and you wanted to run Wasm modules within it, how would you do that? There's a project called Wasmer, and it can be embedded in a number of languages, where you can basically embed Wasm. Wasm, you know, is a compile target, like assembly, but you can also write raw Wasm, just like you can write assembly; it gets turned into bytecode and run by the stack machine, the VM for lack of a better term. So Wasmer has a bunch of languages supported where you can just quote out some Wasm, or bring it in as a string, and run it. I'm pretty sure there's C++; there's Java, there's Python, there's Perl or whatever, there's everything. So that's different, and sometimes you might get confused if you Google Wasm in a given language: Wasmer will come up, but that's running Wasm inside a different runtime; it's not compiling that language into WebAssembly. So that's it, yeah. Yes? Yeah.
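Since Wasm is executed by a stack machine, here's a toy illustration of that idea in Rust. This is not the Wasmer API and not real Wasm bytecode; it's a minimal stack interpreter showing what "run by the stack machine" means.

```rust
// A toy stack machine in the spirit of Wasm's execution model.
// Real Wasm has typed instructions like i32.const and i32.add;
// these variants are simplified analogues.
enum Op {
    Const(i32), // push a constant
    Add,        // pop two values, push their sum
    Mul,        // pop two values, push their product
}

fn run(ops: &[Op]) -> Option<i32> {
    let mut stack: Vec<i32> = Vec::new();
    for op in ops {
        match op {
            Op::Const(v) => stack.push(*v),
            Op::Add => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a * b);
            }
        }
    }
    stack.pop()
}

fn main() {
    // (2 + 3) * 4, written the way a stack machine sees it
    let program = [Op::Const(2), Op::Const(3), Op::Add, Op::Const(4), Op::Mul];
    println!("{:?}", run(&program)); // Some(20)
}
```

An embedder like Wasmer wraps exactly this kind of loop (heavily optimized, usually JIT-compiled) behind an API your C++ or Python host program calls into.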
I didn't, because it's hard to compare. I was focusing on running on the edge, and there's nothing to compare it to there; it's really the only game in town. And the other thing is: what's my reference? Am I referencing a 64-core monster with two GPUs? So it's going to be slower; you're running this thing basically on a Raspberry Pi, so yeah, it's going to be slower than that. But no, I didn't; it's really more just, here's the number, is it good enough? Okay, I think that's it. Oh yeah, one last one: requirements to run WASI? It runs on pretty much every microarchitecture out there, and the whole idea is for it to run at near-native speeds. So it's really widely, widely supported, and that's because browsers have to be widely, widely supported. I don't know if I repeated the question; it was, what platforms is it supported on? Pretty much everything. All right, awesome. That was really good. Thank you for clapping.

Oh, that's better with the microphone, isn't it? Let me try to do that again. Hello, I'm Stéphane Graber. I work as the project lead for LXD at Canonical, and today I'm going to be talking about running very small clouds, either for private use or for whatever your company might be looking into doing. So, a bit of history: how did I get here? I've been self-hosting my stuff for a little while now. That's my domain registration; I got it in 2003. And I've been self-hosting a load of different services over the years. I started doing that in college, pretty much: a bit of websites, email servers, some game servers, that kind of stuff. Web hosting for friends and family.
And eventually that turned into hosting for open source services, as I was doing more and more open source at the time. I started doing it on this, which is actually a Pentium 1, 100 megahertz or so. It was just sitting at my parents' place and was doing a good enough job back then to host all of that stuff. Then I got a bit more ambitious and needed a lot more things hosted, so I moved on to mostly renting dedicated servers from a variety of providers. A bunch of those were mostly European-centric; that's where I lived at the time. And I kind of finished doing that using mostly OVH servers in.