that container, communicate with it, and manage it, regardless of where it's running. And it has already found very high adoption: 93% of the organizations in the cloud native survey use containers. That is a huge number, but containers by themselves don't make the magic happen. What you also see is that Kubernetes has emerged as the container orchestrator of trust, the de facto standard, responsible for setting up and running complex systems in containers. And we see that Kubernetes is deployed basically everywhere, on any kind of infrastructure: globally scaling cloud providers, local and regional infrastructures, bare metal, the edge, even airplanes. So theoretically there are no limits, although sometimes it may not make much sense to use it. And if you take a look at the European market, which is very interesting, we see an adoption of a little over 90%, and compared to hypervisors, which have a market share, or market adoption, better said, of 92%, we are very close. In fact, in the last seven years Kubernetes has reached more or less the same share of the market as hypervisors did over the last 20 years. And this is crazy. So containers and Kubernetes, each on its own and together as one big, nice package, have drastically influenced the information and communication technology market. It was actually a big bang for a whole new market based on open source projects. It is boosting the open source ecosystem drastically, it is continuously bringing in new features and new ideas, and it provides thought leadership in many ways. It has also changed the way we see infrastructure: nowadays we talk more about infrastructure as applications rather than "infrastructure for the rest of my life". Things like security and observability have actually become fun to implement and to play around with.
It's not a burden anymore; you can do so many cool things with it, and it's not as complicated as it was in the past years. And Kubernetes abstracts away the hypervisor and the cloud service provider's infrastructure as a service. This can be very valuable for heavily regulated markets: banks and insurers often need to define an exit strategy that describes how they would migrate away from a cloud provider. A one-to-one migration from one provider to another would not be that sustainable in the end, so Kubernetes as an abstraction layer can be a very reasonable solution for it. But Kubernetes also creates a knowledge gap. This is the big black hole that enterprises and corporations fall into when they try to implement huge Kubernetes installations, and it comes right behind the missing knowledge about cloud providers. So there is a big issue with adapting to it, because it is a continuously moving, very big, growing environment where you cannot really walk at the same speed as the community does. So what's next? Solomon Hykes, one of the inventors of Docker, said back in 2019 that if Wasm (WebAssembly) and WASI (its system interface specification) had existed in 2008, they would never have created Docker. And this is a big statement from someone who practically invented something that changed so much about how we work and how we implement applications. So what is WebAssembly, or Wasm? There are many aspects we could look at, but three stand out. WebAssembly is a kind of intermediate layer: on the one side it supports various programming languages, on the other side it supports very different processor and kernel architectures. And it allows you to basically abstract these two things away from each other.
So if you have a new chipset and you need to support it, you actually just need to make your WebAssembly runtime able to execute on it, and then you can ship any application to it. It is very secure by default. If you build a Wasm module and execute it without telling it what it is allowed to do, it cannot do anything. It runs, but it does nothing, because it is an encapsulated binary and there is no operating system inside. I would not say impossible to hack, but there is not much there to hack. And this is exactly the opposite of what a container does: a container by default is allowed to do everything, while a Wasm module by default is allowed to do nothing. And it is fast, blazing fast, way faster than you would expect. In comparison to most of the containers we see on the market, it is more than a hundred times faster in startup time. That makes it, for example, a perfect fit for serverless: where a container may still need a few hundred milliseconds until it has started, a Wasm module needs only very few milliseconds at all. This is also because the footprint of a WebAssembly module is very, very small. We are talking about megabytes, not gigabytes. Most WebAssembly modules we see are around two, three, four megabytes, while in enterprise environments golden container images are often around a gigabyte or more. So is WebAssembly the new paradigm? Maybe. One spoiler: I think yes. There are still a lot of things that need to be developed, but you can already see how much drive it has, and the step from a containerized environment to a WebAssembly environment is not that difficult. Especially not when you think about the use cases. You have tools like Figma or Lichess which run as WebAssembly modules, totally encapsulated, and still guarantee good performance.
This is outstanding because Figma, for example, is very powerful in what it can do, but if it ran all of that server-side for every customer, they would need a very large infrastructure. They don't, because execution happens as a WebAssembly module, basically in your browser, and that is getting quite interesting. And I know this is the inside-the-browser story while our topic is outside the browser, but we need to start somewhere. Then we have plugin systems like the Envoy proxy, where WebAssembly lets you implement modifications and configuration while keeping a trusted environment, because each plugin is isolated from the others. This slightly higher level of isolation also leads to embedded sandboxes like the one in Firefox, which prevent third-party libraries from exposing you. We also see adoption in blockchains: the Internet Computer protocol from the DFINITY Foundation utilizes WebAssembly, and implementations like CosmWasm use WebAssembly for running smart contracts. And obviously we are going to talk about containerization, with Krustlet, wasmCloud and WasmEdge. As mentioned, serverless platforms are on the rise, and I think WebAssembly will really make the difference there, because on the one hand it allows you to execute applications very fast, and on the other hand it reduces costs for you and for the infrastructure owner. So it is a win-win situation, and it is more secure. In the containerization environment we see, for example, the Krustlet implementation, a cool project that allows you to build nodes where you can run WebAssembly modules. You just have to exchange the runtime for a Wasm runtime, run the Wasm module there, and basically replace the kubelet with a Krustlet, and there you go. The downside is that you cannot execute containers and Wasm modules on the same host. That's not possible.
So you need to maintain multiple nodes, multiple node groups, which can cause some additional maintenance effort at the moment. That is where WasmEdge comes in, because it can run alongside container images and plugs into the existing OCI and CRI runtimes, so that you do not have to change anything. And this is very powerful, because here you can really run a container image and a Wasm image next to each other, without cutting deep into Kubernetes. It utilizes the Kubernetes environment as it is, but we will talk more about this in a second. And then we have wasmCloud, which is actually a new paradigm and a new platform. Here we can really speak of a new paradigm, because this is a system that can run on Kubernetes, but also on plain machines or anywhere else. All the different nodes are connected through something called a lattice, based on the NATS messaging system, which is there to implement the business logic between these components. So when we talk about Wasm and the potential of WasmEdge, we see that through the clear, strong move towards the Kubernetes environment, it can leverage all the existing tooling. That means it does not need to develop any big new functions and features, because everything is already there. On the other hand, WasmEdge is also able to run by itself as a modern web application runtime, if you like. It can host serverless functions, be extended, and even be, in a sense, embedded. Always be careful with "embedded", because it is not really embedded in the edge device; but because the runtime is so small, it can be brought to practically any kind of device, and WasmEdge is a very good alternative to very expensive embedded development. So all in all, WasmEdge brings the advantages of Wasm together with the existing ecosystem of Kubernetes, without being invasive in any way. And this is very strong.
And from our perspective this will also strongly drive the adoption of WasmEdge and Wasm modules in the cloud native environment. On the image level we also have some benefits, because, as you can see in the first black box, this is how you build a Wasm image: you build it FROM scratch, so there is no operating system inside, you just add your Wasm module and execute it on container start. That's it; all the magic happens. In addition, there is just one minor thing that needs to be done, and that is annotating the image with module.wasm.image/variant=compat. When the image is executed in a Kubernetes cluster, this tells the node: hey, you need to execute me on the WasmEdge runtime and not on the container runtime. So this is one minor dependency you need to keep in mind, but I think sooner or later this kind of annotation will also be supported by other build mechanisms, and then you will not need to rely on one single tool like buildah, which actually does a great job. So let's have a look at what all of this looks like. I have a machine running here; perfect, I directly dropped out of it, and I briefly need the password for the environment. All right, so we're back. As you can see, we have a few things here that are interesting to highlight. We have buildah, because it builds my container image; we have an HTTP server demo; we have Kubernetes and Docker running, and I will also explain why; and then we have a WasmEdge demo. First things first, we take a look into the WasmEdge demo, and there we have a demo. You can see there is a Cargo.toml, which means this is, in the end, a Rust program that gets executed. You can see it has the name echo and it will be built from the source main.rs file. So go into the source folder and take a look into main.rs.
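The main.rs of such an echo demo might look roughly like this. This is a sketch reconstructed from the description, not the talk's actual source; the `echo:` output prefix and the file names in the comments are assumptions:

```rust
use std::env;

// Join all CLI arguments after the program name into one echoed line.
fn echo_line(args: &[String]) -> String {
    format!("echo: {}", args.join(" "))
}

fn main() {
    // Build with:  cargo build --target wasm32-wasi
    // Run with:    wasmedge target/wasm32-wasi/debug/echo.wasm hi to you all
    let args: Vec<String> = env::args().skip(1).collect();
    println!("{}", echo_line(&args));
}
```

Nothing Wasm-specific is needed in the source itself; the wasm32-wasi target does the rest.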
You see there is nothing big or special going on: it basically just prints "echo" plus whatever arguments we throw at it. That's all it does. So going back, the first thing we need to do is run cargo build with the target wasm32-wasi, the WebAssembly System Interface specification. You could also just run a plain cargo build and get your Rust program as a normal native build, which is already quite helpful. What you then see directly: you get a Cargo.lock file, you have your source, and then we have the newly created target folder. Next to some other folders we have all the builds, and most importantly our echo Wasm build. And when we execute it, note that we now call wasmedge and say target/wasm32-wasi/…/echo.wasm hi to you all, and we get back the response "echo: hi to you all". This is a very stupidly simple example, obviously, but you have seen that it builds very fast; nothing special. However, this Wasm module is super reliable: it is quite small, it is a robust build, and it is really just your Rust program. You can do the same with JavaScript or Go or whatever, so there is nothing very special in it. I also have another nice demo there, the HTTP server. Again we have a Cargo.toml; this time it looks a little different, because here we pull in a few dependencies, like the http codec, the bytecodec, and wasmedge_wasi_socket. This is needed so that we can actually do network communication. And if we go to the source folder and take a look into it, you can see there is a little more code here, but in the end it is just an HTTP handler that does the same as before: it echoes whatever we throw at it and gives it back as output.
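Since the demo's source isn't shown, here is a sketch of only the request/response shaping such a handler needs, in plain Rust. The `echo:` prefix, the naive body parsing, and the sample request are assumptions; the real demo additionally runs a socket accept loop via the wasmedge_wasi_socket crate (whose types mirror std::net) so the whole thing compiles to wasm32-wasi:

```rust
// Build a minimal HTTP/1.1 response that echoes the request body back.
fn echo_response(body: &str) -> String {
    let payload = format!("echo: {}", body);
    format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
        payload.len(),
        payload
    )
}

// Naive body extraction: everything after the header/body separator.
fn request_body(raw: &str) -> &str {
    raw.split("\r\n\r\n").nth(1).unwrap_or("")
}

fn main() {
    // A hard-coded sample request stands in for the socket read.
    let raw = "POST / HTTP/1.1\r\nContent-Length: 11\r\n\r\nhello world";
    println!("{}", echo_response(request_body(raw)));
}
```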
So this time we again run a cargo build, but you may notice that it says release. It is not a debug build like the echo example before; it is built for release. Why? With debug you get everything: all the packages get thrown in and you can inspect precisely what is going on. But with a release build it gets reduced: everything that is not needed gets thrown out, and only what is needed from the libraries gets compiled into your code. So when we have our module, we can take a look. Oh, nice. You can see there is a little more going on here: again our Cargo.lock, then our target folder for the release build, the dependencies, then the wasm32-wasi folder, again for release, and finally our http_server.wasm. Just to be sure about what we are doing here, I give it some execution rights; depending on how your environment is configured, you may need to do this yourself. And then we simply run wasmedge again and give it the target of the HTTP server. Now you see nothing else is happening, so I also open a second terminal on this host, and we can perfectly disconnect this one. I forgot to reconnect; there we are. And now we can make a curl. Perfect, now we can make the curl; I think that is just the network today. You can see that we immediately get the answer from the Wasm module: echo: name=WasmEdge. We can also send something like "hey all", even with a typo, and you immediately get the answer back. It responds so fast that the terminal barely gets time to move to the next line. However, this is still, well, okay, cool: now we have an HTTP handler that can answer some requests, running either here as a WasmEdge module, or, and this is where it gets super interesting, we can build a container from it.
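The container definition about to be shown is as minimal as it gets. A sketch matching the talk's earlier description; the wasm filename is taken from the demo:

```dockerfile
# No base image and no operating system: the image holds only the module.
FROM scratch
ADD http_server.wasm /
CMD ["/http_server.wasm"]
```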
So let's see, we are at the root level. First we go into target/wasm32-wasi/release and create a Dockerfile, just like the example I showed you before in the presentation. We don't need anything more than FROM scratch, ADD the http_server.wasm, and execute it as soon as the container starts. That is all we have to do here. The next thing is to do a buildah build and pass the module.wasm.image/variant=compat annotation to the HTTP server image. And last but not least, we also push it to our container registry. Important here: this is just Docker Hub; there is nothing special going on. If we quickly go into it, there is the wasm-server image, pushed a few seconds ago with the Wasm annotation tag. Now, as you can see, and as I highlighted already earlier, this approach of building the images and using them in WasmEdge, or, as I will show you in a second, in Kubernetes, is very easy to use. So how can we use it? We have Kubernetes running here, or rather kind, Kubernetes in Docker, because I don't want to blow up the whole infrastructure too much. And as you can see (I always forget this alias exists because I am lazy about typing kubectl), we already have, for example, a wasm-edge pod running; I called this one wasm-edge-2. What we can do now is throw in a new container. You see the run command: never restart, called wasm-edge, from the image with the HTTP server Wasm annotation. Then we pass the annotation for our module and override some specifications, so that we get host networking and can talk to it. So now we also have this wasm-edge pod here, and the next thing to find out is the IP address so that we can actually call it. It is 172.18.0.2, so this time we curl 172.18.0.2 on port 1234, and then you again immediately get an answer to the curl. And again, we can change the curl payload as we like.
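Put together, the build-push-run sequence just described looks roughly like this. A sketch of the commands, not a verbatim capture of the demo; the registry user is a placeholder, and the pod name, IP, and port are the demo's and will differ in your setup:

```shell
# Build the image and mark it as a Wasm workload via the annotation.
buildah build --annotation "module.wasm.image/variant=compat" -t wasm-server .

# Push it to Docker Hub (replace <user> with your registry account).
buildah push wasm-server docker://docker.io/<user>/wasm-server:latest

# Run it in the kind cluster; the annotation routes the pod to the Wasm
# runtime, and hostNetwork lets us curl the pod directly.
kubectl run wasm-edge --restart=Never \
  --image=docker.io/<user>/wasm-server:latest \
  --annotations="module.wasm.image/variant=compat" \
  --overrides='{"spec":{"hostNetwork":true}}'

# Find the pod IP, then call the echo server.
kubectl get pod wasm-edge -o wide
curl http://172.18.0.2:1234/ -d 'hello world'
```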
Perfect. So let's Ctrl-C the command, quickly copy it, and say something like "hello world"; then you also get your "hello world" back. Again, it is blazing fast, and it is running in your Kubernetes cluster, which is quite awesome. There is nothing special in it: it is a standard kind cluster with a kind control plane. You can find the setup in the WasmEdge documentation for Kubernetes and Docker. It was actually done by two colleagues of mine, Sven and Christoph, kudos for your implementation. And you can see it is basically the kind-crun-wasm implementation from our Liquid Reply repo. If you want to get started with something like that, it is, I would say, at the moment the easiest way to have a local development setup around Wasm. So, slowly summarizing everything: WasmEdge. An awesome solution, isn't it? It can run alongside everything you know so far: on your machines, on Kubernetes, on the edge, so you have plenty of adoption possibilities here. The specification, as you have seen, is just a tiny Dockerfile, and it supports all the OCI and CRI runtimes and Kubernetes distributions, so there is no limit in it. And you can use the existing Kubernetes ecosystem, which is great, because there are hundreds of open source tools solving very specific problems, and you can use all of them. What you have to consider: it is an additional toolchain you need, for example buildah, to build and annotate the image. I think this will change in the future, but at the moment it is needed. For some use cases you need an SDK: if you want to build a full WasmEdge or full serverless implementation, you need some SDKs to make this possible. And it can sometimes lead to confusion that WasmEdge addresses so many problems: it can run on the one hand on Kubernetes, but also on VMs, and also on the edge. One and the same implementation for all of these use cases.
In my former role as an enterprise architect I always got a little skeptical about claims like that, I would say, but so far we have not discovered any big flaws in it. I can really say that it works. So from my point of view, WasmEdge would be the best choice to extend your current orchestration environment, almost regardless of where you are running. And it also extends your container landscape, because comparing Docker-like containers and WebAssembly, WebAssembly is better in performance. The resource footprint is awesome; it is nearly nothing. The isolation: think about big multi-tenant clusters; with WebAssembly this is less of a problem, it is quite safe. It is very portable, it can run wherever the runtime can run, and it is highly secure. Where it is not so good yet: not all programming languages are supported, and it is sometimes not that easy to use. My demo showed that it is actually quite easy to utilize, but when you are getting started, it takes some time until you have all the tools installed and so on; the setup is not that fast. From the manageability perspective, though, it is easy; Wasm is quite good to handle. So WebAssembly and WASI together have a very big potential beyond the browser. It enables new use cases, and it finally gives you the chance to run workloads in a Kubernetes-like way even where Kubernetes itself does not make sense, which is awesome. And I am pretty sure it will not substitute containers. Containers at the moment are replacing a lot of virtual machines, which also means heavy-lifting applications are moving to containers. Wasm is not made for those heavy applications; that is also clear, so it will not substitute them here. And on the other hand, we also see that Wasm can extend things like the Envoy proxy and Kubernetes itself.
At the moment we see some harmonization issues; there is a lot going on in the market, and this sometimes makes adoption a little difficult. But in the near future I believe the organizations involved will sort these things out, so that we end up with an ecosystem that is clean and tidied up as we move on. And as soon as the developer experience for WebAssembly improves, that will be a game changer. If it takes a few more steps forward and becomes more natural, then I also believe you will see a very good adoption rate throughout the ecosystem and across the different roles in development. So how do we see it in the end? Go with containers for the flow: whatever needs to be containerized, every kind of application, there is a big ecosystem, all languages are supported, and there is a kind of first-mover effect, where everyone has a big eye on it. On the other hand, build with Wasm for the future, because it is consistently fast no matter where you execute it, it is small, it is reusable, and it is very universal. In the direct comparison between Wasm and containers, you see that one is truly universal, reusable, and small, while the other is definitely better than a VM; there are use cases where the one or the other makes more sense at the moment. So both together are actually a good win, in my point of view, and will go along together into the future. So thank you very much, have a great day, and enjoy your time at the conference.