My name is Dirk Bäumer. I will talk about implementing a Wasm host for VS Code. I work for Microsoft; let me introduce myself a little bit. That's my GitHub handle. I do have a Twitter handle as well, but don't expect anything there; it's pretty stale, I hardly ever use it. I have worked on VS Code basically since it came into existence, which is 12 years ago. Before that I worked on Eclipse and Rational Team Concert, and before even that I worked on a development environment that very likely no one knows: it was called SNiFF+ and was produced by an Austrian company called TakeFive. So I have a long history in the IDE and editor space.

When you look at VS Code, most of you probably know the desktop version that we ship; I'm pretty sure most of you have used it in one way or another. Then there is, as most of you know, the Codespaces version, the one GitHub offers, where a VM is hosted in the cloud and you connect to it with the browser. At the end of the day, that one has a compute instance behind it, usually a VM, with the front end running in the browser. But there is another one that is not so well known, and that is vscode.dev. I don't know how many of you know about that one; a couple do. It is basically the web front end of the Codespaces version without a server behind it. That version is, as I said, serverless. It can talk directly to GitHub or other repository systems, and if you run it on your local machine, it can even talk to your local file system. So it's actually pretty handy: if you want to edit a file on your local machine and don't want to install VS Code, you can go to vscode.dev, open that file, and start editing it with the full VS Code editing experience.

Of course it has very limited language smarts, because there is no compute behind it. We do have decent support for JavaScript and CSS, the technology that is usually available in the web. But the support gets pretty limited when it comes to languages like C#, Rust, C, or C++, where you basically need a compute instance to get these language smarts or to do anything reasonable. The same is true for Python and PHP; they all fall short. There is no terminal behind it either: if you open the terminal, we politely tell you that, sorry, there is no terminal; you have to go to a Codespace to get one. And vscode.dev is basically what powers github.dev at the end of the day. I don't know if you know that: on the code page of a GitHub repository you can press the dot key on your keyboard, and that brings you into a nice web UI that lets you browse that repository online using VS Code. github.dev is basically a vscode.dev that is tailored for GitHub; we do a special handshake to get your authentication over, and so on. So when we started with this, our dream was always to get more features into the web than simply being able to edit a file a little, get an outline, and get a bit of syntax coloring.
Then Wasm came along and we started looking into it right away, and Wasm actually enabled us to ship vscode.dev in the first place, because we depend heavily on TextMate grammars. When we started using it, we put a lot of effort into the tooling to get the TextMate support compiled to Wasm, and basically all the syntax coloring in vscode.dev, even today, is driven by Wasm. But that was only where we started. When WASI came out, we thought: why don't we look at this and really see how far we can push it? Could we get to a state where language interpreters are up and running in the web UI, where the terminal can run additional tools, and where at some point we might even get language servers up and running? It would be really cool if at some point we could take the Rust language server, compile it down to Wasm, and execute it in the web.

The reason we need that is that when we started VS Code, we decided to give people the freedom to implement language smarts in the language of their choice, instead of forcing them to implement them in JavaScript or TypeScript. For example, the language smarts that power the C++ experience in Visual Studio Code are written in C++, the ones for Rust are written in Rust, and the ones for C# are written in C#. That made it very nice for people to provide these language smarts, but it makes it extremely hard for us to get them into the web, because in the web we have no execution environment for most of them. With Wasm and WASI we really hope that at some point, maybe in a year or two, we can get these language servers into the web, so that you get a decent experience for Rust or C++ or C# there as well.

And at the very end of this story, there is something else we would really like to do, because people keep asking for it: if you write an extension for VS Code today, you are basically stuck with JavaScript and TypeScript, and we do not have a very good isolation story right now. We have been thinking about whether we could get to a state where you could, for example, write a VS Code extension in Rust, isolate it in a Wasm execution environment, and only give it the APIs or features it needs to execute the extension code. So in the long run we even think of this as a sandboxing story.

We started with Python, and in contrast to the talks before, I will first give you the demo and then talk about how the whole thing works, because I think that will make things easier to follow and will nicely explain why we started with Python and not with something else. I hope you can see my screen. Okay, this is vscode.dev running; it's the Insiders version, you see it up there. I will explain later why I use the Insiders version; a couple of months from now it will work in the regular vscode.dev as well. The limitation is not really technical: we have to enable cross-origin isolation to get this up and running, which is a very viral endeavor, as some of you may know, and that is why we only enabled it for Insiders, because enabling it has some breakage ramifications. What you see here are files hosted in a GitHub repository. They are not local; they live in a GitHub repository, and you can open them in VS Code. These are two Python files.
The one imports a module from the other, and the first goal we had was, at the end of the day, to be able to execute that, without having any files locally, using a Python interpreter compiled down to Wasm. If I click this, it runs and prints "hello world" as expected. And to show you that this is not fake: if I start editing this, put a three in there, and run it again, it prints the three. So it looks very familiar, but it took a lot of effort to get something like this up and running. We had to get the Python interpreter compiled down to Wasm/WASI, we had to mount the file system so that the Python interpreter sees it and the files look like normal files to it, and all that stuff.

To give you a glance at how that looks: for the Python interpreter, we implemented this little web shell that gives you a shell on the Wasm file system. At the end of the day it gives you the interpreter plus some additional commands, which we got by compiling down the coreutils, the Rust rewrite of the Linux coreutils. They were actually pretty easy to compile to Wasm/WASI, and they run on the same API we provide for the Python interpreter. For every workspace we open, we mount the workspace files into a workspace folder for whatever runs inside Wasm, following what we do for Codespaces, and then we do the rest of the mounting. So if I do an ls -la here, I see the files that are in the workspace. I can do a cat on app.py and it lists exactly what you see in the editor. We even mount the Python interpreter in there: if you look at it, you see we have mounted Python 3.12 into the file system, and if you go into it, you see exactly what you would expect, all the Python files that ship with Python and that you need to execute Python at the end of the day. For every Python process we start, we mount all of this into the Wasm execution context and give it the right file access, so that the Python interpreter can do an fopen on /workspace/app.py and everything looks to it exactly as it would running on your local machine. It's more or less what Wasmtime does as well, but for us it was a little more complicated, because we had to map everything onto what VS Code provides; we have no operating system underneath, and a lot of implementation was needed to fulfill the WASI preview 1 specification.

To show you that it really goes that way, and this is why it is a little slower, I run everything out of sources here. Currently all these extensions are mounted into vscode.dev by a local extension server that serves them up, but these extensions could easily be published to the marketplace; we already have some of these Python extensions published to the marketplace for VS Code, so you can give them a try. And as you see, it really goes all the way down to the web server and asks it for all these Python files, because that is how the file system ends up mounted into the Wasm execution context.
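As a rough sketch of what that per-process mounting looks like from extension code (based on my reading of the @vscode/wasm-wasi package; names like Wasm.load and mountPoints should be treated as assumptions, and python.wasm is a placeholder):

```typescript
import { Wasm } from '@vscode/wasm-wasi';
import { Uri, workspace } from 'vscode';

// Sketch only: run a Python interpreter shipped with the extension and
// mount the open workspace folders so the interpreter sees them under
// /workspace, mirroring what the talk describes.
async function runPython(extensionUri: Uri): Promise<void> {
  const wasm = await Wasm.load();
  const bits = await workspace.fs.readFile(Uri.joinPath(extensionUri, 'python.wasm'));
  const module = await WebAssembly.compile(bits);
  const process = await wasm.createProcess('python', module, {
    args: ['/workspace/app.py'],
    // 'workspaceFolder' mounts the workspace folders under /workspace.
    mountPoints: [{ kind: 'workspaceFolder' }]
  });
  await process.run();
}
```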
When we did that, we also thought: now that we have this, it would be cool to have additional features as well. Then we discovered that the VMware folks had compiled Ruby and PHP down to Wasm/WASI, and I thought that as a proof of concept it would be cool to find out whether our host is good enough for those too. So I took the PHP Wasm file that the VMware people provide and mounted it in as well. And at the end of the day (I have to open up the right one) yep, you are able to run the PHP stuff as well, because it uses exactly the same techniques: I mounted in the same files, it maps to the same underlying VS Code file system, and so on. What we also did: if you have Python installed (let's go back to the workspace) you can execute the python command in the terminal. It tells me it has Python, and you can run Python interactively in that shell. It takes a little while because it serves all the files from my local folder. So you get a Python interpreter in the web, running on top of VS Code, with the VS Code file system nicely integrated into the VS Code terminal. And what is really cool is that none of these files exist on my local machine; they live in a GitHub repository, and with what we have in vscode.dev you can even start changing things, develop, and push your changes back to GitHub.

When we got here, we asked ourselves: is this really something we can ship at the end of the day? There were still quite some limitations around what we could do with Python, so we looked for a different use case where we could put this technology to work, get it out there, and get feedback from people, without being forced to invest too much into getting every Python feature up and running. I will explain a little later what the current Python limitations are. What happened is that, as a public preview, an education team inside Microsoft took this technology, including our Python Wasm file, and built an education course that lets you learn Python in the browser without any local install. You can go there, start it, and take the Python course. You need to sign up right now; it's a public preview. But at the end of the day it uses the same technology. They did a lot of work around it to make it look nicer: you don't get this terminal at the bottom, you get something prettier up here, but it is exactly the same thing you saw before in the terminal. They give you a Python REPL and you can do your Python work. You can even debug in there; they integrated the debugger, and all of this runs in the web without any server behind it, all based on Wasm and the WASI preview 1 technology.

Let's go back to the slides and talk a little about the limitations we have. Currently the CPython build we use is one provided by the CPython team. That's the nice thing about it: it's not something where I sit there and tweak Python to get it to compile down to Wasm. The CPython people currently treat this build as a tier 3 supported platform.
That means they don't stop the build pipeline if this target breaks, but they do actively try to maintain it. However, it comes with no threading support. They gave me a special build that does have threads, but I have not had the time yet to give it a try. What we do have is a thread implementation based on the wasi-threads specification for preview 1, so we do have thread support in our WASI layer. As soon as we get a Python build compiled with thread support, a lot of the limitations we have today will go away, because most of the limitations we have come from not having threads in the Python interpreter, which rules out all async support and makes debugging very hard; I will talk about that in a moment as well. We also don't have easy support for native packages. The reason is that we have no dynamic linking support right now. My hope is, from everything I understand about WASI preview 2, that this will get a little easier for us, so that we have Wasm code we can load dynamically. What we do have is a readme describing how you can take the Python build we currently have, compile a native package down to a Wasm library as well, and statically link it with your Python Wasm file, so that you get one binary with the native support pre-compiled and linked in. So we do have native support, but it requires an additional step for people to produce that build with these native packages in it. And of course there are a lot of limitations on what you can compile to Wasm natively. I tested some of the packages; some are easy, because they only rely on standard libc functionality, but there are a lot of packages that expect so much from the operating system, things that are simply not available in our Wasm environment, that even though they compile and you could link them, they won't run, because WASI is not feature-rich enough to support all of it.

As I said, we do have a thread implementation in the vscode-wasm repository, and there is a thread example in there if you are interested. You can have your multi-threaded application, and we support threads as they are supported in WASI right now. And the debugging support is limited as well; I would not say very limited, because you can debug these Python applications in the web. The limitation is that the debugging depends on pdb, which ships out of the box with Python. The major reason is that we have no socket support and no thread support, which makes using another debugger relatively hard in this environment right now. Because we use pdb, there is the additional limitation that you cannot set breakpoints while the Python program is running, because the interpreter is basically single-threaded. We experimented with special hooks in the event loop of the Python interpreter, and I talked to Brett Cannon about whether we could do something there, but nobody was very happy with the solutions, so we decided to wait until we get really good thread support and can compile the Python interpreter with threads, because then these problems will all go away at the end of the day. By the way, if you have any questions, I would prefer you ask them right away rather than keep them to the end,
because most of the time it is easier to answer these questions in the context of what you see, and then I don't have to go back. Let's start with you, okay? So, does the browser version run on a lot of browsers? The only browsers where we have currently tested it, I have to say, are Chrome and Edge. I cannot tell you how the whole thing currently behaves on Safari or Firefox. It is based on the Wasm support that comes with your browser engine, and the Wasm support in other browser engines is there as well; the only limitation is that we have only proven the implementation running on V8. No, the WASI support does not come with the browser. WASI is something you basically plug into the Wasm execution engine, and how it is plugged in is standardized. That's more or less what Jacob demoed yesterday when he did his stuff in the browser; it's the same technique we are using. We give the Wasm execution engine the import functions it needs so that the module can, for example, read from our file system. That is standard and will work with all Wasm execution engines out there. So everything you see in the web runs one-to-one in the desktop version of VS Code as well, because Node ships the same Wasm execution engine, being based on V8. This Python interpreter, this example I showed you, runs on the desktop as well. It's not very useful there, because you have a better Python interpreter locally, but it proves there is a lot of possibility for the desktop version too, because everything we do always runs both in the web and on the desktop.

We do have full support for non-native Python packages. The Python interpreter can come into the system in two ways: either it's bundled with the extension, which is how we ship it out of the box, or you can configure that the Python interpreter, with all the modules you want to use, comes from a GitHub repository or from any file system provider that VS Code supports. So you can curate your own special Python installation, with the packages you want to use, even with native packages linked in, put it into a GitHub repository, and tell vscode.dev: by the way, the Python interpreter is not coming with the extension, take the one from that repository. It can come from an Azure repo, from any repository we can talk to; it can even come from an FTP server if you want.

Yeah, so our WASI thread support maps one-to-one onto web workers. When the callback defined in the wasi-threads specification comes from the WASI layer and asks us to create a new thread, we do what the specification says: we create the new thread, point it at the shared memory, and it does all the right initialization. And inside VS Code we additionally manage these threads: when the main thread goes down, we kill them; when you terminate the main process, we kill all the threads around it. We basically do what an operating system does: if you start a process and add threads to it, you have to manage them, and that is what we do as well.
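To make the thread answer concrete, here is a minimal sketch of how a host can service the wasi-threads 'thread-spawn' import with web workers. The import and export names come from the wasi-threads proposal; the worker script name and the bookkeeping are hypothetical, not the actual vscode-wasm code:

```typescript
// Sketch of the wasi-threads handshake mapped onto web workers.
let nextTid = 1;
const threads = new Map<number, Worker>();

function threadImports(module: WebAssembly.Module, memory: WebAssembly.Memory) {
  return {
    wasi: {
      // The Wasm code calls this import when it wants a new thread.
      'thread-spawn': (startArg: number): number => {
        const tid = nextTid++;
        const worker = new Worker('wasi-thread.js');
        // The worker instantiates the same module against the shared
        // memory and calls the exported wasi_thread_start(tid, startArg).
        worker.postMessage({ module, memory, tid, startArg });
        threads.set(tid, worker);
        return tid; // a negative value would signal a spawn error
      }
    }
  };
}

// Like an operating system, the host owns the lifecycle: when the main
// "process" terminates, every worker it spawned is terminated too.
function terminateAllThreads(): void {
  for (const worker of threads.values()) {
    worker.terminate();
  }
  threads.clear();
}
```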
Other questions? So as I said, we use CPython compiled down to Wasm/WASI, and on the implementation side, at a high level, it looks very straightforward. WASI defines fd_read, and what you see here is already our lifted function signature for WASI preview 1: fd_read gets a file descriptor and an array of buffers, and you want to read the content of that file into those buffers. What we do at the end of the day is go to the VS Code file system and read that file. We do have to do a little bit of magic, because the file system in WASI is path-based while the file system in VS Code is URI-based, so we have to do quite some mapping there, but at first glance it looks very straightforward. Implementation-wise we did what the Linux kernel does as well: we have the WASI callbacks, and behind them you can plug different device drivers into our layer. We have some for different file systems, and we have some for terminals and things like that; if the fd_read comes in on a terminal, we talk to a terminal at the end of the day instead of talking to a file system. On that level it looks relatively easy. The good thing is that this gives you access to everything in VS Code that we already have a file system implementation for; if VS Code can talk to GitHub, we can talk to GitHub through WASI transparently. On the other side, the VS Code file system has no POSIX semantics; it cannot speak POSIX at all, and that turned out to be a lot of work to build on top of. So the mapping itself was not really the problem; it was getting all the semantic expectations that come with a POSIX system implemented on top of the VS Code file system.
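A tiny illustration of that driver dispatch; DeviceDriver, VSCodeFsDriver, and the fd bookkeeping are hypothetical names, and a real layer would also track read offsets, map errnos, and check rights:

```typescript
import { Uri, workspace } from 'vscode';

// Hypothetical driver interface in the spirit of the talk: the WASI
// fd_read callback dispatches to whatever device backs the descriptor.
interface DeviceDriver {
  read(fd: number, maxBytes: number): Promise<Uint8Array>;
}

// A file system driver translates the path-based WASI world into the
// URI-based VS Code world and uses the async workspace.fs API.
class VSCodeFsDriver implements DeviceDriver {
  // fd -> WASI path bookkeeping, e.g. 4 -> '/workspace/app.py'
  constructor(private root: Uri, private openPaths: Map<number, string>) {}

  async read(fd: number, maxBytes: number): Promise<Uint8Array> {
    const wasiPath = this.openPaths.get(fd);
    if (wasiPath === undefined) {
      throw new Error(`EBADF: unknown file descriptor ${fd}`);
    }
    const relative = wasiPath.replace(/^\/workspace\/?/, '');
    const uri = Uri.joinPath(this.root, relative);
    // Real code tracks per-fd offsets; this sketch reads from the start.
    const bytes = await workspace.fs.readFile(uri);
    return bytes.slice(0, maxBytes);
  }
}
// A terminal driver would implement the same interface but read from
// the pseudoterminal instead of the file system.
```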
And there is a little bit more to it. The next thing is that the whole WASI API is sync, whereas the whole VS Code API is async. By design, every extension in VS Code runs in its own web worker when you are in a browser, and in its own process when you run on the desktop. The reason is that we never want an extension to block saving or typing or anything else you have to do in the editor. So extension code is already isolated, usually into one web worker or one extension host process, and that worker or process is the one that has the API. The problem is that we could run the WASI code there, but since WASI code executes synchronously, it would block the whole extension host worker or process, which at the end of the day would make it impossible for any other extension to do any work. So as a design principle, we offloaded all Wasm execution into its own worker. But of course, whenever we get a callback to answer something WASI wants, say the content of a file, we have to go back to the extension host worker, compute the result asynchronously there, and then bring it back into the Wasm execution worker to hand the result back synchronously. The implementation is based on SharedArrayBuffer and Atomics, to make the async sync again at the end of the day.

Long-term, we want to base this on the WebAssembly JavaScript Promise Integration API. There is an experimental implementation of it in V8, but we decided not to go with it, especially since it was not there yet when we started the effort, so we went with SharedArrayBuffer and Atomics synchronization. Let me quickly explain how that works. We have the extension host worker, and on the other side we have a Wasm worker. The whole thing starts with a message saying: okay, we want to execute this WebAssembly module. Then we do the usual stuff and create it. Then the module, for example, prints to the console, because it has a print statement in it. At that moment we go into the WASI layer and say: okay, we want to do an fd_write on file descriptor 1, stdout. Our layer creates a SharedArrayBuffer and writes all the data it needs to transport into it. Then we do an Atomics.wait in that thread on the first word of that SharedArrayBuffer, waiting until the other side flips it, and that halts execution on the Wasm worker side. Then we post a message over, because the other side, the extension host thread, still has an event loop, asking it to read from that SharedArrayBuffer. The other side reads it, decodes all the information in there, decides that what it has to do is an fd_write on the terminal, writes to it, puts the result back into the SharedArrayBuffer, and then notifies on it that it is done. The Wasm worker wakes up from the wait, reads the result out of the buffer and (that was one slide too far, let me go back one) gives it back to the system, okay?
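In code, that fd_write handshake looks roughly like the following sketch. The buffer layout and message shape are made up for illustration; only the SharedArrayBuffer and Atomics pattern itself is the point:

```typescript
// The sync flag lives in the first Int32 of the shared buffer; the
// payload follows. Layout and message shape are invented for this sketch.
const SYNC_INDEX = 0;

// Wasm worker side: runs the synchronous WASI call, blocks until answered.
function fdWriteSync(hostPort: MessagePort, payload: Uint8Array): number {
  const shared = new SharedArrayBuffer(4 + payload.byteLength);
  const sync = new Int32Array(shared, 0, 1);
  new Uint8Array(shared, 4).set(payload);
  hostPort.postMessage({ op: 'fd_write', fd: 1, shared });
  // Sleep this worker until the extension host flips the flag to 1.
  Atomics.wait(sync, SYNC_INDEX, 0);
  // A real implementation would read bytes-written or an errno back
  // out of the shared buffer here.
  return payload.byteLength;
}

// Extension host worker side: still has an event loop, so it can await
// the async VS Code API and then wake the blocked Wasm worker.
function onHostMessage(e: MessageEvent): void {
  const { op, shared } = e.data as { op: string; shared: SharedArrayBuffer };
  const sync = new Int32Array(shared, 0, 1);
  // Copy out of shared memory before handing the data to other APIs.
  const data = new Uint8Array(shared, 4).slice();
  if (op === 'fd_write') {
    writeToTerminal(data).then(() => {
      Atomics.store(sync, SYNC_INDEX, 1); // publish the result
      Atomics.notify(sync, SYNC_INDEX);   // wake the waiting Wasm worker
    });
  }
}

// Stand-in for the asynchronous pseudoterminal write in VS Code.
async function writeToTerminal(data: Uint8Array): Promise<void> {
  console.log(new TextDecoder().decode(data));
}
```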
So that is how the implementation works at the end of the day, and it is the reason why this is currently only available in the Insiders version: to get SharedArrayBuffer and Atomics working, you have to have cross-origin isolation enabled in the browser, and that took us months to enable. I have to say it was very challenging. It was very easy in our own code, but since we depend on quite some content coming from other origins, as soon as you enable it for your website, all resources you use from other sites must have cross-origin isolation enabled as well, otherwise they will not show up. Even something simple like doing an OAuth handshake with GitHub failed, because GitHub had not enabled the necessary headers yet. So we had to talk to all our partners and say: look, we need you to enable this. And this is why it's only enabled in Insiders: we know there are still a handful of situations where you might not see an icon or something, because it comes from a resource where cross-origin isolation is not enabled yet, okay? As I said, if you have any questions, ping me.

But of course you can bring your own stuff as well; it's not limited to what I showed. Everything I showed is there: there are extensions, there are npm modules. They are in preview; we have not officially made them a stable version yet. The major reason is that preview 1 is not stable and preview 2 will not be stable either; as soon as something around WASI is stable, we will make these stable as well. And of course we do have code for that. If you look at this side, it's a simple C program, and you can compile it down to Wasm; on the right-hand side is basically the code you have to write in VS Code right now to get that simply up and running inside a VS Code extension. To highlight it in a little more detail: the interesting part is that VS Code already comes with the concept of a terminal and a pseudoterminal, as it should. The terminal is the rendering, the UI around it, and the pseudoterminal allows you to talk to a character device. So you can create a special pseudoterminal that is bound to that Wasm API, put it into a normal terminal, and then show that terminal. That gives you a terminal your Wasm process can write to and read from. For executing the Wasm code, we decided in a first step to treat Wasm code as a process. We do know that this doesn't hold for everything, but for our use case it was the right thing to do at the beginning, because it was an API that was easy to explain: we kept it as close as possible to the Node API for processes, because that is what people are used to. With the component model we will surely add something like creating a service that you can then call service functions on, but at the beginning we did the process approach. You basically create the process and then you can run it. For standard I/O we also implemented pipes, so you do not need to pipe to the terminal: you can create the process, give it an input pipe and an output pipe, and the process will read from your input and write to your output. The education team, for example, is using some of that support to get nicer rendering when the Python code gets more complicated and they execute it. They do not simply print the output to the terminal one-to-one; if errors happen, they try to help the user by taking the output from the Python interpreter and post-processing it. From the abstraction layer we try to make it look like a process. What you don't see here, I left it out for simplicity, is that beyond standard I/O you can tell the process how the file systems are mounted into it, at which mount points it should see these file systems. Basically, what you can do with Wasmtime you can do here as well: you can configure what the process sees at the end of the day.
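For reference, the wrapper code on the right-hand side looks roughly like this; it is modeled on the example in the vscode-wasm repository, so treat the exact names (Wasm.load, createPseudoterminal, createProcess, pty.stdio) as approximations:

```typescript
import { Wasm } from '@vscode/wasm-wasi';
import { commands, ExtensionContext, Uri, window, workspace } from 'vscode';

// Sketch of the wrapper extension, modeled on the vscode-wasm example.
export async function activate(context: ExtensionContext) {
  const wasm = await Wasm.load();
  commands.registerCommand('wasm-example.run', async () => {
    // The pseudoterminal is the character device the Wasm process talks
    // to; the terminal is the UI that renders it.
    const pty = wasm.createPseudoterminal();
    const terminal = window.createTerminal({ name: 'Run C Example', pty, isTransient: true });
    terminal.show(true);
    // Load the Wasm binary shipped with the extension and run it as a
    // process wired to the pseudoterminal's stdio.
    const bits = await workspace.fs.readFile(Uri.joinPath(context.extensionUri, 'hello.wasm'));
    const module = await WebAssembly.compile(bits);
    const process = await wasm.createProcess('hello', module, { stdio: pty.stdio });
    await process.run();
  });
}
```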
So when I look at the implementation, what was the hard part? The rights management in WASI preview 1, and I'm so happy it's going away, was unnecessarily complicated for what we needed. I'm pretty sure I still don't have it right; I think it is very hard to get right at the end of the day. The other thing is that although you have the file system specification, and it tells you which functions you have to implement, it doesn't tell you how the file system should behave semantically. That is the tricky part. I talked to people here this week, and they told me the best thing is to look at what Wasmtime does, because at the end of the day it's the reference implementation. We have the same problem with our own specifications, so I'm not surprised or sad about it; that's life. The typical case is: can you delete a file while you have an open handle on it? On Windows, you can't; on Linux, you can. Now, what we figured out is that a lot of programs compiled down to Wasm assume, when they run as Wasm, that the file system has POSIX semantics. They do expect to be able to delete a file even though open file handles on it exist, and the same goes for directories and all that. So this is something we had to implement to get stuff up and running, even though it is not supported on the VS Code file system. This is what I meant when I said it looks simple from the API match: the semantic differences were huge. VS Code has no inodes; all that stuff doesn't exist in our world, because we are fully URI-based, and we basically had to bring it all back to make sure these programs behave correctly in the sense in which they were written. And the fd handling here, I have to say, was unnecessarily hard, but it got a lot easier in preview 2 as well.

We are fully committed to the component model. We already started working on it to get a better understanding of it: we wrote a WIT parser, a simple one, and tried to generate a little bit of typed glue code and a meta model around it. The major reason is that we want to use this ourselves. The transports between the Wasm worker and the extension host that I showed for preview 1 we coded entirely by hand: you sit down and, oh, now I have to pack that again, because if you have a memory block in the Wasm worker, the extension host thread cannot access that memory. You have to copy all the information you need out of that memory, put it into a different memory, transfer it over, and at the end of the day copy the result back into the Wasm memory. We wrote all of that by hand. It's a little tedious, but it can be fully automated, and since we need to keep that semantics anyway, we want to automate it out of the WIT files, by generating code from the WIT files, so that we do not have to write this by hand. Another reason is that the handwriting might be okay for us, but we think more and more components will be coming, and if people want to take a component and implement it on top of the VS Code API, we do not want to force them to do this low-level memory mapping back and forth themselves. It would be nice if we can generate it and tell them: look, this is the service API you get on the extension host worker side, go and implement it, and we do all the synchronization in between for you. That is why we looked into this at the end of the day.

So what comes next? We are fully committed to preview 2, and we will really work on this. We will look into what it means to get language servers up and running; I talked a little, for example, to people from the C# team, because we really want to try to build a prototype where we get these language servers running. All these language servers are based on the Language Server Protocol, and the idea we have at the end of the day is that we can have the server side of that protocol stack available for Wasm, so that a language server compiled to Wasm can directly talk to it and people do not need to do anything extra.
And we think we might be able to do the same for debugging as well, so that you can debug Wasm code: at the end of the day we could have the adapter side of the Debug Adapter Protocol available for Wasm, and then you can directly link to it without having to write all the glue code yourself. And we want to improve the debugging support for the web. It's still a little limited, but it's good enough for the education team, because their material is not so complicated that they need a full-fledged debugger; still, we really want to see how far we can get with debugging there as well. Looking at the time, I'm at 45 minutes, 20 past when I should have been done, so I'm already running a little over. Sorry for that. Questions? Yo. Mm-hmm.

So you can go to the vscode-wasm repository. I think the easiest is to show it to you. If you look here, you find that in the vscode-wasm repository there is an example, okay? That example is an extension that has a C program compiled down to Wasm, and it shows how you get that executed, bundled up as an extension, and published to the marketplace. The extension you publish to the marketplace will contain the Wasm binaries, VS Code can read them directly from the marketplace with all the files it needs, and you can mount it into your file system and execute it, yeah?

Yeah, yeah, it produces a different result at the end of the day. And we see that problem. We think that as long as we ship it with the web as the major target, there will be no big surprises, because people are not used to having access to their local stuff in the web anyway. If people start using it on the desktop, yes, of course it gets a little more confusing. We started to discuss internally whether we should ship this out of the box by default. For example, when people open up an IPython notebook, it would be cool if they did not have to install all that Python dependency stuff just to get a basic Python execution environment up and running. But we see the problem that if we start doing that, it will always be a little different from what you have on your local machine. And actually, to tell you the truth, I don't have a good answer for you yet; I'm rambling a little because I don't have one. It's a very hard problem. For the web, I think it's acceptable; for the desktop, what we do right now might not be. So yes, currently we talk very actively to the Python team. What we would like to see happen, and the Python team is working on it, is that they abstract out the execution part of the Python extension, so that at the end of the day we only plug in the execution part; currently we bring a little bit more. The reason we are not there yet is that the Python support we have on the Wasm build side is so limited that they say they would have to sacrifice so many things that it's currently not worth looking into. But as soon as that support gets better, and it will get better with threads and more WASI support, the difference will not be so huge anymore, and then very likely they will be able to do exactly this: you plug in execution environments at the end of the day, and one of them will be this one. W-I-T?
Okay, WIT. WIT is the file format in which the WASI component model describes components, and we implemented a parser for it to understand a component and generate the glue code we need to transport memory and function calls from the Wasm worker over to our extension host worker. We need to proxy each call one hop further than WASI expects it to go: WASI assumes you get the callback and fulfill it right there, synchronously anyway, but we have to transport it one hop out of the scope of WASI to get to our API. And as I said, we built that parser to be able to generate that code, because we don't want to write it by hand, and because in the long term we expect people to implement these component interfaces for VS Code so that we do not need to implement all of them ourselves. Sorry for keeping you from lunch; come find me if you have any more questions.
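To illustrate the shape of that idea with a made-up calculator interface (this is not the actual generator or its output):

```typescript
// Illustrative only. Given a made-up WIT interface such as:
//
//   interface calculator {
//     add: func(a: u32, b: u32) -> u32;
//   }
//
// a generator could emit a typed service interface that extension authors
// implement on the extension host side:
export interface Calculator {
  add(a: number, b: number): number;
}

// ...plus the worker-side glue that does what we otherwise write by hand:
// copy the arguments out of the Wasm memory into a SharedArrayBuffer, post
// a message to the extension host, Atomics.wait for the answer, and copy
// the result back into Wasm memory. Implementors of Calculator never touch
// raw memory or synchronization.
```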