Actually, a lot of these tips apply to anyone doing embedded development. There are probably some embedded Linux skills you can garner from here as well. It's hard no matter what you do, so hopefully this will help. All right. Thanks, everyone, for coming to our afternoon talk. Those who are watching on the live stream, welcome. This is about Visual Studio Code and how to use it to develop with Zephyr. I find it kind of hard, and a lot of people ask me how to do it, so I used this talk as an opportunity to raise my own skills and hopefully give you some tips yourselves. This is an overview of how to use Visual Studio Code for Zephyr development. It's not a VS Code 101, but more about why VS Code is popular in general, how people are using it today to do embedded development, and with Zephyr specifically. We're going to go into the guts of VS Code and how it's relevant for you trying to use it. I will be doing some live demoing, so hopefully the demo gods shine favorably on us, and also some topics for the future, because my original talk was going to be over an hour and 30 minutes and I only have 40, so I cut that out and hopefully we'll talk about it again. Briefly about me: I'm from a company called Golioth. I've been working on developer stuff for my entire career, doing a lot of IoT things, and I actually started working with Zephyr way back in 2016 and have been a fan ever since. My company, Golioth, is an IoT company; we help hardware companies build connected things. It's not what this talk is about, but if you're interested in learning more, you can find me afterwards. Okay. In case you didn't know, VS Code is super popular in general. And since it's the afternoon, I thought I'd make it a little more interactive and start by polling the audience. So if you don't mind, raise your hand if you've ever just tried VS Code. Okay. 100% of the audience raised their hand. Keep them up. Sorry. Keep them up.
Who's actively using it today for development, let's say general web development or Python development? Okay. Still most of the room. And who has tried to use it for embedded development? Okay. Also most of the room. So you're in the right talk. VS Code is extremely popular categorically across all types of developers. This is the annual survey from Stack Overflow. I think within two years of launching, VS Code was the number one ranked IDE, with Visual Studio second, and the rest were pretty far behind. This includes web developers and mobile developers. They've actually surveyed embedded developers too; I'd love to talk to those people. And I use the word IDE because that's what Stack Overflow uses, and we'll try to unpack what that actually means. But it's very popular, and I'll try to summarize why, and I think this is generally true. It's free. It's the right price. It's fast. If you've ever used a heavyweight, misconfigured Eclipse install, you might sympathize with that. It's cross-platform. It works natively on Windows, Mac, and Linux. It's highly customizable, which is actually where it gets kind of hard to use. It's almost too customizable. It's also highly configurable, which is where we're going to spend our time, unpacking how you configure it to do what you want. And it has IDE feels, because it has capabilities that feel like an IDE, but it's not quite an IDE. When I say customizable, most people talk about the extensions that are available for Visual Studio Code. It has a humongous array of supported languages, ways to change the look and feel through themes, debuggers of all kinds, keymaps for Vim and Emacs if you really like those, and just a ton of other things that you can go download and use for free. The last count I read was over 180,000 extensions, growing every single day. So it's customizable in pretty much every possible direction. It's also configurable and has a very rich configuration system.
And it's a system that allows you to quickly make changes through a GUI as well as through configuration files, which is where some of its magic starts to get unlocked. You can configure everything from the appearance of the editor to its functional behavior across the board, as well as those extension points and how you want those to behave. What's even more interesting is you can do that across the editor, so the global experience, but also on a per-project basis. VS Code doesn't quite have a concept of a project, but you can use it in that way, and this is where I start to get really interested in it. Understanding how these all work, and what the right flags are and what the right file names are, is actually where the power of using VS Code as your daily driver comes from, and also the thing you have to learn to make it effective as an embedded tool. So understanding how this part of VS Code works is really the key, in my opinion, to unlocking its power, and actually the thing I had to go learn, because it was not well understood by me. And those IDE-like features are pretty interesting, because out of the box, built into VS Code, are things like intelligent coding, or as Microsoft calls it, IntelliSense. So things like syntax highlighting, code completion, the ability to do code navigation, refactoring, things you used to have to pay for in other IDEs or premium tools, are just part of the system. But also native debugging, native task running, source control management, an integrated terminal, and even reusable code snippets. And the power there is that, because they're part of the tool, extensions can build upon a common capability, and that's really how you can turn a Python IDE into an embedded C IDE. So let's talk about how the worlds of embedded and VS Code intersect. I think one of the more interesting things that happened over the last two or three years is embedded vendors adopting VS Code, to different degrees of adoption.
Companies like ARM have built their IDE experience around VS Code, as well as their ARM ecosystem, things like CMSIS-Packs, and some of the folks who are here today, like Nordic, are building their entire developer experience around VS Code. There are eight companies listed here, and the number is growing pretty much every month, where a company creating tools for embedded developers is either adopting VS Code as a recommendation or really betting on it. I think we're just going to see that trend continue with vendors across the ecosystem. And there are solutions targeting Zephyr specifically. The first one was PlatformIO, which was actually a great cross-ecosystem tool. Unfortunately, they dropped support for Zephyr over the last couple of years. But more recently, there are folks like Nordic with nRF Connect for VS Code, as well as different ecosystem-based extensions. Circuit Dojo by Jared Wolf has his own extension for his customers. The talk right after mine is about that; you should definitely check it out to see how you can build your own unique ways of working with Zephyr for your hardware platform. And NXP announced, I think it was at Embedded World this year, that they're bringing Zephyr support to MCUXpresso. And this last one is, well, what if I want to build my own experience? What if I want to use the tools that are available to me? That's what I call the DIY approach. So this talk is really giving you the tools and knowledge of how to set up VS Code to work with your embedded toolchain, your embedded environment, so that you can apply it to the different ways you want to build embedded systems. Oh, and this screenshot on the right is an example of Nordic's extension. You can see, hopefully it's not too blurry, the added capabilities: not just doing Zephyr development, but leveraging all the features of the Nordic ecosystem. So let's get into those features and that configuration.
For me, when I say VS Code supports embedded, or I want it to support embedded, I usually mean three things. One, the sort of idea of smart coding: IntelliSense, refactoring. I want that to work with my Zephyr application and with the kernel, without getting the dreaded squiggly lines of "I cannot find the file, I cannot find the reference," and to actually help me code smarter. Two, the embedded development lifecycle. So build, flash, test, debug, all really easily, without jumping through a bunch of terminals and other hoops. And the last one, which may not be obvious, is reproducible environments. Because the embedded toolchain, the embedded paths, the virtual environments that the compiles use, those need to be reproducible for you at a later date, for your teammates, and for your customers and clients. And getting that set up is where the hard parts begin. Really, the rest of this talk walks through the extensions you can use to set up an embedded environment for Zephyr, how to configure it, where my live demo will hopefully work, and topics for a future talk, which I almost included but ran out of time for. So let's talk about extensions. There are a lot of extensions available, as mentioned, over 180,000. There are what I would call general-class extensions, supporting, for example, the language that you're using for your embedded development. There are ones specifically for embedded, like embedded debuggers. And there are different options in the ecosystem, whether coming from folks like Microsoft, from community tools, or from the individual vendors, which you can cobble together into your preferred environment. The most common one, the best starting place, is the official C and C++ extension from Microsoft. It provides the IntelliSense capabilities for C and C++, things like syntax highlighting, and the ability to debug a C or C++ application. And this is generally aimed at desktop C developers.
So their GDB integration, for example, targets desktop toolchains. It's not super helpful for us, but it's a good starting point. And there are actually a bunch of different extensions around this C and C++ extension from Microsoft. In Visual Studio Code, you have individual extensions, but you also have packs. So there's an extension pack that installs a bunch of different ones related to C++. I usually recommend: go grab the C/C++ Extension Pack, and you get a bunch of stuff for your general development. Alongside that is CMake. There's also the CMake extension from Microsoft. It's a pretty complete tool for developing with CMake, everything from configuring and building to even debugging your CMake files when you get into the weeds. Just like the C and C++ extension, there are some presets, but they target different versions of GCC, for example, that are not embedded-focused. And if you get that pack, like I said, you get the CMake extension too. There's also Python. There's a lot of Python in embedded, and obviously Zephyr itself has a lot of Python, so it's really convenient to have the Python extension, especially when you start doing things with tests and working with West. A lot of it is aimed at data science and machine learning users, but the general syntax highlighting and code completion all work nicely out of the box. So it's useful to have. And then we start getting to the embedded-focused extensions. There are a few, especially around debugging. There's an extension called Cortex-Debug, and for probably most of the room who raised their hands saying they've tried embedded, they've probably tried Cortex-Debug. It was the first one available in the VS Code ecosystem. It's actually open source, which also adds to its popularity. And when I say it's an embedded-focused debugger, it has native support for a bunch of the debugging hardware and software, like OpenOCD and J-Link.
And because it's for embedded targets, it also has specific features for embedded devices, like RTOS thread awareness. So you can see those threads, and it knows how to capture them across different RTOSes, including Zephyr, of course. It also has advanced features, almost too many advanced features, and figuring out how to configure them all is a little bit daunting at times. But for example, there's a really cool graphing feature: if you have ITM data, it will actually chart it, which is really, really useful. You would only find that in something built and designed with embedded targets in mind. And it's often shipped with different vendor ecosystem packages as well. More recently, Microsoft actually released a new debugger extension called Embedded Tools. It's mostly debugging; I guess there are other tools they plan to release with it. It is also RTOS-aware. Initially, it started targeting Azure RTOS ThreadX, and more recently they've added Zephyr. It has similar capabilities to Cortex-Debug, though Cortex-Debug is far more mature and has open source contributions, so there's more stuff there. But this is another one to watch, in particular because it's supported directly by the C++ team at Microsoft. One other one that was just made available is a new serial monitor from Microsoft, and it kind of does what it says on the tin. It's natively integrated into the terminal in VS Code. You can attach a device and see the terminal output. One neat thing is you can have multiple terminals very easily, so split screen: if you have a debugger and a serial UART, you can see them in the same UI. And probably the most interesting thing from a team perspective, when you're onboarding or training, et cetera, is that it's consistent. So you don't have to say, oh, if you're a Windows user, go download PuTTY; if you're on Linux, maybe you can get Minicom or picocom or whatever it is people use today.
You just say, go install the serial monitor, and it will generally just work nicely. And there's a bunch of other stuff. These are mostly ones that, in my opinion, are convenient to have, and there are even more. I use Python virtual environments for most of my Zephyr development in VS Code. The Python extension has virtual environment support, but it's kind of clunky; it doesn't quite work the way my flow does. And there's a Python Environment Manager extension that makes it really easy to work with multiple virtual environments. Things like YAML and RST extensions for syntax highlighting. And then a lot of the stuff I do is with GitHub. Microsoft has a bunch of great extensions for working with GitHub more easily, managing pull requests, and working with your own repos. And the last two, which didn't make it into this talk, are how I use Visual Studio Code with remote development, whether that's with a remote server or Docker, but they're still pretty handy to have, because a lot of other folks are starting to use that as well. And now let's get into the config. This is probably where I spent most of my time preparing really good slides and really good samples, and I'm still not confident that I've explained this stuff really well. But there are a lot of features in the configuration system of VS Code that, once you figure them out, you can start to use as part of your development. So there's the GUI and the JSON files. And it's these JSON files that I found really helpful for setting up your environment. There are well-known filenames: if you call your file a certain thing, then Visual Studio Code knows, oh, this is this type of configuration, and I know what to do with it, so auto-load it, for example. If you put it in a special location in your folder structure, it will treat it in a special way. And it'll even give you auto-completion of the values in those files.
That's actually where I get most of my documentation from: trying to auto-complete a key value and seeing what it's supposed to be. We'll go through the most common ones right now. The most basic one is a file called settings.json. This is where all the settings live for your particular environment. That's things around Visual Studio Code itself, so how to configure some behavior or preferences. I only list a very few, like the file associations, which is about the editor. But then there are also your extension-specific settings. So for example, to get IntelliSense, I have to point to my Zephyr SDK installation of GCC. I pass that in through this file, and therefore the extension knows where to find GCC for syntax highlighting. The next file that's important is the tasks.json file. VS Code has a task runner, so you can arbitrarily run commands, you can chain commands, you can have configurable commands. You just create an array of different tasks in a very specific syntax. Then, using things like the command palette, you can run those tasks. So instead of typing west build -p, et cetera, I can just do Run Build Task. And we'll see that in a minute. Related to that is the debugger interface. It's, for some reason, called launch.json, but you launch a debugger instance, and you can attach your debugger. This is true for all kinds of debugging, but this is how we would configure, let's say, Cortex-Debug to attach to a device and then use the debugging interface. And if you have multiple configurations, you can define those in your launch.json. The last individual file I'm going to talk about here is extensions.json, which kind of works like required extensions. But what it ends up being is recommended extensions. So if you or a colleague are setting up a new environment, it will actually prompt you: hey, this project recommends these six extensions.
Do you want to install them all right now? So it kind of presets up the extensions pretty nicely. And with all those different .json files, these configuration files, if you put them in a very special folder called .vscode at the top level of your directory, then VS Code will scan for them, automatically load them, and use them as part of your environment. So that's how you can pre-configure with this .vscode folder and all the specialized .json files. But there's another way of doing this, and I actually find it a little bit simpler and more manageable. VS Code doesn't quite have a concept of a project, but it has a concept of a workspace, which is effectively the same thing for our conversation. And it's a single file where you can put the contents of all those individual files and just have one .code-workspace file to manage, with the same capabilities and the same configuration. So you'll even see that I have the same data in both places. But now I have one versioned file, and we can even make it portable and shareable across the team. And there's one other tip that is very new; I think I discovered it within the last six months. It's this idea of environments within VS Code. VS Code calls them profiles. As opposed to your project file, you can set a profile in VS Code, and as you can kind of see in the screenshot, I have different profiles for the different ecosystems that I develop for, different embedded ones as well as non-embedded ones. And that's the environment-level configuration. So between the two, you can have your profiles as well as your code workspaces, and basically everything can be mapped to how you want it to behave and how you want the project to be loaded and operated. And those are the core configuration metadata you can use. And the nice thing is, because they're files, you can check them into Git, and now you can have reproducible environments. So now I'm going to try a live demo and go through some of this with a toy project.
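To make the workspace idea concrete, here's a minimal sketch of what a .code-workspace file can look like. The paths, SDK version, and extension list are assumptions for illustration, based on a default Zephyr SDK install, not something VS Code or Zephyr prescribes:

```json
// my-zephyr-app.code-workspace (a sketch; the SDK path and version
// below are assumptions and will differ per machine)
{
    "folders": [
        { "path": "." }
    ],
    // Same shape as .vscode/settings.json
    "settings": {
        // Editor preference: treat Zephyr overlay files as devicetree
        "files.associations": { "*.overlay": "dts" },
        // Extension setting: where the C/C++ extension finds the cross-GCC
        "C_Cpp.default.compilerPath": "${userHome}/zephyr-sdk-0.16.1/arm-zephyr-eabi/bin/arm-zephyr-eabi-gcc"
    },
    // Same shape as the "recommendations" key in .vscode/extensions.json
    "extensions": {
        "recommendations": [
            "ms-vscode.cpptools",
            "marus25.cortex-debug",
            "ms-vscode.vscode-serial-monitor"
        ]
    }
}
```

The same keys work in the individual .vscode/*.json files; the workspace file just gathers them into one place you can version and share.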
And everyone cross your fingers that nothing broke between now and when I tested it 10 minutes ago. Okay, so this is an out-of-the-box sample from Zephyr. It's the threads demo under basic. And there we go. I pre-built this because I didn't want to waste time compiling. But if you have ever tried to load, let's say, a Zephyr application, you would typically see a bunch of squiggly lines, which means Visual Studio Code can't find things. But there are no squiggly lines. And I can show a little bit of that. If I hover, let's say, over this devicetree reference, yeah, it starts pulling some definitions. If I go to, let's say, a struct that I defined and peek at the declarations, you can start seeing the instances of that code. This is smart coding. This is IntelliSense for Zephyr, for the kernel as well as my application. And it works every single time. I have a dev board. The dev board is not important for this demo, but now I want to go and build it. Though I cheated and already built it, I'm running this build task, which I've defined previously, and it's going to just say there's nothing to build. Okay, but I didn't have to type west at all, and it did the right thing, which is pretty neat. And let's say I want to do something else and run a different task, west flash. So it'll go ahead and try to rebuild, nothing to rebuild, and it's using the programmer from my SDK, and yep, running the runner, and now it's flashed. Again, a very nice development workflow. And let's see, what did I also want to show? Well, I have some code here. I can do something like launch the terminal and call the serial monitor. Let me clear that. It found the port, and we can now start seeing the threads and the counter. So again, I can start monitoring the device more directly, pretty cool. And if I didn't like the exact configuration and want to try something with west, I also have a virtual environment that's now nicely configured. You can see that it automatically loaded.
I didn't have to do anything extra. And maybe we can try debugging. This worked earlier. Yeah, let's just put a little breakpoint here, go ahead, launch the debugger, and it's doing some debugging things. It's trying to build just to make sure that everything's there, and if I step, we can eventually get to the application, and count is 0, and count is now 1. So that is my entire workflow, all seamless, with minimal interaction with command-line flags and passing variables. And this is basically built up from the tools I just shared. Let me pull open the actual file and walk through some of those things. What you'll see here is that a lot of this stuff is specific to our project. It'd be great if it was automatic, but at least today, you have to write some of this yourself. For example, I mentioned the C++ extension. I just had to point it to GCC so it knows how to do IntelliSense. But this is one piece of magic that I learned from someone else, which is pretty cool. Zephyr will generate this compile_commands.json. VS Code knows how to pick that up, and that's how you get all the IntelliSense. And because it's part of the build system and gets regenerated every time you build, once you build it once, IntelliSense just works. You have to tell it where it is, and that was the key to unlocking IntelliSense. And here are some of the tasks. You saw me run build. So I created a task called West Build. It is the default task for building. It points to my virtual environment and has some hard-coded, common things for my particular project. And then every time I run the build task, it'll just call west under the covers. So it works the way it always works. And if you need to drop into the CLI, you can always do that. But I want to show one thing I didn't actually highlight, which is that you can even create configurable task runners. This is pretty handy if you have, let's say, different boards you're working with or different configurations.
Instead of just hard-coding a specific board, you can do something like pass in these input parameters. Let me show that since I'm talking about it. And so now it can prompt you with specific prompts, and it will work the way you want it to work as well. So you can build out dozens of tasks, depending on your particular workflow, and just put them in this one tasks file. And here is where the debugging happens. It's actually where I found the least amount of documentation and the most amount of trial and error. But this is defining the Cortex-Debug entry point. It's launching and basically flashing, attaching to the device, and telling it which debugger I'm using, where GDB lives, and where the ELF lives. And then every time I run debug, it just kind of works, which is nice. And then finally, there's the list of extensions. I already have them on this machine. If I didn't, it would show a nice little prompt saying, hey, would you like to get some of the recommended extensions? I think that's all I want to cover for the live demo. Let's go back to the slides. So what did we cover here? A lot of the time I'm talking about settings, because that's the area of most complexity. But you saw how to define your own preset settings for things like VS Code itself and your extensions, and also how to recommend extensions. You saw the tasks, which reduce the repeated parts that you have to do over and over again when switching between text editor and command line. A little bit about workspaces and how that could be your project file, and a little bit of that IntelliSense. I put this together as an example project you can find on GitHub. That's the URL, and the QR code will take you to it. Hopefully you can see how you can define your own, based off of this as a starting point. It's not an end-all-be-all, but it covers a lot of basic use cases. So a little bit about the future and future talks, which was going to be part of this talk, but it would have taken too long.
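That configurable-task idea can be sketched roughly like this; the task label, board identifiers, and application path are assumptions from a toy project, not something Zephyr or VS Code prescribes:

```json
// .vscode/tasks.json (a sketch; board names and the app path are
// illustrative assumptions)
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "West Build (pick board)",
            "type": "shell",
            "command": "west",
            // ${input:board} is replaced by the user's pick at run time
            "args": ["build", "-b", "${input:board}", "app"],
            // Makes this the default task behind "Run Build Task"
            "group": { "kind": "build", "isDefault": true }
        }
    ],
    "inputs": [
        {
            "id": "board",
            "type": "pickString",
            "description": "Which board to build for?",
            "options": ["nrf52840dk_nrf52840", "nucleo_f429zi"],
            "default": "nrf52840dk_nrf52840"
        }
    ]
}
```

Running the task pops up a quick-pick with those board options instead of hard-coding one board per task.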
I hinted at this earlier. There's a series of extensions from Microsoft called Remote Development. For example, I switch between a Windows machine and a Mac machine, and I have a Linux server somewhere in my house with a bunch of devices plugged into it. So I can use the remote development experience to effectively connect to that remote machine and develop and debug on it as if it was at my desk. And it's all nicely integrated into VS Code, and it's easy to set up once you've done it a few times. Similarly, and more commonly what people are interested in, is container-based development. If you look at this diagram, it looks very similar to the previous diagram, because it's almost the exact same experience for you as the developer. Setting up Docker and configuring it using a thing called Dev Containers is a little bit more work, and probably a full talk in and of itself. But for embedded development, it's pretty nice for reproducible artifacts. And on certain hosts, like Linux machines, you can even flash from Docker pretty reliably. So that's a cool way to do embedded development. And lastly, GitHub launched, last year I think, Codespaces, which is a beefy machine in the cloud built around Visual Studio Code. So everything we talked about today would apply directly, except using someone else's big machine instead of your potentially underpowered machine. So kind of wrapping up, a couple of lessons that I talked around. VS Code is very configurable, and I find it sometimes overly daunting what you can configure. So figuring out what you need to configure is going to be half the battle. Looking at other people's configuration files on GitHub is kind of how I learned some of this stuff. But it does have its benefits, because you can actually have the experience you want. Which may surprise some folks who are watching this who haven't tried it: you can do embedded development with VS Code. It's actually pretty serviceable.
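As an aside on those Dev Containers, a minimal devcontainer.json might look roughly like this; the container image name and the USB device path are purely illustrative assumptions, and passing a debug probe through generally only works on Linux hosts:

```json
// .devcontainer/devcontainer.json (a sketch; the image and device
// path are assumptions and will differ per project and host)
{
    "name": "zephyr-dev",
    "image": "ghcr.io/example/zephyr-build:latest",
    "customizations": {
        "vscode": {
            // Extensions installed inside the container automatically
            "extensions": ["ms-vscode.cpptools", "marus25.cortex-debug"]
        }
    },
    // On a Linux host, pass a debug probe or serial port into the container
    "runArgs": ["--device=/dev/ttyACM0"]
}
```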
And all the complexity I just showed you is something that's actually taken care of by the vendor tools. So if, for example, you're using Nordic hardware with Zephyr, you should definitely look at the Nordic extension for VS Code, and at whatever comes out in the future from the vendors of your choice. They put a lot of work into making it smooth. Quick thanks. There's a great project called ZMK. It's probably one of the largest open source projects built around Zephyr. Their documentation is great, their samples are great, and that's where I got a lot of the tips on how to use VS Code more effectively. And Mark Goodner over at Microsoft helped debug some of my samples. And really quickly, about Golioth one more time: if you're interested, we're over there at the booth. We write a lot about Zephyr. We have our blog, which has a ton of content. We talk about development as well, and how to build different developer experiences. And yeah, if you want to learn more about Golioth, you can come find me later or find me on the internet. And with that, that's the slides. Thank you. I think we have time for questions. We see your hand. Hey, Jonathan. Thanks for your talk. Thanks for reminding me why I don't want to figure out all these extensions on my own. My embedded development world is in the PlatformIO space, because I don't want to learn a new vendor extension every time; they put things in different places and so forth. So how do you suppose we could get that type of thing back, or at least more vendor standardization? I don't want to fuss with it. I want it to just run and compile for these 20 different boards from 10 different chipsets. Can we get back to that with PlatformIO, or is Microsoft making that hard, or why is that not happening? That's a good question. So the original PlatformIO support worked great. The PlatformIO team did the original development, and they targeted the LTS release. And the last LTS release was some time ago.
And it basically broke. There was nothing specific that Zephyr did wrong, or that PlatformIO did wrong; it's just that the maintenance of that enablement hasn't progressed. I think if the community around PlatformIO, or folks in Zephyr who know those folks, could figure out a way to move that forward, it would also be a great option. Honestly, from the perspective of what is the experience I want, I think that's a great experience. I was forced to learn how to do it myself because there wasn't a universal way to do it. So I think it'd be great if it came back. All right, so I'm normally in Vim, and I've tried to get into VS Code a bunch. And it's interesting to see this setup, because the setup's the hardest part for me. But I have two questions. One, if I set up the workflow, or sorry, the workspace file, can I commit that file, or is it going to break from one system to the other? And the other question was, you had a slide on there about the CMake extension that asks you to scan for kits. And I'm always confused about scanning for kits. Does it still ask that? And what am I supposed to do? And what's the outcome I'm expecting there? Two great questions. So in terms of the portability, there are some things that actually make your life a little bit easier. Let me flip over to that. It might be in my example. The thing that makes it less portable is hard-coded paths. As you can see, this path for the Zephyr GCC compiler. There are some conveniences in the configuration system, like I can say user home, but where this lives is dependent on my machine. So this assumed that you followed the getting started guide in Zephyr and used all the default paths. So this actually works if you did that, assuming you're on SDK 0.16.1. This is why it's a little bit fragile. But I'll say that some areas are actually pretty intuitive and help you. Let me pull it up. Do I have it in this particular example?
No, but in the individual files, for example, you can declare this is for Windows, this is for macOS, and this is for Linux, and have different paths or different executables. So there's a little bit of that to make it less brittle. But it's still brittle because of hard-coded paths. This is why the extensions from the vendors, or other extensions, like I'll plug Jared Wolf's talk again, can add a layer of intelligence on top of it. But you need an extension to do that. So in that extension, you can scan for environment variables or well-known paths and auto-set these things, and that's what a lot of them do in the background. But doing it manually, you don't have the intelligence to do that. So it's better in that you can version the file and maybe give some instructions and some assumptions, but the DIY approach is still going to be that fragile way, with paths and environment locations. The second question you had was about CMake. So I generally don't touch CMake when I'm doing general Zephyr development, so I find it extremely annoying that it's prompting me. This is the hack: you disable it. You literally just pass this setting to say, CMake, stop trying to be so helpful. Stop trying to be Clippy, and let me just code. I think they can improve that on the CMake side, but that will reduce the pop-ups and other scanning things. Which is why it's nice to have the extensions configuration at the top level; it gets loaded before everything else, so it listens to you. So about, well, the macros, or whatever we call them here, is there a way to also set some? You said you set paths, but something like macros, so we can, I don't know, help the user or help the portability, so you can kind of control things. Oh, you mean for the configuration? Yeah. No. The limitation of the configuration system is that it's a JSON file, and it has to be static. Really the only answer is to build a custom extension, which is written in JavaScript.
And that can be the place where you can be more helpful. Unfortunately, it's pretty static. Now, the hack, or another way I've seen this done, is to define specific tasks. I was just doing build, flash, debug, and build-and-flash here, but I've seen people define set-up tasks. Those can run a shell script or a PowerShell script and help a little bit, maybe even address the cross-platform challenges. But the system itself kind of assumes you're going to either do that manually, use an extension, or write an extension. So yeah, a task as a set-up step is another way to do it.

Thank you for the nice talk. Did you figure out a way to utilize the SVD files describing the register layout of the individual SoCs in the debugger? Yes, I did figure it out. I didn't actually put it into this example, but Cortex-Debug in particular has an interface for defining those SVDs. I think it's even in the extension configuration, and if you go to the documentation, they show you how to point to the SVD, and then it uses that to do the translation. I don't think the Microsoft embedded tools support SVD yet, but I could be wrong. Zephyr doesn't ship with SVDs, as far as I know. But if you grab SVD files, put them somewhere in your file system, and point Cortex-Debug to them, it will work out of the box.

While we're on debugging, I found it a bit tedious that you have to define the whole block for every MCU you want to debug. Is there any way to have a base description and then just define the SVD file or the specifics for the different MCUs, so you save yourself a little bit of repetition? No. I think there are areas for improvement in Visual Studio Code; reusability is not something they've optimized for. Even things like variables, so I don't have to keep repeating the path to my virtual environment, are just not part of the configuration system. Scripting, reusable code blocks, meta code blocks: it's not there. So you do have to define all of that.
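Pointing Cortex-Debug at an SVD file looks roughly like this `launch.json` fragment (a sketch; the device name, server type, and paths are examples, not from the talk):

```json
{
  "name": "Debug (Cortex-Debug)",
  "type": "cortex-debug",
  "request": "launch",
  "servertype": "jlink",
  "device": "nRF52840_xxAA",
  "executable": "${workspaceFolder}/build/zephyr/zephyr.elf",
  // The register-level view in the debugger comes from this file,
  // which you download and place yourself since Zephyr doesn't ship SVDs
  "svdFile": "${workspaceFolder}/debug/nrf52840.svd"
}
```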
And again, the only two paths there are to either write some tasks that do that, maybe pulling them from the internet, or use an extension that encapsulates it so you don't have to keep writing it. But yeah, it is a little bit tedious. On the plus side, it's JSON and it's a pretty small file, so you can do it, but it's a lot of work.

How do you decide, or what's the difference, between editing settings.json versus pulling up the GUI settings window? Because at least when I've looked, there are some things that are in the GUI window and some that are not, and it's like black magic to figure out when to use one versus the other. There's a lot of black magic in VS Code. The way the settings work, there are global settings and there are user settings. I don't know if I've experienced that, but I am definitely going to check it out. As far as I understand, the global settings are supposed to be everything that VS Code knows about. So if functionality isn't available at the global level, or is only available at the user level, certain settings will dynamically appear or not appear in the GUI. The JSON file, by contrast, is blank when you start, so you can put whatever you prefer there, and it will ignore things it doesn't know about. So it's a bit more of a blunt hammer. The biggest distinction I've seen is whether a setting is at the global level or at the user level. You can put everything in the settings file or the workspace file, and it will either try to honor it or just silently ignore it. So yeah, a little bit of that black magic hurts.

OK, we have one minute left and then we have a hard stop. I can be available outside for more questions. Thank you. What I like in PlatformIO are the icons at the bottom, where you can just click to compile, to upload, to debug, to open the terminals. Can you have that with your setup?
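As a concrete illustration of the "blunt hammer" behavior described above, a workspace-level `.vscode/settings.json` travels with the repo and silently ignores keys no installed extension knows about (a sketch; the `dts` language id assumes a DeviceTree extension is installed):

```json
{
  // Zephyr overlay files are DeviceTree syntax
  "files.associations": {
    "*.overlay": "dts"
  },
  // A key nothing installed understands is simply ignored,
  // with no warning in the GUI or the JSON editor
  "some.unknown.setting": true
}
```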
Unfortunately not. It's another limitation that I find very frustrating with VS Code, and they've commented on this in public GitHub issues. You can do it through an extension. If you look at a lot of the extensions, the embedded ones and otherwise, they can register icons that launch actions; I forget the exact term. That capability actually comes from VS Code itself, and any extension can use it. That's why folks end up writing their own extension, even a pretty simple one, just to add the UI elements or pre-build some of those tasks automatically. So it actually might make sense to build a custom extension for your own project or your own team, just to do some of that automation and fill in the gaps of the more manual way. Jonathan, thank you very much for your presentation. Everybody, come find me if you need to talk to me.
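The clickable icons the questioner is describing can be registered by even a tiny custom extension, as the answer suggests. A minimal TypeScript sketch (the command id `myext.build` and the `west build` invocation are hypothetical; this only runs inside the VS Code extension host):

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  // A command that runs the usual build in the integrated terminal
  const build = vscode.commands.registerCommand('myext.build', () => {
    const term = vscode.window.createTerminal('west');
    term.show();
    term.sendText('west build');
  });

  // A clickable icon in the status bar, similar to PlatformIO's toolbar
  const item = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Left);
  item.text = '$(tools) Build';
  item.tooltip = 'Run west build';
  item.command = 'myext.build';
  item.show();

  context.subscriptions.push(build, item);
}
```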