Welcome to this webinar on containerized IDEs. My name is Jan Van Bruggen, and I'm the developer relations lead at itopia. I'm going to be talking about containerized IDEs, among other "local as code" techniques. "Local as code" is not a common phrase; it's something I made up. I think it makes sense, but you don't have to copy me on it. Config as code is a big thing in the software development industry these days, and we seem to have "as code" for everything else: monitoring as code, data as code, permissions as code. So why not local as code for our local coding environments? Making them more productive, making them easier to set up really quickly, and ideally declaratively, so that we can also version control those config files. But I'm getting a little ahead of myself, so let me lay some groundwork here. This is just about coding in general, at a very high level; it isn't specific to Kubernetes or to any particular programming language. Coding in general is just text editing, with a little bit of running apps as well: running executables. You need to be able to do all of that in your local environment. You've got a device, you've got some software running on it, you've got an IDE installed, CLIs, compilers. A lot of different things go into setting up your local coding environment, and that's one reason developers are so picky about it: they need it to be just right so that they can be productive. I'm also going to define the term "project," maybe unnecessarily, but just so we're using the same terms. When I say project, I mean a team code base, not an individual project, with at least one repository of code associated with it. Any project like that is going to have necessary dependencies, things that have to be installed for the code to run, and then recommended helpers.
So, things that are probably a good idea to use to improve your dev productivity. Those tools may be declaratively configured: they may have a config file, like one of the config files you see here, that lets you use a CLI with a one-liner command on the terminal to get everything set up for that specific tool. Some of them don't, and that's really what this webinar is about: what to do with the tools that you need or want but can't set up declaratively, or aren't setting up declaratively. There's been a lot of progress in the industry lately on being able to declaratively set these things up, and I want to make sure I share that understanding with you in case you haven't heard about some of these tools. So at a high level, there are a few layers you need before you can start coding. You need the source code for the project, source code that other people (and you) have already written. You can get that by just git cloning it. Great: one-liner command, you're up and running. But you also need third-party libraries. If you're using a language like JavaScript, you're going to need your JavaScript libraries; Python, you're going to need Python libraries; same for Java. You can get these things very easily; this is a solved problem. There are tools like npm, pip, Maven, and Cargo for the Rust ecosystem. You just use the config file that comes with the code base, you run the one-liner, and you're off to the races. Things that are usually not version controlled but are necessary to code on a project are things like the packages on your operating system. A project may assume that you have curl installed on your system. It may assume that you have the correct version of Node installed. Actually, it probably won't assume that in particular; instead, the README will say you need to install this specific version, and that version changes over time, so you need to just keep up with it.
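To make the contrast concrete, here's a rough sketch of those layers (the project name and commands are hypothetical examples, not a specific repo): the version-controlled layers really are one-liners, while the OS-level layer stays manual.

```
# Version-controlled layers: declarative config files drive one-liners.
git clone https://github.com/example-org/example-app.git
cd example-app
npm install                        # reads package.json / package-lock.json
pip install -r requirements.txt    # if the project also has Python deps

# OS-level layer: "local as documentation" -- the README just tells you to
# install things by hand:
#   "Make sure curl is installed"
#   "Install Node 16.x (see nodejs.org)"
```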
So it's a really tricky situation when you need these operating-system-level packages in order to do local development, because those aren't npm installable, they aren't pip installable, they aren't Maven installable, and they aren't Cargo buildable. You're going to need more than what comes with the language ecosystem's package manager here. As for your operating system itself, it's also very much assumed that you've already set it up, and for most developers that's the case: you have a MacBook that you've completely configured, that you're super comfortable with, and that you can work on a lot of different projects with. Great. But what if you're running a Chromebook? There are some projects that you just can't develop effectively on a Chromebook. And there may be discrepancies between team members: some team members on Macs and some on Windows. This leads to a distinction between what I call local as code and what I call local as documentation. Local as code covers tools and dependencies that you need and, voila, they're there for you, right out of the code base: there are some config files, you run some commands, and you've got them. For things that are local as documentation, you need to read guides, follow step-by-step instructions, and do it manually and imperatively yourself, which causes a lot of problems. It can cause drift between team members. It can cause bugs where code works on the machine you developed it on but doesn't work on another machine, or doesn't work in production. It can also lead to delays, because you need to set all of this up before you can start, and that can take quite a while; or if you do a factory reset on your machine, you have to do it all again from scratch; or if you're switching from one project to another, you have to switch a bunch of operating system packages. You have to change your Node version.
You have to change, you know, whether curl or wget is available in the environment. So the goal here, the point of this entire webinar, is that whether you're an open source project or an enterprise team, you can reduce project onboarding time by automating those project setup steps. Short onboarding time versus long onboarding time can be the difference in having a lot of open source contributors: if they can come by, clone the code base, and immediately write effective, high-quality code that they can push up in a pull request, you're going to get a lot more high-quality pull requests, and a lot more pull requests in general, because people will feel more welcome to contribute. And on an enterprise team, onboarding, offboarding, employee churn, and contractors can make or break a team's ability to ship features. Someone may come from another team inside the company without this code base configured, and you want to get it set up for them quickly so they can contribute a feature to your project. So let's get started automating these steps with a bunch of different cool tools that you may or may not have heard of. And there's an extra layer here that I've hidden from view that I'm going to bring back later, once it becomes more relevant; I don't want to derail too much now, so I have a little secret for later. The first layer is operating system packages. Having them configured as code is very tricky, especially when developers are on different operating systems: some on Windows, some on Mac, some on Linux. How are you going to configure the same packages across all those operating systems? That's one of the reasons it traditionally isn't done.
Another reason is that these are often system-wide, system-level packages that apply across all the projects someone is working on. So if project A installs some operating system package, project B is also going to get it, and that is not always good. Sometimes you can get away with it, but sometimes it causes big problems. A simple example is Node versions: if you have Node 12 on project A, and then you upgrade project A to Node 16, project B might still expect 12. That's why you need something like nvm (Node Version Manager). You need all these workaround tools, for each individual package, to make sure they're versioned per project, and that can get really messy when you have to do it at a per-package level. It would be much nicer if we could do it at a higher level. Unfortunately, a lot of package managers (let's say most of them, almost all of them) don't come with declarative config files. If you're on Mac and you're using something like Homebrew, there isn't a per-project Homebrew file where you can say, hey, I want to have these packages available on my Mac right now and then later throw them away. You don't have that ephemeral config, and you don't have that declarative config. Same thing on Windows: there are a few different package managers there. I'm not super familiar with them, but I know there's Chocolatey and I know there's winget. I don't think they have declarative config files, and I'm almost certain they don't have ephemeral environments you can hop into, like temporary shells. Luckily, there is a very cool tool called Nix, and if you haven't heard of it, it is growing in popularity. It is a package manager for Linux and for macOS. Unfortunately, not for Windows, so if Windows is a requirement for your team, this particular tool isn't going to work for you, but I'll have another solution for you later.
Nix is basically a cross-operating-system package manager that is very aggressive about version pinning, down to the nitty-gritty details: if you have a set of Nix packages installed for just project A, those won't bleed over into project B. The nix-shell command is something you can use with a .nix config file to say: right now I want to hop into an interactive bash shell where I have these specific versions of these specific packages installed. And when I switch over to work on project B, all of that evaporates, because it's all symlinked. It's all installed outside of your project directory and outside of the operating system's usual root directories where you would normally expect to see these packages installed; it's instead installed under the /nix directory, and everything is symlinked into that temporary, ephemeral, interactive shell session. I've actually used this on a project in the wild. There's a great up-and-coming programming language called Roc; if you're interested, check out roc-lang.org, a little plug for them. It's a great project. I'd never contributed to a programming language before, and I'd never worked on a Rust project before, but it was really easy to contribute to this one, because they had everything defined in a Nix shell file. I didn't need to install Rust. I didn't need to know how to install all of the compilers and tools needed to compile this programming language. Instead, I just ran nix-shell once I had git cloned the project repo, and everything was installed for me and everything worked. I ran the test command and the local dev build command, and all of the tests passed on the first try. I was stunned that I didn't need to install compilers or install all these things manually, because it was all done automatically for me. So I highly recommend Nix.
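As a rough illustration (the package list here is a generic example, not Roc's actual config), a project's shell.nix might look like this:

```nix
# shell.nix -- hedged sketch of a per-project environment.
# Pinning nixpkgs to a specific release keeps every `nix-shell` reproducible.
{ pkgs ? import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-21.11.tar.gz") {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.curl
    pkgs.nodejs-16_x   # the exact Node this project expects
    pkgs.rustc
    pkgs.cargo
  ];
}
```

Running `nix-shell` in the repo root drops you into a bash session with exactly these tools on your PATH; exiting the shell leaves the rest of your system untouched.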
Unfortunately, they don't have support for Windows yet, but for most projects I think this is going to be a huge value add, because you're going to remove a lot of your README steps of "make sure you have this installed, make sure you have that installed." You can keep up with changes, too: if you just do a git pull, the next time you run nix-shell it will do the diffing to make sure the right things are installed in that environment. So you never fall behind, and you don't have to update things manually, which is really nice. Okay, the next level is the operating system, and this is, I don't want to say controversial, but it's a huge decision point: whether you want to containerize or not. Having an operating system defined for a project can be both liberating and restrictive. By liberating I mean: if your project sets up your operating system for you (and we'll get to how it does that in a second), if you had a little operating system just for this one project, that is awesome, because you don't care what OS people are running on their devices; this little mini operating system is going to work for them. You don't have to do all those manual updates; it's all taken care of for you, like I said with Nix above. Having a declaratively defined operating system is great for these things, and it can really increase productivity. It can also be restrictive, because even though it gives you all of that out of the box, an environment in minutes, it's not the environment you're used to working in. It's not going to be a little mini macOS system for this project, and it's not going to be your desktop cloned and minified. It's going to be different.
You're going to have different muscle memory in this environment, and that can be painful, as can getting GUIs out of a per-project operating system like this. I feel like I'm burying the lede on how this is going to work, but if you have a mini operating system just for this one project, running IDEs out of that operating system can be frustrating, and you need a specific, planned way of connecting to the GUIs inside of it. So be warned: if you want to just set up Nix for your project, I think it's great by itself, and I don't think you need to go further. But if you want to dive a little further down the rabbit hole, let's talk about two solutions. The first is containerization, and you're probably already familiar with how this works: it sets up a guest operating system on your device, and your device's operating system, whether that's macOS or Linux or Windows, acts as the host operating system. That means the little mini operating system in the container can be completely defined: what version of the OS it is, what packages are installed inside of it. It's great. It is an imperative paradigm, slightly unfortunately, because that means things can still drift. For example, if you do a docker build on a Dockerfile today and then run the same docker build command on the same Dockerfile a week later, you could get different results, because you don't have all of your third-party dependencies exactly pinned down. Something like NixOS, by contrast, would do that for you, and I recommend it to anyone who wants to get their hands dirty and is curious about completely declarative operating systems. It's basically an operating system designed around the Nix package manager, with a single config file for your entire operating system. It is wild.
It is, I want to say, early days, but they've been working on it for I think a decade now, and it's come really, really far. It's impossible to expect that every application and every configuration could be supported, but you can have a productive working machine on NixOS, and I'm actually considering setting it up on one of my personal laptops to give it a spin in my daily life. But I'm going to rule it out for this conversation, because it is not accessible or popular. It's not going to have integrations with other tools; there aren't going to be cloud hosting providers for your NixOS desktops. That's just not going to happen. And accessibility-wise, not everyone is going to want to edit an obscure, obtuse config file for their operating system. This is definitely for a certain type of developer, and for them it's the greatest thing they've ever seen, but for everyone else it's a little bit smelly, and we want to stay away from it and stay in the world of containers. Even though containers can drift a little and are imperatively defined, they work well enough for our purposes, so we're going to stick with containerization as the solution here. And I just want to be clear that these are orthogonal solutions: you could containerize a NixOS environment. You could do both, because they take different approaches to solving the same problem, and you could layer them on top of each other. There are pros and cons, but I just wanted to put that out there. So let's go with containerization. Let's say that for this specific project we're going to spin up a dev container with everything you need installed. It's going to be Ubuntu, let's say, and it's going to have a specific version of Python pre-installed: Python 3.10, the latest and greatest. It comes pre-installed, so you don't need to deal with that Python installation step. It's also going to have pip completely configured. It's going to have everything you need.
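A dev container image like that might be sketched as a Dockerfile along these lines (the base image, package list, and file names are illustrative, not a specific project's recipe):

```dockerfile
# Hedged sketch of a project dev-container image.
FROM ubuntu:22.04

# OS-level packages the README used to tell you to install by hand.
RUN apt-get update && apt-get install -y --no-install-recommends \
        curl git build-essential \
        python3.10 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace

# Project library dependencies, resolved from the version-controlled file.
COPY requirements.txt .
RUN pip3 install -r requirements.txt
```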
It could also have operating system packages installed, because Dockerfiles can install all of these things. Oh, I forgot to show something: the number of Nix packages out there is wild. There are 380,000 commits in the nixpkgs repo, and it just blows me away how many packages they have in there. I just wanted to mention that. But even though Nix has all those packages, your Dockerfile can install packages for you, imperatively, and cache and pin those image layers for you. That can all be done by Docker, so Nix becomes optional at that point. If you're taking the plunge into a containerized approach to your development environment, that can take care of a lot of the operating system packages as well. Again, you can layer them, but you don't have to; it's optional at this point. Before I fill in these question marks of how to run an IDE out of a containerized operating system, I want to come back to that thing I alluded to at the beginning: there's an extra layer here. When you set up a whole operating system for people, you take them out of their element, and that's very uncomfortable. If they like using fish shell and you make them use bash, they're going to be uncomfortable. If they like certain shell aliases and autocompletions and certain color themes, they're going to be a little uncomfortable in this new operating system. That's why personalization also matters here. Even though we're taking our developers and putting them into this new project-specific operating system, you can still personalize it to fit your needs. You can still have the same color themes and shell aliases and environment variables you're used to having, because you can use dotfiles for that. This is something I don't see as very popular, but I don't know why not, because the tooling is very mature.
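For example, with a dotfile manager like chezmoi (discussed next), restoring your whole personal setup onto a fresh machine or container comes down to roughly one command (the dotfiles repo URL below is a hypothetical placeholder):

```shell
# Install chezmoi, then pull and apply your dotfiles in one shot.
sh -c "$(curl -fsLS get.chezmoi.io)"
chezmoi init --apply https://github.com/yourname/dotfiles.git
```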
Tools like chezmoi, a dotfile manager, can do a lot for you. It can manage these dotfiles and install them on different systems, deciding: hey, you're on Linux right now, so I'll install them in these directories; oh, you're on Mac now, so I'll install them in these directories. And compared to a lot of other dotfile managers, it seems to have a strong feature set. I guess this comparison table is biased because it's on chezmoi's website, but it seems legitimate; it seems like the best-in-class dotfile manager. I recommend it, and I enjoy using it. It's a very simple one-liner command to get all of your .bashrc files and IDE preferences and fonts and color themes and things like that installed on any system you go to. If you're hopping around, you could go to a library PC and set up exactly the environment you wanted. I don't think anyone goes to the library to do enterprise software development these days, but it doesn't really matter: you could go to any machine you want, and it would look and feel like your home development machine if you put your dotfiles into a manager like this. So that was a quick aside on personalization. Now, back to ways to containerize the operating system. The easiest way is one I enjoyed using a few years ago when it came out: using an IDE with guest-OS support, plus Docker Desktop, to run guest operating systems locally. This is entirely offline, entirely local. Effectively, you'd use something like VS Code Remote Development, a plugin for Visual Studio Code, a popular IDE, which lets you run the back end of the IDE out of a second operating system. All of the terminal sessions run out of that second operating system, and all of the code is edited and stored in that second operating system, so you don't need to install the dependencies locally anymore.
And this definition of "local" gets blurry at this point; it's only going to get blurrier later on. But the idea is: let's say you're running a MacBook and you need to install Rust in order to do Rust development. In this case, you would not actually need Rust installed anywhere on your Mac. You could install Rust instead in the Ubuntu-based container; actually, for this use case, it wouldn't need to be a graphical distribution like Ubuntu. It could be something really lightweight, like Alpine or Arch. You would just install Rust in there and have VS Code do all of the GUI work for you. You'd need Visual Studio Code installed on your Mac, but you wouldn't need the tools for your project installed on your Mac. So you could version a Dockerfile with your project's code base, and have Visual Studio Code pick up that Dockerfile and run the image locally on Docker Desktop for you. That's the easy solution; everything beyond it gets a little trickier. Oh, and I should also clarify: this could work for teams. It's really great for individuals, but it could work for teams. The scaling issue is that you're making all of your developers install Docker Desktop on their machines and run these containers locally, and that can be problematic for developers who don't have access to Docker Desktop. You also can't really reduce compile times very well unless you get these Docker containers off of their local machines. That's where a hybrid approach comes in: you can move that Docker container onto a server, and suddenly you're working with the VS Code client locally but connecting to a remote Docker instance on the server, where all of the dependencies are installed instead of on your local machine.
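In VS Code's Remote Development tooling, both the local and remote variants are typically driven by a devcontainer.json checked into the repo. A minimal hedged sketch (the project name, extension ID, and post-create command are examples):

```jsonc
// .devcontainer/devcontainer.json -- VS Code builds this image and
// attaches its back end to the resulting container.
{
  "name": "example-project",
  "build": { "dockerfile": "Dockerfile" },
  // Editor setup that travels with the project:
  "customizations": {
    "vscode": {
      "extensions": ["rust-lang.rust-analyzer"]
    }
  },
  "postCreateCommand": "cargo fetch"
}
```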
It has all of those same benefits, and now you don't even need Docker Desktop installed locally. The Remote - Containers extension for VS Code supports this, and I believe other IDEs are working on supporting this pattern as well. I call this medium difficulty because you need to set up a server for it. Another medium-difficulty solution is to use something like code-server or JetBrains Projector, so that instead of the server connecting to a local IDE client, the server streams a web browser session. You don't need a client anymore; you don't need to install anything special on your local operating system. You just need something like Chrome to open up these IDEs and use them purely in your web browser. It is a little different from working locally: I believe that with these solutions the extension marketplace, for example, isn't as robust as it is locally, because you're running it on your own server and it's not a Microsoft-approved setup. But for the most part, this is a strong solution. It's just that, again, you have to set up the server yourself. And JetBrains also has a solution here; I don't want to be super VS Code specific, because there are lots of IDEs that people use, and VS Code is not the only one in the world. Okay, so now you've set up one of these solutions and you want to get it to the rest of your team, setting up these environments so that the rest of your team can simply log into a web browser, or open the machine, and start working. To give them that same set-up-in-minutes feel, as lightweight as possible, you want to scale that server beyond the confines of a single, manually configured machine. You want declarative configuration for your server setup, and you want it to autoscale, which basically means you want Kubernetes at this point.
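For reference, the single-server code-server route mentioned above really is short to try; the install script is the one code-server documents, and the bind address is your choice:

```shell
# On the server: install code-server and expose it on port 8080.
curl -fsSL https://code-server.dev/install.sh | sh
code-server --bind-addr 0.0.0.0:8080
# Then open http://<server-ip>:8080 in a browser; by default the login
# password is generated in ~/.config/code-server/config.yaml.
```

Everything past that point, per-user sessions, authentication, and autoscaling, is exactly the complexity that pushes you toward Kubernetes.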
You want Kubernetes; you want an operator managing the connections, saying: hey, you want to connect to this IDE? Okay, here you go, I've got a connection for you. Per-user sessions, stateful workloads: the complexity of what you need for a self-service, scalable system balloons, and that's why it's nice to have something like Selkies. Selkies is a Kubernetes platform for per-user stateful workloads like IDEs, but it can do other things too: it can run the Unreal Editor for 3D editing, you can use it for IDEs, and you could use it for a full desktop experience if you wanted to. It's effectively just a streaming solution for the end user, but for the administrator it takes care of all of the networking and all of the security concerns. There's a lot that a Kubernetes operator and platform like Selkies can do. If your team or enterprise doesn't want to manage Kubernetes, there are managed solutions here for you instead. Instead of setting up those Kubernetes clusters, you can just sign up for one of these services. One of them is itopia Spaces; another is GitHub Codespaces. What they do is provide the full IDE in the browser, with the back end completely managed and scaling to the level you want. GitHub Codespaces is Visual Studio Code for GitHub-based repos; itopia Spaces is a variety of IDEs for a variety of source code hosts. So if you're using GitLab, or a JetBrains IDE, or something like Eclipse or some other niche IDE, itopia Spaces would be able to run it, scale it, and set it up for your whole team. It would look something like this: a complete IDE in the web browser, with everything looking and feeling the same as local, using those personalization files with chezmoi. I've got this color theme, and I think a font, installed in here.
And you also have the complete ecosystem of extensions for whatever IDE you're using in here. It could also be PyCharm, not just VS Code. You could use a variety of different IDEs, install them according to a Dockerfile spec, and then customize them for whatever your team needs. Preinstall all the dependencies and all the extra tools you need alongside them, so that they're isolated in that browser tab. These two browser tabs share no back end; they are completely isolated from each other, so work and changes and drift in one workspace won't permeate into another, which is really great for project isolation. Developers log in through whatever SSO system you want to integrate with, and they can switch between the different teams they're on. Now let's talk about administrators, because they get a lot of superpowers with itopia Spaces: two-click onboarding and one-click offboarding, so you can bring people in and out of these teams. You can manage your teams here in the admin portal, configure spaces for them, and monitor how much they're being used. For instance, we can see this space is used every other day or so and isn't really hitting its CPU or RAM ceiling, so there's no need to upgrade it from a small space to a large space or anything like that. As for creating new spaces, you can grab from the catalog of pre-configured space images that we have, or you can upload your own custom images into the system and use those for the specific configuration of dependencies that you want. For instance, we have Visual Studio Code, WebStorm, PyCharm, IntelliJ IDEA, and Eclipse, all pre-installed and pre-configured with the language you would expect to have in that environment.
As for network controls: standard enterprise things. Network egress filters, VPC peering if you've got on-prem systems or things in a cloud you want to connect to, and allowlisting for the external IPs of the cluster. Again, all of this is fully managed, so it's not running on a server you need to set up; it's not running in your cloud project or on-prem or anywhere like that. It's all exposed via SaaS APIs, as most enterprises want. And we're building our catalog out in public, so all of it is open source on GitHub: how we're installing these IDEs, how we're building these Docker images. You can base your custom Docker images off of the ones we have in there. Maybe there's a step you wish were done in the middle of it; you can just add that in, upload the custom image into the system, and boom, your developers have exactly what they need. It's also nice that your developers and administrators can collaborate on these images, on the definition of what their development environment is, rather than administrators throwing it over the wall: here, I've configured this setup for you, and if there are problems with it, email me. No: it can all be done through version control, as it should be. And that about wraps up the ways to containerize. You can containerize very manually and offline, which gives you some benefits in the short term, but it doesn't really scale up to a larger team. And while there are those intermediate solutions of moving things off to a server, and scaling that server up with something like Selkies if you want a DIY solution, a managed solution like this is much easier for a team to adopt, to try out, and to get their feet wet with, without having to invest a lot of development or DevOps time before they see if it's right for them. So with that, thank you very much. I appreciate you joining.
And if you have any questions about anything I've said today, please email me at jvanbruggen@itopia.com, or hit me up at JanCVanB on GitHub or Twitter; you can see what I'm up to over there. If you're interested in itopia Spaces, check out these links here, and feel free to reach out if you have any further questions about what that service offers. And if you're interested in the Selkies project, or if you want to collaborate, check out the GitHub organization for it, or hop into our Discord and chat with us about a use case you're interested in. Thank you very much, and I hope you have a great day.