language that prides itself on its lack of magic. So let's be explicit in this case. The only magic I actually want to talk about is creating magical experiences, maybe for consumers, for people. Part of that is inventing new products, and that's where Nerves falls in. That way we can have magical experiences, like the times when you need your smartwatch to talk to your egg carton. You'll never be out at the grocery store again not knowing the temperature of your eggs. This is smart eggs, everyone. We're just going to deal with it. I actually kind of like this series of products that are coming out. You've got to put some ideas into this space, and it's something that has stirred up a lot recently in a big community, especially where I started. I started as a maker, and times have never been better for makers. Things are moving fast and forward, and with the tools we've been given, ideas like smart eggs can actually come to life, because we have powerful frameworks and languages to get us there. One of the great things we see these days is the ability to use Phoenix to rapidly create connectivity with channels and presence. The web in this case just begs for these devices to be connected to it. It's a whole new level of abstraction. An example of this is the session we had for badge hacking. I actually have it running up here. It's a device that we attached to the back of the badge; we had a workshop for this. It runs on the LinkIt Smart Duo, a nice tiny little processor. What it does is connect to Twitter and follow some hashtags, and whenever a new tweet for one of those hashtags comes in, it shows it on the display for about 20 seconds, then it goes away and waits for the next one. This kind of stuff is really cool and powerful, because it gives us the ability to glue together the pieces of the language.
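The badge's display behavior is easy to sketch in Elixir. To be clear, this is a hypothetical, simplified illustration of the idea, not the actual badge firmware: the module name, the `show/1` API, and the configurable interval are all my own invention. A GenServer holds the most recent tweet and clears it after the display interval.

```elixir
defmodule TweetDisplay do
  @moduledoc """
  Hypothetical sketch of the badge behavior: hold the latest tweet
  for a fixed interval, then clear it and wait for the next one.
  """
  use GenServer

  # 20 seconds on the badge; configurable here so it can be exercised quickly
  def start_link(display_ms \\ 20_000) do
    GenServer.start_link(__MODULE__, display_ms, name: __MODULE__)
  end

  # Called whenever a new tweet for a followed hashtag arrives
  def show(text), do: GenServer.cast(__MODULE__, {:show, text})

  # What is currently on the display (nil when blank)
  def current, do: GenServer.call(__MODULE__, :current)

  @impl true
  def init(display_ms), do: {:ok, %{display_ms: display_ms, tweet: nil, timer: nil}}

  @impl true
  def handle_cast({:show, text}, state) do
    # Cancel any pending clear so a new tweet gets its full interval
    if state.timer, do: Process.cancel_timer(state.timer)
    timer = Process.send_after(self(), :clear, state.display_ms)
    {:noreply, %{state | tweet: text, timer: timer}}
  end

  @impl true
  def handle_call(:current, _from, state), do: {:reply, state.tweet, state}

  @impl true
  def handle_info(:clear, state), do: {:noreply, %{state | tweet: nil, timer: nil}}
end
```

On the real badge, the Twitter stream consumer would be the caller of `show/1`, and driving the physical screen would replace `current/0`.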
We have access to running so much on this little device. We're running Elixir. We're running Phoenix with a web interface. We're running connections to Twitter, and all of that on a Wi-Fi-enabled chip at 580 MHz with 128 MB of RAM. This kind of stuff is amazing, and as a maker, it really starts to accelerate people toward the point where they feel like they can actually produce quality products. Somebody came along and allowed me to do exactly that. I was able to transition from maker to professional by somebody saying, "I'll hire you and pay you money to do this." Hey, I'm a professional now. Thank you. Special thanks to Le Tote, actually. We're hiring. The best thing about Le Tote is that it's been a very enjoyable experience. Not only do we now get to push forward on making Nerves what it can really become, a great platform for creating embedded products and systems, but one of the fun parts of working as a remote employee is that I get to combine dangerous activities with inappropriate workplace wear. The other hard part about working remotely, and my wife can attest to this, is that you just don't know when to stop. It all blends together, everything continues to work, and you just power through until you eventually get there. Anyway, today I want to talk about some of the experiences I've had along the way. These experiences started in my career as a maker, working toward creating tools and systems that could help people create these devices more easily, originally from trying to remote-start my motorcycle from my iPhone. I found it was really hard to do that kind of connectivity with devices, and I wanted to make it easier. So I joined up with Nerves and the rest of the team, with Frank, and we all pushed forward to create this. Being on the creation side of the language and framework is interesting.
But then working at Le Tote and being on the usage side was enjoyable too, though it exposed some painful parts. The first is distributed artifacts. We'll talk more about what an artifact is, but our experience is that when you're working with Nerves systems, there are a lot of packages, or artifacts, that carry a lot of weight, hundreds of megabytes in size. The problem with distributing them is that every time you start a new Nerves project, you may be using the same artifacts to communicate with the same board, like a Raspberry Pi 3, and instead of those artifacts being centralized, they get copied into every one of your projects. This has its upsides and its downsides. The downside is that computers are moving toward solid-state drives at lower capacities, and a lot of people get angry when you start chewing up gigabytes of disk space just so you can produce tiny little 20-megabyte images that run on your embedded devices. Another painful feature was the development lifecycle. If you've ever used Nerves, it's a little tedious, because you have this target hardware, and to truly know that what you're producing will work, you have to run it on that target hardware. Running it on your host isn't proof, because it's a different environment. Something else that was painful: Nerves brings with it systems. These are predefined systems that handle platforms like the Raspberry Pi, Raspberry Pi 2, Raspberry Pi 3, the BeagleBone Black, and, as you see with this badge, the LinkIt Smart. There's a great list of them you can visit on our GitHub pages and check out.
And part of modifying systems has been: well, we give you these predefined ones, but you might want to extend them with some additional stuff, and the process of adding packages is a little painful for some. In addition, there's enabling kernel modules. These are little drivers that run pieces of your hardware. You may have a little wireless dongle that you plug into your device, and there are a million different kinds of wireless dongles, with, maybe not a million, but tons of different drivers. From our side, when we try to produce these minimal systems, it's difficult to enable all of them so they all work for you out of the box. So we can only handle a few, and for those trying to use one that's not supported, well, you can turn it on, but in some situations it's a little painful to do. And finally, there's extending the toolchain. Our toolchains support C and C++ compilers, but these days a lot of people want to compile in other languages. So those are the painful experiences. Today we're going to cover them at a top level. We're going to talk about artifacts, how we solve them with package management, and the development cycle. And the last part is about getting everybody to contribute to Elixir and Nerves, because it's not that difficult. So first: packages and artifacts. As I said before, the weight of these distributed artifacts is striking. At first we just decided, well, disk space is cheap, so let's allow this pattern to exist. It wasn't until further usage that we realized these things start to accumulate. In this case, we have an image here of the accumulation.
So with Nerves, we build out to the _build directory, and we have our own nerves directory in there, which contains some of the intermediates. If you're unfamiliar with some of these stages, you can check out some of our other talks, especially the one I gave at ElixirConf EU; it describes a lot of the build system and the production of these assets. In this case, we have the toolchain and the system, and these are the foundation of producing Nerves firmware. As you can see here, it's 127 MB for the toolchain and 142 MB for the system, and that inevitably produces a LinkIt image of about 15 MB. It's startling when you think about how much in assets is required to produce such tiny little things. So the mentality here was a little skewed, I feel, because we're duplicating the download of toolchains and systems across projects. Think of it this way: I create this little project that does a bit of work for my LinkIt Smart, and a little later I go and create some other little toy project that does another thing. Every time I create a new project and run mix firmware with it, I'm duplicating that 127 MB and that 142 MB, and things add up quickly. In addition, if I want to leverage the Mix environment, to say, OK, I want to build for dev now, and then I want to go build for prod, well, every time you switch between environments, you're going to get a folder under _build for every environment, like the dev folder you can see here. Each of those also takes in the systems and toolchains for that environment, so things compound even within the same project. Now, it was a little difficult to attack this problem right away.
We had some things in place that were inhibiting our ability to move toward easily sharing these. A while back, I released a concept out into the wild for centralizing a local cache. It worked for systems, but it didn't work for toolchains, and the reason is that systems and toolchains are different. The difference, if we look at a system's project declaration itself, is that a system adds a compiler for nerves_system, while a toolchain adds a compiler for nerves_toolchain. So when it came time to say, OK, let's make this shared, we laid out the groundwork in systems first. We started by looking at the structure of this glue layer that we call nerves_system; that's where the nerves_system compiler lives. We have this mentality where we already try to save you the time and energy of building the assets locally, and instead we fetch them from the network. You'll notice when using Nerves that when you execute a compile, if the assets don't exist on your system, it downloads them at that time and stores them in the _build directory. So we started to spec this out: we have a local provider and an HTTP provider. And we thought, oh, let's make it easy, let's do the same thing for toolchains, because then they'll both work the same way. What ended up happening was: all we have to do is take this code over here, and when we go work on toolchains, we just put it over there. And when you're copying code from one place to another, that's obviously a signal that there's an abstraction to be found, that something is related there.
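Concretely, the split looked something like this in each project's mix.exs. This is a sketch from memory to show the shape of the duplication, and the exact app and compiler names may have differed across versions: a system declared one compiler, a toolchain declared another, even though both were doing the same "fetch or build an artifact" job.

```elixir
# In a system's mix.exs (sketch)
def project do
  [
    app: :nerves_system_rpi3,
    version: "0.1.0",
    compilers: [:nerves_system | Mix.compilers()]
  ]
end

# In a toolchain's mix.exs (sketch): a nearly identical declaration,
# which is the duplication that motivated a single unified compiler
def project do
  [
    app: :nerves_toolchain_arm_unknown_linux_gnueabihf,
    version: "0.1.0",
    compilers: [:nerves_toolchain | Mix.compilers()]
  ]
end
```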
So the point we ended up coming to was that we had to create an abstraction that handled both cases, and everybody decided the best way to do this was to bring the toolchain and the system under a single, better abstraction. What did that look like? Well, as I said, we have these two different Elixir compilers both trying to perform the same task. So we decided to move away from the mentality of there being systems and toolchains, and toward everything just being a package. The next step was to say: instead of these being different compilers, they're now going to use the nerves_package compiler, and depending on what type of package it is, it will just perform the operation appropriately. This was really interesting because, with Nerves, not only is there that size problem, but we have a lot of repositories, and it's becoming more and more difficult to manage the sheer number of repositories in scope. We knew exactly where we needed to hook into this. If you're a Nerves user today, you may notice that every one of our packages ships with this nerves.exs file. This is a package configuration file, and it's really important. If you don't know how the Nerves system works: under the hood, we have a special time during the compile called the pre-compile phase, before any code on your computer is actually compiled. We can hook into that phase, because when you're compiling a Nerves application, you're always compiling for the target.
And when we download our dependencies, if we need to know some information from the environment of a dependency, we would need to load it, which means we'd need to compile it, which means we're too late. So we include this nerves.exs file, which some of you may or may not have opened and peeked at, to give us a little before-compilation introspection into the dependencies in your system, so we can easily identify which ones are Nerves dependencies. This is what one of these files looks like today. We somewhat haphazardly laid it out with just enough information to satisfy the build engine at the time. You can see it gives us some information like: this is a system type; here is its version, so we can look up the artifacts locally; here is a list of places we might fetch it from, in this case the mirrors; and here is some information about the modules required to take this package and turn it into one of these artifacts. So we needed to make some changes here. This is the part where I say: things are changing. To move us from systems and toolchains to packages, we simplified the Nerves package configuration a little. We're simplifying the platform by taking a lot of the code that's out in sub-repositories, like nerves_system, and bringing it up into the main nerves application. This makes it easier for us to consolidate all of the build code into one location, so that we can easily omit it from releases without having to flag warnings. And as you may have heard in Paul's talk earlier, we're also really excited to be working toward adopting the new release system using Distillery. There are going to be some great advantages there for us as well.
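A nerves.exs along these lines carries just enough metadata for that pre-compile introspection. This is a hand-written approximation rather than a copy of a real package's file, and the key names and module name are illustrative:

```elixir
# nerves.exs -- package configuration, readable before any compilation happens
[
  nerves_package: [
    type: :system,                 # :system | :toolchain | ...
    version: "0.1.0",              # used to look up artifacts locally
    mirrors: [                     # places a prebuilt artifact may be fetched from
      "https://example.com/artifacts/nerves_system_rpi3-v0.1.0.tar.gz"
    ],
    # module that knows how to build the artifact from source if needed
    build_platform: Nerves.System.Platforms.BR
  ]
]
```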
One important feature we're adding now is that we're going to take the list of files that exists in a package and, in a similar fashion to how Hex operates, create a unique hash over the contents of those files, so we can use it as a fingerprint for identifying when the contents of a package change. This is a freedom we don't really have today, because reproducing these packages is really slow and difficult, as we'll see in later slides. So that's the creation side. The other aspect is what an artifact looks like. A package is just a big lump of configuration information, and when we look at a system, that configuration is what tells our build how to produce an actual Nerves Linux system. One of the changes we want to make here is to move away from this bag of stuff and add a little more structure, so we can do some interesting things. We're going to split it into three sections: anything a package produces that's part of the target filesystem goes into the target fragment; anything a package produces that you may need at compile time, to link against, goes into the staging fragment; and anything extra that needs to end up in the final image goes into the image fragment. Ultimately, with the new sharing architecture for artifacts, we're also going to take it one step further: when you download an artifact using our system, instead of it being downloaded into the _build directory of your project over and over again for whatever environment you're in, the artifact is going to get saved into your home directory, under ~/.nerves/artifacts.
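The fingerprinting idea can be sketched in a few lines: fold every file's path and contents into one SHA-256 digest, so that any edit or rename changes the checksum. This is my own illustration of the approach; the real implementation details, such as exactly which fields get mixed in, may differ.

```elixir
defmodule ArtifactFingerprint do
  @doc """
  Produce a single hex-encoded SHA-256 over a list of files, folding in
  each path and its contents so renames and edits both change the digest.
  """
  def checksum(paths) do
    paths
    |> Enum.sort()                                   # stable output regardless of input order
    |> Enum.reduce(:crypto.hash_init(:sha256), fn path, acc ->
      acc
      |> :crypto.hash_update(path)
      |> :crypto.hash_update(File.read!(path))
    end)
    |> :crypto.hash_final()
    |> Base.encode16(case: :lower)
  end
end
```

The point of the sorted, incremental hash is that two checkouts of the same package always produce the same fingerprint, which is what lets a prebuilt artifact be matched to a package without rebuilding it.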
This way, any time you go to use them, they'll always be available and ready to go, and they won't weigh tons and tons of megabytes on your machine. The other thing we realized is that sometimes you don't want to pull these from Hex; you want to just play around with some stuff, or you might be in the process of developing your own system or your own package. In that case, any time you reference a git or a path dependency, we're going to reference it from where the dependency exists inside your project structure, so it doesn't have the opportunity to iteratively pollute the global shared artifact pool. With this infrastructure in place, we can do some really interesting things, one of which is package management. With package management, you may be thinking there's already Hex, and that's true; we're going to leverage it by allowing you to do something like this. This is just a trivial example, and what you're going to see here is not in place yet; everything we're talking about at this point is something we'll be merging in and working toward after the conference. In this case, with package management and the new artifact structure, we can say: I want to include Postgres in the userland space of my Nerves firmware. Because of the way artifacts work, the Postgres package can get compiled for a bunch of different architectures, like the Raspberry Pi, the BeagleBone Black, or the LinkIt Smart. For each of those architectures we have different toolchains, and those toolchains produce different binaries. So even though you're getting Postgres from Hex, what you're getting is the package configuration for how to build Postgres.
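In your deps, the idea is that it would look no different from any other Hex dependency. This is a sketch of the planned usage, and the package names and versions here are hypothetical, not something published at the time:

```elixir
# mix.exs deps (sketch of the planned usage -- names and versions are hypothetical)
defp deps do
  [
    {:nerves, "~> 0.4"},
    {:nerves_system_rpi3, "~> 0.9", runtime: false},
    # A userland package from Hex: what Hex delivers is the *configuration*
    # for building Postgres; the prebuilt binary artifact for your
    # target/toolchain pair is fetched separately from an artifact cache.
    {:postgres_package, "~> 0.1", runtime: false}
  ]
end
```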
At pre-compile time, we're going to identify that you want to use that package, then identify which toolchain you're using based on the target you've selected, and then go fetch the pre-compiled artifacts from a different cache. Essentially, with everything split up into target fragments, this gives us an incredible ability to just chain them all together, merge them in place, and that inevitably becomes your target image. With all of that smashed together, it's similar to how a lot of other package management systems work on your desktop, but the beauty of this is that it's highly reproducible at compile time. As I mentioned before, this will work in conjunction with Hex; we're going to let Hex do all the heavy lifting. We realized, me especially in conversations with Eric, that building package management and dependency management systems is hard work, so we want to leverage all of the hard work that Eric and the people on Hex have been doing. Aside from that, we'll just pull the artifacts from another location, from the buckets where we store them. So keep a lookout for an upcoming release: we're working toward shipping this as a beta, and there will probably be a few packages available that we'd like people to try out, just to work the kinks out of the system. But this is definitely the direction the changes we have staged are moving in. Another thing we want to tackle is the development lifecycle. Here's where I want to drop in a quote. It's one of my favorites, something I usually say all the time: it's future Justin's problem. That's me, Justin, saying that. So the things I talk about from here on are future Justin's problems.
What I mean by that exactly is that these are things we're interested in tackling; I can't give you a definitive timeline. I'd hope we'd get there before the next time you see me, at the next conference or next year, but these are problems we've identified that we want to fix. What I mean by the development cycle is that when you're developing on your machine using Nerves, you're doing a lot of SD card swapping. We know this; it can be tough. Every time you make a change, you put the SD card in your machine, you run mix firmware, then mix firmware.burn, and once you've burned the firmware to the target SD card, you take it out and pop it into your LinkIt Smart or your Raspberry Pi. Then you realize you forgot to add a module to the applications list, the thing doesn't load, and you go back and do it all again. Well, being on the other side of the fence now and doing this professionally, I've done a lot of it. I've practically written the book on SD card swapping. Like a boss. At this point, I have some hypothetical ideas about an approach we can take. There are two sides to this problem: dev and production. I'll take the production side first. Let's say your hardware is in production, and it's remote. You want to push firmware to it, you want to make sure it doesn't fail in the field, and you want to make sure that when you push it, it can fall back properly. There's a method for that which looks a little bit like this. You run mix firmware, say, to create a firmware bundle, and then there are some libraries coming into play recently, which Garth Hitchens and Chris Dutton have been working really hard on, for remote firmware updates.
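For reference, the SD card loop amounts to these two mix tasks per change, plus the physical shuffle in between (task names as they existed at the time):

```shell
# The SD-card development loop, once per change:
mix firmware          # cross-compile the project and assemble a .fw bundle
mix firmware.burn     # write the bundle to the inserted SD card
# ...then eject, move the card to the Raspberry Pi / LinkIt, boot, and find out
```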
There are actually some tweets out there of people using it early on. You push the firmware to a service running on your device, that service interacts with the version of fwup running on your node, and it inevitably updates the firmware on that device. The harder part of the cycle is dev mode. In dev, it's a little different: we want to iterate on our firmware more easily. We want to leverage things we're used to from Erlang, like hot code reloading. Things should feel a little faster, like Phoenix, where you can just type some stuff and it reloads your modules. But there are things we have to consider here. We don't just have to take Erlang or Elixir code and reload it on the device. We also have things like artifacts that contain userland packages, binary applications that need to be reloaded. In addition, there's stuff you might include that was compiled from C and C++. Say you make a change in one of those: how do you reload something like that? It exists outside of Erlang. Or if you're interfacing with it through a port, what's the procedure? You restart the port, but which services get restarted? Ultimately, the easier case is Elixir code. With Elixir code, you can probably just reload it, but then you also have to reload any dependent modules. And this can become problematic, because pushing all of this to the target device could have consequences. When we're working with software, those consequences are "let it crash", but the consequences when working with hardware may be a little different.
All this is to say there are things we have to consider as we move toward solving the development cycle problem: we need to handle the entire gamut, which may require different strategies for different situations. We also had a problem with modifying packages locally. Here's a common situation. As an Elixir developer, you might go onto a support channel and say, I'm having this weird problem where it's not compiling, something's wrong, but I don't quite get it. And somebody chimes in: hey, you have a major typo here; change this typo and everything's good to go. I fixed it in master, just pull from master. Well, this poses a problem for us, because when you pull from Hex, you're downloading artifacts at a stable point. If you check out code that's marching forward in master, say from a system, there's no artifact associated with it, which requires you to produce one locally. That may seem fine, but the problem is that it takes a really long time to build one of these artifacts, one of these systems: upwards of 20 or 30 minutes, depending on your internet connection and the number of packages you have enabled. In addition, portability is low. What I mean is that since systems and packages are built on top of Buildroot, the machine you compile them on has to be a Linux machine. So the providers we had, or rather the lack of them, where you can either download artifacts from the remote HTTP service or build them locally on Linux, left a big gap when it came to building these yourself so you could benefit from changes upstream.
So what we're staging, in addition, is support for running this through Docker. This doesn't mean deploying to Docker. It means we're going to use Docker so that on your Mac, especially on your Mac, you can build inside a Linux environment and produce these artifacts in the same reproducible way you would on a Linux machine. Here's what the defaults look like: depending on what machine you're on, we default to Docker, versus on Linux, where we default to building locally. But we're also toying with a way to let you override the environment, so you can say: you know what, I'm on Linux, but I still want to build this package using Docker. So you pass the Docker provider and make sure it still runs that way. This opens the door for some really cool features as well. Those of you who are familiar with the systems themselves and with Buildroot know we can do some neat things. For example, on macOS we can let you get to a shell for a package. If you wanted to add additional packages to the system, you could use make menuconfig, an exposed Buildroot command, which brings up the menu you see at the bottom and lets you specify all the different options enabled for your system. This is an example of a command we'd be adding, something like mix nerves.shell, that drops you into a shell at that package so you can make modifications. Ultimately, the lifecycle changes a little: we have our package, we have our Docker container and our package dependencies, we push those into the container by linking them, and out the other end comes the artifact directory.
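The override we're toying with might read something like this. To be clear, the configuration key and module names here are speculative, purely to show the shape of the idea, not a shipped API:

```elixir
# config/config.exs (sketch -- key and module names are speculative)
use Mix.Config

# On macOS, the Docker provider would be the default; on Linux, the local
# build would be. This override says: even on Linux, build this particular
# package inside Docker so the result matches the Docker-built artifact.
config :nerves_system_rpi3, :nerves_env,
  provider: Nerves.Package.Providers.Docker
```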
It pushes the artifact back out to your host, into the artifact directory, and to speed up this whole process, we're enabling a behind-the-scenes Docker volume as a cache. This cache is shared between all of your different artifacts, so that if you change this system over here and then go change that system over there, the downloaded assets they share will already be there in the background. It will also leverage ccache, the C compiler cache, if you have that enabled in your Nerves package, in your Buildroot defconfig. So what does this bring us? Great platform support. Not only can you build on Linux, you can also build on macOS. We haven't quite tested Windows support yet, but Greg Mefford on the team has been doing a lot of work, along with Frank Hunleth, on getting fwup to work on Windows, and they've been getting it to the point where we can finally support a lot of different systems. The reason I'm telling you all of this is that we are trying hard to let people use new devices and peripherals with their Nerves nodes, like touchscreen displays. Imagine that you could create your Phoenix application and, instead of having to run it as a web interface, actually run it on the touchscreen. The complication we had to solve first was building on multiple platforms, before we could tackle the issues that come with enabling support for all these different displays and touchscreen interfaces. Essentially, one of our goals is this: when we produce your firmware, you can imagine that a great deal of it lives in userland. Imagine running Elixir on your host computer.
Well, you're running Elixir, but alongside it, underneath the scenes, you may be running additional programs, like Postgres. Postgres would be a userland application, while Elixir has all the Elixir-based things. Our goal here is to move more and more from userland toward my new favorite thing, Elixir-land, which feels kind of like Disneyland, only better. This is a big goal, but we think we can work toward it, and bite off more of it, by including more of the community. Bringing support to all these different platforms finally gives us a way to bring in more and more contributors, so we can move these lower-level facets up into Elixir, because that's where you really want to be when you're working with initialization or running your code. At this point I want to talk about hypotheticals, or, as Chris put it earlier, things that may happen in the next year. This is future Justin's problem again; I'm passing the buck. There's been a lot of talk about how people want to move away from C and C++, because they feel it inhibits their ability to program these systems faster. Now, you can't abandon them entirely, because there are strategies that may require you to dip down into them for speed or performance, but with newer languages these days, people want the ability to run Rust or Go or any other host-based compiler they want. So this is a concept we're calling toolchain extensions. Right now, our toolchains ship with the ability to compile C and C++ code for the target while running on your host. As I was saying, this is something we're toying with for down the line. We'd really like to bring in support for Rust and Go, but the complication, as I mentioned earlier, is that everything for Nerves is compiled for the target.
So if we were to include toolchain extensions as dependencies, the difficulty is that toolchain dependencies are meant to run on your host and compile for the target, but if you bring them in as regular dependencies, there's a potential that they'll get fed into the system, and you'll inevitably try to compile a toolchain that's meant to run on your host for the target instead. To get around this, one potential solution we've been playing with is to leverage Mix environments to segregate dependencies. The concept is that all of your host dependencies, meaning your toolchain extensions, would only get fetched in a host Mix environment, while all your target dependencies would get fetched when you're running in dev or test. This requires a two-step phase: you'd have to fetch all your dependencies for the host before you could fetch your dependencies for dev or test and move on iteratively. There are complications with this approach, but like I said, these things are being worked out, and this is totally future Justin's problem. I have to say that we couldn't do all of this without contributors like you. The idea is that people shouldn't be afraid of embedded development. It's actually strikingly easier than you may think, because in this case it's software. A lot of people here can get started contributing today by writing libraries to interface with different services or different hardware. For example, we have a page on our website for community libraries that seems to grow every day. It includes additions from all across the community, including production additions from places like the National Association of Realtors. I believe Chris is here; you can talk to him later about his use cases and how he, Garth, and others ship it in production.
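The environment-based segregation idea could be sketched in a mix.exs as a config fragment like this (the package names and the `:host` environment are hypothetical illustrations of the concept being described, not a shipped Nerves API; `only:` itself is the standard Mix dependency option):

```elixir
# Hypothetical sketch: host-only toolchain extensions are fetched only in a
# dedicated :host Mix environment, so they never get cross-compiled for the
# target alongside ordinary dependencies.
defp deps do
  [
    # Toolchain extension: runs ON the host to compile FOR the target,
    # so it must stay out of the target dependency set.
    {:toolchain_extension_rust, "~> 0.1", only: :host},

    # Ordinary target dependency, cross-compiled for the device as usual.
    {:nerves_runtime, "~> 0.4"}
  ]
end
```

Under this split you'd see the two-step phase described above: something like `MIX_ENV=host mix deps.get` to fetch the toolchains first, then a normal `mix deps.get` in dev or test for the target dependencies.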
Also, at Le Tote we ship devices and use them in production. If you have questions about how to get started, you can always join the Elixir-lang Slack community and the Nerves channel there. We're highly active; you can ask questions and get information and help, and if you need advice on hardware, there's a great group of people with lots of opinions. And ultimately, we have some great documentation that we've started putting up, which is ever-growing and needs more work every day. So read the docs, and if you have problems with them, contribute some pull requests; it's real easy. Also, for anybody in our community who loves Nerves and is great at web development, or even good at web development, we'd love it if you could help make our website better. We could really use some overhauls, and it's tough asking a group of embedded developers for web help, but that's where Phoenix benefits a lot. So stop in, check out our docs, get started, ask us questions, and if you have any problems, just find one of us during the conference. Thank you. I believe we have a little time for questions. Anybody have any Nerves questions? Yes we do.

So Nerves is predominantly for embedded Linux systems? Yeah, Nerves is geared towards embedded systems development. There's a great lack of robustness in that field, but Nerves is really good at cross-compiling. There has been a lot of talk in the community about people using it for other purposes, but we're focused on building better embedded systems.

Okay. The GUI stuff, have you made any plans to do embedded GUIs, or what are your thoughts around that? Yes, we started tackling that problem by tackling the problems that sit underneath it, and that proved to be a lot of work in itself, which is part of the issue with building locally and package management.
Those two features are going to give us a much greater ability to extend the base, because the GUI, the web interface, brings in a heavy payload, and we didn't want to bring that heavy payload into the main systems. We also didn't want to force people to compile it themselves, because I've compiled it myself and it takes a long time to build. Yeah, the one we're playing with is based on WebKit, and we have a lot of people interested in pushing that forward. Now, with our focus moving more towards making development easier and package management, I'm sure that's going to make it easier for the community to participate in assisting with the creation of that layer as well.

Justin, I'm curious: I know Nerves is still pretty young and hasn't quite reached a 1.0 release yet, so I'm wondering, which future Justin do you think gets to release 1.0? So, the team has had a conversation about this, and no promises, this is the empty promises section. We're working hard towards 1.0 stability, hopefully by the middle of next year, hopefully by the next ElixirConf. There is a great deal of work to do to get there, but we're trying to focus on one piece at a time and push towards next year.

Fantastic, that's all the time we have for questions, so please give Justin a huge round of applause. Thank you.