Welcome again to another OpenShift Commons briefing. This time we're going to do some Node.js stuff — I know you all know that I love Python, but I also love Node. So I'm really thrilled to have the folks from NearForm here to talk about their experience with OpenShift, and specifically about using Minishift as a development environment for OpenShift. I'm looking forward to hearing what their adventure was in getting it all to work. Conor O'Neill is with us, and Dara Hayes is with us. Conor is going to introduce NearForm, and Dara is going to do the demos. Take it away.

Thanks, Diane. Hi, everyone. Before we get into the meat of the talk, which Dara is going to cover, I'll introduce the two of us and talk a little bit about NearForm. Dara is one of our DevOps engineers and he's done a lot of different projects for customers around the world, in Europe and the US. I'm Chief Product Officer with NearForm, and not long with the company — previously I was with Red Hat Mobile, for any Red Hatters watching this video.

I'll tell you a little bit about the company. It was set up in 2011 and, unusually, it's based in a tiny little seaside town called Tramore in the southeast of Ireland. Weirdly, this little bit of Ireland is probably Node central for most of Europe, between all of the Node people in NearForm and the folks in Red Hat Mobile as well — just one of those interesting little quirks of geography. The company itself is a little over 100 people, but they're spread around the world; it's a very, very distributed organization. The bulk of what we do is build full-stack solutions for large organizations, so a huge amount of what we do is professional services. You'd know the companies we work with: Condé Nast, McKinsey, ADP.
I think pretty much every person in the US who gets a paycheck has ADP printed on it. And while all the projects we work on are unique and specific to customers, we do see a strong commonality in the tech stack we've been using in recent years. That's very much things like React on the front end, Node on the back end, Hapi as the web framework, deploying into things like AWS with tools like Terraform and Ansible, and then — getting closer to what we're going to talk about today — a big focus on containers, whether that's Docker, Kubernetes or, as we'll discuss here, OpenShift.

On the other side of what we do, we're heavily involved in various open source communities, particularly the Node community. We're major contributors to Node core — in fact James Snell, who this week just landed early HTTP/2 support in Node core, is a Technical Steering Committee director. And finally, we're very well known for our conferences, particularly NodeConf EU, which is sort of the European edition of NodeConf. That's being held this year in Kilkenny in Ireland in November, and it's one of the highlights of the Node conference season, shall we say.

Now, on to the briefing itself, before I hand over to Dara. It came about because of a relatively new initiative we've kicked off inside NearForm. We've put together a floating team who work very much like pathfinders, trying out new tools and technologies. People come in and out, usually just for a couple of weeks, and work in areas we're interested in learning more about, talking more about, letting people know what we do — just general communication. It's totally sprint-based: everything is done in blocks of a week.
You have a demo and a deliverable at the end of every week, and the work can vary from evaluating new DevOps tools to — for example, one of the things the team is working on now — trying out different approaches to React tooling. That knowledge is then distributed internally, and externally via blog posts and webinars like this one. What Dara is going to talk about is one of those projects, done over the space of several weeks by him and several others on the NearForm team. So I'll hand over to Dara now and he'll talk about the nuts and bolts of the session.

Right, thanks very much for that, Conor — a great intro. And thanks, everybody, for taking the time to listen to us today. My name is Dara Hayes. As Conor said, one of the things that working group did was look at Minishift and try to figure out how to turn it into a development environment that's actually pretty useful for Node.js.

Before I jump into the specifics, I wanted to talk a little bit about NearForm's experience with container technologies. We've been working with Docker for about three years, so we've been using it since Docker was quite young. Back in those days we did loads of crazy work building custom tools to deploy Docker — those were the dark ages. In more recent times we've been looking at Kubernetes quite a lot. We've got a number of people at NearForm now with real expert knowledge of Kubernetes, including a Google Developer Expert on the subject. Naturally, out of that, we started to become interested in OpenShift as a product, and this was one of the initiatives that came out of it.
In case anyone's not familiar with Minishift: it's a tool that helps you run all of OpenShift in a single virtual machine on your laptop. It's a really nice way to get familiar with OpenShift for free, it works on pretty much any operating system, and it can run on a bunch of different hypervisors. It was originally a fork of Minikube, but it really is its own project now with its own community. So that's what we started looking at.

Why, as a company, are we getting interested in Minishift and OpenShift? As I said, we have a lot of customers already using Kubernetes and we've been speaking with a lot of potential customers as well, and there's growing interest in OpenShift and how it can potentially solve their problems. And, as Conor said, we run these initiatives to skill up people in the company and increase awareness of the things we're looking at. Then there's a technical reason for looking at Minishift: we're really interested in the idea of a dev environment that's really similar to your production environment. Running a full-fledged installation of OpenShift on your local machine, where the build process, the deployments, and even the resource files we use to define the system would be really similar to the way they run in production — that idea was really interesting to us, so we wanted to investigate it. And to us, there are a couple of things that actually make a good development environment.
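As a quick aside for anyone who wants to follow along at home: getting Minishift itself running is typically just a couple of commands. This is a sketch only — it needs a hypervisor installed, the driver name depends on your platform, and exact flags vary by Minishift version, so check the Minishift docs:

```shell
# Start a single-VM OpenShift cluster (driver name depends on your hypervisor)
minishift start --vm-driver virtualbox

# Log in with the bundled oc CLI; the developer account works out of the box
oc login -u developer -p developer

# Open the OpenShift web console in your browser
minishift console
```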
We looked at this before we went off and developed the demo I'm going to show you today. The first thing is that if you're new to a project, it should be relatively easy to get the software up and running for the first time. It shouldn't be a hassle — it should be pretty pain-free to get your software running locally and then to be able to make changes to it. Time to first code. On top of that, once you start making changes to the code, those changes should be reflected instantly. This was a really big challenge with OpenShift because it has full-fledged build and deploy cycles, and we wanted to figure out how to make local changes show up instantly in a running instance of OpenShift. And lastly, this idea of production parity — having your dev environment similar to production — is really interesting to us.

So we looked at Minishift and focused particularly on the instant feedback loop, because we wanted to solve the problem of making changes to our code locally and having them reflected in OpenShift instantly. By default there's a whole build and deploy process that takes quite a bit of time. The end result of our work is this little demo project, where we have a small server running that gives you a Hello World message. We can go in, change the code, hit Save, and in our running instance of Minishift that change is reflected instantly. I hope everyone saw that — I'll let it play again just in case. And this little five-second GIF actually took quite a bit of work to get to, right? So we've got a little repo here.
It's the NearForm minishift-demo repo. If you want to go and take a look and follow along that way, you're totally welcome to, or you can just sit back, relax and enjoy while I talk about it. As I said, it took a bit of work — there were a few pieces of the puzzle to get to the solution I just showed you — and I'd like to walk you through those pieces as the story of how we got there.

The whole strategy revolves around using volumes. Anyone who has used Docker Compose as a development tool probably knows exactly where this is going. The idea was to get around the build process in OpenShift by mounting the local code directly into a running container, so that any changes we make locally are already reflected inside the running container. These are two small snippets taken from the file that defines the system. What we're doing is defining a volume at the folder where our app lives, and it's what's called a hostPath volume. A hostPath volume is a concept in OpenShift that lets you mount folders from the underlying host that the container runs on. The result is that we can avoid rebuilding the container when code changes, because those changes are already there inside the container.

Once we have that, all we need is some sort of instant reload inside the running container. We're primarily a Node shop at NearForm, so we used a tool that's really popular in the Node community: Nodemon. It's a simple utility that restarts your app whenever changes are made. We added Nodemon as a dependency to the app and added an extra start command that runs the app with Nodemon as opposed to standard node.
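The snippets in question look something like this — a sketch of the relevant part of a deployment config, where the volume name and paths are illustrative rather than the repo's exact values (note that on Minishift, hostPath refers to the VM's file system, into which your host folders are shared):

```yaml
# Fragment of a DeploymentConfig: mount a host directory straight into
# the container, so code changes appear without a rebuild.
spec:
  template:
    spec:
      containers:
        - name: hello-server
          volumeMounts:
            - name: app
              mountPath: /opt/app/bin     # where the app lives in the image
      volumes:
        - name: app
          hostPath:
            path: /path/to/hello-server   # local source, shared into the VM
```

The extra start script is then just a one-word swap in package.json, something like `"start:dev": "nodemon index.js"` instead of `"start": "node index.js"`.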
Then, in the spec for the container, we override the default Docker run command with the new command. The result is that the application can restart within the running container whenever changes are made. So, to recap: the code is mounted into a running container, and Nodemon detects changes as they're made. Those two elements by themselves make up the majority of the solution. But there were a couple of snags — a couple of challenges that made this a bit more difficult — and I'll talk about those now.

The first one was permissions. In OpenShift, the hostPath volumes feature won't work by default, because your containers don't have access to the underlying file system. That makes total sense in a normal, real-life OpenShift context, but not so much in a development environment. The OpenShift security model is built around things called security context constraints: policies that define permissions, which are then attached to service accounts or user accounts. The other side of it is the service accounts themselves. There's a service account associated with all the components in OpenShift — the builder, the registry, the deployer, and indeed your own containers all run under a service account — and by default a service account doesn't have a lot of permissions. Luckily there are some baked-in policies we can use. So if I just log in here, I can show you. We're going to log in as the admin user. The first thing I'll do is `oc get serviceaccounts`, and we can see some service accounts that have been created — this default service account is what your containers run with. And then you can do `oc get scc`.
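For reference, the commands being run here are roughly the following. They assume a running Minishift and its bundled admin user — this kind of poking around is fine on a throwaway local cluster, but not something you'd do casually in a real environment:

```shell
# Log in as the cluster admin (local Minishift only)
oc login -u system:admin

# List service accounts in the current project; 'default' is the one
# ordinary containers run under
oc get serviceaccounts

# List the built-in security context constraints
oc get scc
```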
That gives me a bunch of pre-made policy objects that we can attach to service accounts and users. What we added, simply, is the hostaccess one, and the effect is that your containers now have access to the file system. That's one extra thing you have to do, and it took us a bit of searching around to figure it out.

So with the volumes, the instant reload and the appropriate permissions, you're 99% of the way there. It was only when we showed off our work to a colleague that he came back and told us about one more issue we hadn't even thought of: native dependencies in Node. Our strategy of mounting your entire local code into the container doesn't really work in their presence. If you mount all your code, including your node_modules, and you're running OS X while that code gets mounted into a Linux container, then any native dependencies are going to break because they're compiled for the wrong platform.

We came up with a solution — I still think it's a little bit hacky, but it's the only thing we could come up with, so if anyone has suggestions for a better way I'd be very happy to hear them. What we did was install the node modules outside the application folder in the Docker image, and then link to those node modules. I can show you what the Dockerfile looks like. We've got two folders, bin and lib. bin is where the app is going to be, and lib is where the external node_modules folder is going to be: we copy the package.json in there and run npm install. That gives us the dependencies in another folder, and then we use the NODE_PATH environment variable and pass in that directory.
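A sketch of what that Dockerfile looks like — the folder layout follows the description above, but the exact paths and base image here are illustrative, not necessarily the demo repo's:

```dockerfile
FROM node:8-alpine

# Install dependencies OUTSIDE the application folder, so that mounting
# local code over the app directory does not hide them
WORKDIR /opt/app/lib
COPY package.json .
RUN npm install

# The application itself lives here; at dev time the local source is
# volume-mounted over this directory
WORKDIR /opt/app/bin
COPY . .

# Tell Node to also resolve modules from the external folder, and put
# its .bin directory (nodemon etc.) on the PATH
ENV NODE_PATH=/opt/app/lib/node_modules
ENV PATH=/opt/app/lib/node_modules/.bin:$PATH

CMD ["npm", "start"]
```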
What that does is, when Node is running, it reads that environment variable and, if anything is there, it knows to also search that folder for dependencies in addition to the default locations. Lastly, we've added the folder to the PATH, which means any binaries and CLI tools installed as part of the application dependencies are available on your path — Nodemon, for example, is usable because of this. So it's kind of hacky, I'll admit, and I'd love it if there were a better way — maybe there is. But the result is that the native dependencies are compiled for the right platform; they're just in another folder, so when we mount in the other code they're still there. The one caveat, obviously, is that if your dependencies change, you're going to have to rebuild that container. That's the one trade-off you have to live with. It's not too bad, though, because as a project matures your dependencies won't change a whole lot.

So those are all the elements of how we got it to work, and now I might as well just show you the demo, because I think you'll get a much better understanding. Right — we start off with our hello server, the piece of software we used to demonstrate this whole thing. It was the one you saw in the GIF. It's basically a tiny little API, built with a framework called Restify, which is a really minimalistic web framework. We've got a couple of endpoints: the root endpoint sends back the Hello World message, and we've got a health check — OpenShift uses health checks to make sure your service is available. And then one thing we've done is an additional health check that uses something called LevelDB, which is a simple file-based database.
The reason we're using it is that the Level module is a native dependency. We have Level in there just to prove that the native dependencies are actually working the way we want them to. That endpoint causes Level to save some data in a database, so we can tell the dependencies are working. And that's the Dockerfile I showed you in the slides a second ago.

So that's the server. Then we have this one manifest file, which specifies all the objects required to actually get the software running in our OpenShift cluster. For anyone not familiar with this, it's really similar to the way you define objects in Kubernetes, with just a couple of extra features. For example, this is what's known as a template file, which allows you to pass in parameters and renders the file with them — and you can give default values and so on, which is kind of cool. We've got a couple of parameters here: naming, memory limits, health checks, some environment variables, that kind of thing.

And then we have the actual objects. I can't go into massive detail on all the Kubernetes and OpenShift concepts — that would be a talk in itself — but I'll brush over it really quickly. We have a build config, which defines how we build the application: a source to build the application from, a strategy of some sort — we're using Docker — and somewhere to put the resulting build. In our case we're going to build a Docker image and publish it to an image stream tag, which is essentially the OpenShift abstraction over a Docker registry. And then we have a service.
A service groups all the running instances of an application together, exposes them under a single name, and load-balances requests across them. Now, a service only exposes your application internally within the cluster, so we also use what's called a route, which creates a publicly accessible endpoint from which we can access the application. And lastly we have this big object called a deployment config. The deployment config holds the full specification for the container: how it's run, the environment variables, health checks, volumes, and a bunch of other things. That's really the biggest one to understand. As I said, I'm not going to go into massive detail, but if you're interested in doing this I definitely encourage you to take a look at it in our repo and to read through the OpenShift documentation, because it will really help you understand what's going on. The OpenShift documentation is actually really good.

So that's the manifest file that describes the whole system. Then we have a single script that helps us get everything up and running. It's just a little bash script. It does the permission setup I mentioned earlier, adding the additional permissions to our default service account for this particular project. It creates a project — a namespace — under which all the resources are going to be created. Then it uses the `oc process` command, which takes the template file and renders it with the parameters we pass in; that logs the rendered version of the file to the console, which can be piped into a create command that goes and creates the resources. And lastly we've got a start-build command that tells OpenShift to kick off the first build, passing in the path to our hello server code.
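Pieced together, that script is something like the following sketch — the project name, file name and parameters here are illustrative rather than the repo's exact contents:

```shell
#!/bin/bash
set -e

# Create a project (namespace) to hold everything
oc new-project hello-server

# Grant containers access to the host file system so hostPath volumes work
# (fine for a throwaway local Minishift, never for a real cluster)
oc login -u system:admin
oc adm policy add-scc-to-user hostaccess -z default -n hello-server

# Render the template with our parameters and create the resulting objects
oc process -f minishift-demo.yml -p NAME=hello-server | oc create -f -

# Kick off the first (and hopefully only) build from the local source
oc start-build hello-server --from-dir=./hello-server --follow
```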
So I'm going to go to my terminal. Just in case I already have the project running, I'm going to delete it — I'd say I don't, but I'd like to do this from scratch. All right, it looks like it's not there. So I'm going to log in and check for the hello server, just to make sure. Right — we've got a nice clean slate to start from.

Let's get going. We've got the scripts folder here containing that script, and we're going to run it. That's going to apply the correct permissions, create a project for us, create those resources within OpenShift, and then kick off the first build. When that build is complete, it will trigger the deployment of our software. The first build is obviously going to take some time, because it goes through an entire Docker build process, installs all the npm dependencies, that kind of thing. But after that, we shouldn't have to rebuild the whole lot unless we want to change dependencies. So I'm going to leave that running. Does anyone have any questions so far? Are we all good?

Chris, there's one hand in the chat, and I've got an echo going — I'll let you read the chat.

Chris, thanks very much for your suggestion, I'll definitely take a look. As a response to Chris's link as well: that's a great suggestion. Because our strategy was about avoiding builds, we didn't look at the CI tools and the Jenkins side of things as much as we could have. That's one area we should have looked at, but our approach involved avoiding building as much as possible, so we didn't get to the Jenkins side as much as we should have. If it can help, that would be great, definitely.
So that's still running; in the meantime I can open up the Minishift console. Sorry, one sec — `minishift console` is a handy little command to open it in the browser. This is the project we've just created. My machine is starting to get angry now, doing a video call and running Minishift at the same time. We can see here that we've got a route — this is the endpoint our application will be available at — and we can see the build is running. We can go in and take a look: that shows us the list of builds, and we can go into this build and see the log of exactly what we're seeing in the console at the same time. You get the point; there's no point waiting for that to load.

What else can I show you real quick? Sorry, my machine is getting a little slow now. Right — the build is complete and now our first deployment is running. We can go in and look at the deployment config to see what that looks like. We can see the command — the start:dev command — and we can see the image and the build it's related to. It's actually running now as well.

One thing I should mention quickly: we've got the app volume — that's the volume where our local code is mounted — and this LevelDB volume, which just creates an empty volume that LevelDB can write information to. Then there's this empty node_modules volume. Because we're mounting our entire application code, the node_modules folder within the application folder itself is still actually being mounted as well, and Node will always prefer that over any other locations you've specified through the NODE_PATH environment variable. So, as a little hack, we put an empty volume in the default folder so that Node won't see anything there. And then lastly, we have this one more little volume.
Apparently Nodemon needs a config file for it to work, so we had to create a file there for it to be able to write to. That's just to explain, if anyone comes along, looks at this and goes "what on earth are all of these?" — that's what they are.

Now we can see in the logs that our server is running. We go back here, and we can access our server: Hello World, right? Then we go into the hello server here on my local machine, change the message to "Hello OpenShift", hit Save, hit Refresh — and there it is. We have successfully got instant feedback working, and it's pretty nice. If we go and look at the application again, go down into the running pod and look at the logs, you can see Nodemon did its job: it restarted the application. So that's pretty cool, and that's the main thing we worked on — it's what you'll see in the master branch of the repo.

But just for this talk, I've actually gone and done some additional work. It hasn't been merged to master on the minishift-demo repo yet, but I'd still like to show it to you because I think it's interesting — it expands on what we've done here. Having a hello world server is pretty good, but by itself it's not that meaningful, so we wanted to do something a little more involved: get a database of some sort running and have an application that can connect to it. So I'm going to show that to you. If I go back to where I was and check out my other branch, I have Postgres. Back in my editor, I now have two new files as well as the original minishift-demo file. The first is a Postgres file, which defines the objects needed to get Postgres up and running. What we did was run Postgres from the default Postgres image from the public Docker registry.
So we didn't use an OpenShift- or Red Hat-specific version of the Postgres Docker image; we wanted to use the vanilla one, and there was a bit of work there. What we have is a persistent volume claim, which defines storage that will outlive any running container; some configuration for where the image comes from; and a service and a deployment config very similar to the other manifest, except that it runs Postgres.

The second new file is the Users API. The application is a users API that connects into Postgres — you can list users, create users, et cetera. Its manifest file is really, really similar to the minishift-demo one. The only difference is that, to make things quicker, I was a little bit lazy: the code isn't included in this repo, it's elsewhere in my own Git repo. So instead of building from a binary source, we're building from a Git source. That's actually a really nice feature of OpenShift — you can run an application straight from a Git repository; you just tell it Docker, and I've got a specifically named Dockerfile within that repo as well.

So I'm going to get those up and running. The first thing I'll do is create the Postgres stuff. Let's see — create Postgres. That's going to go off and create Postgres; I'll show that real quick in the console. We can see the Postgres objects here, and now it's going to start a deployment of Postgres, which should be off any minute now. In the meantime, we're going to go and create the application. I have one other tiny little script here that creates the application in much the same way as the other one. This script is a little bit different from the other one.
I just threw it together real quick, but we've got an up and a down, to create it and to delete it. So if we just run the users API up script, that's going to go and create those objects. Now we can see the users API starting. As part of the build process it's going to clone down the code from the Git repo and then start running the Docker build. So those are deploying; it should just take a couple of minutes. Again, I'll just check — does anyone have questions or anything? Are we all good? I think we're all good.

What I can do is group these together just to make them a little easier to see. Yeah, my machine is really, really coming on fire now between the call and everything, acting a lot slower than usual. Let me see. I've turned off your video — I think you have your speaker on. Let me just check in here and see the log. Can you see my screen still? Yeah, yeah. So you can see it's being a bit slow, but it has cloned the repo now and it's just going through the standard build. My machine is getting quite slow — apologies for this, I might just have to skip past it. We'll give it another second. It's totally frozen on me now.

Yeah, I agree with you, Diane — it is a cool environment for learning Node, and it's definitely a really cool environment for learning about OpenShift as well: about how deployments work and the Kubernetes and OpenShift concepts involved. It's a really nice way to bring those concepts to the local environment.
And that's kind of cool, because if you've got developers and you've got ops people, it gets the developers thinking along the same lines as the ops people, and — at least I would think — it hopefully reduces friction between those teams, because we're breaking down the barriers between them. In my own experience, developers sometimes really don't care how the ops people get the software running in real life; they just want to be able to run it locally. So it's cool that we have the possibility of a local environment that's really similar to the production environment.

Look, it's being very slow, so I'm actually going to stop it now. You get the idea: at the end you'd be able to access an application that's reading data from Postgres, and you could kill Postgres and bring it back up and the data would still be there. It's pretty nice, but I'm going to skip past it because it's just being too slow. So I'm going to completely delete the whole thing — we'll just delete that, which should free up my machine a little. The fans are already getting quieter.

All right, so that's the demo. I'm sorry that the second piece didn't really work. I'd love it if you'd go check it out, though — there's a lot of stuff there to look at. When we went and did this for the first time, there wasn't a whole lot of material we could find online to help us. It was essentially a case of going through all the OpenShift documentation — which is really good, but also very extensive — and trying to figure out how to adapt it to a development environment. So if anyone's interested in trying this out, hopefully some of this gives you ideas on how you could do it yourselves. So that's the demo.
And then this is kind of funny, actually, this idea of what about scaling? One of our colleagues asked us after we demoed this for the first time internally: how many services can you run? And it's quite funny now because it kind of crashed out on me just on this call. But there is a way to scale it with some tweaks, and on a single machine we did manage to get it up to about 40 or 50. The results vary depending on your machine: I was able to get it up to 40, and I have colleagues who got it up to 50 and even more. So it has the potential to run quite a lot of services, and I think if you're trying to run 50 services in any kind of development environment, you're definitely pushing the limit anyway. But if you're interested, you can run the command oc describe node, and that will give you back information about how many resources are available to the cluster that you're running. It will also give you a figure for how many pods it will actually allow you to run, and by increasing the memory and the CPUs, that figure will go up. On top of that, you can put restrictions in place on the individual containers, like limiting the amount of memory they'll use, and that should help you scale up a little bit more. So I guess one of the conclusions coming out of it is that it does have the potential to be a great development environment. When we first set out to do this, I was a little bit skeptical, because the idea of running a virtual machine was a bit crazy to me. But after a bit of experience with it, I think it is definitely good. It has a really nice UI and user experience, and the CLI tool, the OpenShift CLI tool, is just fantastic. I spoke about this a little bit.
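As a sketch of the two knobs described above: the MiniShift VM itself can be given more resources at start time (for example, minishift start --memory 8GB --cpus 4), and each container can be capped with resource requests and limits in its deployment config so that more pods fit on the one node. The service name and numbers below are illustrative, not from the demo:

```yaml
# Illustrative container resource settings for an OpenShift deployment.
# Capping each service keeps any one container from hogging the VM,
# which is what makes packing 40-50 services onto one node feasible.
spec:
  template:
    spec:
      containers:
        - name: my-node-service
          resources:
            requests:
              memory: 64Mi   # the scheduler reserves this much per pod
              cpu: 50m       # 50 millicores, i.e. 5% of one CPU
            limits:
              memory: 128Mi  # the container is restarted if it exceeds this
              cpu: 100m
```

With requests this small, the pod count reported by oc describe node becomes the practical ceiling rather than raw memory.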
I really liked the idea that if we have people using MiniShift as a development environment, you have people thinking in terms of OpenShift, in terms of deployment, and in terms of all the concepts that are there. And I think that would definitely enable us to break down barriers between Dev and Ops people, because you have them thinking on the same page, almost, right? So I really liked that idea. And then lastly, this idea of a universal dev environment. That sounds crazy at face value, but imagine if you work on a bunch of different projects. The idea is that you could have just this one MiniShift installation where you could potentially have multiple projects going on at the same time, and it just becomes this one unified place where you can manage all of your different software projects. I really liked that idea. I don't know how close we are to actually having it, right? But I really liked the idea. So it definitely has a lot of advantages and it's a really nice tool. But it's not perfect, obviously. There are some issues with it, as you've seen today. It uses quite a lot of resources on your machine, and it takes up a lot of battery as well; it drains the battery very quickly. And then there are some higher-level questions and work that still needs to be done. So for example, I showed you today a single code base, and a very small code base too, but how would we organize this if we had multiple code bases for one project? Say we had a project with 10 microservices: how do we structure our repos to facilitate a MiniShift development environment? That's one question we need to ask and figure out. The next one would be, if we were to have this sort of universal dev environment, how would we actually coordinate that across multiple projects? And then the other question is, can we turn it into a general solution?
So what I mean by that is, I showed you some manifest files today and some scripts to get it all up and running, but they're kind of specific to the one project, and they're not perfect either; they're just enough to get the job done. Is there a way we can make those better and use them across projects as well? If we had a single tool that could help us get a project running in MiniShift, that kind of thing would be really nice. And then lastly, what is the pathway from a local OpenShift to a production OpenShift? That's a really big question for us, particularly at NearForm. We're still very new to OpenShift, but we're actually doing a lot of work now in getting OpenShift up and running and trying to figure out those pathways, and hopefully we'll have content on that soon. It is work that we're actively pursuing. So those are some of the questions that are still left unanswered for now; we don't have all the answers yet. And the other thing is, I'm very aware that there are people listening here today who probably have a lot of knowledge around OpenShift, and we'd be really, really happy to hear your suggestions and your contributions if you have any, because we've shown you something today, but it can definitely be improved. If anyone has suggestions, that would be really good. But once again, that's the link; do check it out if you're interested. Even just leave some issues, or get in touch with us and give us your feedback; we would love to get your feedback on it. My name is Dara Hayes, and those are my contact details if you want to get in touch with me via email or Twitter. Connor O'Neill's details are there as well. Please feel free to get in touch with us. And thanks very much for listening to me today. I'm really honoured to come here and speak to you, right? So thank you very much. Well, thank you very much, Dara. A couple of questions, or actually just one.
Mostly with your laptop: how much memory does it have? Yeah, my laptop, right. So can you still see my screen? No, not at all. Yeah, can you see my screen there now? Yes. So I'm running an early 2015 Mac with eight gigabytes, so it's definitely nowhere near top of the line as MacBooks go. I have other colleagues running much more powerful machines, and they were able to run this much more comfortably. I'm still aware and conscious of the fact that it does take up a lot of resources, and that's just the downfall of running a virtual machine. But yeah, those are the specs, just so you know. Any other questions? There are no other questions. And I have an echo. So some of the questions that you were asking in your conclusions: there are a lot of answers out there. Some of them may be solved with OpenShift.io on premise. There are a lot of different approaches to solving some of them, and we'll probably grab you at another time so we can walk through some of them. I think this has been really interesting for me, because it's a great way to get your Node setup so that you can be almost production-ready very quickly. And this is something I'm going to be trying out myself shortly, so I'm thrilled that you've done this. I don't see any other questions, so thank you for really taking the time to share all of this. We will post this video on the YouTube channel shortly, probably in the next day or so, along with a blog with links back to the NearForm MiniShift demo and some of these resources. And we look forward to hearing more as you venture into using this for production-ready Node application stacks. Absolutely. Thanks so much for taking the time to listen to me as well. It's only my second time speaking, so I'm really happy to share this with you today. You're welcome. Thank you very much. Thanks.