Oh, Roddy snuck in while I wasn't even looking. Hey everybody, thanks for joining another episode of the scalable multiplayer game design live stream. Today we are joined by the usual cadre of Jared Sprague and Roddy Kiley. Hey folks. And we've also got Jafar today. He's going to be off camera because he's on child duty in the background. But we're switching gears a little bit away from some of the low-level C and C++ and game design stuff and pivoting to try and do some things directly on OpenShift. So what we're going to try to do today is actually get the application built, both the game server and the web client, and then maybe deploy the broker and do a couple other things. So thanks again for joining, and I guess we'll get rolling here. What I should probably do is move some things around on my desktop so that we're keeping the good things good and the bad things away. Okay, one more thing. I don't need this. Okay, great. So if I try to share my screen, let's see. I want desktop to share. Can y'all see my OpenShifty console? Yes, Jared gives me a thumbs up. Okay. So Roddy and Jafar had done a little bit of pre-work behind the scenes to figure out how to get the game server built and deployed using OpenShift Pipelines. And in this environment, I actually don't even have Pipelines installed. So Jafar, if I remember correctly, I need to go to OperatorHub and just install the Pipelines operator. Yep, that's correct. And OperatorHub, for those who may not have seen it before, is our convenient user interface into a catalog and marketplace of certified and community operators. And if you don't remember, an operator is just a piece of software, a Kubernetes control loop, that makes sure that something is running in a Kubernetes environment as you define it.
In the case of OpenShift Pipelines, this is an operator that provides the OpenShift Pipelines feature, which is based on the upstream Tekton community project. Installing this operator is going to add the Tekton features into this OpenShift environment. So I will go ahead and click the install button here. I guess I want the stable channel. We want Pipelines to be available everywhere, in all namespaces. The approval strategy has to do with when and how the operator gets updated if updates are available. And here are all the APIs it provides; unsurprisingly, the Pipelines operator adds a Pipeline object to our API. So we click install, and what's happening under the covers is Operator Lifecycle Manager is potentially creating a namespace, deploying the operator into that namespace, and adding some artifacts into it. And so let's see: `oc get projects | grep pipe`. I'm using a pipe to pipe into grep to look for "pipe". Interesting. Does it not create a pipelines project? Does it just put it in openshift-operators, Jafar? `oc get pod -A | grep tekton`. Where are your pipelines? Weird. `oc project openshift-pipelines`. I definitely need to turn off... Sorry, Eric, what was the question? Oh, I was asking about the project where Pipelines lands, but I guess I was just a little too early. `oc get pod`. In the openshift-pipelines project we see there's an operator, and then there's the pipelines controller and the pipelines webhook, which I guess is an admission controller or something like that. And now, if we hard-refresh the OpenShift user interface, we actually pick up this new Pipelines tab, which lets us look at the Pipeline objects in our various projects and other stuff. So this is where... Jafar and Roddy, did you put any of the pipeline objects in a repo anywhere? Like the pipeline definition or anything like that? Jafar has them somewhere. I think Jafar still has them locally. He may need to push them up.
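Clicking install in OperatorHub ultimately just creates an OLM Subscription behind the scenes. A minimal sketch of what the console generates, where the package name and channel are assumptions based on typical Red Hat catalogs and should be verified against your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators        # "all namespaces" installs land here
spec:
  channel: stable                        # the channel picked in the UI
  name: openshift-pipelines-operator-rh  # package name; may differ by catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic         # the approval strategy discussed above
```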
So Eric, actually it's going to be even easier than that, if it works. Famous last words: if it works. Yeah, so you'll have to make a small change to your Dockerfile, because if you look at the way the server is built, you are mounting the source code as a volume when you do the build. Yes. The problem is that if you do a build within OpenShift, like a Dockerfile build, it's not going to find the source code. So all you have to do is add a line to the Dockerfile that copies the current folder to /tmp/game-server. So, just clone the repo into the right place? Yeah, exactly. If you look at your code, you are referencing /tmp/game-server as the folder where the code is, right? No, no, wait. So if you just change the Dockerfile... So we've got to change this COPY command to be a git clone? No, no. Just add the... It's weird, because it's not the Dockerfile that I saw. This is the Dockerfile. Yeah, you noticed the commentary there, though. Of course, we have an open community project, so Derek pushed up a change an hour ago. Oh, okay. So we'll need to modify on the fly here. But either way, it's still basically doing the same thing. The work there is going to use /tmp as our server source. And so I just need to get rid of this COPY? No, it's fine if you leave it that way. But how is this going to work if we're not mounting anything into source? Because when you generate the pipeline, you're going to have three tasks. You're going to have the git fetch of the source code, okay? Okay. It's going to clone the code into the workspace, into the workspace where it clones your source. Then it's going to do the Docker build from that folder.
So if you copy the dot folder to /tmp/game-server, it's going to copy the whole git repo in there. But we don't want dot, we want source. Actually, this should work then. Yeah, if you want just source, you can have just that, but I think you need other files, like... let me check the repo. What's in the root folder? Stuff that we don't need, because we're already copying CMakeLists and the config file. And, you know what? I think this actually is just going to work, so let's give it a try. Well, actually the CMakeLists is needed at some point. It gets copied in. That's what I'm saying, man, I think I'm a couple of steps ahead of you. So I think it's actually just going to work the way it is now. Yeah, because, well, that's not the Dockerfile that I saw yesterday. I know, but this is the Dockerfile that we have now, so I think it's going to work. Yeah, so this one should work. Exactly. All right. So, create pipeline? No, no. Just do a new app. Go to the Dev console. Let's go. And create a new project. Okay. Now do "from Dockerfile". Really? Browser's not being fun. srt-game-server. Application name. Yeah. So now the cool thing is that you can add a pipeline, but you are still on tech preview; you didn't upgrade to the latest official version. The pipelines are still in tech preview in your cluster, but it's okay. Just so our watchers know, Pipelines are now GA; it's basically just that the operator needs to be updated to another version and then it's going to be GA. But I installed the stable version. Yeah, but I think you have to be on OpenShift 4.7-something or 4.6-something to get past the tech preview. Oh, the cluster needs to be on a specific version. Yeah. Because I'm on 4.7.0. Yeah. I could upgrade to 4.7.13, which I will definitely not do, at least not during the stream. Anyway. Okay.
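The Dockerfile change being discussed, copying the source into the image instead of volume-mounting it at build time, would look roughly like this; the base image and paths are assumptions taken from the conversation, not the actual repo:

```dockerfile
# Sketch only: base image and paths assumed from the discussion.
FROM registry.fedoraproject.org/fedora:33

# ...dnf package installs elided...

# Copy the cloned repo (the pipeline's git-clone workspace) into the
# build location, instead of relying on a volume mount at build time.
COPY . /tmp/game-server
WORKDIR /tmp/game-server

# CMakeLists.txt and the config file ride along with the COPY,
# so an in-cluster Dockerfile build can find everything it needs.
```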
So: the pipeline is going to fetch, it's going to build, it's going to deploy. We don't need a route to this application because it's not directly public. Oh, actually we will need a route. But no, because it's UDP, it's going to get real ugly real fast. Because the game client talks... no, the game client talks to the broker. I saw that. Anyway, we don't need a route to the server. Great. Things and stuff. All right. Pipeline. What you can do now is hit actions, edit pipeline. And now you can click on each task to see what it has created. So first, you see that it has supplied some parameters. The tasks are generic; they rely on parameters that have been automatically filled in by the console UI. Basically, it's going to use a git-clone task. It's going to git clone the repo using whatever parameters you have defined there. So if you had a context directory, it's going to be there; if you had a specific branch, it's going to be there. Cool. So here's where it did the... so I'm looking at the actual run of the pipeline, which is happening right now. This is where it does the clone. If I remember correctly, when we use Pipelines, it gives us a PVC. Yes, that's the workspace that you see there. If I look in storage, persistent volume claims, we see this claim that got created in the srt-game project. It's a gig in size by default. And for each of the steps of the pipeline that gets executed, this PVC is going to get attached into the container where that step runs. If I go to the dev console here and look at the topology... oh, that's right, we don't see those. Yeah, so just hover over the build image that you see there in the topology, over the two arrows. So I think this was a cool addition that they added: you can see the status. Yeah.
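The pipeline the console generates here follows a common Tekton shape: fetch, build, deploy, all sharing one workspace backed by the PVC. A hand-written sketch of the same idea; the task names `git-clone`, `buildah`, and `openshift-client` are the standard ClusterTasks shipped with OpenShift Pipelines, while the pipeline name, params, and deploy script are placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: srt-game-server
spec:
  params:
    - name: GIT_REPO
    - name: IMAGE
  workspaces:
    - name: workspace          # backed by the auto-created PVC
  tasks:
    - name: fetch-repository   # task 1: clone source into the workspace
      taskRef: {name: git-clone, kind: ClusterTask}
      workspaces: [{name: output, workspace: workspace}]
      params: [{name: url, value: $(params.GIT_REPO)}]
    - name: build              # task 2: Dockerfile build from that folder
      taskRef: {name: buildah, kind: ClusterTask}
      runAfter: [fetch-repository]
      workspaces: [{name: source, workspace: workspace}]
      params: [{name: IMAGE, value: $(params.IMAGE)}]
    - name: deploy             # task 3: roll out the freshly built image
      taskRef: {name: openshift-client, kind: ClusterTask}
      runAfter: [build]
      params:
        - name: SCRIPT
          value: "oc rollout status deploy/srt-game-server"
```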
I wanted to see the actual pod, the list of pods where the things are happening. So what we see here is the game server is trying to be deployed, and it's failing because we don't actually have an image yet. The reason it's backing off from trying to pull the image is that there isn't an image, because we're in the process of building the image. This is the build pod where the pipeline execution is happening, and you see there are three containers. And I guess each one of those containers is one of the phases. Yeah, the steps. So if we look at it, we see there are some init containers that set things up, and then there are the actual steps of the pipeline: build, push, digest. Here are all the crazy volumes that get attached. Wow, there are a lot of volumes. Holy crap. So anyway, back to our pipeline. Here's the build, which hasn't exploded yet, so that's a good sign. It's using Fedora 33, it's installing a bunch of packages, blah, blah, blah. That takes forever. Looks like the package install was successful, which is good. Oh, then it does more package installs. Geez. Man, why doesn't everybody use C and C++? This is, like, so easy. Huh. Oh, looks like we got Derek. Yeah, yeah. I'm munching down on breakfast here; otherwise, I'd have my camera on and be talking about breakfast. It's 2 p.m. Eastern. Where are you that you're eating breakfast? Seattle, man. Oh, geez. Late breakfast, but it was a busy morning. It's brunch-ish. So we're just watching software install. Thanks for joining, Derek. Yeah, absolutely. I got it running locally, so I'm looking forward to syncing up with y'all on it. Yeah, we... That's true, yeah. Jafar's in France. This is a real global project right now. Perl. Why do we need Perl? Geez. Everything has Perl in it somewhere, sadly.
Nothing like spending six hours debugging Perl only to realize that you forgot the at sign in front of a variable. So, did we lose Roddy and Derek? And then we lost Perl. Oh, turn your camera off. I'm home with my mouth full. All right. So this is sort of a good sign: CMake is doing its thing. Oh, the source appears to be there. Yeah. Oh, it didn't explode. Look at that. Oh, no, it might still explode. So it is building the C code now, which is cool. Yeah. If it gets to this step, then that means everything is fine. So this means the code was successfully injected into the container. Correct. So who made those changes overnight? I think the commit log pointed to Derek. My browser exploded. What did I do? You changed the Dockerfile, but it actually was not bad. It fixed the problem that we were going to have, which is kind of cool. Awesome. This is probably going to crash my browser again; there are so many lines of logs. Actually, I'm going to do something different here. Yeah. Pass to minimize the dependencies may not go straight. What's wrong? Oh, that's annoying. Is there a way to view the pipeline logs from the command line? Yeah, you have to say which container you want. Yeah, it's boring. That's not what I wanted, but that's okay. That's boring and efficient. Well, it's not... uh-oh, fatal error: Killed signal terminated program cc1plus. Oops. Error. I wonder what that means. Do we also get to blame this one on me? No. How much memory does it need? I do not know. All right. Let's see. So that got deployed on node 133-120. Let's see, describe node. I mean, it's only using about eight gigs and it has 16. Yeah, it should have eight gigs of memory. There's no way. Here we go. It did get OOMKilled. Wow. What are the configuration settings for the pod? I have to say that I had 64-gig nodes, so I didn't get that error. Yeah, and I have 32. All right. So this node has... no, that's the wrong thing I was looking at.
No, these nodes have 64 gigs. So by default, are the pod settings where the build is being done set for best-effort? Or are resources guaranteed? Does anyone know? Hang on one sec. 64 gig allocatable capacity. 64 gig. How many pods are actually there? Go front end, log generator... relax. Oh, I've got Elasticsearch running here. All right. Let me add another node here. Okay. So I'm going to add some more nodes to my cluster so that hopefully I have space to run this workload, because I'm sure Elasticsearch is being a hog and eating all my RAM. I guess I could kill logging, but that's less fun. Jared, have you made any progress? Or do we want to just do it live? I am, actually. Yeah, I'm having to refactor some stuff, but I've made progress. I mean, it's not ready yet. We could do it live, sure. I don't see why not. I mean, do you have other stuff on the stream agenda that you want to keep? I guess I could deploy the broker, because we're going to need that. So let's go to Operators, OperatorHub. Roddy, what do I want here? Do I want AMQ Broker? Do I want AMQ Broker, or do I want AMQ Broker? Do I want AMQ Broker? Yeah. So one of these is based on RHEL 7, which may be the one on the left, and the other is based on RHEL 8. All right. Let's look at them both in new tabs and see. And if you drill down into the versioning in the ClusterServiceVersion files, you see one is denoted with RHEL 8, and the other doesn't have a RHEL specifier. In actual fact, we have a release in the works at the moment that makes those OperatorHub panels more descriptive, for exactly this reason. Got it. 7.8.1-opr-3. They both say the same thing. So scroll down on the left. I think it's in the further details. Container image. Aha.
AMQ 7 on RHEL 8, AMQ 7 on RHEL 7. Which one do I want? So what are your nodes running? Oh, that's a good question. They're running OpenShift 4.7. Okay, are they running on RHEL 7 or RHEL 8? They're running OpenShift 4.7, because it's not RHEL, it's CoreOS. Right, but CoreOS, at its core, is still RHEL at the moment. Yeah, but I think we transitioned to RHEL 8 when we... I think we did, in the 4.6 or 4.5 timeframe. So let's go with the RHEL 8 one. So that one we want. The nice thing about the RHEL 8 one is that it is actually truly multi-arch; it also supports folks on s390x and PowerPC architectures. Cool. I'm assuming I want to put it in the namespace with our game. I guess I could put it anywhere, but for now it's easier to put it in the same namespace. Oh, is it not a cluster-wide operator today? It will be relatively soon. Got it. So for those who are still paying attention, what that means is that if it were cluster-wide, there would be one instance of the operator pod running in its own namespace, say amq-broker-operator or something like that, and it would pay attention to all broker-related CRs in the cluster, anywhere. If a user creates an instance in their own namespace, the single operator would handle it. Today, it's a single-namespace operator, so we end up deploying the operator into our namespace, and it only pays attention to CRs in our namespace. Is that fair to say? Absolutely. Okay. So we've got our broker; from the operator, we get some provided APIs: a broker, a broker address, and a broker scale-down, which I actually don't know what it is. But I'm assuming I need to just create an instance of the broker. Yes. Yeah. So basically there are broker deployments, which is the AMQ Broker CRD.
There are broker addresses, so that people can define the queues, topics, and AMQP addresses that they want deployed across their broker cluster. And there is a scale-down controller. The architecture of the operator is such that there are actually three independent controllers inside it, one for each CRD. Makes sense. The scale-down controller in particular watches for pod discrepancies, to see whether or not the size you have specified matches the number of pods. When there's a discrepancy, and if configured to do so, it'll go get your messages from the pod that just scaled down and move them somewhere that's still available, so that your clients still have access to those messages. If I remember correctly, we need to add an acceptor? Yes. Do we care what protocols? Yes. I think the port is 5672 or something. Yeah, 5672 is your default AMQP port. So for the enabled protocols, you want to put AMQP in there? There's nothing to... Down below. There's another one; I'm not sure why there are two in this case. There's another one down below. Down below where? There. Oh, I saw it when you were scrolling through. There should be... Enabled protocols? Yeah, right there. So... that may be an artifact of the way the OperatorHub UI... sorry, the operator UI handles stuff, so you may not have a way to check boxes or something like that. Right. And these are all done with these x-descriptor flags in the YAML file. In the CRD itself. In the CRD itself. Yeah. That's right. Do we need a connector? No. Do we need address settings? No. Console? We do want to expose the console. I don't care about SSL, because we're super careless in dev. If you don't care, you can leave it off. Okay. So if I do something horrible here and switch to the YAML view, we should hopefully see what I just entered. We've got the AMQP acceptor on 5672. We've got... oh, this is not right. Not that protocols. Fooling is correct.
I'm going to go check the CRD. Deployment plan size one. No persistence. Don't require logins. So just the user and admin password: in the application, I think it was admin/admin. Does it need to be the same, or doesn't it matter? No, because the clients aren't connecting as admins; they're just connecting to send messages. Well, we'll see what happens. Great. "Invalid value: boolean. Must be of type string." Yeah, so if we look at... oh, protocols is true on line 14. Yeah, but the question is, is that because it needs to be a string? I think it just needs to be a string with quotes, because true is a reserved word in YAML, I think. I think the x-descriptor may be wrong. In the CRD? No, no, in the... So, the definition itself, if I can find the right window here to put it out to everyone... You know what, I have a file for this somewhere. New window, open folder. So in chat... I've done this rodeo before. Red Hat, OpenShift. I'm going to take a look at the x-descriptor, because for enabledProtocols, the description says "comma-separated list of protocols used for..." So here's the issue: enabledProtocols is the comma-separated list of protocols used for SSL communication, meaning, I think, the ciphers. Whereas protocols is "the protocols to enable for this acceptor, type string." This is from the CRD itself. And the protocols to enable for this acceptor would be a string like "amqp,openwire,stomp", et cetera. But you're right, we should have custom resource instances already which do this, and the examples should also tell you how to do this. I know I have a folder for this stuff. Yes. Okay. It's in the Ruby lobby. Totally in the wrong place, but here we go. So, AMQ: that's the address, this is the events address, this is the operator subscription. Ah, broker CR. Here we go. Right. So yes, there we go. Yeah. So it's the reverse.
So protocols should be "amqp", and enabledProtocols should be removed. That still doesn't explain where the boolean value comes from. Actually, I'll just do this instead. And the x-descriptors actually come from the ClusterServiceVersion file and are not in the CRD itself. Oh, that's even weirder. Because it's the ClusterServiceVersion that basically encapsulates everything that is OperatorHub, that makes it go, right? So now we should be getting a broker here, in the topology. But you broke it. As usual. I'm good at that. There we go. So here's the broker operator. Here's the actual broker. This is the StatefulSet for the broker. This is the broker pod coming up. We scroll down... that's as far as it got so far. Okay. Yeah. So what we're looking for is an Epoll acceptor, AMQP, on port 5672, for starters. Sweet. So we got our new nodes. Jafar, are you still with us? Yeah, I'm here, but my son is bothering me, actually. It's okay. Is there a way in the pipeline to specify a memory request? What are you trying to achieve, for a specific deployment or something? No, for the build in the pipeline. Not that I know of. No. That's not good. We need to figure out if it's a thing, because we may end up with the same problem if we land on the same node. Let's look at the YAML for the pipeline. Yeah. So you want to define resources for the build pod, right? Yeah. In the build task, it would be good if we could specify that the container should have a higher memory limit. Well, I guess it all runs in one pod, so really that whole pod needs to be bigger. So what you can do, if it works, is: if you have a node that you are confident has enough resources, you can put a selector on the whole namespace, on the whole project, and I think the builds that happen in the project should happen on that node. Yeah. Let's see if Tekton supports what we want to do here. Tekton docs, memory. Really? They didn't embed Google search? That's really boring.
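Putting the corrected settings together, the broker custom resource they land on would look roughly like this. The field names follow the AMQ Broker operator's ActiveMQArtemis CRD as discussed on stream, but the exact apiVersion and resource names vary by operator version, so treat this as a sketch:

```yaml
apiVersion: broker.amq.io/v2alpha4   # version is an assumption; check your CRD
kind: ActiveMQArtemis
metadata:
  name: srt-broker
spec:
  deploymentPlan:
    size: 1
    persistenceEnabled: false
    requireLogin: false
  adminUser: admin
  adminPassword: admin
  acceptors:
    - name: amqp
      port: 5672
      protocols: amqp    # a plain string; not the SSL-related enabledProtocols
  console:
    expose: true
    sslEnabled: false    # careless dev mode, as discussed
```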
I don't like seeing ads when I get search results on their own website, for the docs. Come on. All right. Let's see if our broker came up, which I think it did previously. Hey, look at that. Here's our broker. And Roddy was saying... scroll up, right there. One of those Epoll acceptors is what you want to look at. By default, you should have an Epoll acceptor on port 61616 that speaks the Core protocol, which enables brokers to be clustered together and to talk to each other. And in our case, we also configured the AMQP acceptor on port 5672, which is the standard non-secure AMQP port. Cool. That means it might be working. And then if we look in... Greg: what if you define a default limit range for the namespace? Put in enough memory. Maybe. Services. And so we have an srt-broker AMQP service, which exposes 5672, which is good. I don't think... oh, we should also have a route. And so we have an AMQP route, which exposes AMQP over HTTP, I guess. No, it exposes AMQP over TCP. Oh, yeah, I guess so. So if you're coming from the outside world through a route, what you actually end up needing to do is configure full SSL. Oh, yeah, we'll have to add that. Or you do the termination at the edge, right? Wasn't there... is there not a setting to tell it to do passthrough? There is not, when you're setting up the acceptor or whatever. It doesn't really matter, but I'm just curious. Anyway, we should make one of those. We should make a what? One of the settings. One of the settings? Part of the issue is that in order to do passthrough, you need SSL certificates, and the certificates need to be provided by the user. Either you provide them, or you have something like a cert-manager operator which generates them for you. Or you just use the built-in crappy self-signed ones and call it a day. Or you could, I suppose. Hell, yeah. There's a way, I think, to change the route to passthrough or edge termination or whatever. Let me...
Hey, Roddy, you're getting stalked in chat. Oh, is that valid? About asking him about Mina? Oh, indeed. I have a bash script that I used for deploying the arcade, and it had a bunch of steps. So it happens when you have too much garbage: you end up with lots of garbage. Where on earth is that script? Just as a note: the x-descriptor in the OpenShift GUI defines this, as we saw, as a boolean switch, while the matching entry in the CRD is actually a string which, as we saw in the custom resource instance Eric had up there in YAML, needs to have "amqp" in it. So the GUI has actually got a bug at the moment. Ah, I broke something again. You found a bug. What was that? Oh, it's over-reported. This is a terrible idea, to do this grep right here, but whatever. I have some bash-y, YAML-y stuff for how to modify the route to add the TLS passthrough or whatever; I just need to find it. Ah, arcade.sh. Look at that. Here we go. TLS termination: edge. There we go, exactly what I want. We'll `oc get route`. So we want to patch the broker route to use edge termination. That was a terrible idea. I don't know. I just did a -A. No, I just did it again. Cool. So this might actually do something. It's doing nothing, but I think that's what it's supposed to do, because we probably got a weird upgrade request. curl -kLv https://... I totally forgot a colon. Forgot the at sign. Forgot the colon. Ah, dude, it's terrible. So we got it edge-terminated, and then nothing happened, but that's okay, because I think the broker is basically saying: I don't know what to do with your stupid HTTP. So it's just not doing anything. I can't remember; we had this problem once before, where we wondered: what's the crappiest way to try to connect to the broker to see if it's even working? I think we never figured it out. So if you pop open the other route that was created, the console route, you can go to the console and then tell. I can go to the console and tell what?
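The patch being applied boils down to setting edge termination in the route's TLS stanza. Something like the following, where the route name is a guess based on the operator's naming, so check `oc get route` for the real one:

```yaml
# Applied with, for example:
#   oc patch route srt-broker-amqp-0-svc-rte --type=merge -p "$(cat patch.yaml)"
# The route name is assumed; verify it with `oc get route` first.
spec:
  tls:
    termination: edge   # the router terminates TLS with its own self-signed cert
```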
Yeah, and tell what the status is. So the only question is whether in the CR we defined admin, and what the password is. It looks like we did admin/admin. So in here now, you can get the current status, right? Which status do I want? Well, you can look at the acceptors dropdown on the left. Acceptors. Yeah, but the question was, how do I know that the edge termination TCP/SNI stuff is working? Well, it should be irrelevant from the point of view of the... oh, yeah, it won't be, because the client's going to run in our browser, outside. Right. That's why we went down this road before. Yeah. And I just think we never got to... well, I mean, I can do a really horrible hack job of a test here. So: new window, open folder. I'm just going to open the game client and change the config file to point to that route. Right. So we've got config.js. The broker endpoint is going to be the route, and it's port 80, right? Or 443, because it's edge-terminated, secure. That's right. So that's going to be this. Yeah, because I think the conclusion was that because it's WebSocket traffic, it's already HTTP, not quite equivalent, but it's already handled by the proxy. Yeah. Well, look, we got a player identity. It didn't barf and say that it couldn't connect, so I'm assuming we're connected, unless we haven't tried to connect yet. You can open up the broker console there, and it should tell you whether or not there's a connection. "WebSocket failed: disconnected." But it doesn't say why; it just says failed, disconnected. Yeah. Eric, just check that the service you created points to the correct port. The broker AMQP service points to 5672, and the route points to the srt-broker-amqp-0 service with edge TLS termination. Oh, it's wss for secure. I think that's the ticket. Boom. Look at that. When in doubt, add more S's. It's like -v's, man. Just keep adding them until things work. -vvvvvvvv. Okay, cool. So we're connected.
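The local client tweak amounts to pointing the broker endpoint at the edge-terminated route over the secure WebSocket scheme. A sketch; the property name `brokerEndpoint` and the host are assumptions from the conversation, not the real config.js:

```javascript
// Sketch of the game client's config.js change; the property name
// "brokerEndpoint" and the hostname are assumptions, not the real file.
const config = {
  // Edge-terminated route: secure WebSocket scheme (wss), port 443.
  brokerEndpoint: "wss://srt-broker-amqp.apps.example-cluster.local:443",
};

module.exports = config;
```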
If we look at the broker console, we should see a connection. We've got our client ID. Rockin', rockin'. Our build is still running, which is good. Yeah, it means it hasn't died, which is always nice. I don't know why it's taking forever. Yeah, this is annoying. There's probably a bug worth reporting here, but I need to try the latest version; I can't get to the end of the logs. Can I get logs on a pipeline run? It would be cool if it were like `oc logs`. It would be better to use the tkn client; it's much easier. Really? What do I want? What do I do? You said "better". So yeah, do something like pipeline logs or something like that. There is no pipeline... tkn pipelinerun logs. tkn pipelinerun... is there a list? There is a list. tkn pipelinerun logs. So tkn is the Tekton command-line client, which for some reason I have installed; I don't even remember why. Yes, I know it's still running. It'd be great if you would show me the logs. Hmm. Thanks. Not helpful. Stream live logs: yeah, that's -f. No... killed again. This is not... We got further this time, though. We got to 50%. Yeah, so Eric, try what I mentioned about the limit ranges. Like, if you create a... Well, let's read the docs first, and then we'll try what you mentioned. For Tekton, I'm pretty sure there's no way to specify limits or whatever. Well, we're going to find out. Memory. "The CPU, memory, and ephemeral storage requests will be set to zero, or, if specified, the minimums set through limit ranges in that namespace." So basically, define the limit range. Okay. Make sure that... limit range, limit ranges. Eric, just use create limit range from the admin console. Oh, that's boring and easy. Why would I do the easy and boring thing? I'm not even sure that it's going to be... Oh, "srt-game core-resource-limits". Why does this exist? Check that, and maybe delete it. All right, so yeah. Container. Why do you exist?
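Per the Tekton doc quoted above, step requests get set to zero or to the namespace LimitRange minimums, so the existing LimitRange's defaults are what cap the build container. Instead of deleting it, one option would be raising the defaults. A sketch with illustrative values and an assumed project name:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: core-resource-limits   # the existing object found in the project
  namespace: srt-game-server   # assumed project name
spec:
  limits:
    - type: Container
      default:                 # default limit applied when a container sets none
        memory: 6Gi            # big enough that cc1plus isn't OOM-killed
      defaultRequest:
        memory: 2Gi
```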
That's a good question. So are you using... you're using an RHPDS cluster? Yeah, but I ordered the workshop cluster, which I thought didn't have that feature enabled. Yeah. So this should solve the issue. Could be. All right. So now we will try. tkn pipeline... Yeah. Oh: OOMKilled. Oh, this one says it's still running, which I don't believe. So what... Yeah. Oh, OOMKilled also. There we go. Okay. tkn pipeline list. tkn pipeline -h. tkn pipeline start srt-game-server. "Value for param..." Oh, why are you asking me questions? Just do it. Just go back to the UI and we'll start the last run. Can I just keep hitting enter? I wanted to be cool and use the... You know, that's why the UI is much better, right? I don't know why we build these UIs. Who wants to do stuff the easy way? That's just not... All right. tkn pipelinerun list. tkn pipelinerun logs -f. All right. It's thinking. It's going. Okay. Jared, how are you doing in the background there, buddy? World peace, all solved. We're running away from the keyboard for a few minutes, he says... No, no: hungry. Oh, damn. All right. Oh, he's busy doing stuff. All right. So let's recap what we've done so far. We have deployed the AMQ operator into our project. And found a bug. All right. So we've got the AMQ broker... sorry, from the operator, we deployed a broker. We've got a pipeline that's in the process of doing its thing. We modified the route for the broker to use TLS edge termination with the self-signed certificate from our router. And then we locally tested the game client talking to the WSS, the secure WebSocket, for the broker. So that's good; we validated that the broker is doing what we want. We fixed our limit range that may have been causing our build to OOM. And so if we look at our build, we are busy installing packages. Jafar, do you know... So clearly, there's a PVC associated with this build and stuff. Hey, look at that. Woo. 27... 100. We did it. Yay.
So what was the default memory limit here? It wasn't the default. It was... So the internal demo system that we used to provision the cluster — I think there is an admission controller that the demo system people set up for this particular catalog item that will automatically create a default limit range on a new namespace. And so, for example, if I do... And Eric, just before, one thing maybe: can you check that the limit range has not been recreated? Because at some point there was an operator that was doing that. Like, if you deleted just the limit range, the operator recreated it. So, yeah. Okay, perfect. So I had created this demo namespace earlier, not today, some other day. Oh look, I can tell you exactly when it was created: on the seventh, which is not today. And we got this core-resource-limits thing. There's some limit range controller webhook thing. I'd have to ask them what it is. But anyway, when you create a new project as a regular user, or really when you create any new project, it just gives you this default limit range. Oh, it's probably baked into the template. Yeah, it's baked into the project template. Yeah. Okay. So here: oc get template project-request -n openshift-config -o yaml. And we get a limit range. So, OpenShift has a concept of a project request. If you use the default workflow of clicking new project in the user interface, what actually happens under the covers is this project request workflow, which says: hey, is this user allowed to create a project? Oh, they are? Great. Use this template object and instantiate a bunch of stuff. And what you see here is there's a default network policy, there's the actual project object, there's a bunch of role bindings and service accounts and other stuff. And so what the demo people did was they added this limit range to that template.
And so you can see that the pod memory max is 12 gigs and the default limit for a container is only 1.5 gigs, which is not very much. So that would explain why we were getting blown up and OOM-killed. Hey, look at that. srt-game-server successfully rolled out. That's what I like to see. All right, Autobots. Autobots. Oh, cool. It's even got this OpenShift logo. I have no idea why. Now, the server is crashing, but I bet I know why. If we go look at the logs, what we'll see is it's trying to connect to our Artemis cloud broker, which doesn't actually exist. So what we need to do is inject a config file into this using a config map. Go ahead, Jafar. Eric, I think you can just change the environment variable, like BROKER_URI. Yeah. That's so boring. Why? Guys, we're just like... It's too easy. You know, I'm... Do we actually pull it from... Yeah, I think that was part of the update. Let me look in that. You know, when I got yelled at for updating the Dockerfile. Yeah. Yeah, yeah, yeah. Huge jerk. Application server, no configuration. broker-uri: it's passed as a CLI argument, and that comes from an environment variable. Yeah, it's BROKER_URI, the environment variable. Uh, where's... Yeah, I'm just trying to find in the code where we pull the environment variable from the... I think that was it. Was that in the Dockerfile, in the run? Yeah. So it's a command line argument, and the actual value of it comes from the environment variable in the Dockerfile. Yeah, BROKER_URI. Okay. So that broker URI happens to be the service in the project. Yeah. So, Eric, let me show some cool stuff just quickly. Okay. We don't have a shortcut to the services on the left of the UI, right? Correct. So go to the search on the left. Yeah, now select resource: Services. Services. And now click "add to navigation" on the top right. And now you have a shortcut there. What? Yeah. What? Wow.
In fact, when you first start up the OpenShift, uh, like, sandbox or whatever, it gives you a little tour, and that's like the first thing it walks you through. No way. I didn't even know it was a thing. Yeah, yeah, yeah. Today I learned, man. Holy crap. All right. So here's the service. The cool thing about OpenShift and services and internal DNS and all that fun stuff is that you don't need fully qualified domain names, because we set the ndots on the resolver. So we can just go broker URI — is it BROKER_URI? We can set that to simply that. Oh wait, do I need the tcp:// or do we do that? I don't think you... You don't need the tcp. I think that would be the default if you didn't specify it. But, uh, possibly. Maybe need the port. Yeah, you need that. Try it and see. T-I-A-S. I restarted the game server pod. Really? Why can I not click you? That's very annoying. Pods. Here's the new one. It has not exploded yet. Hey, hey, hey. Look at that: connected. There you go. So if I reload my web client, which I think is still running, I should actually... what, what? 3000 is the port, I think. Magic. Good job, Jafar. Glad somebody's paying attention. Join. Look, I have a spaceship. Oh god. It's moving very quickly and doing weird things. It's moving. Oh god. I think we... oh, the default, uh, the default sleep cycle. Sleep cycle is really bizarre. Anyway. Yeah. So whatever. Spaceship moves. We did it. So what did we do? Oh, we did it in an hour. That's like a new record for achieving stuff on this stream. So what did we do? I should join more often then. Uh-oh. Latency issues. Yeah, yeah. The broker is eating all the bandwidth. Yeah, I think so. Back here. I think so. Maybe. Yep. You are. Sounds like it. Yep. Yes. Yes. I hear what it is — it's like the game, the computer. I don't think it lasted. You're dying again.
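(Editor's note: the change described above — pointing BROKER_URI at the broker Service by its short in-cluster DNS name, with the port — would look roughly like this Deployment fragment. The service name and port are guesses from the stream, not copied from the repo.)

```yaml
# Hypothetical container env for the game server Deployment
env:
  - name: BROKER_URI
    value: "srt-broker-amqp:5672"   # service short name + AMQP port, both assumed
```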
He's probably explaining something about how the game destroys his computer every time he runs it. Yeah, I guess. We still have bandwidth issues. It's not bandwidth. It's CPU or memory or something. Oh. Yeah. It's laptop CPU, probably. Oof. Well, so what happens is, I have a crappy CPU/GPU — thanks Intel, for nothing. No, I'm just kidding, it's not Intel's fault. And so when I'm running two displays and GNOME and all these other things, and blah, blah — when I run Phaser games especially, it just heats up the CPU really bad, and then it basically goes into thermal throttle. And so the processor speed gets throttled down to like 200 megahertz, and then the load average shoots through the roof, and then it's basically just whatever the Linux scheduler decides is important at that moment, which usually isn't Zoom. So yeah, it takes a few moments for it all to come back. But I think I'm back. I think so. Okay. So, in a new record for this stream, in about an hour we actually got cool stuff working. And so to recap again: we have a pipeline that builds a C application from a Dockerfile that speaks AMQP over TCP to the broker. And we deployed the broker using the AMQ operator. So that's kind of cool. It's a pretty vanilla, simple config, right? There's no redundancy, there's no persistence of the message queue, but it does work. And so now, potentially, Jared, we could deploy the web client for the game. Yes. Actually, is there a way to change the server sleep cycle with an environment variable? I don't think so. SLEEP_CYCLE, I think. Is it? It is. What's the default supposed to be? 15? 1500, which would be 1.5 seconds. So yeah, 16 would be 60 frames a second, essentially. Which is what you're aiming for? Okay. So I'm going to change that to 60. Or 16. 16. Okay. Save. Pods, new one, logs: sleep cycle 16. There we go. Okay. Okay, Jared, yes. Now I'm really ready for your comments. All right.
Perfect timing, because I got it working about five minutes ago. So I had to change a bunch of stuff, but now we have a gulp build. So what has he changed? Do you have a PR? I just pushed a new branch up to the srt-web-client repo called gulp. Gulp? Gulp, yes. Okay, commit: added a gulp build. Yes. So we've got a gulpfile? Yeah. So I added a gulpfile, which basically adds its build tasks — like copying stuff to the dist directory, running Babel and Browserify, which takes all of the dependencies that are required in the JavaScript and, like, packages them all into a single JavaScript file that gets included at the top level. Does a bunch of magic. I had to refactor some stuff just to make it cleaner. Like, everything was at the top level; I pushed all this JavaScript down into a js folder. And so at the top level we have like assets, css, js. And let's see, what else did I have to do? And I had to go into each file pretty much, because before, we were including everything globally, by just using script tags in the index.html file — every dependency, we would just import it in the global window scope. So all the JavaScript files had scope into everything. Since we're not doing that anymore, I have to require those dependencies directly in the files themselves. So if you look at one of the files — look at, like, the AMQP client or the main scene or something. Yeah, there we go. Yeah, so you see where I've got protobuf equals require protobuf, and uuid4 equals require uuid. Those didn't used to be there, because they just used to be in the global scope, because they were script tags in the HTML file. Yeah, now we're scoping them locally inside the files. Yep, and then so... Yeah, I can JavaScript. Listen to me.
So when you run the build, it looks at all of these require statements and is like, okay, this is a dependency, I need to include this — and then it packages them all up together into one big file. So cool. Okay, so now we just have to figure out how to actually build it. Yeah, so if you're doing this locally, there are some new npm targets now. There's build dev and build prod. Build. Yeah. Yeah, there you go. So we've got build dev, build prod; start by default runs build dev when you're just running it locally. And the only difference between build dev and build prod is build prod will include the minified version of Phaser, which — like, the unminified version of Phaser is huge, is giant. Yeah, so it makes it smaller. It also adds like a global environment variable for prod, so if you want to switch on stuff in the code, if it's prod, you can reference that. But other than that — oh, and then the other thing is the prod build will include source maps. So when you're needing to debug something in prod — without a source map you wouldn't be able to debug it, because everything's just like c.1, 2, you know — everything's obfuscated, right? But the source map lets you debug stuff even if it's minified. It's kind of neat how it works like that. Makes sense. Yeah, so those are the main changes. So yeah, our Node builder — I don't think it supports gulp, does it? No — what, you mean our Node builder? Um, the S2I Node builder? Yeah, I don't think so. No, it does. It does have a... let's go look. So Software Collections, Node.js, Node.js 12 Software Collections. It does have it. So the S2I Node builder will run an npm build command, and whatever you put in there, it will do that. Right, but if gulp's not there, would that work? Would it npm install gulp or whatever? No, it wouldn't.
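(Editor's note: Jared's npm targets might look something like this package.json fragment. The exact script names, flags, and versions are my guesses, not copied from the srt-web-client repo.)

```json
{
  "scripts": {
    "start": "npm run build:dev",
    "build:dev": "gulp build-dev",
    "build:prod": "gulp build-prod"
  },
  "devDependencies": {
    "gulp": "^4.0.2",
    "browserify": "^17.0.0"
  }
}
```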
I mean, I guess it probably would. I'm just thinking, would we even want to use the Node builder? Yeah, I don't think that — if you're not going to use the output image of the S2I build, I don't think that... Well, we would want to. That's kind of the point. Yeah, you don't think... right. What I think would be better — no, it would be nice — is if we could do a regular S2I build such that the output of the Node build was the functional web client. But we don't actually need Node, so that wouldn't be a Node build. It would actually be like a weird multi-stage build. So hang on, hang on a second here. Yeah, there's no gulp in there. So you have gulp as a dependency in the package, and then it's installed. Yeah, if you do an npm install — and I think the S2I will do an npm install — it will be there. The question is the difference between the building and the running. Like, the build is going to output the stuff to a dist directory, which is static HTML and JavaScript files. That doesn't need to run under Node.js; it just needs to run under Apache or something. So we could use the S2I Node.js builder to build that dist directory, because it runs an npm install, and it runs an npm build, and we could make the default npm build... S2I Node.js. And that's really the way we should be doing it, honestly — those multi-stage builds are going to end up much smaller and have better cacheability. Yeah, so if we look at the node script for the S2I image — so for those who don't understand some of these acronyms: source-to-image is a feature that OpenShift takes advantage of to build container images directly from source code. So instead of having to use a Dockerfile, you can basically just point it at your Git repo. And so right now I'm looking at Red Hat Software Collections, which provides a source-to-image container for Node.js, and the build process that gets invoked is this assemble script.
And so I'm looking in the assemble script to see what happens. So we move the source, we do some permissions-y stuff, we do some npm config-y stuff. If it's dev mode, we set that to false. If NODE_ENV is defined, we set NODE_ENV. If it's production, we do an npm install. Otherwise, we do a dev npm install, and then npm run build. So Jared, if we change our npm scripts to make the default behavior of build do the gulp-y stuff... Yes, it will run it. It will run it, because it will npm... But it will — oh, it does npm... Yes, yes, that's true. Yes, it will do that. Yeah, so we change the task. I mean, there is another option: we can actually override this entire script in our repo. The cool thing about source-to-image is, if you have an assemble script under .s2i/bin in your repo, it will ignore the assemble script that is baked in, and it'll just use yours. So we could actually tell it npm build, or gulp, or npm install gulp, npm install — we can make it do whatever we want. Or we can just make a tweak to our npm config to take advantage of what it actually does, which is npm run build. Yeah, I think it's obviously easier to just use what's out of the box. We can just change that, and we can add — actually, I'll just add a build. So it'll do a run build. Do you want to share your screen, or do you want to tell me to do it, and I'll do it? I'll share my screen. Let me stop my share. All right, you are up, sir. Okay, you see my client? I do, hang on a second. Might be small, maybe. No, you're good. Yeah, the text is a little small, but it's okay. Yeah, all right. So all I really need to do is make a new build target in the npm scripts, which will do an npm run build prod. I think that's what we want, because build prod will run the gulp build production target. Right. And if I remember correctly — oh man, I lost my browser window. So many browser windows. Yeah, it'll do an npm install prior to... if it's, then otherwise.
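(Editor's note: the repo-level assemble override Eric mentions could be created like this. The whole thing is hypothetical — the stream ends up not doing this and just tweaks the npm scripts instead — and the script names inside are assumptions.)

```shell
# Create a repo-local S2I assemble override at .s2i/bin/assemble.
# When present, S2I uses it instead of the builder image's own script.
mkdir -p .s2i/bin
cat > .s2i/bin/assemble <<'EOF'
#!/bin/bash
set -e
# Copy the injected source into the working directory (what the
# stock assemble does first), then force a full install + gulp build.
cp -Rf /tmp/src/. ./
npm install            # dev dependencies included by default
npm run build:prod     # hypothetical script name for the gulp prod build
EOF
chmod +x .s2i/bin/assemble
```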
Actually, so here's the caveat, right? If the node environment is not production, it will do an npm install. Otherwise, it will do... What? It'll do a development npm install, which will have everything that a regular npm install has, but it'll also have all the dev dependencies, like gulp and stuff. Right. So that's what you want. Right, but the thing is — yeah, so that's what we want. So we need to make sure that we don't set NODE_ENV, and that we set DEV_MODE to true. Yeah, we definitely want to get into that conditional that does the dev install. So you want to put it in dev mode, because it's not going to be in dev mode by default. Right, we have to set that environment variable on the build. Yeah. And I think the easiest way to do that is in the S2I environment file. Yes. Yep. Like, same as with the assemble, there's a .s2i/environment file that you can use. I would have assumed that we would set that environment variable when we're going through the workflow for the build. No. You mean with the pipeline? Sure. Yeah, but the thing is that with pipelines, for the moment, you can't set environment variables for the tasks, unless we do a hacky thing like changing the default Node build task itself. Oh, that doesn't sound fun. So what do we need to do? We need to put a file in the repo? Yeah, like the .s2i/environment file — if it still works; it used to work like that. So just check the script and see if it loads a file, if it does a source on the environment file. Like, the assemble script does not do that. Can you check the — can you show the screen? Yeah, I mean, yeah. I think I can stop sharing and then you can go back to the docs here. Sure.
So the way it used to work is, if you put a file in .s2i, and the name of the file was environment, whatever environment variables you put there get loaded in the build. It doesn't seem to do that. Let me see if I have any more... At least not directly in the assemble file; we probably have to look at the bin directory. Well, there is no bin file. And check the usage. It just tells you the usage. Yeah, but does it say? No. Okay. No — yeah, what we need to do is look for the actual S2I documentation. Well, not really the documentation, but the script that invokes the assemble. Okay. So, okay. So this is where it sets the environment. Here we go: S2I sets the environment variables from .s2i/environment (optional). Yes. And then it says it starts the container and runs the script. So we just need to create — what? Yeah. Yeah. So it should work, I mean, if it's still there. So, yeah, just create a file called environment in that folder and put DEV_MODE=true. And it should work. Does that make sense, Jared? Say it again. So in our repo, we need a folder called .s2i, and in that folder should be a file called lowercase environment, and in that file should be DEV_MODE=true. But not with the dollar sign — correct, just like setting it up there. Okay. Yeah, I'll do that. Yeah, that should get sourced. Makes sense. And just at the root of the repo, right? I'm guessing. Yes, the .s2i folder should be at the root of the repo. Okay, I'm doing it right now. "Setting environment variables provided by environment file sources." Yep. So it looks like that's still there. So you're going to push that to your gulp branch? I'm building it right now — it's DEV_MODE=true. Oh, building the file. Okay. Oh, I really hate my chair. It's not very comfortable. It's all right. I'm pushing right now. Push it. Doing it live. Yep. All right. Should be up now.
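(Editor's note: the file Jafar describes, written from the repo root. DEV_MODE=true is exactly what the stream settles on at this point; NODE_ENV gets added, and then changed, later in the stream.)

```shell
# From the repo root: create the S2I environment file. Each KEY=value
# line in it is exported into the build container by S2I.
mkdir -p .s2i
cat > .s2i/environment <<'EOF'
DEV_MODE=true
EOF
```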
I'm looking at master, which clearly is never going to work. Go. Hey, .s2i. Okay. So now, Jafar, I'm assuming we go to Pipeline, create pipeline. No — go to Add, from source. From... Are you using — yeah, from source? Well, from Git. Yeah. So, repo URL. What? No, we don't use a Dockerfile, right? So it's just S2I. Yeah. Builder image — it picked Node. Oh, Node 14. Fancy. srt-game-server. Nope. It is the srt... game client? Well, web client: srt-web-client. Resources: deployment. Add a pipeline. Create a route? Yes, we definitely want a route. But we need to... Okay, we're probably going to have the same issue with... We need to enable TLS. One thing for the deployment — are we going to be... No, we don't need to run it in that mode, because we're going to encompass it in an HTTPD server, right? I'm sorry, say again? So this pipeline, I believe, is going to be useless, because it's going to build a Node image and deploy it, and I believe we want to run it in the HTTPD image. Yeah, sort of. Right. Yeah. All this is going to do is build it into the dist folder. We still need to... I mean, we could — what we could do, though, if we want to not use HTTPD, is we could just run it under Express. Like, we could use Node's built-in HTTP server, or, you know, we could use Express, or just a vanilla HTTPD server under Node. So, Jafar, once the build is theoretically complete — so the default pipeline is not going to be particularly useful, right? But couldn't we take the output of the Node build and then shove that into the PVC for the pipeline, and then have another task that builds an HTTPD image and just copies the junk from the workspace? Yeah. Yeah. Yeah, correct. So you could use another step with an S2I HTTPD builder that points to the dist folder in the workspace, or something like that. So I know that it's doing a Docker build anyway.
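(Editor's note: the two-stage idea Eric and Jafar sketch here — a Node build feeding an httpd image — could also be expressed as a plain multi-stage Dockerfile. The image names, paths, and the build:prod script name are all my assumptions, not something the stream actually built.)

```dockerfile
# Stage 1: build the static bundle with the UBI Node.js builder image
FROM registry.access.redhat.com/ubi8/nodejs-14 AS build
COPY --chown=1001:0 . .
RUN npm install && npm run build:prod   # hypothetical script name

# Stage 2: serve the dist/ output from the UBI Apache httpd image
FROM registry.access.redhat.com/ubi8/httpd-24
COPY --from=build /opt/app-root/src/dist/ /var/www/html/
```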
It looks like — this is weird. So the Node S2I will, at the end, run npm start. We could just make npm start start Node's HTTP server and run it straight in one image. Yeah, you could do node server.js or whatever. Yeah. And then we don't have to run it under Apache. Like, Apache gives you more power and flexibility and performance, because you can decouple it from Node and do a bunch of whatever proxying and stuff you want to do with a full HTTPD server. But for this, I think it's fine. We can just run it straight out of Node. Yeah. What I was suggesting is, instead of having npm run dev as the npm start command, just have a regular... Yeah. Yes. Yeah. The npm start could just start Express, or start Node's built-in HTTP server. Yeah. Yeah. So do you want to make that change, Jared? Um, sure. Yeah, I can do that. Um, and — I didn't notice when you were creating this app, did you point it at the gulp branch? Yes. Okay, good. All right, cool. Then let me go ahead, and I will make npm start just start a local HTTP server and run out of dist, which should be all that we need. Yeah. So it clearly did the npm install here, because it was barfing about stuff that it did. Yep. It definitely npm installed. Now I'm wondering if it did... It did deploy it. But I don't think it actually did... Did it do the build? Yeah, I don't think it did the dev stuff. But did it run the gulp? I mean, in the logs, I would see the gulp — like, the gulp build itself. Thank you. Thank you. There's a local file on the root of the repo. Well, you can see where it ran the S2I assemble. I don't see any logs of gulp running anywhere, though. Right. So if we look at the assemble script — let's go to 14, which is s2i-nodejs 14, s2i, bin, assemble. So it said installing application source, right?
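(Editor's note: one minimal way to make npm start serve the built dist/ directory under Node, using the http-server package — my choice of package; the stream also mentions Express or Node's built-in HTTP server as options, and the port is illustrative.)

```json
{
  "scripts": {
    "start": "http-server dist -p 8080"
  },
  "dependencies": {
    "http-server": "^14.1.1"
  }
}
```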
We see that in the log: installing application source. And then we see building your Node application from source. So if we scroll down, we see building your Node application from source. Somehow... oh, it went into the production section. Yeah. Yeah. So it looks like — it says, if NODE_ENV is not set by the user, then NODE_ENV is determined by whether the container is run in development mode. So if NODE_ENV is set... otherwise export... but this is the dumbest thing I've seen. So not only do you have to set NODE_ENV, you also have to set DEV_MODE. Yeah. Because I only set DEV_MODE in the environment file. I did not set NODE_ENV. Right. So NODE_ENV is supposed to be, I guess, development? Or dev? It doesn't even matter — no, it just has to be not production. So I think NODE_ENV is dev... it doesn't matter. We can look at the script, right? The only thing it checks for is whether or not NODE_ENV is production. No, no, actually it's development, like it is there — like the export does. It needs to be like the export does, actually. I didn't remember if it was dev or development, but we see the export value there. Oh, sorry. I see — hang on a second. So I'm reading this wrong. So if NODE_ENV does not exist, then it does nothing. And then if NODE_ENV is not production... and if NODE_ENV doesn't exist, then it just does npm install. So even if we didn't set DEV_MODE — this is like the weirdest thing — if we set NODE_ENV equals anything, it will do all the other stuff. So we actually didn't even need to set DEV_MODE. Like, I don't understand. Whoever wrote this, I'm sorry, I don't understand what you were thinking. Set DEV_MODE to false by default: export DEV_MODE false. Then if NODE_ENV exists: if DEV_MODE was true, then set NODE_ENV to development. What? Why not just honor NODE_ENV in the first place?
Like, ah — so I guess you can leave DEV_MODE=true and you can add NODE_ENV=development. And that should probably work. All right, I'm committing it right now. Okay. So is there a way, Jafar, to do webhooks with Tekton? What do you mean — like, when you commit, it rebuilds? Yeah. Yeah. Yeah, you can add the trigger. Oh, where do I do that? I think it should have created one by default, but let's check. So go to the pipeline, srt-web-client. I pushed it, by the way, that change. Cool. So, yeah, go to edit triggers, add trigger. Oh, yeah. And then you can change, like, the git push. And so, yeah, just select the right... I'm assuming git revision is the branch. Yeah. So then this should be all that's required, right? Yeah, it should be fine. And then it's going to give you a route. Going to give me a route. Which is the webhook. But since you are in Tech Preview, I actually forgot what it did. Yeah. So that's there. You have the event listener. Which is this one? Yeah. Yeah. The copied route you have there. And just put it as your webhook URL. Yeah. I don't want to do that. I mean, I guess I can do it. But nothing says security like an HTTP webhook. Exactly. But just put the payload as JSON. Yeah. Yeah. Yeah, that should be fine. Is there a way to test it? Yeah. Yeah. So it just did a delivery. Yeah, it just did it. Exactly. Simple is better than complex, according to the Zen. All right. Pipelines. We should see — what? Yeah, just check the response to see if it was fine. It looked like it was fine. Just go up. No, because this didn't... but check the time. Oh, 202. I don't know what 202 means. Yeah, so that's good. And now your pipeline should be running, actually. It's not. It didn't trigger it. Okay. All right. So try to push and see if it works. Yeah. Because I don't... I think this request is not a full... yeah, maybe like a dummy one. Oh — or maybe it's not on the branch, I think is the thing. So it's a push event.
But it wasn't on the branch, so the pipeline didn't care. Okay. So I guess, Jared, if you do like an empty commit or something. Oh, okay. Okay. I mean, I did just push that change, for the NODE_ENV=development one. Yeah, but that was before. Okay. Yeah. Yeah. Oh, you're testing the webhook. Got it. I'll just change the description of the package. Yeah. Now with more hook. Yeah. All right. Done. Oh, doing it. Things. Seen it. All right, here we go. Clone. Build, baby. Generate. So Node 14 — hopefully we'll see the gulp build. Yeah. So did you change anything to get the gulp stuff working? Well, we put the NODE_ENV=development in the environment file. Okay. Which should trigger the logic gates correctly. Yeah. The logic rodeo. That is the assemble file. Oh, I did see that. I did get that. That was nice. Nice. Nice. Copy the source. Run the... user, assemble. Building application from source. And actually, there's a nice — I think that's the wrong one — you can also use, in the assemble, pre-assemble and post-assemble stuff. Like, if you don't want to copy the overall assemble file, there are hooks that you can use to do stuff before or after, and then just call the assemble script — whatever you want. Okay. It ran an npm install. I don't think it went through the right... Slow. Even though it said NODE_ENV development — like, it actually echoed that: NODE_ENV is equal to development, DEV_MODE is equal to true. So how is it... Wait — if NODE_ENV is not equal to production... oh, actually it says if NODE_ENV is not equal... maybe, let's go back to that. Actually, I think it will go into that first one: if NODE_ENV is not equal to production, it just does npm install. So that is — we actually... what the heck? Okay. Got it backwards. Yeah.
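(Editor's note: the branch mismatch diagnosed here — push event delivered, pipeline not started — is the kind of thing a Tekton trigger's interceptor filter handles. A hedged sketch using a CEL interceptor; the fragment shape follows Tekton Triggers conventions, and the branch name matches the stream's gulp branch.)

```yaml
# Hypothetical Tekton Trigger fragment: only fire on pushes to gulp
interceptors:
  - ref:
      name: cel
    params:
      - name: filter
        value: "body.ref == 'refs/heads/gulp'"
```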
So maybe it should be production, because if we say NODE_ENV is not equal to production, then instead of the dev stuff, it's just going to npm install. Yeah. This looks like it's backwards on a couple of things. Yeah. So it should be NODE_ENV=production. Okay. Well, we have a webhook. Yeah, I'll just change it. But what should DEV_MODE be? It doesn't matter. Why would you do this? This is so weird. Like, DEV_MODE — what will change if it's in dev mode? Does it change anything? Nothing. I guess DEV_MODE doesn't really come into play after that. No. Okay. And I'm just going to — I'm actually removing that. So NODE_ENV equals... Yeah. Okay, so let's see: dash z tests if the length of the string is zero, so, not defined. All I think this does is — this is so weird. If you were in dev mode, it would... I don't even understand. So if NODE_ENV is set — if NODE_ENV is already set and non-zero length, then check DEV_MODE. If DEV_MODE is true, ignore NODE_ENV and re-export it as development. Otherwise, if DEV_MODE is not true — if it's anything else — ignore NODE_ENV and re-export NODE_ENV as production. Okay. But then if NODE_ENV is production, it just does an npm install. This is, like — I just don't even understand. No, no — if it's not equal to production, it does an npm install. Oh, right, right, right. Sorry. Yeah. So basically, no matter what you set NODE_ENV to, if you set DEV_MODE also, you're going to end up in... I don't understand why you would write it that way. That just doesn't make any sense to me. Whatever. All right, let's try the next pipeline run. Let me spend 20 minutes on... Dude, hey, look at that. Gulp. Oh, that's the gulp output. Yeah.
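(Editor's note: to untangle the back-and-forth above, here is a small shell paraphrase of the assemble logic as the group reads it on stream. This is my reconstruction of their reading, not the actual sclorg script — consult the real image for the authoritative logic.)

```shell
# Paraphrase of the NODE_ENV / DEV_MODE handling discussed on stream.

resolve_node_env() {
  # $1 = DEV_MODE, $2 = NODE_ENV as set by the user (may be empty)
  local dev_mode="${1:-false}" node_env="$2"
  if [ -n "$node_env" ]; then
    # The baffling part: a user-supplied NODE_ENV gets overridden
    # anyway, based solely on DEV_MODE.
    if [ "$dev_mode" = true ]; then
      node_env=development
    else
      node_env=production
    fi
  fi
  echo "$node_env"
}

install_step() {
  # Per the group's final reading: only NODE_ENV=production ends up
  # running "npm run build"; anything else is a plain install.
  if [ "$1" != production ]; then
    echo "npm install"
  else
    echo "npm install && npm run build"
  fi
}
```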
Yeah. And it did do a prod build, because it said it did a prod build. Um, mother of God. Cool. So now, if you went into the terminal on that pod, you should see... Well, I still have to build the image and then push the image. Yeah. But then you would see a dist folder in there, like, at the end. Cool. Okay. So now I guess the last thing I'm going to do — maybe we can get this in the last five minutes — is add an npm start that runs... I thought you already did that. No, I didn't do that yet. You didn't do that. So what's it going to do when it deploys? Nothing. That's no good. Yeah. I guess it might try... Is this the other one? It's going to do a node run — that, because that's the npm start. It does run build dev, which is the default, which blows up because gulp's not found. Yeah. So it rebuilds stuff within the runtime image. Yeah. That's no bueno. Interestingly, when you create that webhook — this blows my mind — it deploys a pod with a route and a service to act as the trigger for the pipeline. Like, that's weird. I mean, it makes sense, but it's also weird. And actually, my next Tekton show is going to be exactly on this topic, like, explaining what exactly happened. Tekton show. So yeah. So basically you have stuff that defines what parameters get extracted from the payload, to be sent to run the pipeline run. And that's what we're going to explain. Cool. Do we want to... I don't know. How long does the stream last? Like, we can go till four, three, whatever — the end of the hour. Okay. Things and stuff. So we need to start a server here. Oh, OpenShift is live on Twitch. Look at that. Who knew? That's crazy. All right. Us — we would have been sitting over here writing mouse code. Mouse code?
Well, tap and click and stuff for the user interface, to be able to direct the ship around, to translate directional controls from taps or whatever. Oh, for a virtual joystick. Yeah, a Phaser virtual joystick. I mean, that might be more interesting for the last few minutes. I don't have anything working, man. No, I think it's more interesting to listen to Jared type. Yeah, I mean, what I'm going to have to do actually... We're just waiting for you to push, man. Come on, no pressure. So, let's see. If we look at the run script, the run script will... okay. Echo environment, DEV_MODE, NODE_ENV, and if DEV_MODE is true, which it's not... otherwise it does npm run -d, whatever NPM_RUN is. Oh, it runs it as a daemon. Eric, I'm going to share the assembled script with you. Can you load it on the screen? Uh, maybe. If the official Docker Hub node images are used, skip the SCL stuff and just run the server. Run node. If NODE_ENV is not set by the user, then NODE_ENV is determined by whether the container is run in development mode. So do we have NODE_ENV set or do we not have NODE_ENV set? I can't remember. We have NODE_ENV set to production. Yeah. Yeah, we do. So NODE_ENV is set to production. So if I look at the run script: if NODE_ENV is set, if DEV_MODE is true, which it's not, export NODE_ENV equals production. That's fine. Then we do run node. Run node says if DEV_MODE is true, which it's not, then npm run -d, whatever NPM_RUN is. But where does NPM_RUN get set? Good question. Must be set higher up somewhere. So NPM_RUN is actually an environment variable that you can put in your container, in your deployment, if you want. What is the default? If you just do npm run -d, there's a default, right? Maybe. I assume there's some default, probably just start. Yeah, I can just try it here. Yeah, because that variable is there if you want to override it, like instead of having npm start, you want something else.
You just put NPM_RUN in the deployment and put whatever goal you want there. I'm pretty sure it's going to do npm start, like if you don't define anything, it'll just do npm start as a daemon. If I do npm run -d... it shows you the options. Yeah. But I bet, like... isn't it interesting? I think that NPM_RUN is probably getting exported earlier on in the whole s2i flow. In the s2i flow? Yeah, I mean... I mean, it's not set here. We have DEV_MODE false, NODE_ENV production. Well, okay, but if someone is just going to push out a node app, like, the default is going to be, without any changes, it'll just run npm start. And I know it does that, like, from other projects. So we shouldn't have to define NPM_RUN anywhere to make it do npm start. Launching via npm. Yeah. I just wonder where it's getting NPM_RUN from. Yeah, I'm not sure. The run script, the run script, where are you? Let's check the Dockerfile. npm run start. Yeah, yeah, there. Okay. So you just need to make start do whatever we need it to do. We need to make start... I need to think of, like, the best way of starting up an HTTP server, either through a command in npm... Why can't you just have it do whatever the debuggy garbage is that we do now, without the live reloader or whatever? Okay. That's a server. Yeah, yeah, it's going to run on a random port. I mean, if we're going for a hack job, go full hack job. Go full hack job. Okay, so npm... so what it does is, if you run it exactly the way it was set up, all it's going to do is start on port 3000. So to make it work... It's not doing that. What? That's not what it's doing. That's not what it's doing. No, no, no. That's if you start, like, the browser thing stuff. But that's gone now because we're using gulp. It's not doing that anymore. Yeah, the start is doing build dev. Yeah, okay. So what it's doing now in gulp is it runs a serve function, which runs... it's still browser-sync, like, but it's starting up through gulp.
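The default being described here can be sketched like this; the NPM_RUN-with-a-fallback behavior is our reading of the s2i Node.js image, so treat the exact spelling as an assumption:

```shell
#!/bin/sh
# If NPM_RUN isn't set on the deployment, the image effectively falls
# back to "start", so a stock push just ends up running `npm start`.
run_target=${NPM_RUN:-start}
echo "$run_target"
# exec npm run -d "$run_target"   # roughly what the run script then execs
```

So overriding NPM_RUN on the deployment swaps out which package.json script the container launches, without rebuilding anything.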
So it runs. And it runs it on a port, like, does it matter? Like, can we just... Yeah, but gulp... so here's the thing. So start is doing run build dev. Build dev is doing gulp build watch serve. So my assumption is that all we want to do is serve. Yeah, exactly. So what we could do is make npm start... you could just change start to be a gulp serve. Yeah. Yeah. But my question is, is it going to start it on a weird... I think it's going to start it on port 3000. It doesn't matter. Okay. As long as that doesn't matter, we can adjust. Yeah, we can fix the service. Yeah, we can change the service to the port. Okay. So let me change the package.json to make npm start just run serve. That's all we really need to do. So, one thing. Are you wanting to change the runtime or the build time? No. Which one? We're changing run, but we're changing it in our repo. We don't have to do anything to the existing script. Yeah. One thing. Do you want to change the target in the build image or in the run image? No. Neither. We're changing it in the package.json. To just make npm... Yeah. Yeah, npm start is going to just start. It's going to run the serve thing, which is going to run gulp serve. What I mean is, if you want to check that it's going to work, you can just change the NPM_RUN command in the image that you already have, and check if... That's what we're doing. What? Without, like, redoing the... In the image we already have. Yeah. You mean change the deployment? You actually have a deployment. If you just change the NPM_RUN environment variable there and do gulp serve as the value of it and check, it's the same as changing the package.json npm start command. Yeah. Okay. I see what you're saying. So I got to go to Topology, and then I got to go to here, and then I got to open the deployment, and then I got to go to Environment, and then I got to override NPM_RUN with serve. Gulp serve. Yeah, serve. Exactly. Oh, my man.
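The package.json change being described would look something like this; a sketch only, with the rest of the file and the exact gulp task names elided as assumptions:

```json
{
  "scripts": {
    "start": "gulp serve"
  }
}
```

That is, point `npm start` straight at the serve task instead of the full build-watch-serve dev flow.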
I got my command ready to go if you want me to push it or not. Still not working, Jafar? Yeah, go ahead and do your commit, sir. Okay. Okay. Oh, gulp's not installed. Stop it. So it took the command... Yeah, but gulp's not in the end image. So what you can try... The gulp serve won't work. Yeah. Well, I mean, the npm install installs the dev dependencies, so gulp will be there when it tries... Oh, but only in the image where the build is run. It doesn't commit that stuff, if that makes any sense. Yes. So it's like we need to do another npm install. Or we just make npm start run something that's not a dev dependency to start up an HTTP server on that. Well, or you make gulp a prod dependency. We make gulp a prod dependency, which is really hacky. Yeah. What's your JavaScript entry point in there? The JavaScript entry point is... Yeah, do you have, like, a root js file? Yeah, it's index.html in the dist folder. Okay. So there's no index. We can totally blame Jafar for this, because he's the one that sent us down this "just do it with Node.js" path. I have an idea. I have an idea. No, no, I'm actually... I have an idea I'm going to try. What you can do is actually, if you change the serve command, do npm install gulp && gulp serve. It's going to work. Yeah, because that's just sexy as hell. Let me see. This works. Hey, that works. Okay, I have a very kind of hacky idea, which I'm going to try. All right, man. Commit it. All right. Do it live. All right. So what I will actually have to do is... but I have to do one other thing before I do this, which is add a local dependency for an HTTP server. Hold on. So there's an http-server package that you can install with npm, and then in any directory you can just type http-server. So I'm just going to cd into the dist folder and run that command. Let's see if that works. But I have to add that as a dependency real quick. Okay. And then I just need to go here and say... It's only... And almost ready. So this is... Oh, hold on.
Let me think about this for a second. Because this is... Oh, there will be a dist. Okay, this actually still might work. Okay, this might work. We're just going to have to try it and see. If we were doing real GitOps, we would build dist and then commit that to, like, a dist branch in the same repo. And then that would cause a webhook trigger to do an httpd build of the dist assets. That would be the GitOps-y way to do it. We'll try this and see if it works. And you know that you can actually do that also in the pipeline, because if you commit the dist to your Git repo, then you can do an S2I httpd build. That's what I just said. Yeah, I mean in the pipeline. Okay, I pushed. So the... Yeah, that's what I just said. Yeah, but you said GitOps, and that's not pipelines. I was thinking, like, you were referring to Argo CD. All right, so, anyway, a GitOps approach. What did you change? Try HTTP server. What is this? Let's see. So I added a local dependency for a simple HTTP server. npm run serve runs http-server, yep. And then it's just going to go into the dist folder and start that server up. cd dist, then run it. Yeah. Yeah, let's see if it works. And what port is it going to use? By default, that server runs on 8081. We'll find it. 8081 is the default. Let's see. Hopefully it'll tell us what it's... And I wonder if it actually... Yeah, I think it's just 8081 by default. So npmjs has node-http-server, but does it have http-server? Yeah, but I think you have to put that in the code, like, you have to import it. http-server, here we go. Published a year ago. Whoa, it's got a turtle with a rocket strapped onto it. Okay, the port actually... I think it starts at 8080 and increments if a port is in use. So, like, if 8080 is in use, it uses 8081. If it's 8080, then you're good to go because... Yeah. Nope, I already changed the service. Are you changing it? Yep, got to change it back. Why does it look like it... Edit service. Okay. Oh, it didn't...
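The hack being committed here looks roughly like this in package.json; the version range and script names are assumptions, with `http-server` being the npm package under discussion:

```json
{
  "dependencies": {
    "http-server": "^0.12.0"
  },
  "scripts": {
    "start": "npm run serve",
    "serve": "cd dist && http-server"
  }
}
```

Pulling http-server in as a regular (not dev) dependency is what keeps it available in the runtime image, which is exactly where gulp fell over.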
Whatever change I made didn't stick. So I guess we're all right. Noop. So... noop. Web client, pipeline runs, running. It's building still. Watching paint dry. Oh, I should probably look at the logs. Hey, built, built, built. Oh, "tmp is not a mount point," in case you were wondering. So that's the build log. All right, deploy. It worked. And hey, it has not crashed yet, which is a very good sign. Starting an HTTP server on 8080. All right, so we have a service for the web client. Amazing. Do we have a route for the web client? We do have a route for the web client, but this is going to throw all kinds of shitty errors because we... Oh, but we have a background. That's like a big deal, right? Like, that means at least that the... Yeah. ...server is working. All right. ...because it's loading assets. Right. Yeah. The broker endpoint... Oh, god. Sound. Yeah. Okay. Sound. Okay, so the assets loaded and the HTTP server is serving them. So that's good. So that works. So, Eric, I think now we need to change the broker endpoint in the config.js file. Oh. To whatever value we want for the broker, right? The game client just destroyed his computer again, I think. All right. Instantly. Yeah. All right. So, all right. Well, I mean, we got the web client building and deploying and running. Well, that's good. But, you know, it's not configured, right? Because it's not going to be pointing at the right broker or anything until we... Yeah. ...do, like, a config map and stuff for it. Oh, I don't think... Uh, yeah. We'd have to inject the config file. Yeah. Cool. Well, I would say today was highly successful. Far more successful than we normally are. Yeah. Alrighty. Well, I guess we'll call it a day then. Stopping my screen share here. So, thanks much to our special guest, Jafar, for helping us lose our sanity with Node.js and pipelines and building C++ applications with the OOM Killer. I mean, with OpenShift. But yeah, that's it for this month.
And so maybe next month we will go back to doing more sort of actual game dev stuff. What I was hoping to get to today was to actually get the web client running enough so that we could share it and have a bunch of people try to play it, since it'd be running on OpenShift at that point. We could all interact with it, but we'll have to save that for next month. Yeah. We could do a load test of, like, all of our stuff. I'll make an issue for myself to make an actual server, then, not just the hacky... that's like a local dev server, so I'll make it better. Yeah, so... Ah, that's boring. It's good enough for production for now. Yeah. If you do a multi-stage build from the httpd image, it should be very easy. Cool. All right. Cool beans. All right. Cheers, everyone. Cool. Thank you. Bye. Bye.