Well, hello again everyone. Welcome to another OpenShift Commons briefing. I'm really pleased to be redoing this demo with Don Schenck. We tried this once before and the demo gods were not with us, so this is a redo. We're going to be talking about .NET Core, microservices, and running all that good stuff on OpenShift, and I'm going to let Don take it away. The way we do these is: you can ask questions in the chat, and if it's something show-stopping we'll interrupt him, but otherwise we're going to let him flow through his demo and his presentation. We have Q&A at the end, but you're welcome to ask questions in the chat; there are a number of us who might be able to answer, and Don can correct us at the end if we misspoke. So anyways, go ahead Don. Take it away. Let's see if the demo gods have been appeased.

I was praying. Great, thank you. So, some wise person, or crazy person, once said: when you're handed lemons, make lemonade. That's what I did since the last attempt at this. The goodness that came out of it is that the source code for this demo is now in a repository, and if you see there, there are two branches. There's master, which uses the current version of the .NET tooling, and then there's a version-one-tooling branch. If you install .NET Core through the proper Red Hat channels, you will be using the version 1.0 tooling. I'm going ahead, for reasons I have to, to the newer version. The source code is the same; the only difference is the way it's built, so nothing breaks that way. And when you go from version 1.0 to 1.1, it's dotnet migrate and you're done. I just wanted to show you that we do have two versions of this, and this versioning will continue on the GitHub repo. So here's a description, you've seen this: we're talking about a transition in the software industry.
And as you see, moving from left to right, top to bottom: from waterfall to DevOps, from monoliths to microservices, from physical servers, which I'm sure we all remember, to virtual servers and containers, and then of course the cloud. We are all at different parts of this grid, if you will, the ideal being at the bottom and staying at the bottom, which is the top of our game; a play on words there. So let's talk about some of the technologies in play in this demo: Linux, containers, Kubernetes, OpenShift. This is the buzzword slide, if you will. Zero-downtime deployments, including blue-green and canary, and then the circuit breaker pattern, which is of great value not just for microservices; this is something you can use today. So, some of the issues that we're dealing with and want to address. Deploying software just takes too long; it can take hours, weeks, months to roll out new versions. I recently gave this demo and said it can take several months, and most of the room nodded their heads in approval, so we all know that. Software is too complex. This is an arguable point, in that microservices can reduce that complexity. There can be complexity in the network, so you might be replacing complexity in code with complexity in the network, but the idea is that each individual piece of software is very simple. I recently heard a talk by the founder of Wunderlist, a to-do list app that was purchased by Microsoft, and they use a series of microservices. He said, you know, it's a to-do list. But his point was that every microservice is so simple, and they're written in any language you can think of, that you can look at the code, not knowing the language, and understand what it does. He gave the example of a service that went down. It was written in Haskell, and the person who looked at it didn't know Haskell, so they rewrote it in Python or whatever.
They literally just rewrote it in their own language, because the service was so small that they could. That's where the software complexity is reduced at that level. And the third thing: it takes forever to scale up or down. Even if you use VMs it can take, well, forever; of course we're spoiled, it can be minutes. But when you need to scale, you need to scale now. So what we're going to do today, we're going to follow the evolution: running on my PC, then running in a Linux container, which is where a lot of people, I think, want to stop. But we're going to go on, and there's an important reason why, to running on OpenShift, which is a platform as a service. There are a couple of huge benefits there, huge benefits that should not be understated. Then we will do some zero-downtime deployments using OpenShift, and that is something that can benefit you immediately. And then the circuit breaker pattern, which again is something you can use now. The circuit breaker is just one part of microservices and distributed processing, if you will; it's just something I want to bring up to show you that, hey, there are some changes when you deal with distributed processing and microservices. Some of them you can use now, some you won't; probably most of them you will. And they all come from the idea of what's called the twelve-factor app, which, if you're not familiar with it, I would just say go Bing it instead of Google it. So I'm going to start by going over here to my OpenShift console, which I've logged into, and just show you I have a sample project there, nothing major going on. And I'm going to create a project at the command line. The first thing I need to do is log in, using the dev account that comes built in with OpenShift. Everything we do here today, you can download the Red Hat Development Suite and get started and do it immediately. The repo has instructions.
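For reference, the login and project creation that follow boil down to two commands. The server address and credentials below are illustrative CDK-style defaults, not values shown in the talk; substitute your own:

```shell
# Log in to the local OpenShift (CDK) cluster with the built-in dev account,
# then create a project to hold everything in this demo.
oc login https://10.1.2.2:8443 -u developer -p developer
oc new-project mydotnet
```

Once logged in, everything else in the demo happens inside that project.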
I mean, this is something you can replicate later today if you want. So I'm logged in, and I'm going to create a new project; I'm going to call it mydotnet. It never stops blowing my mind that, as a .NET developer, I'm doing all these things in Linux and OpenShift and Kubernetes and Docker, in my comfort zone of C# and F#, and I'm doing everything in .NET. So I'm in the directory for my repo, I have code here, and the first thing I want to do is just dotnet run, to run the code. This is the evolution: it runs on my PC, which is typically, Friday at 4:30, isn't that when you deploy? Sorry. So if I go over to my browser and look at port 5000, which is where this is running, the IP address is the IP address of my virtual machine. By the way, this is a Linux VM running in Windows on a Mac, so we've basically done everything we can to break this, or rather to show you that it all works together. And it does, in fact, work together. What it does is output the name of the host, which in this case is the VM, the rhel-cdk. So this just proves to you that it's running, and I'm accessing it from outside the VM, from my Windows host machine. Okay, that's working. That's great. So let's take that and shut it down. Notice, if you're new to .NET on Linux, you're not using IIS; you're running things from the command line. There's an HTTP server called Kestrel. It's very fast, but it's not full-featured and it's not meant to be outward-facing. In other words, you want to put it behind Apache or Nginx. Microsoft, I've read and heard them say, is working on improving that, but for now you'll just run it behind, again, Apache or Nginx. It works great, but it's command line, so there's a shift right there. Okay, so that's working. The next thing we need to do is a build; we'll call it dotnet-hello. So we're going to do a Docker build.
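Before moving on to Docker, the local run just described comes down to a couple of .NET CLI commands. A minimal sketch, assuming the demo's VM address of 10.1.2.2:

```shell
# Run the app with the .NET CLI -- no IIS involved; Kestrel serves HTTP itself.
dotnet restore      # pull down NuGet dependencies
dotnet run          # starts Kestrel, listening on port 5000 by default

# From another terminal (or the Windows host), hit the service:
curl http://10.1.2.2:5000/
```

In production you would front Kestrel with Apache or Nginx as a reverse proxy, as noted above.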
If you're not familiar with Docker and how it works, think of an image as a real tiny VM. I don't know how else to explain it without going into a lot of detail, but it's an image that is small and compact, and it has everything it needs: the operating system, all the dependencies. Basically, it's ready to roll, just like if you had a server all configured and you pressed the button and it took off and ran. Which is interesting, because how many people now, in an enterprise environment, could go to a server, whether physical or virtual, and say, I'm going to replicate this exactly, 100 percent? That's a tough one. It shouldn't be, but it is; it's a fact, and we just need to understand that. So I've gone ahead and created it. If I do a list of images, you'll see it right up here at the top. There's the Docker image; it's 287 megabytes. Now, an image runs in and becomes a container; that's the docker run command. I'm running it detached. You can run it interactively, but we don't want to do that. It runs on port 5000, and I'm going to give it a name, because if you don't, it's kind of funny, Docker assigns a name, and it's one of these things where it's an adjective underscore person, like silly_einstein or something like that. So now it's running. You saw how quickly that started; just like that, you have this program running, and it's running in Docker. If I do a docker ps to show all the processes, I should see it at the top. There it is; it's been up for 14 seconds. Another neat thing: if I do docker logs and then dotnet-hello, and you can tab-complete, I can see the logs. When we ran at the command line, you know, we hit Control-C to shut down; now you just saw the logs that verify it's doing the same thing inside of Docker. So now I go over to my website, and when I refresh this, I should get the same thing, except for the host name. Let's see what happens. Refresh it, and there it is: 20b211.
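The Docker steps just walked through look roughly like this; the image and container name dotnet-hello follows the demo, and the exact Dockerfile contents aren't shown in the talk:

```shell
# Build an image from the Dockerfile in the current directory and tag it.
docker build -t dotnet-hello .
docker images                      # the new ~287 MB image sits at the top

# Run it detached, mapping port 5000, with an explicit name
# (otherwise Docker invents one like "silly_einstein").
docker run -d -p 5000:5000 --name dotnet-hello dotnet-hello

docker ps                          # shows the container up and running
docker logs dotnet-hello           # same output as running from the CLI

# Cleanup, as done a moment later in the demo:
docker stop dotnet-hello && docker rm dotnet-hello
```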
Keep that host name in mind, because if you go back here to the command line, when I started the container you saw the ID that came up; the host name is the first 12 characters of it. So what we've seen here is that the container is functioning exactly the same way as it did from the command line, the difference being the host name, which I want, right? It's part of my code to show you the name of the host. So now I have it running in Docker. Well, that's great, that's fine; it's running on port 5000. There are a couple of issues here, though. What if I run another application on the same port? You'd get a collision. How do I scale this? How do I deploy this and update it? And that's where OpenShift and Kubernetes, working together, really shine. So the first thing I want to do is stop this. I know the name, so I can just stop it, and I'm going to remove it, just so it's not clogging up, you know, taking space. So now I'm at a command line and I'm ready to go and show you the power of OpenShift. Here's the project I created from the command line, reflected in my dashboard. When I click on it, there's nothing there, which is understandable. Over here I have a PowerShell window that will watch the service. Let me just show you what it does: it basically does a curl command in PowerShell and then sleeps for a second. So let me fire that up, and I should see "service not available," because the service isn't there yet. Now, that's a bit of a problem, what you're seeing right there, because every time it polls, it's hitting a server. Right now it's not a problem because, you know, it's just one client. But what if I was trying to spin this up and there were tens of thousands, or hundreds of thousands, or millions of clients hitting it? Think what that would do to my server. We'll talk about how to get around that a little later, and it's a really neat feature.
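A shell equivalent of that PowerShell watcher might look like the sketch below; the URL is illustrative, and the "service not available" text stands in for whatever error PowerShell prints:

```shell
# Poll the route once a second and print either the response body
# or a "service not available" marker, like the PowerShell watcher.

poll() {
  # -s: quiet, -f: fail on HTTP errors; a short timeout keeps a dead
  # service from hanging the loop.
  curl -sf --max-time 2 "$1" || echo "service not available"
}

watch_service() {
  while true; do
    poll "$1"
    sleep 1
  done
}

# Usage: watch_service http://dotnet-hello.example.com/
```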
So we go back to the command line, and I have a bunch of scripts. The first thing I'm going to do is run create-green, and that's going to create a build and put it into OpenShift. What I'm doing here at this point, I'm going down here to show you, is what's called the blue-green deployment. Sorry about that. Basically, a blue-green deployment, and it's not A/B testing, says, and it's just two colors arbitrarily picked by someone: if I'm running on green, I want to switch immediately to blue, or if I'm running on blue, I want to switch immediately to green. That is to say, you might have version one running, and you want the option to immediately go to version two with no downtime. Then say something happens: you've tested, you've done everything right, but somehow something falls through the cracks and version two isn't working right. You can immediately go back to version one. That's the beauty of the blue-green deployment, and it's an all-or-nothing switch, which is important to know, because a little later we'll talk about a partial switch. This is the beauty of Kubernetes and OpenShift and containers all working together. So on my command line I'm going to go ahead and run this, and when I go over here you're going to start to see some goodness. If I can get this to stay in the background and get my screen right, let me go over here and show you what the script looks like. It's only four lines, as you can see; well, three that actually do something. It creates a build in OpenShift, that is, a build configuration, which basically says this is what we're going to do, and in our case it's a binary build. That's important to know, because you can also build from source; you could have a merged pull request on GitHub fire off a build, which is nice. Then we start the build, with an option called follow that just says show us what you're doing, and then we create a new app.
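The three working lines of that script might look something like this; the builder image stream name and publish path below are my assumptions, not values shown in the talk:

```shell
# create-green: a binary build -- we push locally published bits into
# OpenShift, rather than building from a Git source repo (also possible).
oc new-build dotnet --name=dotnet-hello --binary=true

# Start the build from the local publish directory and stream its output.
oc start-build dotnet-hello \
  --from-dir=bin/Debug/netcoreapp1.1/publish --follow

# Turn the built image into a running app (deployment config + service).
oc new-app dotnet-hello
```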
At that point we have the dotnet-hello app up and running, and here it is; we can see it, it's done, it's up and running. Now, there is an issue: we cannot get to it. So I have to create a route, a URI if you will. I'm going to, in the parlance of OpenShift, expose the service, and I'm going to assign a host name. You don't have to, but I'm going to, because I want to show you the flexibility; you don't have to go with the one it assigns. And I'm going to assign it to the service dotnet-hello. Before I mash Enter here, up in this corner, if you can see my cursor, it says "create route"; that was replaced that quickly, and if I click on that, boom. So now I have version one of my application running in OpenShift. Now I'm where I want to be. You see the name of the host is reflected, dotnet-hello-1-21f, and over here my PowerShell application, ah, it springs to life; it sees my application now running in OpenShift. One of the benefits you get from OpenShift and Kubernetes together is what's called service discovery: I don't need to know IP addresses. I don't have to write some kind of script to go find them, or assign them, or what have you. I can write my code to look for the URI. Remember the host name I assigned? That's in my PowerShell script, so when it becomes available, everything comes to life. It's discovered; I don't have to do all these crazy things to find it. That's not a small thing. It really is not. Now that that's running, I can go over to OpenShift and say, give me two of them, and if you're watching the background, see, boom, it's done. Now I have two, and you see the different names, the v47 and the 21f. So now we're talking about scaling. Now my traffic goes up; one-thousand-one, one-thousand-two, one-thousand-three, one-thousand-four, and in four seconds I have three of them running. And this is on a laptop; this isn't even a server.
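The route and scaling steps, plus the route switch used for blue-green a little later, come down to a handful of oc commands. A sketch, assuming illustrative host and service names (dotnet-blue is the second service created later in the demo):

```shell
# Expose the service with a chosen host name; OpenShift would otherwise
# generate one for you.
oc expose service dotnet-hello --hostname=dotnet-hello.example.com

# Scale up and down; pods appear or disappear in seconds.
oc scale dc/dotnet-hello --replicas=3
oc scale dc/dotnet-hello --replicas=1

# The blue-green switch is just repointing the route at a different
# service -- an all-or-nothing flip with no downtime.
oc patch route/dotnet-hello -p '{"spec":{"to":{"name":"dotnet-blue"}}}'
```

Because the route is the only thing that changes, each color keeps its own replica count, which is why the switch preserves scaling.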
So as you can see, now we start getting into, oh, this is great. You might wonder if you could do this with a legacy application. You can; we'll talk about that later, which is fantastic. So now we have three pods. I'll scale back to one just to save some CPU cycles. It's my green application; remember, I ran create-green. I'm going to change my code here just to further the impact: I'm going to go from green to blue. Now I've changed my source code, so I need to rebuild it. I have a publish script that takes the .NET code and publishes it into a directory that I can then build from and put into an image. Over here, I was in the wrong directory, but it doesn't matter; there, I have it built. So now, where I had create-green first, I now have create-blue, and that's the second version, if you will. When I hit this, it should take the bits I just compiled, which, remember, I changed to version two, and it should go over here and not replace the one I have, but create another instance. There you go: dotnet-blue running within OpenShift. And then I switch. Now, did you see in the background, PowerShell said "service not available" again? What is that all about? How do we get around that? We're going to see that too. All right, I have some caching problems going on here, but as you can see, it is the new version, because it says blue; the host name didn't update, though; apparently .NET builds don't always replace all the bits unless you clear out the cache first. That's okay. We have the blue one running and the green one, and if I switch back, I have a script, switch-to-green, and it immediately switches to green. Let me restart this, because, there we go, there's the green; sometimes PowerShell likes to get hung up on me. And if I switch to blue, I want you to watch the PowerShell screen when I hit Enter: boom, it's an immediate change. So that's how you would do blue-green: version one is working fine, I switch to version two, uh-oh, I have a problem, I need to go back, boom, I go back. And the scaling here: if you have three scaled up of each, it's going to switch from three to three. In other words, you're not going to lose your scaling, so to speak; all you're doing is changing, basically, the route to point from one app, one service, to another. So that's the blue-green deployment, which again is made possible by the use of Kubernetes and OpenShift. If you were just using Docker alone, I don't know how you would even do it without basically writing your own tooling, and if you're deploying apps the old way, I don't know how you would even come close to this, particularly the speed of it. So now let's talk about the canary deployment. This one's pretty neat, because it's the canary in the coal mine, if you've ever heard that phrase: you take a canary into a coal mine, and if there's any gas, carbon monoxide or natural gas, the canary warns you. Public service announcement: natural gas has no odor; that's why they add an odorant to it. So the canary deployment says: we're going to take the next version and deploy it to some of our users. We're pretty sure, I mean, we tested it, we know it works, but we all know everything always works, right? Somehow it doesn't. But let's say it does. Okay, we gave it to 25 percent of our users; let's go to 50 percent. You just kind of slowly roll it out to some, and if anything goes wrong, you can back off quickly, and when things go right, you can move forward quickly, at the same time never losing your original application. So again, just as blue-green was boom, boom, boom, real quick, with canary the speed at which you progress is your call, so to speak. If you want to run 25 percent of users for a week and then scale up to half, or all, whatever, it's up to you. But the beauty, again, is that the platform
supports it. So basically, you have development: here's my application moving through development, and everything works out there; that's great. Of course, you know how that is: it runs on my PC, it's ready. Well, it's not. So we go to QA; QA looks at it, runs the tests, everything's fine. And by the way, a lot of this can be automated by using Jenkins, which is a whole other subject. Then it goes into staging: we're ready to go, we're ready to go. Okay, let's roll it out into production. If you see here, I have my load balancer out to my users, and it's pointing to the blue one. So I put it out to production, to some users, and if that's successful, there we go; now we have it connected, so to speak, to the users, and some of them are hitting it. That's working well, so we're going to give it to some more users, and some more users. Hey, everything's working really well; or, no, no, we'd better back off; or, no, it's going well, okay, we go on, and the next thing you know, everyone has access to it. That's the diagram of the canary deployment. Let's go and actually do it, shall we? So over here in Windows I'm going to watch the canary, and over here I'm going to do my favorite part of the whole demo: destroy everything and start over. So: oc delete. I'm going to delete, and this is just to show you, if you're a developer and you're developing and you're using this on your PC, there come times where, you know, I just need to start over, and that's boom, done. You see, in the background, everything's just going to go away. That's another neat thing about using all this: you don't have to, oh my gosh, tear down all the stuff, or repave my machine, or anything silly; I just typed a command and wiped out all my stuff. Now, I still have my source code, I still have my binaries on my machine, don't get me wrong. But now we're ready to do it again, so I'll oc new-project, and I'll just call it mydotnet again to keep it simple. In the background, if I go here, there's my dotnet project, and there's nothing in it. So now I'm going to create a service; it's called dotnet-first, and I apologize, I should change the name to something like dotnet-canary just to make it simpler, more understandable. But as you see here, I have a canary-create-1, so I'll run that. At this point it's fair for you to say, well, yeah, Don, you have all this stuff scripted; that's all well and good, but when I'm developing stuff I don't have that luxury, and it seems unfair, you scripted everything, of course it works. But that's part of DevOps, that we do this. This is the shift we want to make, from just typing command after command after command to running a script. And think about it: once you have scripts written, you can start automating them, and things can be kicked off by checking in source code, or timers, or what have you. So it's not cheating; it's where we want to be. I know we're not all there, but this is where we want to be. So here's my application running as the canary. All right, it's fired up; there's the pod. I'm going to go with four of them, and I'm going to take my startup code; see, it didn't build; you see here the code is uncommented, as you saw last time when I did the build. So here's what I'm going to do: I'm going to remove the obj directory and the bin directory, and give you a little inside baseball here. Now a little dotnet restore to pull everything down, because I just blew it all away. If you're not into .NET, this is, npm has, what's the new one, yarn? This is basically your package manager; I'm pulling down and restoring everything I need. And now I can do a dotnet build. Here's the build, by the way; if you see this build, this is the newest version of
the .NET tooling, which uses the Microsoft Build engine; the previous one used a different build engine. That was the change from .NET tooling version 1.0 flat to version 1.1; again, the underlying source code is all the same, and, unfortunately, we went from JSON to XML, but oh well. So there's my output; I have this ready to go, and I can do canary-create-2, and what that's going to do is create what should be version two of this application and start it up. If you want to see what that is, in canary-create-2, you create a build, right, and you build it, but then you start, I don't want to say messing around, but you manipulate the application's metadata to your advantage: you put in labels that you assign and can identify, and then you can manipulate the route, and things like that. That's what's happening. Again, "service not available" in the background; what's that all about? We need to address that, and we'll come up with a great way to address it. So I scaled everything back; now it's scaling up, and you're going to see the background come to life. What we should have is four pods, if you will: three of the first one and one of the second, and once that goes to four, I'm going to scale it back to three immediately. Again, this is running on a laptop, not a server, but you still get an idea: it's pretty quick. So now I have three and one: I should see three of the dotnet-hello-first pods and one dotnet-hello-first-canary, which would be the canary, and in the background, if you watch, there you see it. So now I'm in a position, and you see version two and version one, where I can manipulate who gets it. It's important to note that this isn't going to control who gets it; this just controls the scaling. But think about it: if 25 percent of your customers are hitting your website and you want to give it to 50 percent, you're probably going to adjust your scaling accordingly, right? That just makes sense. So I will scale this one up, and I'm going to wait for it to come up, so I'll have a little overlap, which nobody's going to complain about, right? And then I'm going to scale that one down. Now I should have a 50-50 mix, and as you see, first and canary. So now I have 50-50, and it just goes on like that, ad infinitum, so to speak. If I scale this to four; and by the way, all the scaling you can do at the command line, or you can set thresholds and triggers to do it automatically, so you're not sitting here at a screen doing these things. And now the canary is flying through the coal mine, if you will; everything's great. So that's the canary deployment, made possible by OpenShift and Kubernetes and containers. This is something that would be really difficult otherwise. So now I'm going to talk about the circuit breaker pattern, because remember we saw that, well, the service is down, and what's that all about? This is one way to address that, and it's a best practice in distributed computing. The circuit breaker pattern: it's kind of like electricity, you know, the circuit is closed and everything works, and then you spill water on something, or you get water in your outside wiring, and it breaks the circuit. Here, when it's open, you're not even reaching the server; you can't reach it. And when it's closed, the server is working. Now, in reality, with the circuit closed, the server could be working, but let's say really slowly. So let's think about that scenario, where you're hitting a server that's down for some reason, or not down, but the application isn't functioning right and it's very slow; let's say it's overloaded. Well, if all your clients keep hitting it, that kind of defeats the purpose, right? You're
saying, this thing is overloaded, so let's just keep hammering it till we get what we want, and that's not really going to work. What you want to do is back off and say, hey, let's give it a rest. So here's a little state transition diagram of the circuit breaker pattern. It sounds counterintuitive to me, the closed and open, but closed is the good one, right, and open is the not-great one. When it's closed, you're reaching the server and everything is hunky-dory; everything's fine. But when it's closed and you have a failure, whether it can't reach the server or it times out, and that condition is defined by you in code, the circuit goes to open. Then it stays open for a certain amount of time or number of attempts; again, these are all configurable by you. As you see, when the wait time is reached, we go to half-open, which says, all right, we think we're ready to try now; we're going to make one attempt to reach the server. If we have success, we're like, okay, everything's fine, we're good. But if you don't have success, it fails, and you go back to the open state until, again, your wait time elapses. In this demo I think I have four attempts for the failure threshold and eight seconds for the wait time, but the numbers aren't important; the concept is. And everything in the orange box, you are not even attempting to reach the server. Not even attempting, which is great, because now the server gets a rest, pun intended. And what you can do on the client side is say, okay, if the circuit is open and I'm not reaching the server, I can have some kind of fallback position, a default value, a default action that I take. So it allows you to gracefully handle failures. The reason this is important is that you can do this today in your existing applications; you don't need to be using containers. If you have a website or a mobile app or whatever that reaches a server and a service, and this problem can present itself, you can implement this now; you don't have to wait. This is a best practice in distributed services. Think about it: if you have 200 microservices and they're all talking to each other, the last thing you want is 199 of them flying and the one holding them all up. That's what you want to avoid; that's what this is really all about. So I have a cool little application called howdy. If you've noticed, I have howdy and bonjour and aloha. What howdy does is run a little web service that just returns "howdy," and then I put some switches in it to slow it down and speed it back up, to mimic an overloaded server, and to show how the circuit breaker pattern works. So I'm going to publish howdy, which compiles it, and then I'll do a docker build and run it. While that's going, I'm going to go over here and show you I have a circuit-breaker console app. If I go into Program.cs, there's a lot there, and I'll scroll back through it; over here we're okay, we're still building. So this is the console app. It's not important that you see all the code and know everything it's doing; just notice that there's a timeout value of 200 milliseconds. Again, you would configure that; in fact, in twelve-factor apps that would probably be a setting that comes from an environment variable, which would be a best practice. You have four attempts before it breaks, and it stays open for eight seconds. And then here's the logging: if the circuit is breaking, if it's half-open, you can see some of that in the code. All these policies are set, and here's where it's calling the greeting service. Remember earlier when I mentioned discovery? This is a perfect example of where you would benefit from it. I have to hard-code this; granted, I could do it from an environment variable, but I'm using an IP address and a port, and you don't want to do that. You want to have
are I right here that's where again the open shift I'm not running this an open shift but if I was I could just put that here and I would never have to worry about it so now I go back here it's built docker run and I'm going to run it and it runs on 5000 as we saw and it's how do you know what I'm going to give it a name just because it makes life easier now when I mash this it should start running oh what I do oh my fingers so now it's running in docker and I should be able to go over here and that's this is vestigial from before and if I type greeting I should get howdy hey there you go so now I have it there's a really interesting dynamic that goes on here just out of programmer curiosity when I go here and start to type because chrome it does that forward fetching to improve performance so as I start to type it uses my history to know what I'm going to type and forward fetches in other words I have a switch here I can use called slowdown that slows down the service and watch what happens watch my screen when I type s oh you can't see it but it'll it slows it down immediately you can you'll see it when I go over here so there's that get all that that that we're done with that now we're going to go to the app you know circuit breaker app this is where you'll see that dot net run okay so the console app is going to watch howdy and I want port 5000 and report back okay so it timed out and it failed I don't know why that's failing the application is running I do know why it's failing what I did the mistake that I made and this is important is I'm running howdy in docker and I don't want to do that because that remember I had the IP address of 10.1.2.2 of my code that's not the IP address of my docker container is it no it's not so if a docker stop howdy so the point there is again without discovery this that's a perfect example like you can you can introduce problems and mistakes and they happen so I stopped howdy and if I do a dot net run from the command line now 
my other one will come to life. And I apologize, my PC is getting hammered; I should scale that back. So there it is running, and now over here you can see what's happening: it's getting the howdy request, and that's great. Now if I slow the service down in the background, you'll see what happens: it starts failing. First it times out, and then it goes into the circuit breaker, where it's not even hitting the server anymore. Now I'm going to purposely speed it back up, but I'm going to do it right after the circuit opens. Watch: you're going to see it go to half-open, there it is, and now normal. I want you to notice that it doesn't immediately restore. In the background it's waiting to reach the half-open state, where it checks; it fails, so it stays open. Now, 10.1.2.2:5000/normal... something is happening with my, there we are, there's the greeting. You had to have one demo goof along the way, because everything else went so nicely. Well, that's odd; I've never seen that. Here it is, let's see the log; everything's there. I can defeat the demo god by doing this: shut it down, remove the slowdown file (which is my little trick to get rid of slowdown), and then run it at normal speed. So finally the circuit breaker closes. This is actually a great demonstration of it, because this whole time the client has not been hitting the server at all, and when the server comes back up I should start seeing the howdys. There it is; that almost made a liar out of me, unless the slowdown file was still there. Anyway, that's the theory of the circuit breaker. I've never had it fail on me before; obviously I did something wrong, and I apologize for that. Oh, back to normal speed, there we go, and back to howdy. Go figure. The idea is that the circuit breaker keeps the client from hammering a failing server. Now, there was one other thing that I wanted
to show you that's not in the slides: take a regular application, not a microservice, just say a website, and run it in OpenShift or Docker. I'll go to my share; I have a .NET project here, and on the next one over there's Speaker, a little website that keeps track of events where you're going to speak. If I do a docker build, I should be able to build it. Yep, it's not published yet. The point I'm going to show you here is that you can take an existing website and just put it in Docker. Now, there are some things to consider. This one uses a SQLite database that lives in the app directory, which is a terrible practice, because if you delete the container, you lose the database. You would typically use a connection string to a database outside the container, or a persisted volume, to make sure you don't lose the data. So there's the docker build, and (this is going to sound funny) if I set this up right, when I run it, it won't work, and you're going to ask what I'm talking about. docker run, and I hoped I'd set this up right... and, well, it figures: it actually runs. The point I was trying to show is that if I tried to run it on port 5000 and something was already listening there, it would crash. This underscores the idea that Kubernetes and OpenShift work together to manage these things. When you run applications in Docker, you have to assign a port, port 80, whatever, and if you try to run two of them in Docker on the same port, it won't work: they collide and crash. But if you use Kubernetes, which OpenShift uses as its orchestration tool, it manages that for you, so you can have multiple applications on the same port. I was hoping to show it fail so I could underscore the point, but it succeeded. So this is just a basic MVC website I created; again, it writes to a SQLite database, it's just a throwaway. But the point of this is I
just took an application, maybe a website you already have, and threw it into Docker. I could throw it into OpenShift and be able to scale it. The point is, even if you're not doing microservices, you still have this capability available to you.

So that's everything I wanted to show you. Some resources: .NET and ASP.NET are on GitHub. This demo, all the code and all the instructions, is in the redhat-.net-msa repo on GitHub. Some websites: redhatloves.net is ours; dot.net is where you go for everything .NET; and live.asp.net hosts a weekly stand-up for .NET with Scott Hanselman, Jon Galloway, and Damian Edwards. It's absolutely a must-watch if you're going to do anything with .NET Core. The Polly project is the circuit breaker I showed you, the .NET version. It's a very mature, robust, ongoing project, and I really recommend it. I'm available on Twitter or by email, and I encourage you to get your own zero-cost Red Hat Enterprise Linux and/or Development Suite at this URL, and to grab the GitHub repo for this presentation. That's it for me. Do we have questions?

Yes, there are a couple. You covered a lot of stuff there, so I'm expecting you'll get a couple of emails, but so far there was one question: I think you're running OpenShift 3.3, or Origin 1.3, and someone was asking whether it has to be on 1.4. It doesn't; everything works just fine on Origin 1.3, or on 3.3, the OpenShift Container Platform version of it. There was one other question which I think Burr gave a good answer to, so what I might do is unmute Burr if he wants to add anything here at the end. I'll stop sharing my screen; there you go. And there was one question very early on. Yeah, hey Don, the question was related to what the ideal style of .NET application is to bring from the old world to this new world. What would you say some of the limitations are in .NET Core versus .NET
Framework, and what kinds of apps, good old .NET apps, are good to migrate over to this new world? That's a great question. If you have a RESTful service, that's the one to do right out of the gate; that's the best case. An MVC website you could also bring over. There's a lot involved in this: it's not a migration, it's a port (or the other way around, I can never remember which term is which), but it's not a lift-and-shift; there's some work involved. However, there are tools available to analyze your code and show you the scope of the work. Todd Mancini from Red Hat has written an excellent blog post about the effort of going from Framework to Core. And outside of Core, I know some people have had success using Mono, running it in OpenShift, to take a .NET Framework app and basically lift it up and drop it into Mono. There are some very minor changes necessary, some or all of which I think can be automated. So number one would be a RESTful service; number two would be something like an MVC website. If you have a SOAP web service, oh my gosh, then you're going to have to use Mono, because there's no Windows Communication Foundation in .NET Core. Yeah: no WCF, no Web Forms, no WinForms. It's a limited subset, but it's a good subset; there are a lot of possibilities today, and a lot of fun things you can do. We're looking forward to it. I think maybe the stuff you're mentioning about Todd's migration tooling might be something for a future briefing: demoing that and moving things across would be interesting, at least to me, and hopefully to a few others. Steve Speicher, one of our PMs, is on the call. Steve, if you want to unmute yourself, you can
add in here. There's lots of information out there. I'll look for Todd's blog post and include it when I post this video that Don has done for us today. The video gets posted on blog.openshift.com, probably Monday or Tuesday of next week, and it'll be on the YouTube channel, so you can watch it, slow down the pace, and all the links will be there. Yeah, I was just going to add (this is Steve) that this is a great demonstration. We try to keep up with all the .NET Core upgrades that come out as well, and Don showed a good way of doing this offline in the local Development Suite, or the Container Development Kit. We also provide a hosted developer-preview instance of OpenShift Online, which has .NET Core 1.1, and we'll continue to keep pace as .NET Core updates occur; they keep closing the gap with .NET Framework (I don't know what you'd call it, legacy, prior to .NET Core) by adding more features. We usually have almost zero wait time from the time a version is released to the time it's actually available from Red Hat on our platform, so it's a terrific way to experience it too. Yeah, next week, the seventh (what is that, Tuesday?), Microsoft is releasing Visual Studio 2017, and I believe the official 1.1 tooling comes out then; I may be mistaken. Related to that, I wrote a blog post, I think just last week, about the versions in .NET and some of the confusion around them. It's a short blog post, very simple, but it's one of those things that gets really confusing if you don't know it. I've even heard Scott Hanselman ask: how come if I type this I get this, and if I type dotnet --version I get that? So I talked to the folks at Microsoft and said, let me codify this and put it in writing so people understand. So if there is any confusion about versioning, go ahead and just find that blog post at
redhatloves.net, and you'll understand it then. That makes sense. Well, we can cross-post this in both places so that you can find it; maybe we'll put the video up there, and I'll find all those blog posts and add them with the video when it gets posted. And we'll have you back again soon; it sounds like there's more. I know we're going to ask Charlotte Elliott to come on and do some .NET and ASP.NET demos as well. She's a new developer evangelist with the OpenShift team and comes from the gaming industry, so I'm curious to see what her demos look like; they're probably a lot cooler than mine, and even your little Speaker thing, and probably a little bit more complex. This has been really good, and I'm really pleased that the demo gods were with us today, Don. If anyone else on the call is listening and there's any other aspect of .NET or running Microsoft stuff you'd like to see, just reach out and let me know. You can find me on Twitter at @openshiftcommons, or at @pythondj, which is my rant-and-rave channel. Also, if you're not on the OpenShift Commons mailing list yet, drop me a note at dmueller@redhat.com (that's D M U E L L E R at redhat.com) and you'll get added to our mailing list, so you'll get announcements of all of these briefings and upcoming events. A lot of us will also be in Berlin on March 28th, 29th, and 30th for KubeCon, and I'm hosting a one-day OpenShift gathering at the same location the day before KubeCon. If you'd like to join us for that, we'll have a meetup of all the .NET and Microsoft folks. You'll have your own lunch table to eat together, not because we're trying to segregate you; you should have seen Don, there was a little good-natured back-and-forth about how .NET Core is just like Node.js and Vert.x, and I refrained from making any hipster jokes. But yeah,
so we'll let you mix and mingle at the gathering. There'll be a gathering in Berlin in March, another one the day before Summit in Boston, and another one the day before KubeCon North America in Austin in December. So there are lots of opportunities, and there are SIG groups at OpenShift Commons, so please do take a peek at commons.openshift.org, join the SIG that you like, and I'll add you to any mailing lists that you want, because everybody loves to have all those mailing lists. Without any further ado: thank you again Don, Burr, Steve, and everyone who asked questions. It was a great demo, and I'm really pleased to have finally appeased the demo gods. So thanks, guys. Thank you.
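For readers who want to study the circuit breaker pattern from the demo offline, the closed/open/half-open cycle can be sketched in a few lines of shell. This is a toy stand-in, not the Polly library the demo actually uses; the thresholds and the true/false stand-in commands are invented for illustration:

```shell
#!/usr/bin/env bash
# Toy circuit breaker in the spirit of the demo's client.
# CLOSED = calls pass through; OPEN = calls skipped; HALF_OPEN = one trial call.

FAILURE_THRESHOLD=3   # consecutive failures before the circuit opens
COOLDOWN=2            # calls to skip while open before a half-open trial
state=CLOSED
failures=0
skipped=0

# attempt CMD... runs CMD through the breaker; in the demo, CMD would be
# something like a curl with a timeout against the howdy service.
attempt() {
  if [ "$state" = OPEN ]; then
    skipped=$((skipped + 1))
    if [ "$skipped" -lt "$COOLDOWN" ]; then
      echo "OPEN: skipping call, not hitting the server"
      return 1
    fi
    state=HALF_OPEN            # cooldown elapsed: allow one trial call
  fi
  if "$@"; then
    state=CLOSED failures=0 skipped=0
    echo "ok (state=$state)"
  else
    failures=$((failures + 1))
    if [ "$state" = HALF_OPEN ] || [ "$failures" -ge "$FAILURE_THRESHOLD" ]; then
      state=OPEN
      skipped=0
    fi
    echo "failed (state=$state)"
    return 1
  fi
}

# Simulate the slowed-down service with `false` and the healthy one with `true`:
attempt false || true   # failure 1 (circuit stays closed)
attempt false || true   # failure 2 (circuit stays closed)
attempt false || true   # failure 3: circuit opens
attempt true  || true   # skipped: breaker is open, server never touched
attempt true            # half-open trial succeeds, circuit closes again
```

A success during the half-open trial closes the circuit; a failure there reopens it immediately. That is exactly why the demo client kept skipping the server until a trial call finally came back with a howdy.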
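The port collision Don was hoping to demonstrate can be reproduced without Docker at all, since two processes cannot bind the same host address and port. In this sketch, a background Python one-liner stands in for the first container, and the port number is arbitrary:

```shell
PORT=49731

# The first "service" binds and holds the port in the background...
python3 -c "
import socket, time
s = socket.socket()
s.bind(('127.0.0.1', $PORT))
s.listen(1)
time.sleep(3)
" &
FIRST=$!
sleep 1

# ...so a second attempt to bind the same host port fails, just as a
# second 'docker run -p 5000:5000 ...' would collide with the first.
if python3 -c "import socket; socket.socket().bind(('127.0.0.1', $PORT))" 2>/dev/null; then
  echo "second bind succeeded (unexpected)"
else
  echo "second bind failed: address already in use"
fi

kill "$FIRST" 2>/dev/null || true
```

Kubernetes sidesteps this by giving every pod its own IP address, so two pods can both listen on port 5000 without colliding, which is the behavior OpenShift relies on.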
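On the versioning confusion Don mentioned, here is a hedged sketch (the exact output depends on which SDKs are installed, and the guard lets it run even on a machine without the .NET SDK): dotnet --version reports the SDK in use for the current directory, dotnet --info gives a fuller report, and a global.json can pin which SDK the dotnet command picks up:

```shell
# Guarded so the commands only run where the .NET CLI is present:
if command -v dotnet >/dev/null 2>&1; then
  dotnet --version   # the SDK (tooling) version in use for this directory
  dotnet --info      # fuller report: SDK version, runtime environment, OS
fi

# The SDK that `dotnet` picks up can be pinned per project tree with a
# global.json next to (or above) your project:
cat > global.json <<'EOF'
{
  "sdk": { "version": "1.0.0" }
}
EOF
```

Pinning via global.json is how you keep, say, version 1.0 tooling on one project while newer SDKs are installed side by side.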