All right, welcome to this other episode of the OpenShift Coffee Break. Good morning, everyone. I hope you have your good shot of coffee together with you. And today we will do another episode of the OpenShift Coffee Break. We may have the wrong title on Twitch and YouTube, but it's fine. Today we're gonna fix it. Today we're gonna talk about the cloud native inner and outer loop together with all our guests. So I leave the word to Jafar to present everyone. Yes, good morning, everyone. And thanks again for joining us for this OpenShift Coffee Break session. So first of all, let's have a sip. So my name is Jafar and I work in the OpenShift Tech Marketing team. So today we will be joined by our esteemed guests, Alex Groom, Carlos Vicens and Madou Coulibaly to talk about a lot of interesting features and patterns for developers to work on Kubernetes and OpenShift. So without further ado, I'll let you guys introduce yourselves. So let's start with Alex. Hi, my name is Alex Groom. I'm based in the UK, and actually most of the team that we've got here, we're all part of the OpenShift specialist solution architect team. We're all working at the sort of top of the OpenShift stack, where developers sit, where we're using OpenShift to build things, to create applications. I myself, I've been at Red Hat for about three years now, but I've been in the computer industry way, way longer than that. So here we are today, talking about developer stuff, particularly the inner and outer developer loop and tools, and how to start building your applications. All right, thanks. Carlos. Hi, basically I do exactly the same as Alex. The only, maybe the only difference is that I do this from Spain, Madrid. And well, I think that's all, basically. Okay, cool. And my fellow countryman Madou. Yes, hello everyone. So my name is Madou Coulibaly. I'm a developer specialist solution architect. So I think, yes, Alex, you did a great introduction of the team. So same team as Alex. 
So really happy to be here, and I'm doing that based in France. Okay, cool. So thanks. Thank you very much guys for joining us, and everyone who's watching. Natale, Natale is our famous host, so he doesn't need further introduction. So Alex, you spoke about inner and outer loops, correct? So I do a little bit of guitar and I play with loops, but I haven't heard of those inner or outer loops. Can any one of you guys tell us a bit more about what that means and what we will be talking about? Okay, I guess I can start with this. I mean, we talk about loops here really because developers, strangely enough, spend a lot of their time doing the same thing over and over again as they slowly build out an application. For example, in the inner loop, they'll write a bit of code. They'll test that bit of code locally, and when they're happy with it and they're satisfied that it's working functionally, they will maybe add a bit more and then go around that loop again, building stuff, debugging it, writing a bit more code. So they are continually working in a loop. Okay, the thing they're building is changing each time, but they're doing much the same task. And as they get to the end of that loop, they will actually commit those changes, the stuff that they've done, into source control. And that's typically how we think of the inner loop. It's that really tight circle of work the developer's doing. And if that loop isn't very tight and efficient, then the developer can spend an awful lot of time waiting around for their systems to compile, build, set up, install, whatever it happens to be, when it's actually more productive if they're thinking about problems, writing code, whatever it happens to be. So the longer a developer has to wait to test the thing that they've just thought of, the code they've just written, the less efficient they get. So it's really important for developers to have a nice efficient inner loop. 
So that's kind of the inner loop. Maybe one of my other colleagues will take on the outer loop perhaps. Yes, sure. Thanks, Alex. Yes, I can only say thank you, Alex. So this is what we call the inner loop. So basically everything happening before you commit your code into a Git repository, or a repository. So after that, we enter into the outer loop, and the outer loop is mainly defined by all the automation and all the process. So everything that you have around the CI/CD pipeline, so continuous integration, continuous delivery, in order to be able to build your final artifact, your final binary, that you are going to promote through the different environments. So development, the test environment, your QA, the staging environment, until the production environment. Basically, the main difference between the outer loop and the inner loop is that in the outer loop, the developer has, let's say, less control over this process, because it's automated, to be able to have a consistent way to deploy and run an application. And the inner loop is where, let's say, the developers can really do whatever they want and have all the control over the environment, what they're going to build and so on. Okay, thank you very much, Madou. So those concepts of inner and outer loops have of course existed for a while, and developers have been used to working with the runtimes directly running on their computers and such things. But what I believe you are going to show us today, which is a bit more innovative, is how we can leverage containers and Kubernetes and OpenShift in order to further improve that inner loop and outer loop, is that correct? So, transitioning from deploying traditional stacks into using a container platform as your development platform, and not just as a CaaS or a runtime platform. Exactly, I think Alex introduced this situation. So what you want as a developer is: change the code, see results, change the code and see results. 
And when you try to do this with Kubernetes, then you have a problem, because you need to wait until everything is deployed, right? So, I think the idea is just to find how to reduce the amount of time you need to see results. And there are several ways to do this. Okay, so I believe you have cooked for us some nice demos and presentations. So we like these to be very live and hands-on sessions or discussions. So who wants to go ahead and show us some nice stuff or demos about these? Maybe we start with the inner loop and then we go further. Right, whatever you guys wanted to present, go ahead and the floor is yours. Okay, then I think I'm gonna start. Let me show you my screen. You should be seeing all this nice stuff. If you see nothing, you just tell me, okay? We can see your screen, that's coming up. Here we go, perfect, perfect. Okay, so this is a Spring Boot application. I deployed this application before we started the session. It's a very simple application where you can just basically see rows coming from another database. So the database is a database that I'm sharing across the cluster. You need to think about this as an external Oracle database, if you want, okay? Then what I'm gonna do is use Telepresence. Telepresence is, well, I think I'm gonna explain that later. The magic, I want you to see the magic and then I can explain the trick. Don't tell us the tricks. We just want to see the magic. Exactly, just the magic for now. So let me do this. So what if I do something such as creating a VPN, something like a VPN from my laptop, and I deploy the code on my laptop. I use my laptop to run the Java code and swap this deployment with a proxy. Basically that's what I'm gonna do. When you do this, you need some requirements in the backend, because you need some permissions so that you can deploy a specific proxy. So it's not something that you can do if you're not cluster admin like me, I mean, more or less. 
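As a rough sketch, the deployment swap Carlos describes looks something like the following with the Telepresence CLI. The deployment name `fruit-service` and the port are hypothetical, and the exact flags vary between Telepresence versions, so treat this as an illustration of the workflow rather than the exact commands used in the demo:

```shell
# Connect the laptop to the cluster network (the "VPN" part):
# cluster services and pods become reachable by their DNS names.
telepresence connect

# Swap the running deployment for a proxy, forwarding its traffic
# to a process on the laptop (name and port are made up here).
telepresence intercept fruit-service --port 8080:http

# Now run the service locally; cluster traffic addressed to
# fruit-service arrives at localhost:8080.
mvn spring-boot:run
```

The key design point is that the cluster keeps routing traffic as usual; only the final hop is redirected to the developer's machine, which is why the rest of the microservices need no changes.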
Yeah, it's worth reminding people at this time, our definition of the line between inner and outer loop is that code commit. And as applications have gotten more complicated, we're discovering that actually OpenShift is becoming part of the inner loop. The original assumption might have been, oh yes, we touch OpenShift as part of the outer loop when we're getting into deployments, but applications have become more complicated and complex. And actually our application has to live in a world where some of the component parts may be running in OpenShift already, like a database, or you're part of a set of microservices. Half of the microservices already exist in OpenShift and the piece you're working on is actually on your workbench. And we end up having to integrate, or spend our time integrating, the code we're writing with OpenShift itself. And then later on, we will actually commit the code and that will close the inner loop, but we still need to be as efficient as possible. Exactly. So yeah, go ahead there, Carlos. No, no, I was just about to say that it's true. So if, for instance, you want to create just a very simple microservice, it's not connected to anything, you don't need this. But as Alex was suggesting, maybe there are external services that you cannot reach from your laptop, but those things are reachable from the cluster. So this is basically the situation where maybe Telepresence is something that could help you with the inner loop. So now what I have is, there's a proxy here, and now I can start doing things like this. So I have an Oracle database. What if I try to reach the database? So the database is reachable. So if I go to this nice SQL, nice, I mean, it's an old-fashioned application, but it's very useful when you try to do these things. So now I test, and you see that I can reach the database. So I can start talking to the database directly. And that's something that is not really deployed in Kubernetes, it's an external database. 
I'm exposing this external database with a service of type ExternalName, okay? So yeah, you are accessing the external database through a service which is exposed within Kubernetes networking, right? Exactly. I'm doing all this through this proxy. Okay. So now if I deploy the application locally, I mean, if I run it, not deploy it, if I run it, this is a very simple thing. The only thing you need to pay attention to is that I'm providing some environment variables, because I'm running the application directly on my laptop. So it's not a Kubernetes deployment, so there are no environment variables injected by default. So I'm setting all those environment variables before I run the Java code on my machine, and hopefully it will start up and we can run the application locally, talking to the database which is deployed in Kubernetes, or rather reachable through Kubernetes. So just bear with me, it's a bit slow now. It's about, okay, there we go. So now this Spring Boot run is connected to the database, hopefully. So before I try to do anything, I should check that localhost is working. It's working, and it's talking to exactly the same database. And I know this because if I run this URL, this is an external URL. This is the same external URL. And this is very handy because, I mean, localhost is very, very fun. It's normal. It's the normal stuff we do. So if I change something here, let's go to the index. Let's put something here like, hello, for instance. I save this and I go to localhost and you see that hello is here. So the change is immediate, right? Something that you can do very easily. So the inner loop is working, although the backend service, the database, is really far away. Yeah. So you don't have to rebuild an image or store that in a container registry. Exactly. Exactly. And in fact, if you change the code, so let's do this first. And I have this "from local". I'm gonna change it to "from Twitch". Okay. I save this and I go here. It's reloading, right? Can you increase the font a little bit, Carlos? 
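For context, the ExternalName service and the by-hand environment setup Carlos mentions can be sketched like this. The hostname, service name, and variable names are all hypothetical; which variables the app actually reads depends on how its Spring Boot configuration is wired:

```shell
# A Kubernetes Service of type ExternalName: in-cluster clients that
# look up "oracle-db" get a DNS alias to the external hostname
# (no proxying, just DNS).
oc create service externalname oracle-db \
  --external-name=oracle.example.com

# When the app runs locally instead of as a Deployment, the env vars
# Kubernetes would normally inject must be provided by hand.
export DB_HOST=oracle-db       # resolvable thanks to the Telepresence tunnel
export DB_USER=fruituser
export DB_PASSWORD=changeme
mvn spring-boot:run
```

This is why the local process behaves like an in-cluster pod: the same service name resolves, and the same configuration values are present, just supplied manually.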
Yeah, sure. It's more visible. My bad, my bad. It's readable now. Yes. Yeah, it's better. Thanks. Okay, perfect. So now if I do exactly the same, you see "from Twitch", but the services I'm talking to from my Java code are in Kubernetes, okay? Okay. If I go here and I use this URL, should it work? Because you see that the deployment has been swapped. So this deployment has been scaled down to zero. I'm using the proxy. So the Java code is on my machine. But this URL is, it's an ingress. It's a route in OpenShift, right? Should it work? Well, yeah, it works. So that's basically the magic. I think this covers basically the inner loop when you're using Kubernetes, okay? Maybe there are other ways to do this, but I think this is a pretty nice way to do it. So I think we can stop there, okay. So it's certainly a nice way to get to services that are too big to run locally. Imagine that database, you've got about three entries in that database. Imagine if you had 30,000 entries or maybe 3 million entries, then it's not something you would want to bring down to your laptop and run. You absolutely want to keep it remote and connect to it in an easy way. So we're ready. And so also guys, what you are saying is that with this type of setup, we are able to share environments. So we don't have to duplicate them for every new developer or every new project. So we have remote shared environments that benefit from all the power and the resources of the cluster, which is basically sort of unlimited compared to what you can have on your laptop. And you are also compliant with all the security constraints, if you have private networks and confidential data that you cannot access directly through a connection from your laptop. It's all going through the Kubernetes networking and OpenShift networking, thus putting you almost as close as possible to a target environment. 
So you don't have that notion of, oh, it used to work on my laptop, but it doesn't work now that I am in the integration environment or in the preprod environment or stuff like that. Is that correct? And that's pretty important, because given the complexity of deployment, we can now actually practice deployment while we're developing. And that's actually very important because it means the developer is much more aware of what the final state is going to look like. And the sooner you expose the developer to the constraints, or the benefits perhaps, of that final deployment, the better it is, because their code will inevitably be tested and will work better in that final scenario, because it's what they've been using up until that point. But the key thing here is, if you're going to get developers to use a deployment system, it has to be easy to use and fast to use. If it starts delaying them or slowing them down significantly in that inner loop that we want to be as efficient as possible, then they're going to reject it and they're just going to move back to doing everything locally on their own workbench, because it's the fastest way to write code. So it's always a balance with developers, whether they do stuff locally or whether they do stuff remotely. And OpenShift brings a number of decisions to be made, actually, because as we said, as your application gets more and more complicated, more and more microservices involved, it actually becomes harder and harder to run this thing locally, reliably, when actually the reliable way to run it is in an OpenShift cluster, because someone's already done the work to define how to deploy it and the relationships between those services. And you have to end up duplicating that if you're going to do all that locally, which is inherently going to introduce bugs, issues that you have to debug, distracting you away from writing your code. So getting people to use OpenShift as part of their inner loop can be very productive. 
Okay, thank you very much Carlos and everyone, for the very interesting ways of improving the developer experience and inner loop while still leveraging all the power of the remote OpenShift platform in your situation. Okay, thanks. So anything else you wanted to show us guys? So is this the only way to do the inner loop? Do you have other treats for us regarding this topic? Cause I've heard about several other ways we can do similar things. I've got a demo to show you. Let me share my screen and I'll show you. Today is like, you know, three live demos in a row, this is a super cool session. Really glad to have you all here. So let's go to this other fantastic demo. Thank you very much. Okay, so hopefully what you can see now is, it's basically an IDE. This happens to be CodeReady Workspaces. I've just happened to use that because it was easy. I can put everything in a browser for this particular demo today. But what I'm trying to demonstrate here is a developer's busy inner loop. They're writing code, they're testing code. They're building a very simple REST microservice. Alex, sorry to interrupt you. Can you explain what this is? Cause I see that you have almost a development environment that you are running in Chrome. Can you tell us a bit more about this, what this environment is? If you imagine it, behind this Chrome tab is our CodeReady Workspaces development environment. This is something that's actually running in OpenShift. We use it a lot for demos. It is essentially like a VS Code environment, not dissimilar to what Carlos was showing us just recently, except it runs in the browser. Your actual workspace is actually running in OpenShift. So your code and all your builds and all your storage is running in the OpenShift cluster. And all the browser is doing is projecting the front end. It's just showing you the graphics and the code that's going on. What this means is that all the code is stored remotely. 
There's nothing local on my laptop. So it's actually a very secure way of writing code, but that's not really the issue we're trying to talk about today. It's just another way of writing code and deploying it, et cetera. And even in CodeReady Workspaces, there's still the concept of local, as in running stuff within the development environment, and running stuff remotely, in the OpenShift environment. So yes, for the purpose of this demo, it's just a text editor or a Java development environment. And what we have today in this particular piece is a microservice that sits among many others. It's very simple. It's got a very simple REST API and it's got a very simple object which it's going to read out of the database. And due to a lot of clever Quarkus annotations, I really don't actually need to write a lot of code here, but I still need to test it locally. And being Quarkus, one of the features that Quarkus offers us for that developer loop is a very fast way to develop code. That sort of write code, make a change, test it, debug it cycle is very, very fast in Quarkus, because what we've tried to do is eliminate the long pauses when you had to get Maven to build a Java application, build a JAR file. That used to take minutes. Whereas with Quarkus, once the thing is up and running, we can actually make live code changes. And what I'm going to show you here is that working here locally. So if I refresh my browser, just wait for the app to start. So can you put the screen a little bit bigger, because it's overlapping with our video previews? Move that over there, just a little bit, yeah. Okay, so to test the microservice I've got a very simple front end here. So I can go in here and test it. And this makes one API call. It's looking at this item, returning this value 35. So that's all very simple. If I want to make a code change, I can show you how to do that quite easily. 
So if I go in and tweak the quantity class so it returns maybe a hundred times the same values, I make the code change, I can refresh the tester, and what it's doing in the background is reloading that code, recompiling it, rebuilding it in a very small number of seconds. And I can immediately test it. And strangely enough, I've hit the wrong one, which is set quantity. I need to do the get quantity one. That's why it hasn't changed. Let's try that again, do the same thing. So again, what we're seeing here is why we built this, because... so you made the wrong change, you want to test it again, it takes just a... Literally seconds. You can imagine if it takes five minutes to go off and do that Maven build again, I would be either drinking a lot of coffee or tearing out what little hair I've got left, but I can make a code change and within two or three seconds, I get the response back. So by changing the quantity to times a hundred, I'm now getting 3,500 back as a result. What I will do is I'll... Yeah, we can see it on the stream. Can you just put it a little bit on the left? So we can indeed... Or maybe this is a public URL, right? So, okay, cool, yeah. So now we've seen that you have 3,500. Yeah. Actually, we can share this, no? We can share it... Yes, I think this is a public route. But let's not bother with that, but maybe... It's not... This one isn't particularly interesting, but what I wanted to show you next is... This is all happening locally, which is all very well and good, but this is a microservice. It actually sits within a group of other microservices. And what I've done earlier, because it takes a moment or two, is I've gone over to OpenShift and I've actually deployed this complex series of microservices, where I've got a front-end, some orchestration, another Spring Boot app. This is my application that I've been working on, this one here, and it's using a database in my OpenShift environment. 
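The live reload Alex is relying on is Quarkus's dev mode. A minimal sketch of the workflow, where the endpoint path and item name are hypothetical stand-ins for the demo's REST API:

```shell
# Start Quarkus in dev mode: sources are watched, and on the next
# HTTP request any changed class is recompiled and hot-swapped,
# typically in a second or two -- no full Maven rebuild.
./mvnw quarkus:dev

# In another terminal: hit the endpoint, edit the Java source
# (e.g. multiply the returned quantity by 100), then hit it again --
# the new behavior shows up without restarting the JVM.
curl http://localhost:8080/quantity/apple
```

The point of the design is that the long pause of a classic Maven package cycle is paid once at startup; after that, only the changed classes are recompiled on demand.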
Now, what I want to do is test it and develop my code while it's live in my OpenShift cluster. So that's the next thing I want to show you. Now, what I have to do to... Yes, so you are making changes to a dev environment, correct? It's a dev environment, yes, but I want to make changes to the version that's running in the OpenShift cluster, because I want to test it in its real environment. In situ, it's using a database, it's working alongside other microservices. So I'd like to test it and make code changes while it's in its proper place, not just running on my workbench, which is what I was doing previously, which is perhaps a bit artificial. So I want to try and do this for real. So, you've seen the dynamic aspect that Quarkus has when it's running in the IDE, but we can split those two things apart and actually run a piece in OpenShift that will listen for changes, and we detect those changes in the IDE and send them across to the OpenShift cluster to update the running version in OpenShift. And to do that, I need to run a slightly different build locally. So if you just let me find that and let this start up, but we should be able to do the same thing. What my IDE is now going to do, once it's done this build, is connect up to the OpenShift cluster, and as I make code changes, they'll get sent across an HTTP link into that running application and update it, exactly as we were doing locally, except we're now doing it across a network link. Now, clearly this is not something you'd do in production, because there's a bit of a security aspect to this, but for a developer it's fantastic, because I can make code changes and within seconds have them updated in the cluster environment, which is something that was practically unheard of. If you have to go through the full aspect of building a container, or building a JAR file, then building a container and deploying it, that can actually take maybe two or three minutes. 
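What Alex describes maps onto Quarkus's remote development mode: the application is packaged as a "mutable" JAR, deployed once, and then a local process streams changed classes to it over HTTP. A rough sketch, where the route URL and password are of course made up:

```shell
# Build the app as a mutable JAR so the deployed instance can accept
# live updates, protected by a shared live-reload secret.
./mvnw package -Dquarkus.package.type=mutable-jar \
  -Dquarkus.live-reload.password=s3cret

# The container in the cluster runs with QUARKUS_LAUNCH_DEVMODE=true.
# Locally, connect to the deployed instance: changed classes are
# recompiled here and pushed across to the running pod over HTTP.
./mvnw quarkus:remote-dev \
  -Dquarkus.live-reload.url=http://my-app-myproject.apps.example.com \
  -Dquarkus.live-reload.password=s3cret
```

As Alex notes, this is a development-only setup: the live-reload endpoint is exactly the kind of thing you would never expose on a production route.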
And what I should be able to do is make changes in maybe two or three seconds here. So let's have a look, see what's going on. This is what my app looks like, but just in case there's any doubt, let's start from the beginning here, start from my OpenShift cluster and bring up my app. Have you noticed that all these quantities, that 35, or 3500 now, is actually being displayed in the web UI as a stock quantity as part of this web interface. So these 3500s, 1200s, these are all because of that 100 multiplier that I've added, which is clearly wrong, I don't want that anymore. So if I go back to my code, I can take that out. Now go back to my OpenShift application that's running there, refresh that. This takes a little bit longer, maybe five or six seconds to actually refresh, because it has a little bit more work to do. And here we see, let me make this a little bit bigger for you, these numbers are now 35, 12, 45, et cetera, which is the correct number. So this is very good. So what I've managed to do is make a code change. In fact, you can see here, it's a little bit small, but it says 4.793 seconds it's taken to hot replace the code change that I've made. And I can do it again. I mean, yeah, let's face it, this is not a... So everything, I mean, this inventory service is deployed on OpenShift as a container and you didn't even have to rebuild and redeploy the image. It's basically directly replacing the content that is running inside the development container within the OpenShift environment. Is that correct? That's absolutely it. As I make a code change and then refresh the browser, it detects that code change, repackages that code change, sends it across to OpenShift and puts it into that container where it's running. And it doesn't need to build a container or restart the container. So here we are, 35 again. This is like absolutely blurring the lines. We don't want to say inner loop means don't use OpenShift. 
We say the inner loop is everything you do before you commit your code to source control, Git probably. And what we're talking about here is making OpenShift part of that development environment, because it has an awful lot of benefits for doing so. This is a very interesting example of where the fear that having to containerize up and put my application in OpenShift is going to slow my development down... Absolutely not. You can carry on making code changes, updating your code almost as quickly as you would if you were doing it locally with Quarkus. And to get an update within four or five seconds, that's like Node.js, that's like JavaScript speed. And it's something almost unheard of in the Java environment, where it would normally be two or three minutes to build your JAR file, and then another couple of minutes to turn that into a container and deploy it. So maybe five minutes between each code change. We're talking here about five seconds between each code change. So that's kind of the difference we're talking about. And that's why we talk about the efficiency of the inner loop. Okay. Thank you very much, Alex. This is very interesting. Yeah, that's an awesome workflow for interacting with your services that are running on OpenShift, but still having that feeling that it's all running within your own environment and your own app. Cool, thanks a lot. To folks following, if you have any question, please send it in the chat. So we will try to answer your questions from the chat here on Twitch and YouTube or Facebook, everyone who is listening. And in the meanwhile, we will go to the other demo, right? Wow, such a morning. I haven't seen three live demos at 10 a.m., folks. I need another coffee, wow. All right, so who's next? Are we still in the inner loop or are we transitioning to the outer loop? So maybe before we transition to the outer loop, can you tell us a bit more about those underlying technologies or tools that you have been using? 
Alex, I saw that you've been doing a Maven compile with the remote browser. What was that using underneath? What was it, odo or something like that? The technologies that I've been using for this demo are CodeReady Workspaces as my development environment. I'm using the Quarkus framework. So that's Java. So I'm using OpenJDK. I'm using Maven as a build tool for that. And then I'm using OpenShift to host all those microservices. Now, once you get onto OpenShift, we're using things like databases and various different frameworks. Each microservice actually uses a different framework, but the one that I was doing the live coding on is Quarkus. On the other hand, there's no limit with OpenShift. One of my components was .NET, another was Spring Boot. The web UI is Node.js. So it just indicates that there are kind of no limitations in what you can do with OpenShift. But the actual live coding is a Quarkus feature. So that's a kind of special specialty. But that's really all I've been using today as part of my demo. Carlos, I think, actually is using some additional features. He's using something called Telepresence. So maybe he can just talk you through that and what he actually needs to get that running, because I think you need to install that. Is that true, Carlos? I'm sorry. Yeah, yeah, correct. Thanks, Alex. Sure. All right, thank you very much, guys. So how about Madou? What do you have for us today? So yes, today, so just to conclude the part about the inner loop. So what is really important around all the tools that we have seen today is really what you try to do. It's, of course, to accelerate the developer productivity, but not only. So basically, there is one question, and one main question that we need to ask is: why would it be good for you as a developer to be able to use, let's say, OpenShift or Kubernetes when you're developing your application? 
So the main aspect around that, and why we would like to do that, is really to enable you to develop better applications. So, yes, better applications, why that? Because as a developer, there are always things that I was fighting, let's say, in my past career. It's really the fact that sometimes when I pushed code into a Git repository, I created and generated some bugs. Linked not to the code, not to the business logic that I implemented, but because there was some infrastructure problem, because my application didn't get, for example, a specific dependency, because I just left one bad parameter inside the application, and so on. And that's really, really, let's say, frustrating as a developer. It's really the fact that I don't have the right environment to be sure that in my inner loop, when I have, let's say, all this flexibility, where the code is mine, I haven't published the code, I don't have, let's say, all the resources to be sure that the application that I have, I will be able to test it in the same environment that the application will run in in production. So that's why we do a lot of work here in order to be able to add OpenShift, as a developer, inside the development phase. Because I think it's not only for the developer, but also for, let's say, your project manager, because I don't know if you are familiar with the concept of the cost of a software bug. So there is a concept around the fact that, let's say that you have a bug in your application. If you find the bug, let's say, during the development phase, this bug is going to cost, let's say, 100 euros, right? You find the bug, you find it in your development, so it costs about 100 euros. After you push your code into a Git repository and you find this bug in your QA, for example, this bug is going to cost you 1,000 euros. So it costs 10 times more. 
Yes, exactly, and you can continue like that until production, because of course, if you have a bug in production, I mean, the impact will be much bigger, because it's in production, so your customers see the bug. So you are going to have very bad feedback, they are going to stop using the application, they are going to post everything on the social networks, and so on. So that's why we want to be able to remove this part, because when you have a bug around the fact that, I don't know, my application doesn't run inside the container, I mean, I think it's not an interesting bug, it's something that you can avoid. But really be able to give the ability for your developers to use the production environment, or, let's say, the same kind of environment as production, to be comfortable and to be sure that when they're going to push code into a Git repository, everything will be perfect and clean. So this reminds me of a term that I hear, which is shift left, is that correct? Is it that you are shifting left, and way ahead of time, your ability to find bugs and to correct them, instead of having that happen later on in the CI/CD process? 
Yeah, yeah, it's really about giving people that capability — to say I'm not going to test my code only when I go to production, or only when it goes through my CI/CD pipeline. I'm going to test my code from the very first moment, when I start implementing my business logic, and I will have all the resources to be sure my code works, technically, on the environment. And what you really want when you get code review feedback is for it to be about your business logic — because that's what you're trying to implement — instead of being about, I don't know, some cloud library you forgot to add to run your application. — That's cool. Another topic I can think of: you said you can iterate on your application because you have all your components available. Traditionally this has been addressed with things like stubs, because the developer needs an emulation of the environment on his laptop to be able to write his application. What you are saying here is that we are improving this way of working without needing those stubs, because I can now consume live services — even if it's a QA service, it's not going to be a stub, it's going to be a real container with my real APIs, and I can interact with it. Is that also something you are referring to? — Yes, exactly. — Okay, cool. — No, I was reading the chat: Sebi is saying that shift left is the new buzzword. Thanks, Sebi. Well, it's "new", but it's been around for many years. All right, so thank you very much, Madu.
Okay, so now let's transition from your inner loop to the outside world, and see how we can share that work with others and push it to QA or even to production. Is that something you have cooked for us? — Yes, I have something for that. First of all, let's define what the outer loop is, and let me share my screen here... tell me when you can see it. — We can see it. — Okay, perfect. So as we mentioned, everything we have seen today was around the inner loop: you code, you build and deploy your application, you test that it has the expected behavior, and you go around that loop until, at the end, you have created your application. After that — and let's put this in a container context — what happens is that you build the container image of your application once. That's really important. Then you deploy that exact same artifact into the different environments, all the way to production. You build your container, you deploy it into, for example, QA, and you run all the QA tests you need. If everything is okay, you move to the acceptance environment, deploy it there, run all the tests, and so on, until you ship your application into production. And I'm quite sure that, whether you're in a cloud context or outside it, this is exactly what you do in your day-to-day life: it's what is defined as your application lifecycle, right? What changes is that you are going to have many more containers — you are moving to smaller services, and those small services compose your application.
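The "build once, promote the same artifact" flow described above can be sketched with plain container tooling. This is a minimal sketch, not from the demo — the registry, image name, and tags are illustrative, and the commands need access to a real registry:

```shell
# Build and push the image exactly once.
podman build -t quay.io/example/backend:1.0.0 .
podman push quay.io/example/backend:1.0.0

# Promotion to QA and production is just re-tagging the SAME
# immutable artifact in the registry — never rebuilding it.
skopeo copy docker://quay.io/example/backend:1.0.0 \
            docker://quay.io/example/backend:qa
skopeo copy docker://quay.io/example/backend:1.0.0 \
            docker://quay.io/example/backend:prod
```

Because every environment runs the identical image, a bug found in QA cannot be explained away by a different build.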
So, to increase the quality of everything you want to deploy into production as you go through this journey, you need a consistent way to push through all of these steps. And that consistency is something you achieve with automation — a lot of automation. That's why we have continuous integration and continuous delivery pipelines. For people who are not familiar with the terms: continuous integration is basically the process of building and creating your release — here, at the end, a container image. And continuous delivery, or continuous deployment (there are different terminologies for this), is taking that release and deploying it into the different environments — again, automatically. This part is really important to make sure that what you push to production has a much higher quality. Now, moving back to OpenShift, let me show you here. I'm switching to my developer view, and what I have is one project in which I'm going to simulate a specific environment — not my dev environment like the one Carlos and Alex used, but my next environment, let's say the staging environment. One way to implement CI/CD pipelines is to use a technology like the well-known Jenkins, or others, which do the job and work fine.
But in this new world, where we want to give more power to developers — especially with all the agile methodologies around DevOps, or I should say DevSecOps — the idea is to distribute the work and the product ownership to the team itself. That means one team is responsible from the creation of the application all the way to operating that same application in production. To enable that, what I'm going to demo now is OpenShift Pipelines. OpenShift Pipelines is a technology based on the upstream project Tekton, which helps you define Kubernetes-native pipelines. That means you can give everyone the capability to create and use a pipeline: there is no server to maintain, you just define the different steps you want to have, and then you let OpenShift deal with running the pipeline. To give you a quick end-to-end example: inside this project, I'm going to take code that we pushed into a Git repository. From the Git repository, I'd like to deploy an application — I'll take the exact same Spring application that Alex used. — I shared the workshop link in the chat; if anyone wants to try it, just follow the workshop and create an account on the Developer Sandbox. I also put the Developer Sandbox link in the chat. It's a free account that gives you cloud access to OpenShift — so that's Kubernetes — and you can follow the same workshop Madu is doing if you'd like to.
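Before the demo continues, it may help to see what a Tekton task actually looks like on the wire. This is a minimal sketch of a single-step Task — the name, parameter, and script are illustrative, not the ones OpenShift generates:

```yaml
# A minimal Tekton Task: one parameter, one step, one container image.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-build
spec:
  params:
    - name: message
      type: string
      default: "building..."
  steps:
    - name: say
      # Each step runs in its own container image.
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        echo "$(params.message)"
```

A Pipeline is then just an ordered graph of such tasks, and a PipelineRun is one execution of it — which is why there is no pipeline server to maintain: the cluster schedules each step as a container.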
Yes, let me change this a bit — it's a demo and I'd like to change which application I'm going to use. I'm going to use this one: a backend application that we worked on together, especially with you, Natalie. It's a Quarkus application — we'll see, it's just a preview — so the end application is a Java application. Once again, everything, even building my container, will be automated; I'll let the platform run that. — Can you zoom in a little bit so we can see that? — Sure. Here I specify the name — let's say it's the Twitch app, and it will be the backend. And here I select "Pipelines", which is going to create a pipeline for me automatically, based on Tekton — Kubernetes-native pipelines. I'd like to expose my application and get access to it, and I push the Create button. When I do that, this part is familiar — it's the same kind of push into OpenShift you have seen in the inner loop. What changes is that now the build happens through my pipeline. If I have a look at the pipeline, it has different tasks. Without going into too much detail: one task is going to clone my code from my Git repository — that's what I have here. The next task takes that source code, creates my container image, and pushes that container image into the image registry. And at the end, if I go back to the details, it deploys that container image into my project. The cool thing about running builds through a pipeline this way is what we get from containers.
We can leverage all the benefits we have with containers: each step you can see running here is a specific container. So I could have one container focused on Maven, for example, where I increase the resources — because it's Maven, it will pull a lot of different libraries and needs a lot of resources — and after that, smaller containers with smaller resources just to push into my image registry, to deploy on OpenShift, or to run some script, whatever. So I get really good flexibility in how my pipelines run. What is also good about the pipelines, as I mentioned, is that there is no server to maintain, which means I only consume resources when they are needed. Once my run here finishes, the pipeline deploys the application — it's in progress... and now my pipeline is finished, so it's not consuming anything anymore. It really enables serverless behavior: consume resources only when needed. So my pipeline ran completely automatically. And now if I access my service... the URL is not working right now, so I probably missed something. Let me go to the logs of the application. — The health checks, maybe? — Yeah, the health checks... no, the application started fine. — I think you didn't wait long enough. — No, no, it should be okay, but I probably forgot something. — Did you miss an environment variable or something you need to inject in the backend? — No, no, no, it should be good — unless... let me just check the backend endpoint. The backend... no, same, okay. — Is it "red wine" instead? Red wine?
I don't know — it should be that one. It should be that one. Everything seems to be okay. Let me check very quickly in the terminal... like that. — Yeah. The beauty. — I bet it's the port — do you expose it? — The DNS? No, it's there. — It's always the DNS. — The address is working, but you need to look at which port you're exposing. I had this problem. — No — okay, I know, I know, I know. All good, everything is good. — How much wisdom in this session. — Check the service redirection, maybe. — No, no, I know, I know. All good, all good, yes — it's a network policy that I had enabled. — So it's for security reasons, right? — Let me fix that... okay, I can write it. — Too much security! This is a real DevSecOps situation. When developers have the power to control, let's say, the application layer and the networking, they behave like system administrators, right? We used to blame the network administrators, but now that we have the power, we do the same thing. — Yes — there, done. In my lab, I was trying to expose... — Madu, you need to tell everybody that this happened because you were trying other stuff. It's not something you hit if you just want to use pipelines, right? — Yeah, yeah, on the pipelines, yes. It's just because I had selected the wrong project. — But this was a chance to show how to debug a microservice published in Kubernetes: what to look at — the logs, the service, the network policies. It was also a way to understand what to check when we need to debug something. — Yeah. Something that would also be interesting, Madu: can you click on the pipeline? This is something that has been generated, okay? Can you make changes to it? — Yeah, basically, it's the pipeline generated by default.
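About the debugging detour above: a deny-by-default NetworkPolicy in the project will block the router from reaching the application pods, which is exactly the "URL not working" symptom. A hedged sketch of the usual allow rule — the policy-group label is the one OpenShift puts on its ingress namespace, but the policy name and scope here are illustrative:

```yaml
# Allow traffic from the OpenShift router (ingress) into every
# pod of this project, restoring access through the Route.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}   # empty selector = all pods in the namespace
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
  policyTypes:
    - Ingress
```

Checking `oc get networkpolicy` in the project is a quick way to confirm whether a policy like this is missing or too strict.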
And if I'd like to add more tasks to my pipeline, I can edit it. A pipeline is just a set of tasks, or actions, that we want to schedule in a certain order. For example — let me check if I have an interesting task here — let's say that after I build my container image, I'd like to copy that image from my current image registry to another one. For that there are specific tasks, for example Skopeo, where I can define the source URL of my image, where I'd like to push it, and so on. I can add more tasks, of course — before building, or in parallel, I could run some specific command. Here I have some cluster tasks that could be interesting for me: for example, one that creates a configuration, where I'd add more information. So I'm just composing with cluster tasks like this. And of course you can build a task exactly the way you want, which has a lot of benefits. — So you don't have to be a Tekton or YAML guru to be able to make changes to your pipeline. That's really nice. Can you show us where those tasks come from? These are maybe what we call Tekton tasks, is that correct? — Yes, what I just showed here is a Tekton task — let me come back here. It's a Tekton task with its different labels. A task is YAML — everything in OpenShift and Kubernetes is YAML. So I have my cluster tasks, and all the tasks I can use by default from the cluster are there.
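As an illustration of the image-copy step mentioned above, here is roughly what adding the Skopeo task to a pipeline looks like. This is a fragment of a Pipeline's `tasks:` list; the parameter names follow the common catalog `skopeo-copy` task, but the task name in your cluster, the `runAfter` reference, and both image URLs are illustrative:

```yaml
  # Extra pipeline task, run after the build: copy the freshly
  # built image from the internal registry to an external one.
  - name: copy-image
    taskRef:
      name: skopeo-copy
      kind: ClusterTask
    runAfter:
      - build   # hypothetical name of the generated build task
    params:
      - name: srcImageURL
        value: docker://image-registry.openshift-image-registry.svc:5000/demo/twitch-app-backend:latest
      - name: destImageURL
        value: docker://quay.io/example/twitch-app-backend:latest
```

Because each task is just a container run with parameters, wiring a new step in is editing YAML, not reconfiguring a CI server.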
If I'd like to create my own task, that's also possible — it's YAML, so there's a good starting point. And we have hub.tekton.dev, which we are also going to integrate as a catalog inside OpenShift: another spot where people share tasks that you can reuse, just like you have with Docker Hub or other public container registries. Let's say I use SonarQube, so I'd like to use the Sonar scanner task. Here the hub explains what the task does; I just take the task, copy it, and import the YAML file into my project. Now my task is created. And if I go back to my pipeline and edit it, then just before building my application, I add my SonarQube scanner task. I select it, and of course there are some parameters. What's also important here is that I can create, for example, one pipeline for all my Java applications, and then everyone can reuse that same pipeline, just changing the parameters. So I define the URL and the key, and of course I save it. The next time I run the pipeline, my task will be there. — Cool. So that means you are defining your pipeline in a Kubernetes-native way, and you are defining your build steps as container images that run those tasks. That's really awesome. Thank you very much. — Awesome, awesome. And yeah, this is a nice way of extending the capabilities with your own custom tasks, so thank you very much for showing that. — Yeah, no problem. — Cool, super cool. We have seen three live demos today on cloud-native inner and outer loops. I'd like to thank Alex, Carlos and Madu a lot for these three live demos.
It's just one hour, folks — it's not that easy. We shared the links to all the live demos in the chat if you want to try them out, and also the link to the Developer Sandbox if you'd like to try it on OpenShift online — cloud access for 30 days, renewable, for free. We also used the Developer Sandbox today. Before we go, let me remind you what's on the schedule. Next up is the Level Up Hour: Docker Compose with Podman version 3, at 9 a.m. ET — that one is interesting, Docker Compose with Podman. And the OpenShift Coffee Break will come back on Wednesday, April 14th, with another episode of the disconnected series: we'll talk about VMware disconnected IPI with Robert Bohne. Thank you everyone for attending, and thank you to our guests. Jafar, do you have any final words on these awesome live demos we saw today? — I'll just close by saying thank you very much — and I'm going to go and try everything you showed right away. So thank you, guys. — All right. Thank you everyone, have a good day, and see you in the next session, next Wednesday, April 14th. Bye bye. — Thank you. Bye bye.