Good morning, everyone. Thank you for joining us today for another episode of OpenShift Coffee Break. Today we will be joined by our usual suspects, Natale Vinto and Tero. And of course, we have our esteemed guest, Markus Eisele, who is our developer adoption lead, joining us today. So thank you, everyone, and welcome to the show.

All right, all right. Good morning, and welcome back to the OpenShift Coffee Break. Jafar, it was one month. One month that we missed the show, because there was Summit in the middle. Yeah, exactly. It was a while since our last show, but we're happy to be back on track with our esteemed viewers. And today we have a wonderful topic and a wonderful guest. We're going to talk about the developer experience on OpenShift 4.8. And today I have Markus Eisele here with me to present it. Markus, do you want to introduce yourself and what you do at Red Hat?

Yeah, absolutely. Good morning, everybody. So, Markus Eisele, based out of the northeast of Munich, so that's obviously Germany. I've been working for Red Hat now, again, for roughly one and a half years at least. I've been here before, always in the area of enterprise Java and developer relations. And today my job basically is to help our customers be productive on OpenShift with our middleware, creating solutions that basically make developers faster and help them deliver value faster. That sounds awkward, but it basically means we want to make developers better and faster on our stuff. So that's why I'm here, talking to a lot of customers and also working a lot with the community. Speaking, writing, and all these kinds of fancy things.

That sounds amazing. And I hope you all have your coffee shot for this coffee break. And Tero, now you are our permanent guest. So, Tero, do you want to talk a little bit about what you do now? Yeah, thank you. Yeah, I'm not a host anymore, I'm a guest. But I do DevOps. That's easy, you can all figure out what it means. But what it means is, I help developers.
Yeah, yeah, I do GitOps, but that's DevOps, yeah. What I work on is helping developers do their job better and also helping the SRE team. But let's say, after working for almost five years on the vendor side, and now watching the world from the other side of the table, the world is different. The goals are different. I would say that the full engineering power goes more into running and maintaining the applications. Getting started and implementing something new is actually not that important; it's how to run efficiently, how to monitor, how to support developers in doing updates and everything. I've been working there for one month now, and I have learned a lot of new stuff. So it's a different level. And it's really nice to hear all this, because now I'm not in the inner loop of the Red Hat product business units, so I don't know what is coming in OpenShift. So now it's actually nice to hear from Markus what there is, again, something new for developers. I still work with OpenShift, by the way.

Of course, we had to invite you. I mean, it's in your DNA forever now. Yeah, that's cool. And this is a good introduction to our topic today, since we mentioned DevOps, GitOps, developers. So, Markus, we have some nice announcements for OpenShift 4.8, what OpenShift brings to developers. I'm going to share my screen, because first I want to share the announcement blog, and I can put it in our chat, and it's going to go out on YouTube and Twitch to everyone. If you are following us on Twitch and YouTube, please feel free to use the chat to write any question. What I want to do now is share my screen. So let me check here. I'm going to share this screen; let me know if you can see it. I shared the link in the chat because I want to discuss the announcement of OpenShift 4.8. This is the announcement, the press release, for OpenShift 4.8.
So we have a couple of news items, and today we're going to focus on the developer experience above all. For instance, we have a nice feature for developers, and Markus, we can discuss it, as you are a Java champion as well. We have the capability for Java developers to just drag and drop their artifact, their JAR file, into the OpenShift web console developer perspective. So there's no need to write YAML files, no need to know Kubernetes under the hood: just produce the JAR, upload it to the platform, order a database if needed, and boom, your app is up and running in a cloud Kubernetes experience. So I think this is a very nice feature, Markus. What do you think?

Yeah, I'm witnessing a little bit of a shift in the industry, right? Just a couple of moons ago, we were all talking about container platforms and container orchestration. We've seen tools like Swarm, we've seen all kinds of various approaches. And what was missing all the time is something that actively starts to reduce complexity for developers. Because honestly, in one of my talks, I had a slide with a picture of me and a big quote: I'm not really interested in writing any YAML. Like, at all. I don't really want to know how that works, because all of this was working perfectly fine on my application server just a couple of years ago, right? And now I have to dissect everything, split it apart, deal with all kinds of infrastructure issues. I think the amount of actual coding left in my days has been drastically reduced by this whole idea of using the infrastructure as somewhat the new application container-ish thing. I'm trying not to say that Kubernetes is the new application server, but that's also been out there. So long story short, I think this year and maybe the next couple of years will have a common theme, and this will be to make all of these distributed, stateless infrastructures easier to handle for developers.
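For reference, the drag-and-drop flow described here has a rough CLI equivalent using an OpenShift binary build. This is only a sketch: the image stream name and tag below are assumptions and will differ per cluster.

```sh
# Create a binary build backed by a Java S2I builder image
# (image stream name/tag is an assumption; check `oc get is -n openshift`)
oc new-build --name=spring-petclinic \
  --image-stream=openshift/java:openjdk-11-ubi8 --binary

# Upload the locally built JAR; S2I wraps it into a container image
oc start-build spring-petclinic \
  --from-file=target/spring-petclinic.jar --follow

# Create a deployment and a route from the resulting image
oc new-app spring-petclinic
oc expose service/spring-petclinic
```

This is essentially what the console does behind the scenes when you drop the JAR: a binary source-to-image build followed by a Deployment.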
So you'll see a lot more developer productivity features. And honestly, this JAR drop is amazing. One of my personal wishes, that I've probably expressed a couple of times: what I really wanna do is drag and drop an EAR file, an Enterprise Java Archive, just drag it into the OpenShift console, and have it literally parse the XML descriptors, all magic, and spin up the necessary databases, because we can figure that out from the JDBC driver, right? So there's so much potential. I personally consider this a very important first step, and we're onto something. This is definitely going to give me some development time back, because I don't have to deal with everything around it that is not actively putting something into code and making me feel good. YAML doesn't. Just for the record.

That's a nice point of view. And of course, while we talk about this, the goal is to make it easier for users to quickly get up and running with their applications and have their development environment. But even as we are, I'd say, hiding the complexities from the developer, we still generate all the resources, all the YAML files, to keep it 100% Kubernetes compliant. So, just so there's no confusion here: we are providing an easier user experience, but we are still generating all the Kubernetes files, so if we wanted to do, for example, a GitOps approach or whatever, that would still work.

Yeah, and this is super important, right? Because ultimately we have developers, and we have use cases that require YAML tweaking. You'll always find this. And I mean, we just talked about how old we all are and how long we've been in the industry, a couple of minutes before the cameras went on. And ultimately, if I draw a little bit of a conclusion, just look at the progression enterprise Java as a specification made over the years, right?
So we started with something that was container-managed persistence, which was a shitload of XML, and we had to write all the mapping stuff ourselves. And if we look at something like Hibernate with Panache today, like in Quarkus, that is next-level stuff, right? Hiding all the complexity, intelligent defaults, literally just coding is the main concept, but you can also start to take it apart and look under the hood and tweak every single thing if necessary. Though thanks to intelligent defaults, this shouldn't be needed at all, right? So I love that approach. I think we're not really reinventing the wheel; we're applying something that has been proven to be super valuable to something that is pretty new. And here's something that I commonly do: I don't refer to OpenShift as a container platform at all anymore, because to me it's the Kubernetes development platform. It's actually built to support not only ops and not only dev teams. It has all the functionality that is needed to even involve the business teams in a different way of working, and Tero talked about DevOps, right? So this is even a different way of scaling projects. To be really performant on a really good platform, you also need to tweak your methodologies. And all of this needs to be one package and needs to be supported by a product. Technology itself won't solve your problems, we all know that, but we can invest a lot to make your life easier and actually support these new approaches. So thumbs up for this. And I can't wait for Natale to show us this magic drag-and-drop thingy, because you won't believe how easy it is.

Yeah, yeah. We have a little demo, but before that I would like to listen also to the DevOps opinion, Tero. What do you think about a developer who can just drag and drop a JAR into Kubernetes and have the application up and running? Do you like this approach, or do you prefer another, more formal approach?
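As a rough illustration of the Hibernate-with-Panache style mentioned here, a Quarkus entity can look like the sketch below. This assumes a standard Quarkus project with the hibernate-orm-panache extension; the entity and field names are made up for the example.

```java
import javax.persistence.Entity;
import io.quarkus.hibernate.orm.panache.PanacheEntity;

// No XML mappings, no DAO boilerplate: the entity itself carries
// active-record style persistence methods inherited from PanacheEntity.
@Entity
public class Owner extends PanacheEntity {
    public String name;

    // Custom finders compose on top of the built-in query helpers
    public static Owner findByName(String name) {
        return find("name", name).firstResult();
    }
}

// Elsewhere, inside a transactional method:
// Owner o = new Owner();
// o.name = "George";
// o.persist();              // insert
// Owner.listAll();          // select all
// Owner.findByName("George");
```

Compare this with the XML descriptors plus hand-written mapping code of container-managed persistence: the defaults do the wiring, and you only drop down to raw Hibernate when you actually need to.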
What do you think about it?

Yeah, I would have to say that I have mixed feelings. I'm so old that I remember when there were WebLogic servers with a web UI, and you dropped your JAR file in there and it deployed. Or you copied your file into a deployment directory and it deployed. Now the level just goes higher: there is a lot of YAML generation, and at the end of the day the same thing happens, the artifact goes to the correct directory. I would say that, in my opinion, the most important tool for developers is Git, and for DevOps, for everything. So it is awesome that there are different tools, because there are different types of developers: some want to write YAML files, not many, but some do. And if you have a lot of different ways to get started, like dropping the JAR file, doing source-to-image, or just running a container, it's easier to step into the Kubernetes-native world. And then you can just move to a production operator that basically deploys everything to production. But there needs to be tooling in place. Take the new GitHub Actions that Red Hat implemented, like building a container with Buildah and doing a deployment on OpenShift. So that if developers want, they can do everything with Git, just committing, and then there is this DevOps team that actually does the magic behind the scenes; they hide everything so that developers can be as effective as possible. But yeah, there will be new tools like Panache with MongoDB in Quarkus. If you compare that to old Hibernate, where you had to write XML files and mappings, it is like magic.

Yeah, and you actually mentioned a good point, because I also do believe that there are many different types of people, right? I don't remember with whom, but somebody asked me when I started working with containers, and that was even the first edition of OpenShift, right?
We had gears back in the days; it wasn't even meant to be on top of Kubernetes. So that was even prior to all of this. The platform-as-a-service idea kind of sparked the first ideas around different work habits. And honestly speaking, just look at the technology stack that we have today. We have the operating system, some kind of virtualization underneath, some container technology that is basically also just some chroot environment plus magic. We have our VMs, like the JVM in the Java space, or even GraalVM if you wanna run something else. You have your libraries and frameworks on top, your applications on top, something that wires all the individual bits and pieces together, your data stores. So if you are a student today, I don't envy you, because you really have to learn a lot. Folks like Tero and me, we basically grew up with all of this, right? We've been swimming in the ocean since it was a little lake, but it's really getting complicated. So having different approaches to reach a goal and be productive is super crucial. And I think what is also important is to have these different on-ramps into the technology, because you can't just get people started with the CNCF landscape, right? That is just way too much. So we need an intelligent definition of the base technologies, an intelligent understanding, and some practical ways in. And honestly, there are different requirements when it comes to development, right? There's this little inner development loop that you do on your laptop, literally offline in a plane; well, that was a couple of years ago. But anyway, when you're in a disconnected environment, you still wanna be productive and you still wanna do things. And you can't just have your company cluster available to deploy stuff all the time. Maybe you can. So yeah, different ways of doing things.
And I think it's super crucial to support this: I'm a beginner, I wanna learn, I want simple defaults; and I'm an expert, I can do literally everything myself and I want 150% control of what's going on.

Yeah, that's a good point. Jafar, go ahead, please.

Yeah, so, Tero, you mentioned something about making it easier for users, like if you wanted to do things like GitHub Actions. And that reminded me of a feature we didn't really plan to talk about, but yeah, let's jump ahead and say a few words about it. One of the features I'm excited about with OpenShift 4.8 is something that we call pipelines as code. If you are familiar with how GitHub Actions work and things like that, it's basically creating or triggering pipelines automatically based on definitions that you put in your repository. So we have added a new feature, like, yes. Sorry, sorry about that. One second. No problem, no problem. A reminder: phones off when you're live. Yeah, exactly. And especially when you're on a coffee break. You owe us a beer. Yeah, a coffee. So what happens now is that you define your pipeline within your repository, and once you trigger an event like a pull request, if you are doing feature-branch development or something like that, it's going to automatically trigger the pipeline that you have put in your repository code. So you don't have to figure out separately how to first build your application and then how to create a pipeline that will deploy it to the different environments and such. That's something that we will be covering in another episode at some point. It's still in the early phases now.
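With this pipelines-as-code approach, the pipeline definition lives under a `.tekton/` directory in the repository. A minimal sketch might look like the following; the task name and parameters are illustrative, and since the feature was in developer preview at the time, the exact annotation set may have changed.

```yaml
# .tekton/pull-request.yaml (illustrative sketch)
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: petclinic-pull-request
  annotations:
    # Run this pipeline when a pull request targets main
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineSpec:
    tasks:
      - name: build
        taskRef:
          name: maven          # assumes a 'maven' Task is installed
        params:
          - name: GOALS
            value: ["package"]
```

On a matching event, the controller picks this up and runs it on the cluster, reporting status back to the pull request, much like a GitHub Actions check.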
It's still in developer preview, but that's also one of the nice things that will make it easier for developers to benefit from OpenShift Pipelines and Tekton, and still have that DevOps or GitOps approach where the pipeline lives in the application repository and the developer can just push their code and everything gets triggered transparently for them.

That's very nice. We've seen a little demo of it from engineering. I think we can talk about it in our Tekton series, a very good topic: having the same GitLab or GitHub experience where you can commit code to start the pipeline, start the tests. Very powerful, like Travis in OpenShift, very cool. Yeah, lots of cool features. Now I wanna show you a little bit of the OpenShift 4.8 developer experience, because we were talking about it. So let me restart sharing my screen. This is, I hope you can see it, a brand-new OpenShift 4.8 cluster. If you are familiar with OpenShift, this is the developer perspective in the topology view. What changed between 4.7 and 4.8 is that when you try to add some workload here, you have some options: you can pick from the developer catalog as usual, you can go from a Git repository with source-to-image, a Dockerfile, even Devfile version 2, we will talk about that, and also container images and samples. And you can import from the local machine. This is what we were talking about with Markus before, right? From the inner loop, from my local machine, I can start coding, I can write my container, but I can also load my JAR file. And this is what we wanna show today. For that, we're going to deploy the Spring Boot Pet Clinic application. We're gonna start from this repository here, and I put it in the chat. So what we need is to clone the repository, build locally, and then we can also have the two-tier version.
Let's say we start coding locally in Visual Studio Code and produce the artifact, the JAR file, locally. So the application gets started on my laptop, and if everything is fine, I can just upload my JAR and let it go into OpenShift. Locally it's gonna use H2, right? The in-memory database. But this application also supports, with the JDBC driver, other databases like MySQL. And what I wanted to show is that I can code locally, use H2, and verify that my application is cool. And when it's up and running, you can see there is the H2 dialect; my application is up and running, listening on port 8080. So I can quickly test whether my application is really up and running locally.

Did we already mention that Spring Boot is insanely slow? Oh, this is a good point. We haven't yet. But yeah, think about other technologies that are now taking its place, like Quarkus. I've seen some numbers. Did I see 12 seconds of startup time in the console? Versus millisecond startup times. Super fast. Yeah. Spring Boot is using Tomcat as the embedded server, and Quarkus is using Vert.x under the hood, which is optimized, also in footprint, for starting the application. We will talk about that in a few seconds.

What I wanted to show is that, okay, my application is up and running. Now what I want to do is move into cloud Kubernetes; let's say this is close to production. What I need is a database, no? And I can order a database. This was also present in OpenShift 4.7. So I'm gonna order a persistent MySQL database. Here, we can just follow the instructions from this repository: the database is gonna be MySQL and the name is gonna be petclinic. So let me create my database in this brand-new OpenShift 4.8 dashboard. All right.
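Ordering the database, and later pointing the app at it, can also be scripted. This is a sketch only: the template name and parameters come from the standard OpenShift MySQL template, and the environment variable names follow the spring-petclinic conventions, so verify both against your cluster and the repo.

```sh
# Order a persistent MySQL database from the built-in template
# (template/parameter names are assumptions; check `oc get templates -n openshift`)
oc new-app --template=mysql-persistent \
  -p MYSQL_USER=petclinic -p MYSQL_PASSWORD=petclinic \
  -p MYSQL_DATABASE=petclinic

# Later, point the Spring Boot app at it by activating the MySQL profile
# (env var names follow the spring-petclinic repo; verify before use)
oc set env deployment/spring-petclinic \
  SPRING_PROFILES_ACTIVE=mysql \
  MYSQL_URL=jdbc:mysql://mysql:3306/petclinic
```

The JDBC URL is built from the service name, the port, and the database name, which is exactly what gets typed into the console form in the demo.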
So I'm gonna create that, and in the meanwhile I can upload my JAR file and point the JAR file to the database. So I'm gonna hit this Upload JAR file. The previous compilation produced a JAR file here, so I have my Spring Boot Pet Clinic and I can drag and drop it here. What I can do here is select the runtime icon, it's Java. To build the image, I can go with UBI 8, for instance, with OpenJDK 11. The name of the app is gonna be spring-petclinic. The resource is a Deployment. I can also decide right now to add some environment variables to connect to the database, or I can do it later on. Let's do it later on, so we can just demonstrate that the JAR upload is working with the drag and drop. So the JAR upload is going on; I uploaded my locally built Pet Clinic Spring Boot application into the web console. Now what is happening under the hood is that a build is in progress, starting from my artifact. This is what we call a binary build; it's source-to-image. I started from the artifact, and I'm using a base container image, which is OpenJDK 11, to create a container from the artifact. As a developer, I haven't had to write any YAML or have any Kubernetes knowledge. And you know what? I can also play with the topology view and just logically connect my Java application to the database, wait till this finishes, and when it's finished, I can really point the application to the database. How can I do that? If we follow the same repo I was sharing with you, we can just inject some environment variables and say: hey, activate the MySQL profile, go to this MySQL JDBC URL, which is the name of the service, the local port, and the name of the database that we just created. And now our application is up and running. At the moment it's not using MySQL yet. Let's do it: let's go into the environment and inject these environment variables. We could also do it in one shot before, right?
But I wanted to show that we can also do it afterwards. So MySQL, add more. Yeah.

So, Natale, while you are showing this feature where we can add environment variables: I believe there's also something that is possible through the web UI if we install the Service Binding Operator. I haven't checked yet if it's completely implemented, but when you draw the arrow from the Java application to the MySQL, if you had the appropriate annotations, then it would say: whenever I pull the arrow from the Java app to the MySQL, inject these environment variables into the Java application, and it will be connected to the MySQL server. So that's also one of the cool UI improvements that are happening in the OpenShift console.

That's an amazing point, Jafar. With the Service Binding Operator, we have the capability to auto-connect. I don't need to inject the environment variables and take care of that as a developer; I can just drag my line and it's gonna do that automatically once enabled. And we see here, you know, MySQL has been loaded and my Pet Clinic application is up and running here, live. Only live demos, like Tero and me, both of us, only live demos. But it is very cool what you mentioned, because to enable it, you can go into our marketplace, which is OperatorHub, right? And here you look for the Service Binding Operator. Let me see, I don't know if it's in tech preview or... Yeah, I don't think it's GA yet, but... Okay, it's not GA, but when you install this operator, that action of connecting things is gonna automatically connect a database to a workload, and you know what? You upload the JAR, you order the database, you connect automatically, boom. This is a very nice developer experience. I don't know what your thoughts are about it.

I love it. I mean, the next stage would probably be to have this for Kafka topics. Hey guys, even I can deploy Java apps to OpenShift. So... That is pretty easy, isn't it?
It looks very, very easy. Looks very, very easy. Yeah, we wanted to do this little demo just to show one of the features of the OpenShift 4.8 developer experience. But again, if you go here, this is the new add-to-topology view, right? You can start a pipeline, you can start from the samples, you can start from the JAR file, your artifact. You can start from any of these Helm charts, for instance; it's very popular to deploy software via Helm charts, for stateless, not complex, not stateful applications. For those, you have OperatorHub; you can use OperatorHub to deploy your software. There are quick starts, so you can explore the new features guided by a quick start; you are guided to deploy, let's say, a Quarkus application.

So, Markus, tell me why Quarkus is faster than Spring Boot. Now that you mentioned it. Yeah, because they are doing everything the other way around. So a JVM is an amazing thing, right? It basically loads all your classes that are packaged up in your JAR file, and it takes some time to build its own view of your application and the whole ecosystem and world in memory, right? It starts looking for unused code, it starts looking at the classpath and all the classes that actually need to be loaded. And at the very tail end, just before your application starts to serve any incoming requests, it assigns your application thread pools and all the resources available, right? And Quarkus basically takes this first part, everything except the thread pool assignment, and does that at build time. So when the JVM gets the JAR file literally injected, it just has nothing left to do: just start up and be ready. And if you really want to start playing around, go to quarkus.io.
There's actually a getting started guide, well, plenty of getting started guides, and you can witness Quarkus applications of pretty much the same size firing up in under one second locally on a normal machine. So that stuff's really neat. And you have the local dev mode, which is my personal favorite. Quarkus starts up so fast that it almost looks like a hot reload: you change something in your source code, it gets recompiled, and Quarkus just restarts, which is possible because it's so lightning fast. So yes, if you're a Java developer, if you're looking for something like serverless, microservices, or, as I call them, microliths, then Quarkus might actually be an option to look into. And if the JVM startup improvements aren't enough for you, there's also the option to build a GraalVM native image, using GraalVM's Substrate VM, so you can fire up your application natively, which is insanely fast. Did that answer your question?

I love Quarkus. I think what you just shared is amazing. But if I wanted to make a very down-to-earth analogy, please let me know if it's correct. Say you wanna play tennis, right? I love tennis, that's why I'm bringing it up here. In one case, you are going to the tennis club with your golf clubs, with your basketball, with your volleyball, with your swimming suit. And then you finally decide: oh, I'm just playing tennis, so I'm gonna take my racket, right? And in the second option, you just say: well, I'm just going to play tennis, I just take my racket and leave everything behind. Is that correct? Am I taking only what I need, and at build time everything else gets stripped off?

It's pretty spot on. The only slight difference is that you don't even actively have to decide that, right? You implicitly decide by using classes and pulling in your library dependencies. They are called extensions in Quarkus.
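To try the dev mode and native image build mentioned here, the usual commands look roughly like this. Maven wrapper assumed; the extension names and the native profile were standard for Quarkus around that time but may have changed since.

```sh
# Add extensions; the build-time wiring comes from extensions
./mvnw quarkus:add-extension -Dextensions="hibernate-orm-panache,jdbc-mysql"

# Live-coding dev mode: edit code, the next request triggers
# a recompile and a near-instant restart
./mvnw quarkus:dev

# Package as a GraalVM native image
# (requires GraalVM locally, or a container-based build)
./mvnw package -Pnative
```

The extensions double as the implicit "packing list" from the tennis analogy: whatever you pull in is analyzed at build time, and the rest never makes it into the application.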
So obviously there's a little bit more magic going on behind the scenes, but it's not dark magic, and it's pretty similar to what we've seen with OpenShift. You have intelligent defaults; you can still dive all the way down to the individual settings and screws and tighten them if you want to. But yes, just by using extensions, the capabilities that you import via packages and everything that's needed by your classes, that's kind of your packing list, but it's implicit. You don't have to make an active choice. But I love that comparison, it's really spot on.

And also remember this: even if containers are awesome and Quarkus is awesome, if you write bad code, it will still be bad code. It will not help. Of course, you have sensible defaults with Quarkus; compare that to a plain old HTTP request where you had to write your parameter parsing and everything. So you have defaults that do stuff, but you still need to write good code to have a good application. Yeah, you can also drive a Porsche on the Autobahn and just be a bad driver, right? It heavily depends on what you're doing with the tools you're provided. So I totally agree. You still have to have some abilities at some point.

But now that you mentioned it, sorry, now that you mentioned serverless for Quarkus: in the OpenShift 4.8 announcement there is an enhancement for serverless, and here we talk about OpenShift Serverless Functions. I think this is a tech preview, it's tech preview, but this is function-as-a-service on OpenShift, very close to pretty popular serverless offerings like Lambda, Azure Functions, or Cloud Run. We're gonna bring this hybrid, on OpenShift; you can run your functions serverless as well.

Yeah, I think that's kind of the logical next step, right? So if you have a really well-suited function that needs to be executed at various times, in various instances. And I've seen a couple of really good examples.
So one function that I love the most, because it kind of explains the essence of a function from a design perspective: there's a function out there as an example, it's not even Java, but it basically takes a video stream as input and adds a watermark to it on the fly. So you can process 20, 30, 40, whatever your pocket actually allows in terms of cloud credit consumption, functions at the same time, convert these videos and add a watermark. So if you have these kinds of scenarios, and I'm not explicitly picking on video editing or anything, just as an example. As the next logical step from microservices, which are bounded contexts with logical functionality around them, functions will definitely up the game, in particular when it comes to scaling to needs, right? So this is a super important feature, and I'm pretty sure we'll see a lot more of it being used in the coming years.

That reminds me of one comment: now that startup times are going down, let's say you don't have the overhead of running Quarkus in a serverless function to do something. Then you can actually start removing core components. Let's say the registration part of your web application can be a function, and it can scale, basically the sky's the limit, because you don't actually need to think about the cold boot time or the startup time. You just isolate stuff from the core.

This is something that is super important to Java developers, because they're kind of bound to the JVM, and the JVM itself can be a beast in terms of size. Not only disk, but also memory consumption, right? And there are a couple of non-cloud-friendly behaviors, just what I described with Quarkus: everything the JVM does before the application even starts up leads to a little bump in resource consumption when applications get started, right?
So with a native image as the compilation result coming out of Quarkus, you get rid of all of this. You have a super small footprint in memory and on disk; the actual images on disk are also comparably small, because they only contain what's needed. So this is the exact right approach for serverless on the JVM in Java. So yeah, give it a try, if I haven't said that already: code.quarkus.io.

And I want to show also that there is a nice project, which is the Quarkus for IoT project from our colleague Andrea. Very cool. He organized a hackathon, with Red Hat partners, in order to use Quarkus both on the embedded device, a Raspberry Pi, where Quarkus' minimal footprint is very good for such low-memory devices, and moreover on the server-side part with serverless. Lots of data comes from those devices, pollution data in this case, and it can come to an OpenShift server and, with serverless, really scale up on demand. I also want to share this here in the chat, so you can see this project; it's a very good example of how to use Quarkus technology both for the edge IoT use case and for serverless, and 4.8 is bringing functions. So it looks like a match made in heaven, I don't know, what do you think?

Yeah, and this project obviously is open source, so pull requests welcome. Just take it, play around with it, everything is on GitHub. And he also did something else, because this is the first example of Quarkus running on a Raspberry Pi as a native image. So take a deeper look at how he did that. I think it's not even officially documented at the moment, but even that is possible. So yes, plenty of good things in this project. Yeah, there's a GitHub organization grouping all the repositories. And you know what, you reminded me that we're gonna have the Quarkus for IoT hackathon winners on OpenShift TV; we're gonna announce them in July.
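For the Serverless Functions tech preview mentioned above, the developer flow is CLI-driven through the `kn func` plugin. A sketch, with the caveat that the runtime names and subcommands are assumptions from the tech-preview tooling and may have changed:

```sh
# Scaffold a function project; supported language packs included Quarkus
# (runtime name is an assumption for the tech-preview plugin)
kn func create pollution-fn --runtime quarkus

# Build the function into a container image and deploy it as a
# Knative Service that scales with demand and down to zero when idle
cd pollution-fn
kn func build
kn func deploy
```

The scale-to-zero behavior is what makes the fast-startup story above matter: with millisecond-class startup, a function can afford to be cold most of the time.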
So we're gonna have a round table with the winner. They can share their thoughts about what they did with the platform, how Quarkus was for both the client and the server side. So very, very cool. Wow, we have seen a lot of things, right? So, if we come back to the announcement: serverless, we have pipelines, pipeline as code mentioned by Jafar. And also we have OpenShift sandboxed containers, which is a technology preview that provides lightweight virtual machines. So we are also extending our support to virtual machines with KubeVirt, but also Kata Containers, which is a technology for lightweight virtual machines. But for our developer experience, I think there's a huge improvement. And we've seen a couple of things that developers can consume, like ordering software from the OperatorHub or the developer catalog with the right permissions. So I think we've seen lots of cool things. And this is the Quarkus app we were deploying before. So I think we have a nice roundup of options, but now I would like to talk about migration. Where does it come in? Okay, I have OpenShift 4.8, it's very cool, I'm a super new developer, very cool. But what about bringing workloads to OpenShift 4.8, migrating workloads? What about that? Because I don't think all developers are super cloud-native from day zero, no? The JAR file upload is very cool, helping a lot. But is there any process to help enable developers on OpenShift 4.8? Yeah, I think Tero mentioned this in the very beginning, right? So, I mean, the majority of workloads is not going to be greenfield microservices, cloud-native application development. We have large investments out there. Many companies have all their backend processes supported by sometimes really large enterprise Java applications. And they are built in a technical way, right? So back in the day, with three-tier systems, we separated presentation from the business logic.
We have a little bit of database access, or even an integration layer, or whatever it's been called back in the days. So to find a reason to literally throw this away and rebuild from scratch, there has to be a pretty good business case, right? So I think what is super important is to have an underlying infrastructure that supports all kinds of approaches. So we talked about Kata Containers, KubeVirt. So alongside teensy lightweight VMs, we also have full-blown VMs that we can deploy on OpenShift. So we can literally mix and match depending on what modernization path you choose for your application infrastructure. And I think, so I looked this up a little bit because I was interested. Everybody's talking about the six R's, right? So it's refactor, rehost, retain, a couple more. I don't even know them all by heart, but ultimately it's the collection of modernization approaches you can choose for your applications. And they are kind of standardized, and everybody's referring to these R's. So Gartner coined them. It's unbelievable, these guys are definitely involved everywhere. But ultimately it started as four R's, and they got developed over time. And around 2011 they got picked up by Amazon, and there was a big blog post that literally went viral, where they described what to do with certain applications, how to categorize them, and basically how to lay out a plan for modernization or migration onto new technologies. And I'm trying not to make this a pitch for Kubernetes, because I think that technology itself is never a good reason to modernize. So it should always be changing requirements. And this is the architect in me talking, because ultimately we build our systems conforming to functional and non-functional requirements. And it's simple best practices out of software architecture and design that help us select the right infrastructure, deployment, and ultimately scaling variants, so we can meet these needs.
So just because there's a new technology and we wanna stuff everything in a container, that's not a good reason. But yeah, ultimately the best platform that you can choose, even if you wanna standardize, is something that supports all these kinds of variants. And maybe you even find a platform that comes with supportive tooling. And we haven't talked about this, and I'm not even sure if that's on an upcoming show or if somebody talked about it here already: there is the Migration Toolkit for Applications that Red Hat offers as open source. I think the upstream project, the most important one at least, is called Windup. And this is a rules engine, so a bytecode parser that can look for incompatible Java classes. So you could do app server migrations, you could do Java migrations, you could check for non-compliant libraries and stuff like that. So there are all the little bits and pieces that you actually need to take a look at before you can make a choice for your right deployment target. But yeah, as we're talking about OpenShift Coffee Break today, having this variety of options for deployments on this Kubernetes development platform really helps with all kinds of modernization efforts. Yeah, that was actually a good point that you said, that you don't have to do microservices to be agile, because there's a lot of other stuff than just the code. And like you said, don't throw away the research and development that you have done for 20 years. But if you have a platform that can run VMs, Kata Containers, containers and JAR files, you can harmonize change management, networking and firewalling, version control, updates, everything. So you get a lot of stuff for free, instead of change tickets, firewalling, and different organizations managing different parts. So it's not just code, it's the whole end-to-end that you have. And if you can centralize and harmonize even some part of that, it will be beneficial.
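The Windup analysis Markus mentions is driven from a command line. A rough invocation might look like this; the input path is a placeholder, and exact flag spellings vary between Windup and MTA releases, so check `--help` on your version:

```shell
# analyze a legacy EAR and generate an HTML migration report
./mta-cli --input /path/to/legacy-app.ear \
          --output /tmp/migration-report \
          --source weblogic \
          --target eap7
```

The generated report lists the incompatible classes and libraries the rules engine found, with an effort estimate per finding, which is the input you'd want before picking one of the R's for that application.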
And the magic doesn't stop in your data center, right? I mean, ultimately, you also want to decide how costly the hardware is that you're running it on. Maybe there are governmental reasons why you need to keep a certain data set on-prem, literally physically locked up in your basement. But for other things, like Black Friday sales when you need to expand the shopping cart part of your application, maybe you just want to shovel that out to a public cloud provider for the two or three weeks a year when it's needed, right? So you ultimately need to manage that complexity of your applications across various clouds and on-prem. Having a solution that can provide all of that... I'm starting to sound like a marketing person, but I'm actually a developer, I can still code. So Natale, let's talk about our book at some point. I heard we're allowed to talk about it. We can spoil it a little. Are we? Are we? Yeah, yeah, we can. Okay, cool. Yeah, I think it's a good time, because we have 10 minutes left, and then we can mention our next appointment. And here we go. Wow. This is how it's going to look. And I talked about the six R's. So Natale and I teamed up because we really wanted to put a lot of what we know, and what we think are the right ways to modernize existing applications, into a book. So to give you a little bit more than just a 50-minute show to watch on YouTube. This is not yet available, not even in early access or whatever, but watch this space. And now that it's official and everybody knows it, Natale, we need to really finish it. There's no way back now. We need to. So does that mean that it has already been started? Because you're finishing? Yeah, I was going to say at least started, and that's okay guys, but that's really nice to have. So here's a tip: writing books is never easy, and everybody knows that.
And even we knew that in the beginning, but still, I really can't recommend writing a book during a pandemic. That is kind of awful. So yeah, but we're getting there. We don't really have a release date yet, but it will be this year. So maybe after summer, some cozy winter reading, that could definitely be true. Maybe it's available for my first international flight, whenever that will be. And if there is no COVID-19, you can go on a world tour. So you could be reading your book on the plane. That would be a first, I think I've never done that. I hope you read it before it's published. Yeah. Wait, is that mandatory? Usually, but there are technical writers who do it for you. In fact, big shout out to our colleagues that are helping us with the tech review, J, Sebi, and all the colleagues that inspired us with their awesome work. So Markus, we will do a really big thank-you page, but I would really like to thank everyone here at Red Hat who is helping us write this book. There's lots of Kubernetes inside the book, lots of modernization techniques. There's Spring Boot work. I cannot spoil everything, but lots of fresh new stuff for Java developers. Yeah. And I think what really makes it different is that it's not just a cookbook, right? So it is something that really takes you through your modernization adventure. So we're not just looking at code. We're also looking at all the non-functional requirements around what it really takes to replace maybe an application server with modern technologies. So we're kind of trying to transition your existing toolbox and knowledge into the modern world. I shouldn't have said modern, but into today's world from a technology perspective. But yeah, I can't wait for the first feedback to come in. So, enough. Yeah. Thanks for being on that journey with me, Natale. Definitely. It was an awesome experience. Amazing. And yeah.
Thank you. Cool. We have time to remind you of a next appointment that we have, not on OpenShift TV, but here in EMEA. And I'm going to share it in the chat. Let me start sharing. We have this, and Robert is also in the chat. We have the OpenShift Anwendertreffen, which is typically a German event, but it also has a room in English. Typically one room English, one room German. Robert, you can confirm in the chat. So if you go to this website, 11 AM CEST, we're going to talk more about OpenShift, with all the other news and discussion. So please, if you have time, go to the OpenShift Anwendertreffen, because it's a very, very nice event. I think the last one had quite a lot of people attending. So that's a cool one. And Markus, even you joined it, right? I don't know if you joined it or not. I had the pleasure to talk a couple of times at one of the events, even live and in person, before all of this came down on us. So it's a large community, one and a half thousand people actively working with OpenShift and its technologies. A lot of people come to these online meetings, and even the in-person meetings are really something you need to keep an eye out for. So if you're not already part of this amazing community, sign up. It's free. There's a Slack channel. There are a lot of people around who can help you with your questions. And there's literally somebody to answer everything from operations to, ultimately, development. Even I hang out there from time to time. So yes, join us at the OpenShift Anwendertreffen. And congratulations on your pronunciation, Natale. Your German lessons are paying off. I was impressed by the accuracy, even if I don't speak German. It seems, it seems. Yeah, whenever you need a challenge, this is something I can highly recommend. It seems very German. Cool, cool, cool. So folks, five minutes left.
I think, Jafar, we can wrap up and go into the reminder of the next appointments on OpenShift TV and our next show. Do you want to wrap up everything? Yeah, sure. So again, thank you very much, Markus and Tero, for being here as guests. And Natale, of course, you and I will be here, I hope, for every other OpenShift Coffee Break. But yeah, what we saw today was some great announcements about the developer experience improvements in 4.8: things like making the console even easier to use, with drag and drop for deploying JAR artifacts and creating all the resources, or even connecting applications between them by using the Service Binding Operator. This is still in tech preview, of course. We spoke about pipeline as code, which is making Tekton and OpenShift Pipelines even easier to use by putting the pipelines directly in the source repo. So I think all of that is going to improve the developer experience a little bit more. And again, as you said, Markus, this is just the beginning of making it even easier. So I really love the console. It has been going through a lot of changes since we switched to OpenShift 4. And yeah, I can't wait to see how many improvements we're going to put in, especially when we speak about functions, with things like eventing, being able to drag and drop to connect to Kafka topics or whatever, and being able to just graphically build your application while everything happens in the background. So yeah, that was very nice. So of course, don't forget, guys, to join the OpenShift.tv Twitch channel. We're going to put the link in the chat. And our next episode here will be, again, in two weeks. We will be speaking about Tekton and OpenShift Pipelines. So it's going to be our second episode on OpenShift Pipelines and Tekton. What we will be covering this time is what happens when you commit your code and the push event triggers a pipeline.
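In the upstream Pipelines as Code project, the "pipeline lives in the source repo" idea mentioned above takes the shape of a PipelineRun stored in a `.tekton/` folder, with annotations selecting which Git events trigger it. A minimal sketch (file name, branch, and the builder image are illustrative):

```yaml
# .tekton/push.yaml -- picked up by Pipelines as Code on push events
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: on-push
  annotations:
    # run this pipeline only for pushes to the main branch
    pipelinesascode.tekton.dev/on-event: "[push]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
spec:
  pipelineSpec:
    tasks:
      - name: build
        taskSpec:
          steps:
            - name: maven-build
              image: maven:3-openjdk-11     # placeholder builder image
              script: mvn -B package
```

Because the definition is versioned alongside the code, a change to the pipeline travels through the same pull request workflow as any other change, much like GitHub Actions workflows do.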
We will explain in detail what happens in the pipeline constructs, in the Tekton constructs, with things like trigger templates and all the little pieces that work together to automatically trigger a pipeline run. So again, we will have folks from Tekton engineering and OpenShift engineering to talk about it. We will show you some cool demos and go under the hood of what happens when we trigger those pipelines. And yeah, that's it. We will also have another session dedicated to that new feature of pipeline as code, to show you how you can use it in the same way as GitHub Actions. So I believe that is going to be episode three. But yeah, things are working. Nice, nice. We are bi-weekly, we are bi-weekly. So next episode, two weeks from now. Jafar, we could go weekly, we have lots of content. Yeah, yeah, yeah. EMEA time zone, yeah. I am actually aiming for that. So what we are thinking about doing now is keeping our regular OpenShift Coffee Break meetings bi-weekly, but every other week I will try to put in a Tekton episode in between. So that's my end goal. And then we would have a show every week, but one week it's going to be a main topic for OpenShift Coffee Break, and the next week it's going to be dedicated to Tekton. So it's going to be like the Tekton week. And next, you will be a Tekton influencer on social media, posting videos. Yeah, yeah, exactly. You will see my... Product placement. There's marketing at Red Hat, and everybody is promoting your stuff. Exactly. You will see my face everywhere. I can imagine the boat that the salary will buy you, yeah. All right. So folks, thank you. Thank you, Tero. Thank you, Markus. Thank you, everyone. And Tero, keep spreading the OpenShift aura outside. Yeah, I have been using this awesome service called ROSA. It is nice. That's the way to manage OpenShift. Thanks, gents. Pleasure being here. Bye-bye. OK, thank you very much. Everyone, I think that's a wrap.
And we'll see you on the next show. Thank you. Bye-bye. Bye.