Hello, and welcome again to yet another OpenShift Commons briefing. This time we're going to talk about building cloud-native applications with Spring Boot on OpenShift, and we have Thomas Qvarnström, one of our JBoss gurus, here to talk to us about that. I'm going to let him introduce himself and his topic. We'll have live Q&A at the end; if you have questions, ask them in the chat. The slides will be posted on the blog.openshift.org site in a couple of days; it's taking a little while to get them up there. So, Thomas, without any further ado, thank you for taking over this slot.

Thank you very much. As Diane said, my name is Thomas, and I work as a JBoss Technology Evangelist, and I'm going to talk today about Spring Boot and how Spring Boot can run best on OpenShift. But before that, we're going to dive into some other questions, starting with what is sometimes the question of the century: what is cloud-native? Well, cloud-native actually has many different definitions, and the definition will vary depending on who you ask. So before we dive into what I think cloud-native is, I want to consider a bit what's been happening in the IT industry leading up to this term. Traditionally, applications have been encouraged to include lots of functionality, including the user interface, various services, data access code, and more, in a single application, regardless of the technology stack used. I'm going to use "monolith" as the term to describe these applications in this session. Consider, for example, an e-commerce application. If built as a monolith, it generally includes all the functionality for handling the web user interface: product catalogs, shopping carts, product recommendations, product ratings and reviews, the payment system, and everything else that is needed to make a purchase on the e-commerce website. And it's all in one single application; that's a monolith.
And the majority of existing applications today are built as monoliths and deployed on traditional application servers. These application servers are generally designed not only to serve the application itself, but also to include many operational features to host and manage multiple monolithic applications at once: administer their life cycle (for example, deploy and undeploy them and manage the deployment history of applications), manage and sync configuration, et cetera. All of which resulted in these application servers being heavy and resource-intensive. However, software development, and how we architect and host applications, has evolved. If we start with the software development process, we've seen an evolution from waterfall-type development, where we spent months on planning and defining requirements, to agile development, to DevOps. At the same time, on the infrastructure side, we've seen an evolution from big, complex data centers to hosted solutions and then to hybrid cloud solutions. And finally, software architecture has also evolved to the latest and very popular architectural style called microservices. Looking at the software development process and the business drivers behind it, this evolution comes from a need for speed and agility. The same is true for the evolution in infrastructure, where moving from physical to virtual to cloud and containers has reduced the lead time from months to get a server down to seconds to get a container up and running. Additionally, microservices allow for faster time to market with small incremental changes. And to me, a cloud-native app is exactly that: an application built using the latest of these trends in process, architecture, and infrastructure, and having that support. There are others that define cloud-native applications differently, using, for example, 12-factor applications as a template.
However, I don't necessarily share the view that an application has to be stateless to be cloud-native. It just has to manage state slightly differently than monolithic applications do. But for now, let's focus a bit on the architecture and on adopting microservices. With the emergence of cloud, people wanted to take these applications to the cloud environment in order to take advantage of the on-demand compute capacity. However, application servers are usually not a good fit for such an environment and may become a blocker. Shifting to cloud, many of the administrative capabilities are provided by the cloud platforms and are not required from the application servers anymore. Also, from an architectural standpoint, in a microservices architecture you want to break out each of the services into its own deployment. Each of these microservices is managed and deployed independently, possibly by different teams taking responsibility for the lifecycle all the way from requirements to production. In order to break out these services, you need to define the correct boundaries, but you also need to think about a couple more things. So, going to microservices, a way to describe this a bit more easily: let's say you've refactored your application to a microservices architecture. You will need some type of runtime to run your services, regardless of whether that's a fat bootable JAR or some other kind of runtime. But in order to operate that service, you also need a set of microservices capabilities: knowing how to build it, how to manage it, how to monitor it, et cetera. We're going to talk more about microservices capabilities later, but first let's see what a microservices runtime may look like and discuss what alternatives successful organizations have been using in order to be productive with microservices.
So here are some of the most popular runtimes, and you're probably familiar with some of them. Node.js is known for its ecosystem and for being lightweight, and also for being based on JavaScript. So if you're a fan of JavaScript, you're probably looking into doing at least parts of your microservices with Node.js. An alternative might be Vert.x. Vert.x is a multi-language reactive framework that supports not only JavaScript but also Java and other languages, and it's known to be a very, very good reactive framework. But it requires a new mindset to develop reactive applications compared to building traditional types of applications. We also have WildFly Swarm, which is a bootable, just-enough Java EE runtime for your application. If you're already knowledgeable about Java EE and that's your preferred platform, then you might want to look into WildFly Swarm. And finally, Spring Boot, which is the focus of today's session. So let's focus more on Spring Boot. Spring is a very popular development framework that used to be deployed typically on an application server. However, with the introduction of cloud-native, a then very small project within the Spring community started to get traction, and that project was Spring Boot. Spring Boot, together with an opinionated approach on how to build microservices, made it very popular amongst the early adopters of microservices. Just like early versions of Spring Boot had an opinionated view of which components to use, Red Hat is looking into offering support for many of the community projects that Red Hat drives and participates in. Examples of those communities are Tomcat, Hibernate, Apache CXF, Red Hat Single Sign-On (also known as Keycloak), and finally a set of Kubernetes adaptations for the Spring Framework. Red Hat also plans to add more technologies like this as we move forward.
This is planned to be supported in a product called Red Hat OpenShift Application Runtimes, where we will support Spring together with the other runtimes that we discussed, like Node.js, Vert.x, and WildFly Swarm. And if you want to hear more about that, I suggest listening to John Clingan's talk, which provides much more detail about OpenShift Application Runtimes. So let's move over to a demo and see what Spring Boot looks like. If you've never seen Spring Boot, you might enjoy this; it's a very simple demo of how to get started with Spring. Spring has a website called start.spring.io, also known as the Spring Initializr, which allows you to bootstrap your applications. You basically just provide a group name; we're going to stick with the common example here. You can change the artifact; I'm going to call this greeting. And then we can add different dependencies, and these dependencies are what we call the opinionated type of dependencies. If you pick Web here, you get full-stack web development with Tomcat and Spring MVC. So let's do that, and let's generate the project. Now what I can do is unzip that downloaded archive, and voilà, I've got my Maven project. So let's go into the greeting project here and open it up in a code editor. I'm going to use Visual Studio Code, which I find a very nice, easy-to-get-started editor that makes quick editing easy. As you can see, we've got a standard project here. We already have an application class that will use Spring Boot to bootstrap and start our runtime, and we can now start adding more and more functionality. So I'm going to add a new class here, which we're going to call HelloController, in a file named HelloController.java.
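If you're curious what the Initializr actually generates for a project like this, the pom.xml for a Web project looks roughly like the fragment below. This is a sketch of the standard start.spring.io output, not copied from the demo; the parent version is a placeholder that depends on the Boot release selected in the form.

```xml
<!-- Fragment of a pom.xml generated by start.spring.io with "Web" selected. -->
<!-- The parent pins dependency versions for the chosen Spring Boot release. -->
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.5.x.RELEASE</version> <!-- placeholder; varies by release -->
</parent>
<dependencies>
  <dependency>
    <!-- Full-stack web development: embedded Tomcat plus Spring MVC -->
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
</dependencies>
```

The "opinionated" part is that picking one dependency in the form pulls in a whole curated, version-matched stack via the starter.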
And since you probably don't want to watch me type all the time, I'm going to copy and paste here, and fix one little thing. And here I've got my HelloController. My HelloController is kind of simplistic: it exposes a REST interface, and it will find out which hostname it's actually running on. Either it looks up an environment variable named HOSTNAME, or it's going to show you "unknown". And then, on a request to slash, which is the root context of our application, it's going to return "Greetings from Spring Boot from" and the hostname. Kind of simplistic. So let's try that out. All we should have to do is run mvn package here. Oh, it helps if I actually save the file as well, so I'm going to redo that: mvn package. What should have happened now is that we've got a JAR file called greeting-0.0.1-SNAPSHOT.jar. We can immediately test that by running java -jar target/greeting-0.0.1-SNAPSHOT.jar. So let's try that locally and see what happens. You can see my Spring application is starting, and if I go to the browser, I can go to localhost:8080, and it should return the greeting. Let me make that a bit bigger: "Greetings from Spring Boot from" my computer, which has this hostname. Kind of simplistic, and kind of nice, but that's how easy it is to get started with a Spring Boot project. So let's move on in the presentation for now. Spring Boot is one potential way to do your microservices runtime, and it's easy to use; there are a lot of good examples out there, and there's a lot of know-how, sharing, and knowledge around the community about this. But a runtime is not enough. We also need the microservices capabilities, like we talked about before. And to understand what the microservices capabilities are, we might have to go back in time.
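To make the controller's behavior concrete, here is a minimal sketch of just the logic described above, with the Spring annotations (`@RestController`, `@RequestMapping`) left out so it stands alone: look up the HOSTNAME environment variable, fall back to "unknown", and build the greeting returned on a request to /. The class and method names are mine, not from the demo.

```java
// Sketch of what HelloController computes; plain Java, no Spring annotations.
public class GreetingLogic {

    // The hostname as the container (or shell) exposes it, or "unknown".
    static String hostname() {
        String host = System.getenv("HOSTNAME");
        return (host == null || host.isEmpty()) ? "unknown" : host;
    }

    // The string the demo serves on a GET request to the root context.
    static String greeting() {
        return "Greetings from Spring Boot from " + hostname();
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

In the real controller this string is simply the return value of the request-mapped method; in a pod, HOSTNAME is set by Kubernetes to the pod name, which is why the demo can show which instance answered.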
Let's consider the Java microservices platform circa 2014. This was before we had the container ecosystem that we know today, and the common way to do isolation between processes was actually to use virtual servers. A good example of this is Netflix. When Netflix was building its original architecture, it was based on infrastructure as a service running on Amazon, and each service was deployed on different virtual servers, or nodes, in the Amazon cluster. That is good, but it's kind of low-level. We don't really have any benefit beyond the infrastructure as a service providing us compute, storage, and network. We don't really have any additional ways to do load balancing, failover, et cetera. So we need a bunch of other infrastructure services here. And that's how Netflix came up with a lot of really great projects to tackle those kinds of problems that we have with microservices built on infrastructure as a service. That includes, for example, Eureka, which is a service registry that clients do lookups against to find the service location; in a sense, it's service discovery. Then you have the configuration server to externalize configuration. You have Ribbon to do client-side load balancing. You have Hystrix as a circuit breaker, Zipkin to do distributed tracing, and Zuul as the smart proxy, all purely Java-based. These are all very, very nice, but it means that you have a lot of these infrastructure services that you also have to deploy alongside your application. Going back to the current state and the tools we have today, especially looking at something like OpenShift and deploying Spring Boot applications on OpenShift, we don't have to have all that.
That's because Red Hat OpenShift is a complete container application platform that natively integrates technologies like Docker and Kubernetes and provides a common platform for developers to build and manage containerized applications in a self-service fashion. It's built on top of Red Hat Enterprise Linux, the trusted enterprise operating system used by 90% of the Fortune 500 companies. And it provides tools and services for building container images from source code and application binaries and managing their lifecycle at production scale. Red Hat is a leader in both the Kubernetes and Docker communities, and a top contributor to both, to make sure these technologies fit smoothly with the needs of our customers building microservices on top of OpenShift. Specifically for microservices, OpenShift provides a large set of capabilities to take complexity out of building and running distributed systems at scale. It provides service discovery. It provides routing and load balancing, to be able to send traffic to microservices and control how traffic gets distributed between multiple instances, enabling patterns like A/B testing. You have metrics and monitoring to make sure you can identify anomalies, and configuration and secrets management to decouple environment-specific data from the microservices. Finally, we have centralized logging, service isolation, et cetera. OpenShift is also the only platform in the market that provides multi-tenancy on top of Kubernetes and allows teams and developers to collaborate on it. The nice thing about that is that there is less we need to do ourselves to provide that type of infrastructure. I'm going to come back to that, but first let's consider a demo where we deploy on top of OpenShift. So let's go back to the application that we developed. I'm going to break out of that running instance that we had.
Now if I reload, you can see that there is nothing on my host. What we're going to do now is use a plugin called Fabric8 that will help us, and add it to this project so that we can deploy it immediately on OpenShift. Before that, let's check that the current POM only has the Spring Boot Maven plugin. Then I run this command, which is the group ID io.fabric8, then fabric8-maven-plugin, the version number, and then a goal called setup. If I run that in the project, it will add the Fabric8 Maven plugin and a set of sensible defaults for us, so that we can quickly build and deploy this into OpenShift. I have an OpenShift instance running locally here, and in it I currently have a project that is empty; it doesn't have any builds, and there's nothing in this project currently. So, going to our project right now, we're going to issue a command to Fabric8, the plugin we just installed: fabric8:deploy. By issuing that command, we start a build of the project. Basically, it takes our existing bootable JAR application and sends it up to OpenShift. OpenShift takes that and runs it through the source-to-image process, where it creates a container image out of this JAR file. So we have a container image that is now capable of running, and it will automatically create a service for us, and a route as well. All of a sudden, we have our application deployed here, and we can now see "Greetings from Spring Boot from" and then the container name. The container name matches the pod name, which you can see here. That's quite easy. And that's really powerful, because if we want to scale this up, then because of the isolation, we don't have to worry about port conflicts and other things.
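For reference, the setup goal just shown edits the pom.xml; afterwards you should find the plugin registered roughly like the fragment below. This is a sketch; the version element is a placeholder for whatever release you passed on the command line.

```xml
<!-- Added under <build><plugins> by the fabric8:setup goal. -->
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.x.x</version> <!-- placeholder; the version setup was run with -->
</plugin>
```

With that in place, running the deploy goal is what kicks off the whole chain described above: package the bootable JAR, push it into an OpenShift binary source-to-image build, and generate the deployment, service, and route.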
All we have to do is scale this up and say that I want more instances of these. Because the load balancer in OpenShift by default uses stickiness, I will only see the same server here if I reload, but you'll have to trust me that we actually have failover. What I can do is open this in an incognito window, and now you can see that, compared to that one, we have another hostname here. I hope that's readable; I can make it a bit bigger. With that, I'm going to go back to the slides. I talked briefly about Spring Cloud Kubernetes. Spring Cloud Kubernetes is actually where Red Hat is contributing adaptations to the Spring community to make Spring and Spring Cloud work better on Kubernetes. Since OpenShift is based on Kubernetes, that's an effort to improve how Spring can run on top of OpenShift. Taking that a bit further: for example, service discovery. Service discovery is something that's automatic in OpenShift, but by providing a Spring DiscoveryClient, people don't have to change any code when going from another environment over to OpenShift. They can still use the lookup, even though the lookup will basically just query the Kubernetes services, and the actual round-robin load balancing actually happens in the OpenShift layer. It makes it seamless to move between OpenShift and other types of deployment infrastructure for Spring applications. Similarly, config maps hook into Spring's configuration properties and property sources, so you can use Spring properties and Spring configuration to read Kubernetes ConfigMap values. That could include environment variables, and it could include other configuration things like secrets that we need to have. There's Ribbon service discovery, there's Zipkin service discovery, and there's configuration management in there as well.
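As a sketch of how the ConfigMap integration is wired up, assuming the Spring Cloud Kubernetes config starter is on the classpath, a bootstrap configuration along these lines tells Spring which ConfigMap to read properties from and enables the Kubernetes-backed DiscoveryClient. The property keys are my assumption of the project's conventions; check the Spring Cloud Kubernetes documentation for your version.

```yaml
# bootstrap.yml -- assumed keys; verify against your Spring Cloud Kubernetes release
spring:
  application:
    name: greeting            # also the default ConfigMap name looked up
  cloud:
    kubernetes:
      config:
        name: greeting        # ConfigMap whose entries become Spring properties
      discovery:
        enabled: true         # back Spring's DiscoveryClient with Kubernetes services
```

The point of this layer is exactly what the talk describes: application code keeps using standard Spring property sources and DiscoveryClient lookups, while the values actually come from ConfigMaps and Kubernetes services underneath.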
So let's talk a bit about the evolution of microservices. I talked briefly about this: in 2014, the way to do microservices was basically, on top of an infrastructure-as-a-service provider, to run a set of infrastructure services that included distributed tracing, smart routing, API management, messaging, caching, SSO, configuration services, and a service registry. And then on top of that, we provide business logic that is actually tainted with some infrastructure logic as well. In the business logic, we still have some infrastructure logic: for example, it is common that client-side load balancing, service registration, the circuit breaker pattern, and distributed tracing are all part of the service that you are deploying on top of that infrastructure and that cloud provider. Currently, using something like OpenShift, a lot of the infrastructure services, or some of them, I should say, can be moved down into the cloud layer. Configuration with config maps, for example, and service load balancing and the service registry are already in there. It can also help automate the deployment of certain infrastructure services, but there's still a certain amount of infrastructure services we currently need. That still includes distributed tracing, smart routing, API management, et cetera. We can remove some of the problematic things about having infrastructure logic in our business logic, but we still potentially need circuit breaker logic, and we might still need distributed tracing logic, in the actual built application, in the JAR that we deploy. In the future, we envision that OpenShift together with Istio will remove the need for having that type of infrastructure logic in your business logic layer. Your business logic can focus on being exactly your business logic. It doesn't have to care about circuit breakers, distributed tracing, et cetera.
Instead, we can move that type of functionality into Istio. So that's where we envision things going forward, and this is regardless of which platform we're talking about. It should be the same for Spring and Spring Boot, the same for WildFly Swarm, the same for Vert.x and Node.js, et cetera. That's our vision going forward. And with that, I'm going to end for today, but we're going to have a bit of Q&A, I guess.

We'll see if we have Q&A. We have a number of people on, but I think you've done a pretty good job answering most of the questions that people have had already. And since I'm a Python person, I don't have a lot of Spring Boot Java questions up my sleeve. So we'll give everybody a few minutes to see if anyone has a question. And if not, where is the best place for people to reach out to you and get more information?

The best place to reach out to me is probably through Twitter. My handle is tqvarnst, which is an abbreviation of my first name and my last name; that's t-q-v-a-r-n-s-t. Or you can email me as well at tqvarnst at redhat.com.

Is there a place where some of this information is in the OpenShift docs already?

Most of the information here is actually going to be part of what we release as OpenShift Application Runtimes. And maybe I should mention that OpenShift Application Runtimes will reach the market as a beta around July 1. That's when we expect to publish much more on this on developers.redhat.com, but also on www.redhat.com, where we'll have documentation and other things as well.

So this is really fresh off the presses; that's why we're doing all these intro courses. And we'll have John Clingan back when things settle down to do the intro bit, so look for that video too. That should be coming up soon. Looking to see if anyone has any questions in the chat or if anyone has anything to add. If not, we thank you all for coming.
There's quite a lot of people here, considering the quick shift that we did at the beginning of this, so I really appreciate your patience with our process. We'll get this video and the slides up on the OpenShift blog this week. Or actually Monday morning, probably; it'll take me the weekend to process all this. So thanks again for your patience, and we will work our magic and get this and the slides out to you. And as soon as the beta is out, we'll just keep doing more Red Hat Application Runtimes talks until we've covered all the bases. So thanks again, Thomas, for stepping up, and we'll talk to you all soon.

Thank you very much. Thank you for having me.