Hello everybody, and welcome again to another OpenShift Commons briefing. This time we've got Red Hat's John Clingan to give us an overview and explain exactly what OpenShift Application Runtimes are. It's an acronym I love: RHOAR. But I think it needs a little bit of a deep dive and explanation, and I'm really happy we have John with us today to do that. The format is: ask questions in the chat if you have them, and we'll open it up for live Q&A at the end of the presentation. But we're going to let John rock and roll here and introduce himself and his topic. So thanks, John, take it away.

All right, thank you very much, thanks for the invitation, and thanks to everyone either attending or viewing after the fact. As mentioned, I'm a product manager at Red Hat, and I'm responsible for the OpenShift Application Runtimes product. A quick overview: this includes Node.js, WildFly Swarm, a Spring Boot certification, and Eclipse Vert.x, and I'll explain more a little later as we go through the presentation. I'm also an active member of the Eclipse MicroProfile project, which I'll briefly describe as well.

So we're going to talk about the monolith-to-microservices trend along with an overview of microservices, talk about the evolution of microservices, and look at how OpenShift itself and Kubernetes can offer quite a lot of value to microservice developers. Then we'll cover the actual product, OpenShift Application Runtimes, with a quick demo if we have enough time, and finish with a bit of a look forward for RHOAR as well. Red Hat OpenShift Application Runtimes gets shortened to the RHOAR acronym, so if you hear me say "RHOAR," that's what it means.

Okay, let's talk first about the monolith-to-microservices trend. What most organizations have in place are traditional Java EE application servers, and these run either Java EE applications or Spring applications. The interesting thing about Java EE application servers is that, combined with an operations group, they provide a lot of services on behalf of the developer. As a developer, I can focus on my business logic and not have to worry much about the supporting services provided by an app server platform like JBoss EAP: provisioning, high-availability clustering, session replication, functionality like that.

What's going on in the industry is that software development is changing. The industry as a whole is moving, well, we haven't been waterfall for a while, but from agile methodologies to DevOps. From an infrastructure perspective, we've gone from bare metal to virtual machines to cloud environments, whether that's public cloud, private cloud, or, what's probably most prevalent, hybrid cloud, where you have a combination of both. And from an architecture perspective, developers have gone from developing monoliths, which are very easy to develop, there's a huge benefit to developing monoliths, to today, where we have microservices. The challenge for developers is that all of this is happening simultaneously: potentially an architecture change, monolith to microservices, plus a cloud platform change, perhaps from virtual machines to Docker containers.
An underlying platform change, as well as changes around the application runtime: maybe as part of moving to microservices you're evaluating not just how Java EE behaves in the cloud, but whether you can get into reactive development, maybe look at Node.js. All of these things are happening simultaneously, and the developer has to keep up with the changes happening within the organization. The way most organizations are approaching this is through a smaller team, perhaps architects or application leads, collaborating to go off and evaluate cloud platforms like OpenShift, which I think is why a lot of you are here, along with application runtimes and so on. But at some point the rest of the organization has to be brought on board and brought up to speed quickly, and in some organizations I've spoken to, that's more than 10,000 developers. So the question becomes: how do I make those developers as productive as possible, as quickly as possible, as they move through this change?

From an infrastructure perspective, some of you may be familiar with the graphic on the left. It's one of the traditional OpenShift architecture graphics: you have an OpenShift cluster, in that cluster you have nodes, and those nodes run the containers, the containerized applications, that the developers in your organization build. What's interesting is that it provides a lot of functionality on behalf of the developer, and a lot of that functionality isn't necessarily being tapped into, which is what RHOAR directly addresses.

Switching gears a little (I'll come back to how developers can leverage the functionality available in OpenShift), there's microservices itself. There are a million different definitions of microservices, but generally speaking, instead of a single monolithic application that includes many pieces of functionality, you have a collection of small services that each individually own part of the business problem domain and collaborate to expose a service out to the end user. These are each independently deployable, which means they can version at different rates. I don't have to wait for another part of the application to finish before I can provision the whole monolith; now I can provision each service independently. There are other benefits I'll get into, but the main piece is that they are independently deployable services; it's no longer a single monolith. The monolith is typically broken down into individual services using domain-driven design, which fits the microservices model well: I take a business capability and make that business capability a service.

And I think the most interesting piece, relative to microservices and the folks most likely watching this, is that where you used to have one big monolith, you now have many microservices; you've exploded the number of deployable artifacts. Maybe you've gone from 20 within your organization to potentially 100 or 1,000, depending on how you decompose your monoliths.
And what really helps in that situation is having a fully automated software delivery stack, and that's where OpenShift comes in: it makes provisioning and managing those services a lot easier, and it provides a common infrastructure for developers to write to and for IT ops to manage. The other thing is that with a monolith you're typically developing in one technology, whether it's Java EE or Spring, maybe writing to a specific database, Oracle or MySQL, and it's always that database within the organization. What microservices enable is multiple application runtimes: each service in my environment can potentially run a different application runtime, as we refer to them. I mentioned some of them before, Node.js, Eclipse Vert.x, and so on. So it gives developers a lot of autonomy, and I suspect within your organizations you'll narrow it down to a subset of what's possible; it's not going to be a thousand different runtimes running in your environment, but a supportable subset. And that's what RHOAR is going to offer.

The good things about microservices: it changes the way you approach developing applications. Agile software development; domain-driven design, which I mentioned; a common packaging model with the container format, so it no longer matters what the runtime is, the packaging format is always a container. You provision the container, and within OpenShift you manage and operate those containers consistently across all the language runtimes. And full lifecycle automation, which I touched on. So there's a lot that's really good about microservices.

Part of the issue with microservices, though, is that it trades agility for operational complexity. With that agility I can deploy services independently, and no longer does one team have to wait for another team; each team can ship its services at whatever pace the business requires. The issue is that we've introduced a lot of complexity: there are more things to manage, there are simply many services, and some of the things that were provided by the Java application server aren't necessarily there in a microservices environment; I'll touch on some of those. And this complexity means it can be tougher to onboard the rest of the organization onto microservices and a cloud platform, which I touched on earlier.

The really ugly piece is that building large-scale distributed applications, distributed systems, is really, really hard. With a monolith you may find you've created some spaghetti code, but at least you don't have a network in between service calls. If you don't design your microservices and the boundaries between them properly, what you've done is introduce a network, which increases latency significantly. There are many more pieces that have to get involved, many microservices, and getting them all to interoperate raises questions like: how does one service locate another service?
How do I configure that service in a way that isn't tied specifically to that service? These are things that were inherent in monoliths with Java EE that we now have to go off and solve in the microservices world.

So, some microservices recommendations. The first thing is to think about which applications actually require microservices, because if you have monoliths in your organization and they're working well, maybe you don't need to convert them into microservices. Perhaps you can just re-host them in a container, which JBoss EAP does very well: you can re-host your monoliths in containers running on OpenShift and still benefit from the operational aspects of a cloud platform. But as soon as you begin decomposing some of your more complex applications, the first question is: can you decompose them in the context of the monolith? Martin Fowler has a great blog entry on "monolith first." Make sure you've defined your service boundaries properly within the context of the monolith before you separate them out into multiple services running in your environment. That's my recommendation if you want to decompose an existing application. If you want to start from a greenfield application, a brand new application, which is a strategy a lot of organizations are taking, start small and grow from there. Don't pick really complex services or applications first; choose the simpler ones, gain some experience, and make sure you have your domain model properly modeled before you go off and do it. And of course Red Hat can help with Red Hat Open Innovation Labs as well.

A little bit here on the evolution of microservices. Beginning around 2014, the idea of microservices became quite popular; its roots reach back before 2014, but I think organizations really started evaluating it seriously, beyond just the absolute leading-edge companies, right around 2014. The interesting thing is, when you're developing business logic and all you have underneath you is pure infrastructure as a service, say an AMI with just an operating system, or a container with just RHEL, what you're missing is a lot of those services that were available to you from a Java EE application server. And when I say Java EE, it could be .NET or some other complete platform that was really good at running monolithic applications. A lot of those services are no longer there, which means you have to go out and replace them with something. So what we did as an industry in the 2014 era was define or create supporting services that ran on top of that infrastructure as a service. One of these managed services might be a service registry, so that one service can register with the registry and all the other services that need to use it can find it; you need to be able to register and discover services within your microservice environment. The idea of a service registry has been around in computing for a long time, but in a microservices architecture it's definitely a required component.
And things like a configuration server: how do I externalize my configuration? I no longer have state management with session replication like I did with Java EE, which means I now need a data store, some kind of caching system, to hold session data between requests. So these are things to think about. And as an industry we've even polluted some of our business logic with these infrastructure-type concerns. Think about what happens if I call a service and it takes too long, or isn't available: how do I deal with that failure situation? That's where things like circuit breakers and bulkheads come in, these programming patterns; many people think of Hystrix from the Netflix OSS stack as an example. So now I've baked into my application this notion that services come and go, or rather are available and unavailable, and there are many examples of that within a microservices architecture. On this slide, the red dots represent the supporting services I have to run, and the yellow dots are the infrastructure concerns that have been dealt with at the application layer.

So if we think about OpenShift not just as an operations platform but as a platform that can be leveraged by the developer, there are a lot of services available in OpenShift, and in Kubernetes by association, that a developer can use. We've got service discovery: perhaps we can just use DNS, because in Kubernetes, as I create a new service, that service gets a DNS name, and as I create many instances of that service, those IP addresses are automatically added to and removed from DNS, and I also get the benefit of load balancing. Both of those really help when replacing monolithic architectures with microservices running on top of a Kubernetes environment. Auto-scaling, scale up, scale down: Java EE application servers often offered that functionality, and now it's provided by OpenShift and Kubernetes. Rolling upgrades become much more straightforward operationally across runtimes, not just Java EE; any runtime you choose to develop with can now benefit from these features in OpenShift and Kubernetes.

Now, getting to some other interesting things that can actually affect an application and how I write it: externalized configuration. Instead of having a configuration service that stores configuration state, maybe a database or some other service, Eureka for example, that can hold properties, maybe I can externalize that configuration differently. The interesting thing is that Kubernetes has built into it the idea of ConfigMaps, where I can store configuration inside Kubernetes itself. So why not just use the features available in Kubernetes instead of relying on another service that I have to go and manage myself? If I have to create an instance of something, then I have to manage that thing. The same goes for a credential store: instead of running a vault as a separate process to store secrets, maybe I can just use the Secrets capability built into Kubernetes as well.
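To make that concrete, here is a minimal sketch of what a service looks like when the platform supplies configuration and discovery. All of the names are hypothetical (the GREETING_PREFIX variable, the catalog-service endpoint): the setting arrives as an environment variable injected from a ConfigMap or Secret in the deployment, and the downstream call simply uses the Kubernetes Service's DNS name, with the cluster handling load balancing.

```java
// Sketch (hypothetical names): configuration comes from the environment,
// discovery comes from Kubernetes DNS. Requires Java 11+ for java.net.http.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CatalogClient {
    public static void main(String[] args) throws Exception {
        // GREETING_PREFIX would be injected from a ConfigMap (or a Secret)
        // in the deployment config, instead of living in application code.
        String prefix = System.getenv().getOrDefault("GREETING_PREFIX", "Hello");

        // "catalog-service" is the Kubernetes Service name; DNS resolves it to
        // a cluster IP that load-balances across all running pods.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://catalog-service:8080/api/items"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(prefix + ", catalog returned: " + response.body());
    }
}
```

The point of the sketch is that nothing in the code knows about a config server or a service registry; those concerns live in the platform and the deployment, not in the business logic.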
And going a little further: instead of using connection strings to connect to a database and storing those in a configuration service, maybe we can use the service broker, which I believe is tech preview in OpenShift 3.6 as of this recording and, if I recall correctly, comes out of tech preview in 3.7. The idea of service binding is that I can bind to a broker, and that broker provides all the credentials required to connect to a database, and can maybe even instantiate the database if one isn't already running. So there are lots of interesting things showing up in Kubernetes that are directly impactful, in a positive way, for developers. These are just some examples, and there are more.

So if we think about developing applications now, what I've done on this slide is add the container platform services provided by OpenShift. Maybe now I can push some of these capabilities out of my business logic and into the underlying infrastructure. One example could be ConfigMaps: storing a service's configuration inside Kubernetes itself. What used to be handled in my business logic maybe now becomes a supporting, higher-value service running in my environment. So I'm pushing these concerns out of my actual business logic and into the stack below it.

The question then becomes: how can the application runtimes actually take advantage of all this functionality that's been pushed down into the underlying container platform? And that's where OpenShift Application Runtimes comes in. You can see here what some of these are; I mentioned Vert.x, WildFly Swarm, Spring Boot in terms of certification, and Node.js. We're also planning to include JBoss EAP 7 in this product; it's not released yet, it's actually in early beta, and I'll discuss that shortly. With JBoss EAP included in the SKU, I can begin decomposing my applications in the context of the monolith. Remember what I mentioned: don't go straight from what could be a spaghetti-code monolith into microservices; first solve the problem in the context of the monolith, and having JBoss EAP in the SKU lets you do that. There's even this concept called the majestic monolith: there are organizations out there able to deliver weekly releases of their applications running as a monolith on top of an application server, and there are customers doing that. If weekly releases of your service are frequent enough for your business, it may actually be simpler to leave it in the context of an application server. But if you decide to move to microservices, first try to solve the problem in the context of the application server and then decompose it out into microservices.

All right, so, simplifying deployment on OpenShift. What RHOAR offers is basically the application runtimes and support for those runtimes: support for EAP, support for Vert.x, support for WildFly Swarm. At the moment we're just certifying Spring Boot, but we'd welcome feedback if you'd like us to go further.
And Node.js is in tech preview, so in the first release it won't be fully supported, but we're definitely working on full support for Node.js as well. What we want to do for each of those runtimes is create the bindings to the Kubernetes features so that the development experience is simplified: developers don't have to know all the ins and outs of Kubernetes to actually leverage the features that are in it. That's part of what RHOAR offers. We're going to extend that beyond the features in Kubernetes and also add bindings to JBoss middleware services. I'll just cover that briefly here: think about JBoss Data Grid. If you need a data store to hold your session information between requests, you could use JBoss Data Grid for that, or you could deploy an entire data grid, a distributed data cache, for more complex scenarios. With RHOAR we want to make the experience of using any one of these runtimes with that data grid feel very natural for that runtime. So we're extending beyond just Kubernetes into our application runtimes, all running on top of OpenShift. There's also documentation and examples around the bindings and the simplification we've done, as well as some tooling I'll cover, and a totally awesome getting-started experience, which I hope to demo if I have time.

All right, Vert.x, just to explain a little about some of the runtimes we actually support or certify. Vert.x is an Eclipse project that started in 2012, essentially as a way to do for the JVM what Node.js does for JavaScript: reactive, asynchronous development. So Vert.x takes a similar approach to Node.js, but on the Java virtual machine, and it's really good at high-concurrency, low-latency applications; it excels at that. If you have a high-concurrency, low-latency application that you think needs to be developed in a reactive or asynchronous style, but you still want to use your Java expertise, Vert.x lets you do that. It is polyglot, in that you can use many languages to develop Vert.x applications, but all we're supporting today is the Java language binding. That's a really quick overview of Vert.x; if it sounds interesting, there are a couple of books that introduce developing asynchronous, reactive-style applications for the JVM with Vert.x, and those are a really good place to start. Or just go to vertx.io/docs.

WildFly Swarm is the next runtime. Many of you have probably heard of WildFly; it's a Java EE application server whose upstream is led by Red Hat, and Red Hat productizes it as the JBoss Enterprise Application Platform. Swarm basically leverages WildFly, the upstream application server, and some of the Java EE technologies, not all of Java EE, just the technologies relevant to creating microservices. We combine that with MicroProfile technology, which I'll describe shortly; briefly, MicroProfile is all about bringing microservice patterns and frameworks to the Java EE ecosystem. So we combine the relevant Java EE technologies with MicroProfile and those OpenShift bindings that I mentioned, and we have WildFly Swarm.
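Before going further into WildFly Swarm, here is roughly what the non-blocking, event-loop style described above looks like in a minimal Vert.x verticle. This is only a sketch, assuming Vert.x 3.6 or later with the vertx-web module on the classpath; the endpoint paths are invented, but a /health endpoint like this is the kind of thing an OpenShift probe would poll.

```java
// Minimal Vert.x sketch: one verticle, one event loop, no blocking calls.
// Assumes dependencies on io.vertx:vertx-core and io.vertx:vertx-web.
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;

public class GreetingVerticle extends AbstractVerticle {

    @Override
    public void start() {
        Router router = Router.router(vertx);

        // Handlers run on the event loop and complete the response asynchronously.
        router.get("/api/greeting").handler(ctx ->
                ctx.response()
                   .putHeader("Content-Type", "text/plain")
                   .end("Hello from Vert.x"));

        // A trivial health endpoint that a Kubernetes/OpenShift probe can hit.
        router.get("/health").handler(ctx -> ctx.response().end("OK"));

        // Router implements Handler<HttpServerRequest> in Vert.x 3.6+ / 4.x.
        vertx.createHttpServer().requestHandler(router).listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new GreetingVerticle());
    }
}
```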
So you can create embeddable fat JARs: if you don't want a traditional app server scenario, and you really want uber JARs and to develop using that methodology, you can do that with WildFly Swarm. It's very lightweight and it's extensible, which means we can add capabilities very easily. Both Vert.x and Swarm, I should mention, have those bindings to OpenShift that I described. It's also interesting that they're both, at the end of the day, just Maven artifacts, available in the Red Hat Maven repository for the productized pieces and in Maven Central for the upstream artifacts, which means there's no runtime that you actually deploy to. You just build your application by creating, in the Maven world, a POM file with the proper dependencies on the right artifacts for the runtime, you build it, and you get an uber JAR. Really cool stuff. Our uber JAR approach for WildFly Swarm is still based on JBoss Modules; it's just packaged differently, as an uber JAR.

MicroProfile is a project that Red Hat co-founded along with IBM, Payara, Tomitribe, the London Java Community, and many others who have since joined. You can go to microprofile.io to learn more, but the idea is that we're bringing microservice patterns to Java EE developers. Java EE, as a mature platform, was a little slow in moving forward; it was mature and had the functionality required by a lot of traditional applications, but with the uptick of microservices we wanted to innovate more rapidly in an open-source project. So MicroProfile is an Eclipse project, we're all collaborating there on the MicroProfile specifications, and there are multiple implementations of them. Think circuit breakers, externalized configuration, health checks, monitoring; these are all things available in the just-released MicroProfile 1.2. So take a look at microprofile.io. WildFly Swarm is our implementation vehicle for the MicroProfile specifications I just mentioned. I'm one of the co-leads, and IBM is another co-lead currently; someday that may change, since it's just an open-source community, but for now that's where things are. There are committers from across many companies, and also individuals not associated with companies. So if you want to help bring microservices to Java EE, go to microprofile.io; there's also a Google group where we hang out and have the discussions. Join either of those and participate if you're interested in this project and this concept.

All right, there's also a book in the making on WildFly Swarm. I created a Bitly link, otherwise it would be kind of long: it's bit.ly slash enterprise Java microservices book, which you see at the bottom of the slide. I think it's scheduled for later this year, and it gives you some idea of how we're allowing you to create microservices with Java EE technologies, along with OpenShift as well. So definitely take a look if you're interested.
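To illustrate the MicroProfile 1.2 specifications just mentioned, here is a hedged sketch of a health check plus a circuit breaker with a fallback, roughly as they might appear in a WildFly Swarm (or any other MicroProfile 1.2) application. The resource path, thresholds, and downstream call are invented for the example.

```java
// Sketch of MicroProfile 1.2 Health Check and Fault Tolerance usage.
// Assumes a MicroProfile 1.2 runtime (e.g. WildFly Swarm) providing CDI and JAX-RS.
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.health.Health;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

// A probe target: OpenShift can poll the health endpoint and restart the pod
// when the check reports DOWN.
@Health
@ApplicationScoped
class ServiceHealth implements HealthCheck {
    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.named("greeting-service").up().build();
    }
}

// A JAX-RS endpoint protected by a circuit breaker: after repeated failures the
// breaker opens and the fallback answers immediately instead of piling calls
// onto a struggling downstream service.
@Path("/api/greeting")
@ApplicationScoped
public class GreetingResource {

    @GET
    @CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5, delay = 2000)
    @Fallback(fallbackMethod = "fallbackGreeting")
    public String greeting() {
        return callDownstreamService(); // may fail or time out in real life
    }

    public String fallbackGreeting() {
        return "Hello (cached fallback)";
    }

    private String callDownstreamService() {
        // Placeholder for a real remote call.
        return "Hello from the downstream service";
    }
}
```

The failure handling lives in annotations rather than hand-rolled Hystrix-style plumbing inside the business logic, which is the pattern-to-specification move the talk describes.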
All right, Node.js. It's a large and vibrant community, and I suspect most of the people on this call know what Node.js is; typically it's considered server-side JavaScript. There's a tremendous amount of JavaScript expertise in the industry, a lot of it from traditional development of JavaScript in the browser, and some people just want end-to-end JavaScript: JavaScript in the browser, but also JavaScript on the server side. Where it tends to fit really well in enterprise architectures is at that touchpoint: a client JavaScript application does really well talking to a Node.js server, and often that Node.js service acts as a gateway to all the backend services, which may be written in whatever language. Architecturally, that's where it tends to fit the most in enterprises. RHOAR, as I mentioned, is going to have a tech preview of this at GA, and eventually we're going to fully support Node.js itself. And just to give you an idea, Red Hat is a Node.js Foundation member; the Node.js Foundation is where Node.js itself evolves as a platform, Red Hat is a platinum sponsor, and we also have Node.js committers. In fact, we have committers on all the projects I've mentioned so far; we lead Eclipse Vert.x, and we lead WildFly Swarm as well.

In my rush to get the slides done in time, I forgot to add a Spring Boot slide. What we're doing is certifying, testing, and verifying Spring Boot on top of OpenShift.

I'm just going to interrupt for a moment; there are a couple of questions about Spring Boot, now that you're talking about it, that might be good to answer. What is the Spring Boot implementation? Is it using something like Tomcat as a servlet container, or is it from Fabric8? And a lot of folks are asking about Spring Boot support, whether that's on the roadmap.

Yes, and in fact, the more comments and feedback I get in the chat on this, the better. I apologize: since I'm full screen, I can't see the chat. From a servlet container perspective, if you're using upstream Spring Boot, you're just using the upstream embedded Tomcat container. In terms of product, in case you didn't know, we have something called JBoss Web Server, which is a productization of the Apache HTTP Server and Apache Tomcat. So we've worked with the JBoss Web Server team, who have Tomcat committers, I should mention, to create a supported embedded Tomcat container. If you're running a Spring Boot application on top of OpenShift as part of RHOAR, we actually support that embedded Tomcat container. There's been some interest as well in a standalone productized build of Undertow, which is the servlet engine used by JBoss EAP and its WildFly upstream equivalent, and also used in WildFly Swarm. So I'd be interested in understanding whether people would also like Undertow as an actual product option; we just started with Tomcat. So that answers that question. Other things we've done around Spring Boot: we've tested and verified somewhere around ten-plus Spring Boot starters. If you go to start.spring.io and look at some of the examples there, they're backed by starters, and we've tested a bunch of them. We've tested running applications, and I'll show you in the demo that you can use Spring Boot to run on top of OpenShift using some of this. Another important piece for Spring Boot is the Fabric8 Maven plugin... sorry, Spring Cloud Kubernetes; I'll start with that.
Spring Cloud Kubernetes is basically that binding I've mentioned, the glue that lets you develop Spring Boot applications in a natural Spring Boot way. Think about Spring Boot, to a large degree, as annotations that allow you to inject things into your application; it abstracts away concerns like where my configuration comes from and how I register my service and discover other services. Those things are all injected in. The way we do it with Spring Cloud Kubernetes is that it's just something you add to your POM file. You add Spring Cloud Kubernetes to your POM, and it uses the Kubernetes equivalents behind those abstractions: for service configuration it uses ConfigMaps, and for service registration and discovery it uses Kubernetes itself. Under the hood it queries Kubernetes to find all the instances of a service and uses that to populate Ribbon, if you're using Ribbon and client-side load balancing. The important thing is that you're not changing any of your application code to do that. We're trying to make it as natural as possible for Spring Boot developers to develop on top of OpenShift, and that's true of all the language runtimes I've mentioned.

The other piece, and I think it's on the next slide, is tooling, and this is true for all the runtimes, well, with one exception. For the Java-based applications that create uber JARs, so Spring Boot, WildFly Swarm, and Vert.x, we have the Fabric8 Maven plugin. It's still an upstream project, but what it does is let you take an application you've written as an uber JAR and build and deploy it on OpenShift, so you don't have to worry about Dockerfiles to a large degree, and you don't have to worry about OpenShift templates. If I'm a Java developer who's used to creating a WAR file and deploying it on top of an application server, that's essentially the experience the Fabric8 Maven plugin gives you: here's my app, I add something to my POM file, I run mvn fabric8:deploy, and it deploys to OpenShift. It makes OpenShift, to a large degree, feel like a traditional application server, although it's not, as I think we all know here.

As we started toward productization of Node, we also created something called Nodeshift. It does something very similar to what the Fabric8 Maven plugin does, but for Node applications. Now, Node doesn't quite have the build cycle that Java applications do, but what Nodeshift does is take care of the deployment cycle: creating the Docker image and actually deploying it on top of OpenShift. That's something we've done upstream; there's an upstream project organization called Bucharest Gold on GitHub, and I believe the images are on Docker Hub as well. It's all upstream under Bucharest Gold, and it's basically our effort around Node.js at Red Hat, so it includes Nodeshift, for example, and it will also include the bindings I mentioned; any binding work we do to the Kubernetes features will happen through Bucharest Gold. So with Nodeshift I can develop like it's a local application, and then, when I'm ready, deploy it to OpenShift using Nodeshift. Again, I don't have to worry about Docker, and I don't have to worry about OpenShift templates, as a developer.
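Here is a small sketch of what that looks like from the Spring Boot side; the property name and endpoint are hypothetical. With the Spring Cloud Kubernetes config dependency on the classpath (and a ConfigMap named for the application in the namespace), an ordinary @Value property can be backed by that ConfigMap, and the controller code does not change at all.

```java
// Sketch: a plain Spring Boot application whose property can be sourced from a
// Kubernetes ConfigMap once Spring Cloud Kubernetes config is on the classpath.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class GreetingApplication {
    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }
}

@RestController
class GreetingController {

    // Resolved like any other Spring property; with Spring Cloud Kubernetes,
    // a ConfigMap becomes one of the property sources, so no lookup code here.
    @Value("${greeting.message:Hello from the default value}")
    private String message;

    @GetMapping("/api/greeting")
    public String greeting() {
        return message;
    }
}
```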
For the online environment, the tooling mostly relies on Java S2I, with tooling specifically geared around RHOAR. I'm going to show you a demo, and what this demo does is use Java S2I to generate an application and provision it to OpenShift. Red Hat as a company is also working on something called OpenShift.io, which is basically an entire developer experience around enterprise development. While it's like an online IDE, where you can edit your code, build your code, and test and debug your code all online, it also has team planning features, and it has a really neat analytics feature that analyzes the code you're writing and the stack you're using and provides recommendations, like: hey, you're using versions of certain plugins that aren't very common, most people are using this combination, which might be different versions of those plugins. The idea is it could also say: hey, there's a CVE, a critical security vulnerability, in this version of this plugin you're using, maybe you should use this updated one. Over time, as more people use OpenShift.io, it gets smarter and smarter around these types of analytics and really helps the developer become more productive. We're making sure that the code we generate through the demo I'm going to show you, which we call Launch, is compatible with OpenShift.io. Because there's huge demand and huge interest in OpenShift.io, we're still trying to scale it up, so not everybody can get on today, but feel free to register at OpenShift.io; there's a URL, you can get on the list, and you'll be notified when they've scaled up enough capacity to add new developers.

So maybe I'll just go ahead and show the demo now. Okay, I am going to show, let's see, can you see this, Diane?

Yes, I can, it looks great.

All right, it's the OpenShift console? Okay. So what I've done is I'm running this Launch experience, this Launch tool, in Minishift on my desktop, which anybody can do. It will also be available as an online service so that you don't have to do that if you don't want to. I'll just go through it and explain more as I go along. This basically lets you build and deploy some example applications that we have. Think about what we do with JBoss EAP today: if you want to get started quickly, we have a getting-started page with a set of steps, go download JBoss EAP, clone a repository of examples, build the examples, deploy them to EAP, and that's true of a lot of Red Hat products. What we're trying to do is simplify that to just a wizard, where the runtimes are all available online, mostly as Maven repositories and Maven artifacts, or in the case of Node, a container image. And the deployment environment could be OpenShift Online, whether that's OpenShift Online Starter, which is a free account you can sign up for to try this shortly, or OpenShift Online Pro.
Pro has many more resources available for developers to actually develop their applications. So these are the supported runtimes that I mentioned, with Launch. If you want to launch your project, you click the launch button, and I can do a couple of things here, and we have to know this ahead of time: do I want to build and run locally? Since some people here are familiar with Spring, think about start.spring.io: you can build a quick Maven project, download it locally as a zip file, unzip it, and you've got your project ready to go. You can do something very similar by building and running locally; the only difference is that it's a full working example, with a database and a health check and that kind of thing. We have very specifically defined use cases that we implement, and you can download them, run them locally, and then provision them to OpenShift Online, or to Minishift if you want to run OpenShift on your desktop. The other thing you can do is use it with OpenShift Online directly. Pretty shortly, when we go public beta, what you'll see is a list of clusters: do I want to deploy to my OpenShift Online Starter account or to my OpenShift Online Pro account? You'll be able to choose which online account you want to provision these examples to. Since I'm running in Minishift, again, that's OpenShift on my desktop, we're going to provision to Minishift running locally, and that's pretty obvious here from the 192.168 URL.

All right, now I select the mission. It's a launch-themed experience, so Launch is the overall experience and a mission is basically a use case: do I want to provision a database and a sample application and show how the sample application uses that database, all running in OpenShift? We have a circuit-breaker example, externalized configuration with ConfigMaps, a health check, a simple REST endpoint that's basically Hello World if you want to start out really simple, and we're even working on one that secures an endpoint with Red Hat SSO. Initially, some of these boosters, like the Red Hat SSO one, take more manual steps, but over time we hope to remove those; as we think about things like the service broker, maybe we can replace some of those manual steps with automated ones. But many of these are already fully automated.

So what I'm going to do is choose the health check and click Next. Now, which runtime do I want to do a health check with? I'm going to choose Vert.x, just because I tried this earlier; I picked a random one, and you'll be able to use any of these. Okay, now I have to name the project, because what's going to happen is it will take this project and fork it to my personal GitHub account. So there is some setup initially: the first time you use this, you set up the bindings to your OpenShift Online Starter or Pro account and to GitHub. It's a one-time thing, and initially there'll be some manual steps.
We already know we have a path to get you to just clicking a couple of checkboxes to get there, with webhooks and everything set up. A webhook in GitHub basically means that whenever the application changes, it can be re-provisioned and redeployed on top of OpenShift. So, OpenShift project name: I'm going to call this vertx-openshift-commons-briefing, OCB. And I think this is the username here, jclingan; I'll just hit Next. Now it gives me an overview: it's saying I'm going to do continuous delivery, well, build and deploy through Java S2I; I chose the health check with Eclipse Vert.x; here's the Vert.x repository that's going to be created, and so on, along with the Maven artifact coordinates, which you can change, I just didn't select that. And now it's going to launch, and again, here it's going to launch to OpenShift running on my desktop in Minishift. So now it's forking the project to my GitHub, it's going to push the code into the repo, then create the project on OpenShift Online, which in this case is running on my desktop, and set up a build pipeline, which for now is Java S2I. The way this works is that the Launch experience I'm showing here is built mainly around Java S2I; if you want to start using Jenkins pipelines and more complex deployment scenarios like blue-green and A/B testing, that's where OpenShift.io comes in. What you would do is take this project, import it into your OpenShift.io account, and continue on from there. This is just a really quick getting-started experience.

So if I want, I can go to github.com/jclingan, my GitHub account, real quick and show you that it's actually been created. Wow, my CPU is pegged here as it goes off deploying things. So maybe what I'll do is pull up Minishift, and what you'll see here, I actually tried this earlier today with Node, and I tried it with Vert.x; they're both already running inside Minishift, and this is the one I just created, the Vert.x OCB one. What you'll see is that it's deploying right now. Down here, and I know it's probably small on your screen, I can view the full log; it's downloading the internet right now to do a Maven build. The first time you build this, that's the typical Maven thing it has to do. Maybe what I'll do in the interest of time is show you, sorry, I'm going to hit Home, maybe I'll show you the Node example. Here is the external route for the Node application; it's the same health check mission I mentioned, and it's the same flow for all the runtimes, so there's consistency across the use cases and across all the actual runtimes. So I want to say hello, OCB, OpenShift Commons Briefing. Okay, yeah, I got them in the right order: hello, OCB. And, I've got to work with our developers here, "Kill Me" is probably not the best button label; I think "Stop Service" might be a better name for it. So I'm going to show you something. When we deployed this, it actually set up an OpenShift health check for this service, so if I click the red button to stop the service, what you'll see, I think we have a two- or five-second check interval, is that it's restarting the service now.
It went down, and so it's restarting it. And if I go here and try to invoke it, you'll see, oh, it's probably back up and running already. Yeah, I can say hello, test; it's actually up and running again. So I can use this getting-started experience to run these examples online, check out the code, and see the OpenShift bindings, as well as the bindings we're adding to other Red Hat middleware. I mentioned Red Hat SSO is one of the first ones we're targeting, to secure an endpoint, but this will grow beyond that. All right, so that's just a really quick demo, and we're going to be announcing this very shortly in terms of letting you do this on your desktop using the latest iteration.

Well, just to reiterate, the launcher that you just showed, someone was asking about it, and I think they found it on GitHub, but is that available for folks to use and to extend and port?

It will be available very, very shortly. The redirect isn't set up yet, but very shortly, if you go to developers.redhat.com/launch, that's where it's going to be, and "imminent" is the word I'll use. Oh, and by the way, JavaOne is coming up next week, coincidentally.

Coincidentally! I love how we're very event-driven.

Well, I've talked about domain-driven design; I often say the way vendors in general develop products is conference-driven design. We're always targeting conferences. So, all right, a couple of quick closing slides, and I apologize, this is running a bit long. Looking ahead in terms of where we're going: I suspect many of the folks here in OpenShift Commons attending these briefings have heard of Istio. The idea of Istio is that when I deploy a service in a pod, I can deploy alongside every service a sidecar container, and that sidecar container provides a set of services. So if I have a hundred microservices running in my environment, every one of those services has a sidecar, and these sidecars are all connected to each other, which is what we call a service mesh. Once I have that mesh in place, I can do some really interesting things. I can do intelligent routing within the mesh, I can do A/B testing through the mesh, I can do distributed tracing service-to-service within the mesh without even baking that into my application if I don't want to; the mesh can tell me the time to get from service one to service two. And that is actually quite exciting. In addition to the sidecar containers, Istio has a control plane above them, where I can define my policy centrally, and that policy gets pushed down into the mesh, so I have some level of control, centralization, and consistency within my environment. Really cool stuff. istio.io is, I think, the website, but just Google Istio and you'll find it. Red Hat is very interested, and we're active in the Istio community. It's early days, it was just publicly announced this past May of 2017, so very early days, but we're definitely interested in bringing this to OpenShift customers. The when is probably for somebody else within Red Hat to better answer.
So now, if I think about this in the context of the evolution of microservices, Istio offers a collection of services that let me remove even more of these infrastructure concerns from my business logic and push them into the underlying platform. If you think about circuit breaking, maybe I can remove that from my application and have it in the service mesh. Or with distributed tracing: if I don't need to trace into the container and follow a request from the business logic of one service all the way to the business logic of another, if all I want is service-to-service tracing, then I can just use Istio for that. Eventually we'll have end-to-end tracing through Istio. We're also doing work with Jaeger within Red Hat, which has recently joined the CNCF; Jaeger lets you do distributed tracing not just point-to-point, but also into the container. What we're doing there is OpenShift Application Runtimes is working with the Jaeger team to enable tracing into the container for those who want it, as well as answering how to visualize traces and how to store them so you can go back and review that tracing information. That's stuff I don't think Istio covers, but we're looking at tackling it at Red Hat. So these are things we're interested in and working on.

There was a previous briefing on Jaeger and on OpenTracing and distributed tracing, too, so there's a lot more coming in that area.

Yes, actually a very good point, and I watched that one; great stuff. And last, I think this is the last slide: the public beta is imminent, and GA for RHOAR, which means support for the runtimes and all the glue work and the documentation and examples, is targeted for this calendar year. We hope to have more online examples at launch; you saw those use cases defined on that screen, and we hope to get more in by launch. If not, don't worry, over time you're going to see that list grow to more and more, and maybe even more complex, examples. And some planned middleware integration: not just Red Hat SSO, but how we easily interoperate with some of the other products, like JBoss AMQ, the message queue and router, JBoss Data Grid, Jaeger, which I mentioned, Fuse, 3scale API Management. There's a whole set of products in Red Hat middleware where we just want to provide out-of-the-box, it-just-works examples, all running on OpenShift. And that's the end of the presentation; I'd be happy to take any additional questions.

Well, it went a little long, but John Osborn and Michelle have been asking good questions. One of them is, and I think there was a little confusion when you launched, whether you were launching just to OpenShift Online or just locally: can one of the deployment targets be an enterprise OCP cluster instead of the other two? Have you tweaked it out for that?

Oh, excellent question. So if that Launch experience is running as a service, at the developers.redhat.com/launch URL I mentioned, which isn't live yet, then you'll typically deploy to one of the online services, Online Starter or Online Pro.
Although you can also download the zip example locally, play with it locally, and then later decide whether you want to deploy it to OpenShift Online. I could also take that Launch experience you saw running, install it locally inside Minishift, and just run it there. You might be able to do that with OCP as well, but we haven't done a full suite of testing on that; there might be permissions you'd have to enable for Launch to run inside OCP in your environment. So we haven't fully done that, but definitely send me feedback, jclingan@redhat.com, if this is something you would like to see running inside your environment so you could leverage it and deploy within your environment. In fact, one idea we have is to let customers add to that list of use cases, so they can define their own that are specific to their environment and go from there.

If the code for the launcher is in GitHub, which I think it is, could they tweak it specifically for what they need on their enterprise clusters as well, or is that not recommended? Could you do that?

Well, again, we haven't tested that. So if you try it there and you find issues, send us pull requests; if you're going to fork it, send us pull requests with any fixes you've got. At a minimum, whether you're running in Minishift or online as a service, what you have in your project is going to get forked to your GitHub account. So what you could do is take that out of your GitHub account, clone it locally, or download the zip directly like I mentioned during the demo, so now you have it on your desktop. Then all you have to do is "oc login" to your OCP cluster and run "mvn fabric8:deploy -Popenshift"; that selects the OpenShift profile and deploys to your OCP account. I've actually done that internally here at Red Hat. It's just that the actual UI and wizard steps aren't tested on OCP. We've done some nominal testing of running the examples themselves inside OCP, so you'll still be able to run those inside your OCP environment. Some of the others might be harder, like the SSO one, but the CRUD example you could probably do as long as you have the resources in your account.

And the only other comment, because we've run a little over time, but that's quite okay: you were asking for feedback on Undertow, and someone mentioned that Undertow is basically the most popular embedded container in the entire Spring Boot community, so support for that sounds like it would be a key thing to try to get there at some point.

So who mentioned that?

John Osborn, who I think is a Red Hatter.

Okay, we can reach out to him, and we'll see if we can do it.

We are about seven minutes over time, so if there aren't any other questions, I'm going to give everybody going once, going twice. John is definitely available for you to ask questions, and we will put the links to these things and many more in a blog post shortly on blog.openshift.com. The recording will go up on the YouTube channel, probably within a day or two at most. So thanks again, John, for taking the time to do this and explain it.
And we'll look forward to seeing more runtimes added to this as well. Maybe what we can also do is work with you to survey the OpenShift Commons mailing list and find out what else people are interested in seeing added. That would be helpful information. Thanks, everyone.

Yeah, take care, guys.