Good morning everyone. Props to you for getting up early and actually getting here on time. The session today is middleware automation from the edge to the cloud. My name is Andrew Block. I'm a distinguished architect here at Red Hat, in the Global Services Office of Technology. What we do is work with customers around the globe to develop common patterns and implement them at scale. I am one of the founders of the Ansible Middleware project that we'll talk about today. I am a big open-source advocate, working mainly these days in the cloud native space. I'm a maintainer on the Helm and ORAS projects and contribute to a security project called Sigstore, which provides tooling for securing your software supply chain. I'm an author on a couple of different books. I'm a long-time mercenary in banks and insurance, breaking monoliths and working in automation. Then I became a Red Hatter to make it freely available and to give back to the community. As some of us have already discussed this morning, we're starting to see applications run in many, many different places. We've been running in the public cloud for a number of years, but another trend that has been emerging more recently is running applications at the edge. That is really because we're starting to see more demand for processing and managing assets closer to where they're being sourced. Look at when you got on the bus or the tram today: you scanned your phone, maybe you paid? That is another edge device that is currently running out in the field. There may be some processing on there; there might be some other capabilities. That's just one example. Checkout at a store. You name it. We're seeing more and more demand for compute where it's needed. That's incredibly important because you need to manage it somehow. So one of the key tenets of applications running anywhere is a type of application called middleware.
So middleware really is the connective tissue between many different components of enterprise architecture. It's going to be the plumbing. Our team loves to use the term plumbing because that's what we really see it as being. Without it, you're just going to have nothing, because it provides so many different capabilities, and it really does power organizations throughout the world. So, a question for you. Who here has managed a middleware server? Have you used any automation tools to do so, or is it a bit manual when you're managing the middleware server or the applications? And who here had never heard of middleware before walking in this door today? Okay, we got some new ones. Awesome. I love seeing some new faces for that. Okay, next slide. One thing that we've seen is that even though middleware is very popular, most of it is still not being managed in an automated fashion. That's a huge opportunity our team sees: providing tooling to close that gap. Next slide. So what are the different facets of managing your middleware portfolio? Well, number one, it's usually got to run on a server. You obviously have to manage the underlying libraries and dependencies that your middleware needs. You then have the lifecycle of the middleware server itself: everything from installation to upgrades and downgrades, you name it. Then the configuration of that server, all the different properties unique to that specific environment, maybe some database URLs, credentials, messaging usernames, you name it. And finally, the application deployment and management layer on top of that. Those are all the different areas where you can automate your middleware portfolio. So the question is, what are the different ways to automate it?
Well, we can start with scripts: bash, PowerShell, you name it. There are also operating-system-specific tools, Sysprep being the one for Windows, and Linux has, oh, the one for RHEL that helps you stand it up. I'm blanking on it right now. Not sure now; I'll remember, I promise. And then finally, a tool specifically designed for automation. That's going to be something like Chef, Puppet, or Ansible. So, how many here have heard of Ansible? Please say yes. How many here have never heard of Ansible? Okay, good. You haven't? Good, so we can talk about what Ansible is. Ansible is a simple, powerful, and agentless tool for managing automation at scale. It's simple because it doesn't require a lot of special syntax that you need to know under the covers. The tasks themselves, as you define them in a declarative fashion, are executed in order. And it can be used by every team: anyone from your developers to your operations team can use Ansible. It's powerful because it can do everything and make your breakfast too. It could have made breakfast this morning, I don't know; who knows what those machines are using. It can manage everything from application deployment to configuration management, it has a workflow engine as part of it, and it can do network automation. And then finally, one of the most important parts of Ansible: it is agentless. If you look at automation tools like Puppet and Chef, they are traditionally agent-based solutions. Ansible is agentless. All it needs to do is connect to an end system via one of a number of different connection plugins, and you can manage it as you see fit.
And out of the box it uses OpenSSH and WinRM, depending on what OS you're playing with. So why don't we think about combining the benefits that middleware can provide with the power of Ansible? It's when you combine the two that amazing stuff starts to happen, and that's what our team has designed: tooling around the ability to manage your middleware portfolio with Ansible. That's what Ansible Middleware is. It really is a set of tools specifically designed to manage Red Hat middleware and its underlying runtimes. It has support for multiple middleware products. It is configurable; this is one of the best parts about it, because every organization and every use case is different. There are, what, a million different ways that you can configure it, would you say? That's true, quite a large number. And then finally, this Ansible Middleware framework is built and run by a team that has not only engineered Red Hat runtimes past and present, but also delivered them at customers globally. So you basically have a combination of the two of us: he builds it, I work with customers to roll it out at scale. So this is how easy it is. Simple, easy as pie. This is all it really takes to deploy a JBoss Enterprise Application Platform server. With 14 lines, Guido? Fourteen lines of configuration. There's a little more to configure it on the back end, but still a lot easier than the thousands and thousands of lines of bash scripting you may have done in the past, or are currently doing today. It has opinionated defaults which provide a production-ready, enterprise-grade deployment even if you don't write any configuration yourself. Of course it is opinionated, so it doesn't fit one hundred percent of the time, and that's why you can override it with environment-specific configuration. So what are some of the benefits?
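To give a feel for what that 14-line deployment looks like, here is a minimal sketch of a playbook using the upstream WildFly collection. Treat the role and variable names (`wildfly_install`, `wildfly_systemd`, `wildfly_version`) as approximations; the exact names may differ between collection versions.

```yaml
---
# Sketch: deploy a WildFly / JBoss EAP server with the Ansible Middleware
# collection. Role and variable names are approximate, not authoritative.
- name: Deploy a WildFly application server
  hosts: appservers
  become: true
  vars:
    wildfly_version: 26.1.3.Final      # assumed variable name
    wildfly_config_base: standalone-ha.xml
  collections:
    - middleware_automation.wildfly
  roles:
    - wildfly_install                  # downloads and unpacks the server
    - wildfly_systemd                  # registers it as a systemd service
```

The opinionated defaults mentioned in the talk mean most of those variables can be omitted entirely and a sensible production layout still comes out the other side.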
You have consistent deployment, to be able to manage and scale your applications as you see fit. Write once, deploy anywhere without manual intervention. We talked about being able to manage different systems in the cloud or at the edge: you can use the same baseline configuration and then have environment-specific overrides, so it can be defined once and used across these different environments. Maybe you want to give your JVM a smaller heap size in certain environments, or you have a different profile you want to run with; those can be overridden per environment. You can use the power of Ansible Automation Platform to then orchestrate at scale; we'll talk a little more about how you can integrate Ansible Automation Platform into this entire ecosystem. It has seamless support for upgrades, downgrades, install, uninstall, configuration management, scaling and more, and then obviously you save time by reducing the manual errors you get with other, more traditional approaches. So what kinds of technologies do we support? Well, think about all the different types of middleware that are out there. We have everything from managing a web server, to systems that manage enterprise applications, to messaging, caching, and identity. All these different capabilities and technologies are available to be managed via Ansible Middleware. So what does that translate to in terms of projects in the upstream community? For web, we have Tomcat, and for more enterprise applications you have WildFly as your application server. If you want to go more towards the caching side of things, you use Infinispan. Identity is covered with Keycloak, one of the newest CNCF sandbox projects. Then finally, if you're looking at the messaging side, there are two flavors of messaging that we support: traditional JMS brokers with ActiveMQ, and Apache Kafka for the streaming-based approach.
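The per-environment override pattern described above maps naturally onto Ansible's `group_vars` mechanism. As an illustrative sketch (the variable name `wildfly_java_opts` is an assumption, not necessarily what the collection uses), a baseline and an edge override could look like:

```yaml
# group_vars/all.yml — shared baseline for every environment
wildfly_java_opts: "-Xms1g -Xmx2g"

# group_vars/edge.yml — override for resource-constrained edge devices;
# hosts in the "edge" inventory group pick this up automatically
wildfly_java_opts: "-Xms256m -Xmx512m"
```

Ansible's variable precedence means the more specific group wins, so the same playbook runs unchanged in the cloud and at the edge.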
But Guido and I both work for Red Hat, and we also support the enterprise versions behind all these projects. So if you're looking at the Red Hat supported side of things, you have everything from the web side with JBoss Web Server, to JBoss Enterprise Application Platform, Red Hat Single Sign-On, Red Hat Data Grid, and then finally the two different flavors of Red Hat AMQ: broker and streams. So this is basically a nice little technology-to-product chart, just to show all the different components that our team currently looks at managing and providing solutions for. We take a very community-first driven approach. We're here at a community conference, so we want to hear from the community, and we work with the community; we're not running in a silo. We have everything out there in the open, everything from our source code repositories, which allow you to not only look at the code we have available for our automation solution but also contribute. The content is consumable, widely available and open for that one-touch, one-click installation. That's eleven lines of Jinja, or of YAML. Yeah, you really don't need much more than the primitives that are provided by Ansible out of the box; there are no additional components beyond stock Ansible Core. Then finally, we want to hear from the community on the roadmap. We want to hear feature enhancements and bugs; we want to gather those and get that information out there. So we really want to hear from you what's working well and what's not working well in the solutions we're putting together. So how do you get started with Ansible Middleware? First, start with the source. Because how many of us here are developers, or at least, you know, say we're developers?
How many of you, if I were to put a GitHub link up here, would have gone to GitHub right after this talk, or, since we're all on our mobile devices right now, might actually have gone to it right now? Raise your hand if you would have potentially done that. So start with the source. We have all of our Ansible automation content collections, our documentation, our demos, our labs, and feature requests on GitHub in the ansible-middleware GitHub organization. That is the first place I always point people to look if you are more of a coder. If you instead want to consume the content, you can go to Ansible Galaxy. This is how you can consume Ansible content collections anywhere. It's a fully hosted service, very much like Maven Central, PyPI, or npmjs. And I did not update the link at the bottom; it is not github.com, it is the middleware_automation organization on Ansible Galaxy. That is my bad, sorry. Nobody saw that. And finally, this link I got right: if you want to look at more of the documentation out there, we have that too. Check out the docs at ansiblemiddleware.com. Right now it has basically just the documentation for our content collections. We are relaunching the website in the next month or two, and it is going to be totally refreshed, rebranded, with a nice look and feel, so you will want to check it out as the project continues to evolve. All right. And of course, try the demos. We have a number of different demos out there that you can look at leveraging, from ones that you build yourself to entire workshops. And since there were only a few people in the workshop yesterday, you could have gotten hands-on trying out Ansible Middleware yourself.
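Consuming the collections from Ansible Galaxy typically goes through a `requirements.yml`; a sketch along these lines should work, assuming the `middleware_automation` namespace and collection names mentioned in the talk (verify the exact collection names on Galaxy before relying on them):

```yaml
# requirements.yml — pull the Ansible Middleware collections from Galaxy
# (namespace middleware_automation; names are assumptions to verify)
collections:
  - name: middleware_automation.wildfly
  - name: middleware_automation.keycloak
  - name: middleware_automation.infinispan
```

Then `ansible-galaxy collection install -r requirements.yml` fetches everything in one go, the same way `pip install -r` or `npm install` would from PyPI or npmjs.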
We'll provide some links later on in the presentation where you can actually spin it up without doing much on your own laptop. We use Instruqt as our platform, so you can do everything and learn about Ansible Middleware in your web browser. Our demos provide not only the basics of just getting off the ground, but also more complex use cases and multi-product integrations; we're going to show some of that today in our demo shortly, as well as some more advanced capabilities. And then finally, what is the roadmap? What's coming in Ansible Middleware? Number one, we want to improve the getting-started experience. For many of you, this is your first experience with Ansible Middleware, and we want to make sure that initial experience is as crisp and seamless as possible. Next, we're looking at enhancing our documentation; we talked about providing a new, refreshed documentation website. We then want to look more into integration with OpenShift Virtualization. We see that being a nice target because it's a mix between those who are really looking at adopting the new way of working on a container platform but still need to manage more traditional workloads, ones that do run in VMs. I work with many organizations around the globe, and I'm sorry to say that there is still a world that runs in VMs and will keep running in VMs for a very long time. How many of you know that most of the time you're using systems that are still talking to mainframes? Most of us took a train or a plane today; there's a good chance that all those back-end ticketing systems are still running on mainframes, or at least parts of them. I definitely know that Amadeus, which runs most of the airlines out here in Europe, is still running on mainframes. Isn't that fun?
And finally, since we do focus very much on our Red Hat customers, we want to ensure a great experience for those looking at getting support at scale in an enterprise-supported way, so we're looking at providing certified and supported content across all the different distribution channels. Okay, I'm turning this over to Guido; I've talked enough. Oh, you want me to do the demo? Okay, he's just going to do the demo, I guess. So we're going to show Ansible and Red Hat Middleware in the wild with a very complex demo. We're going to show how we can push technologies to the edge by using a multi-product use case to define highly available environments. We're going to spin this up in multiple clouds, in the public cloud; we decided not to push our luck and do both cloud and edge in this demo, because you never know with these things, so this is just a multi-cloud demo. We're going to use Ansible Automation Platform to roll this out at scale. And most importantly: yes, you've got the initial installation and configuration, but after it's already running, how do you know it's still running? How do you know it's still maintainable? Well, guess what, it's fully observable as well. We use a lot of monitoring and observability tools to show you how you can not only stand this up but use our tools to manage and monitor it. And finally, every environment that we are deploying looks like the following. It consists of an intelligent load balancer based on JBoss, JBCS. What does that stand for? JBoss Core Services. Wow, there we go, see? JBoss Core Services. Yeah, basically Apache.
A JEE application server; a single sign-on server providing identity for not only the environment but also the application that we deploy; plus two cache instances for highly available single sign-on and two database instances. And it takes into account everything our cloud provider offers: virtual networks, public and private subnets, state replication, failover, crossover, multiple regions, and the peering and connectivity between those. All of that is fully automated. It's kind of scary when you think about it; that's some of the power Ansible can provide. Cool. So before we get into the demo, I'd like to share some resources, because a lot of what we have will be cool to talk about after the demo, so I want to put this out here. If you want to learn more about Ansible Middleware, go to our GitHub organization, github.com/ansible-middleware. Go to Ansible Galaxy, where you can look at our Ansible content collections; this is the correct link for that, so if you want to jot it down to fix the link from before, you can do that. We have the Ansible Middleware website, ansiblemiddleware.com. Our Red Hat certified content collections are on Automation Hub. And finally, if you want to get in touch with us: ansible-middleware-core at redhat.com. Guido, it's your time to shine. Time for the demo. Yeah, of course, infrastructure like this requires a bit more than the Ansible collections for middleware. In this case we are managing resources in AWS and Azure; both AWS and Azure provide Ansible collections for creating resources. And we provide a repository of code, or rather of YAML and Jinja, that reuses those third-party collections to create the resources, including the peering, which is the important part that gives an abstraction for joining networks across the different clouds. That, together with the middleware collections, lets us generate cluster configurations.
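The cloud-resource side leans on the providers' own collections; a minimal sketch with the `amazon.aws` collection might look like the following (region, CIDR, AMI ID, and the `subnet_id` variable are all illustrative placeholders, not values from the demo):

```yaml
---
# Sketch: create cloud resources with the third-party amazon.aws collection
# before the middleware collections configure what runs on top of them.
- name: Provision demo infrastructure in AWS
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create a VPC for one site
      amazon.aws.ec2_vpc_net:
        name: demo-site-1
        cidr_block: 10.10.0.0/16
        region: eu-west-1
      register: vpc

    - name: Launch an EC2 instance for the application server
      amazon.aws.ec2_instance:
        name: eap-node-1
        instance_type: t3.medium
        image_id: ami-0123456789abcdef0   # placeholder AMI
        vpc_subnet_id: "{{ subnet_id }}"  # assumed defined elsewhere
        region: eu-west-1
```

The Azure side follows the same shape with the `azure.azcollection` modules, which is what makes the cross-cloud abstraction in the demo repository practical.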
Like this caching service, which is composed of a cluster in one region, another cluster in another region of the same cloud, another cluster in a first region of Azure, and another cluster in a fourth region. And then, all together, they form an additional cluster on top of that. The configuration of all of this is fully controlled by the variables you provide as parameters to the data grid collection and the host names you run Ansible against. And I'll show you quickly how it works. I don't have a pointer. I have a pointer. So this is an instance of Ansible Automation Platform Controller, which is a Red Hat product that provides an abstraction wrapping the Ansible command line. It's both an API and a web UI. The API allows you to integrate Ansible as a process in a pipeline or a workflow, so it can be an integral part of whatever CI/CD you might have, useful both for testing and for staging and deploying production environments. It also provides a UI, which makes the first deployment more comfortable, in particular for people who are not so comfortable on the command line, and at the same time provides some additional features on top of Ansible workflows, namely the ability to automatically synchronize the state of the projects and jobs inside the UI with the backing SCM repository, which in turn allows you to trust that repository as the proper single source of truth for the live deployment you will be executing with Ansible. It's an extremely important concept. If you repeatedly run your Ansible deployment against an environment, Ansible is built so that it reports changes only when the configuration it finds is not what you expect, and what you expect is what you define in your Git repository. Conversely, it will report changes when a commit has been made in the Git repository. The third option is an unexpected change.
That is something you would look into to see what happened, while Ansible proactively fixes it, applying the change and turning the configuration back into the expected one, meaning the code in the source code repository is the source of truth for what is live. It's an absolutely beautiful, easy way for Ansible to solve a complex problem that other automation tools don't even tackle. Let's go to the demo. So for an environment like that, we have a number of groups, which are collections of hosts. Any host can belong to multiple groups: one given machine could be in a region, be assigned a service role like single sign-on or JBCS, and be part of site one. And we have a long list of hosts belonging to those groups. These are the hosts against which we launch a job. We have really two jobs for this project. One creates all the infrastructure, all the cloud resources we saw in the diagram earlier, and the other one runs the full deployment. Now the system is up, and we are launching this job. We did a little bit of a cooking show, so some of it is already running. Going from zero to everything with a project like this would take about 35 minutes, which is not long, because it spins up all the cloud resources and then installs and configures everything. All we're doing right now is enforcing state, ensuring the deployments are as we expect. One of the benefits of Ansible is idempotency: being able to ensure configuration drift doesn't occur. We take a lot of care that all of our collections are 100% idempotent, because reporting one change or ten is the same; the good part is reporting zero changes when there are zero changes. That's it. An idempotent run against all those 24 hosts takes around five minutes. Five minutes. We have about six, seven minutes left right now.
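The group structure described above — a host sitting in a region group, a service-role group, and a site group at the same time — can be sketched as a YAML inventory. All hostnames here are made up for illustration; the real demo inventory lives in the project repository:

```yaml
# inventory.yml — one host can belong to several groups at once:
# a site group, a service-role group, and a cloud-region group.
all:
  children:
    site1:
      children:
        jbcs:                       # load balancer role
          hosts:
            lb1.site1.example.com:
        eap:                        # application server role
          hosts:
            eap1.site1.example.com:
            eap2.site1.example.com:
        sso:                        # single sign-on role
          hosts:
            sso1.site1.example.com:
    aws_eu_west_1:                  # region group overlaps the roles above
      hosts:
        lb1.site1.example.com:
        eap1.site1.example.com:
```

Jobs in Automation Platform then target whichever slice is relevant, such as `hosts: eap` for the application servers or `hosts: site1` for a whole site.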
While it's cooking, as I said, are there any questions we can take in the meantime? Any questions you may have had regarding any of the content we've shared, everything from what Ansible Middleware is, to the tools we provide, to the capabilities you can look at leveraging, whether from the edge to the cloud? We answered all your questions, then. Okay. If you are aware of the Instruqt tool, we have a workshop available that allows people to experiment with the Ansible collections. You can get hands-on with this without having to install anything on your laptop. You get your own Ansible Automation Platform, you get everything stood up, all the resources. You basically learn how to do a demo similar to this one, but you take on the role of being not only an administrator of this tool but also a developer, so you get hands-on working in Visual Studio Code, committing, managing and modifying content. You get a full hands-on experience of learning not only what Ansible Middleware can provide, but some of the different ways you can configure it. It is worth mentioning validated content: we are working with the validated content team so that our middleware collections can be wrapped into another collection that provides a pre-baked, known-good scenario installation for services like single sign-on or caching. Yeah, the validated content team under the Ansible business unit is looking at providing more validated, and I wouldn't quite say certified, demonstrations and use cases that are known to have gone through the rigor of Red Hat testing and conformance. And the good news is the run is almost done, in the four minutes, I think. Almost, almost. Any questions on this platform? We are on EAP right here.
You can see it would download the patch; that's an example of how we provision EAP. Actually here it doesn't, because the server is already patched and the patch is already there. It does some detection, and it can apply the patch the same way if the server isn't already patched. And it also does the cleanup. Also, one of the cool things is that our team is partnering with different teams in Red Hat to establish a new API for downloading content from the Red Hat Customer Portal. Downloading content for the middleware runtimes had been very challenging in the past; there's now a fully available API that you can use to not only query for released versions but also download content directly. So it's really cool. I wanted to mention that the reason it installs at least version 7.9 of EAP is because that version has a nice technology preview feature, the YAML configuration extension, which allows you to provide a declarative configuration to the bootstrap phase of JBoss, turning JBoss into something configurable. Yeah, before that, JBoss configuration was a little more manual; you can script it up to an extent, and with the YAML configuration in tech preview you can do so in a more declarative fashion, which lines up very nicely with what our team provides in automation. So basically you don't need to query the runtime state of JBoss to check its configuration; you can move that into the bootstrap phase. Of course, then you don't use the CLI much anymore, which is a good thing. It's a very good thing to be able to state what the configuration is without checking it at runtime; the CLI only comes in for one-off things. We were able to see the application roll out. It did. Great. Did you see, there, we were successful. Let's go and pop up the application. Can we do that? Let's show them something in the two minutes. It's pretty fast. Is that it? Yes, it is. It's a simple address book.
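For a feel of that tech-preview YAML configuration extension, a declarative snippet supplied to the server at boot looks roughly like this; the exact keys depend on the EAP/WildFly version, so take the subsystem paths as an approximation of the feature rather than a verified configuration:

```yaml
# sketch of a declarative boot-time configuration for JBoss/WildFly,
# applied on top of the base XML config during the bootstrap phase
wildfly-configuration:
  subsystem:
    logging:
      logger:
        com.example.app:          # placeholder application logger
          level: DEBUG
```

The file is passed to the server at startup (for example via a `--yaml=` style argument in the tech-preview feature), which is what lets Ansible hand over the desired state once instead of querying and mutating a running server through the CLI.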
All integrated with single sign-on, and we have a little bit of an installation via Ansible automation. All we need to do is log in as the flange user; we have a bit of a joke there, remember, Ansible Middleware is helping address the plumbing of your middleware portfolio. It then goes ahead and validates the user. The automation stood up the Keycloak adapter for JBoss EAP, so you can integrate single sign-on into your applications, and we are able to manage that. We even knocked out one of our regions, which we've done in the past, and it still works. We have these two minutes. So what we did opening that page: a query goes to the Route 53 DNS of AWS, which is latency-checked to see which region is closest to us. The DNS forwarded us to the geographically closest JBCS, which reverse-proxies our request to EAP, which has a WAR file deployed in it that is configured for single sign-on. So we were redirected in the browser to single sign-on, which has a database down here in this cluster, picks our user, validates our authentication, and we are redirected back to EAP and see the HTML page. For a simple HTML page there is a lot of logic under the covers, and that's all automated through the power of Ansible Middleware. We thank you all for joining us this morning. I know it's always hard getting up for the first session after the party, but thank you very much. I appreciate it, and we're here to answer any questions that you have. Thanks everyone.