Good evening, thanks for coming out tonight. As you say, I'm Brian Benz, I work at Microsoft. I've been there about six years, and before that I was at IBM and Deloitte and some other companies. I was hired as an evangelist slash developer advocate, a developer relations person for Java. I've been working with Java since about 1999; I wrote some books and some other things, but that was a long time ago. This is my third trip to Singapore, and I'm thrilled to be back. Thirty years ago I stayed in Pearl City at Johnny's Guest House, a rooftop place with mynah birds outside and everything. The last time, about 10 years ago, I was doing some work for the Singapore Army and staying at the Hilton on Orchard Road, and Marina Bay Sands wasn't even here yet. The thing I remember most about that trip was that there was no air conditioning in the buildings I was in, and I wore suits for work back then, so it was a rough 10 days or so working out in the suburbs of Singapore. But anyway, now I'm here. I just spoke at a conference called Ignite the Tour. Was anyone there, by any chance? A couple of you, okay. It's a tour that's going all over the world, and I took the opportunity to stay on and speak here before my next stop, Johannesburg, South Africa. So tonight, enough about me, we're going to talk about MicroProfile and OpenTracing. How many people are familiar with MicroProfile, just by show of hands? Okay, so it'll be new. What about OpenTracing and what that does? A couple, okay, cool. Basically I'll give an intro to what MicroProfile does, then I'm going to show it running on my local machine, and then I'll show you how you can run it in containers on Azure, in the cloud.
And then I'll talk a little bit about how you can run it on Kubernetes, and I'm going to finish up with a demo on Red Hat OpenShift. So it's a mix of open source technologies; everything I'm going to show tonight is open source and publicly available. Now, have you ever seen one of these before? This is a jiggler; I was talking to someone about it earlier. In the olden days, when you loaded slow DVDs and CDs on your computer, you had something like this, usually bigger. You plugged it into the computer and it kept the machine from going to sleep while things were loading. I use it to make sure my laptop doesn't go to sleep while I'm getting ready for a presentation and blabbing away. So, let's talk about MicroProfile first. Eclipse MicroProfile is an Eclipse project; I'm sure everyone here in the Java world is familiar with Eclipse already. It's been around for a couple of years now, and it won the Duke's Choice Award in 2018. So what is MicroProfile? All these companies are adopting it, and what they're saying is essentially this: when you build a microservice, there's no standard way to ensure it's going to communicate with other microservices. Microservices are supposed to be loosely coupled things that connect through protocols, but how do you choose which protocols to use? So the Eclipse MicroProfile project — it's not a foundation, it's just a project — got together and said: we're going to use three basic things. JAX-RS, the Java API for RESTful web services. Context and Dependency Injection, CDI, which is another project we use. And JSON-P, for processing JSON.
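To make those three concrete, here's a minimal sketch of how they fit together. The class, path, and service names here are hypothetical — they're not from tonight's demo app — and it needs a MicroProfile runtime such as Thorntail behind it to actually run:

```java
import javax.enterprise.context.ApplicationScoped; // CDI
import javax.inject.Inject;                        // CDI
import javax.json.Json;                            // JSON-P
import javax.json.JsonObject;
import javax.ws.rs.GET;                            // JAX-RS
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical example: a REST endpoint (JAX-RS) that gets a
// collaborator injected (CDI) and returns JSON (JSON-P).
@Path("/scores")
@ApplicationScoped
public class ScoreResource {

    @Inject
    ScoreService scores; // CDI supplies the implementation at runtime

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public JsonObject latest() {
        return Json.createObjectBuilder()   // JSON-P builds the payload
                   .add("player", scores.lastPlayer())
                   .add("score", scores.lastScore())
                   .build();
    }
}
```

Any MicroProfile-compliant runtime can host a class like this without further wiring, which is the whole point of agreeing on those three APIs.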
So any time you're going to expose RESTful endpoints, use JAX-RS. Any time you need context and dependency injection — anything you need to do to share things from one MicroProfile microservice to another — use CDI. And for processing JSON, use JSON-P. Those are the three basic things everybody agreed on, and since then other things have been built on top of that as well. Other projects have come into the fold and said, okay, we'll use those three things too. So basically all of these companies have agreed that if you're going to build a MicroProfile microservice, it should be built on those three APIs. Since then it's evolved into this whole set of projects, and these projects matter because they let you do things like add health checks to your applications, or metrics, or use the built-in REST client. And here's OpenTracing, the other thing I'm going to talk about. OpenTracing is one of the main parts of MicroProfile, because when it comes to managing the performance of your microservices, it used to be easy to track. Say you've got a website and an end-to-end application: someone logs in, looks at your site, identifies themselves through some means, and purchases something, and you can track that purchase. When they purchase, inventory has to be reduced, revenue has to be processed — there's a financial part of the transaction. Everything has to happen, and that was fairly easy to follow when you were talking about a monolithic application. But what if you have one microservice to handle the website, one to handle the financial transaction, one to handle inventory, et cetera, et cetera? You get the picture.
One to handle identity. How do you actually trace and track performance across all of these microservices? OpenTracing is a way to do that. And here's the OpenTracing API — I had it here a second ago, yeah, there it is. All of these things are on GitHub, each one of the projects I was just talking about. If I click through to OpenTracing here, it takes you to the OpenTracing project, and this is the OpenTracing API project itself. Once again it's open source: a bunch of vendors got together and decided on standards they can use for tracing across different applications. The terminology works like this: each individual operation in one of those microservices is called a span, and the whole end-to-end process I mentioned before — someone logs in, inventory, purchasing, et cetera — is a trace. Everything gets put together: each span tracks one microservice's work, and the whole thing from end to end is the trace. And if I go to opentracing.io, you can see some of the projects they have as well. And here's the registry — it's kind of like a hub. Basically these are all the people who have built tracers: tracers for Python, Ruby, et cetera, et cetera. LightStep is a big partner of theirs; they've done some things with Scala and Akka and some others. There's Zipkin, Jaeger, a few others. I'm actually going to show you Jaeger today, which is an implementation of the OpenTracing standard. These are all implementations of OpenTracing.
So the idea here is, if you have a MicroProfile microservice and one of its components has an OpenTracing integration — like Cassandra here. If Cassandra is part of your microservices, and using MicroProfile is a bonus but you don't have to be, you can use the Cassandra driver here to manage traces for you. It's called a tracer, and it tracks things for you. That's just one example: Couchbase, Elasticsearch, Hazelcast — some of these names I'm sure you recognize — JDBC of course, and a few others. These are all pre-built tracers you can add to your OpenTracing implementation. And there are two main open source OpenTracing implementations: one is called Zipkin and the other is called Jaeger. They've taken the standard and made it into a project that actually runs. Jaeger is written in Go, and it runs on Docker; I'll show you an example of that in a little bit. Same story with MicroProfile: MicroProfile has an implementation called Thorntail. Thorntail is written by Red Hat and managed by Red Hat, and it's an implementation of MicroProfile. I have to be careful here — it's not a standard, they're not organizing this through a standards body, so I can't say "standard"; it's the MicroProfile project, and Thorntail is an implementation of it. Thorntail is what I'm going to use for my demos today. So hopefully that gives everyone a good idea of what OpenTracing and MicroProfile do; the implementations we're going to use today are Thorntail and Jaeger. I'm going to run a Thorntail application that reports OpenTracing data to a Jaeger implementation. So let me start with that. Let's see here.
What does the code look like, Brian? Stop yammering on about all the different standards and things. OK, so the code itself is actually kind of interesting. This is Visual Studio Code, by the way — how many people are familiar with Visual Studio Code? OK, cool. It's a Microsoft project, it's open source, it's free. It's a code editor, and in this case I'm using it to edit a markdown file, but it also handles YAML, Java, Docker, all kinds of other things as well. So let's go in here and have a look at what an application looks like. In Thorntail, and in MicroProfile in general, there's a big POM file and less code in general. So let's look at the POM for this particular application — I'll get into the Java code later; I'm assuming everyone's seen Java code, and probably a POM too, but this is the specific one for setting up your Thorntail application, which is MicroProfile. Of course, this is just the way Thorntail has chosen to implement MicroProfile, those three things: JAX-RS, CDI, and JSON-P. You could do it all in code or some other way, but they've decided to go through Maven and use a pom.xml for it. So to set up Thorntail, you just set up a dependency. The first one is io.thorntail: you import a bill of materials (bom-all), and that basically gives you access to Thorntail's functionality. Then you use the Thorntail Maven plugin to make sure everything gets packaged properly. Fabric8 is used as well. Now let's go down here, because this is where things get interesting. If I go back into MicroProfile real quick: each one of these projects has a pom.xml dependency that you can simply add to your application, and that pulls in all the code for you.
So in this case we've got the basics: MicroProfile Config, MicroProfile Health, JAX-RS. We've got some data sources we're working with — JPA, an H2 database — but we've actually changed this to use a different database, and I'll get to that in a little bit. Actually, I'll get to it right now: the H2 database is what you use when you're running locally, and then we have Cosmos DB, which is an Azure database that's compatible with MongoDB and Cassandra and a few other APIs; I'll show you what that is in a bit. And then we add Jaeger. This is our OpenTracing implementation, and this dependency is all I had to add to put OpenTracing into my application. There's a little bit of configuration you have to do: you have to open some ports and tell it which port to talk to. It talks via UDP instead of TCP, because UDP is a little more performant for this kind of work. Everything's on the local machine, so it's assumed you're not going to lose anything to network traffic; UDP was acceptable for this implementation, and that's what they use. And then there's MicroProfile OpenTracing as well. So we've got the Thorntail Jaeger fraction, and then we've got MicroProfile OpenTracing, which basically shoots off all of our tracing information for Jaeger to pick up. So what does this actually look like when it's running? Let me show you this first. This is just an application that was built with the help of some guys at Red Hat. To run it, I just say mvn clean thorntail:run, and it's going to take a minute. And I want to show you a couple of things here — and this is one of the reasons we wanted you to sit at the back — oh, what happened? Wait, wait, wait. Oh, you know why? Maybe I'm using that port somewhere. Hope not. Maybe here? No, try it again. Let's try the same thing again. Nope.
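The POM pieces I just walked through look roughly like this — the version property is illustrative, but these are the Thorntail 2.x coordinates for the BOM, the Maven plugin, and the Jaeger and OpenTracing fractions:

```xml
<dependencyManagement>
  <dependencies>
    <!-- Bill of materials: manages versions for all Thorntail fractions -->
    <dependency>
      <groupId>io.thorntail</groupId>
      <artifactId>bom-all</artifactId>
      <version>${version.thorntail}</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- Jaeger fraction: ships the tracer and reads its port/host config -->
  <dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>jaeger</artifactId>
  </dependency>
  <!-- MicroProfile OpenTracing fraction: traces JAX-RS requests automatically -->
  <dependency>
    <groupId>io.thorntail</groupId>
    <artifactId>microprofile-opentracing</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- Packages the app as a runnable Thorntail jar; enables thorntail:run -->
    <plugin>
      <groupId>io.thorntail</groupId>
      <artifactId>thorntail-maven-plugin</artifactId>
      <version>${version.thorntail}</version>
      <executions>
        <execution>
          <goals><goal>package</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

Adding or removing a fraction dependency is all it takes to turn a capability like tracing on or off — that's the "big POM, less code" trade Thorntail makes.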
All right, where am I using this? I'm not using it here. Oh, maybe. Now, this can't be. Let me close this and see what happens. OK, where is it? Local Jaeger, cd target — demo.war. Why is the port in use? Hold on, let me check something here. This is live code, folks. OK, nope — Java, there it is, in the task list. OK, fingers crossed. So it was a process: I was testing this earlier today and probably didn't shut it down properly, so it was still running on my local machine. So, a few things I want to show you as this runs. It takes a while the first time: it's going to build the WAR file and resolve a bunch of artifacts. This is always the scary part of live coding for me, because it's using Maven and pulling down a whole bunch of packages, and I don't know if something's changed. I did try it a few hours ago, so it should be OK, but sometimes a few packages change and I have to change some code. Generally not, though. We'll see. Once it gets past this part, we get to the interesting bit. Because this was written by Red Hat, with Red Hat technology, you see where it says "installed fraction" here: these are all the things you saw in the pom.xml, being added automatically, including JPA, OpenTracing, Jaeger, and so on. Then you can see it's starting up WildFly — it's actually using JBoss WildFly, and WildFly Swarm to run the application. WildFly Swarm is the original name for their MicroProfile implementation, which is now called Thorntail; that's why it says WildFly Swarm there. They play funny games with their branding: their open source JBoss server is now called WildFly, and WildFly Swarm, their MicroProfile implementation, is now called Thorntail.
So that's why you see references to WildFly Swarm, and references to JBoss, in this output. Basically, we've got this ready now, and if I want to run it locally, I can. But I want to do one other thing first. Let's fire up — hopefully I didn't lose everything here. No, good. The other thing I want to do, and this just takes a second, is run Docker and fire up our tracing backend. If you saw in the code over here, down around the bottom, it tells you it's implemented a Jaeger interface, and it's also implemented OpenTracing. OpenTracing is what creates the traces that get sent; the Jaeger configuration tells us which port to send them out on and so forth. Then all we have to do is fire up a Jaeger instance on the same machine, and it will start reading the UDP packets produced by this application. So let's start a new command window. There we go. So this is going to run — oh, okay, so maybe this just works now. Hold on. I'm fun today. Let's see here. Yeah, Jaeger's already running. I tried it a few hours ago and I guess I didn't shut that down properly either. But it's running now, and as you see here, it's got jaeger-query as a service it's tracking. Jaeger-query is the standard, basic service you get with OpenTracing slash Jaeger when you don't set up any environment variables; it handles all of the Jaeger queries at the moment. So let's go in and look at what the application does. How many people are familiar with this? Okay, cool. This is a game of Minesweeper, and locally it uses an H2 database. It sets a score here, and every time it writes that score, it's actually updating Jaeger. So let's try — I'll do three games. I always do well when I'm in front of a crowd.
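For reference, that Jaeger all-in-one container is usually started with a docker run along these lines — these are the Jaeger defaults, with the UDP agent ports plus the query UI:

```shell
# Jaeger all-in-one: agent (UDP), collector, and query UI in one container.
# The UI comes up on http://localhost:16686; the app sends spans over UDP.
docker run -d --name jaeger \
  -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp \
  -p 5778:5778 -p 16686:16686 -p 14268:14268 \
  jaegertracing/all-in-one:latest
```

Because the agent listens on UDP on localhost, the Thorntail app needs no code changes to report to it — only the port configuration mentioned earlier.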
And I heard a story that Bill Gates was addicted to this game for a while. Anyway, I don't know why. Okay, so now it's there. So if I refresh Jaeger over here and run a query, you're going to find some traces. The traces themselves are here — see these little dots? If I click on one of these little dots, this is the trace. And at this point it's a trace with a single span. But normally, if I had more than one microservice involved, you would see multiple spans here in a little cascading drop-down. Now, graphically, what this thing does — and this is just the default, very basic stuff; you can also buy commercial tracing products from companies like LightStep and some others — in this scatter plot, the higher up a dot sits, the longer that trace took, and the bigger the circle, the more spans the trace contains. So the whole idea is you want everything to be down here in this area; if things start climbing up here, your traces are getting slower. That's the idea behind this. And as you can see, everything's running on my local machine now, and it's just a really basic implementation, but I didn't have to set much up: I just added those dependencies in my pom.xml and it runs everything for me. So, because this is Docker, we're going to move into some more interesting things with Azure now. We're going to Dockerize the whole application and then deploy it out to the cloud. I'm going to use Azure, because I work for Microsoft, but because it's all Docker, this could run on any cloud platform you wanted, although I do coincidentally recommend Azure for this.
And I'll show you why in a little bit; there are good reasons. So let's see here. One other thing I did want to show you: if we go into the Thorntail docs, there are a few ways besides the pom.xml to set up your Jaeger environment variables and settings. You can use environment variables on the local machine, you can set up the pom.xml as you see here, or you can write it directly into your code. That's all in the Thorntail docs, and you can get more details on how to set it up there. All right. There we go. So the next thing I want to do is set something up on Azure, and for that I'm going to use a resource group. I'm doing this from scratch: there's nothing on the cloud right now, nothing pre-baked. I'm going to set this up. First thing — as I mentioned before, this is Visual Studio Code, and if I hit Ctrl-backtick I get a terminal here that I can use. This is a bash shell terminal. How many people here work with Windows as their desktop? A few? What about Macs? Okay. If you work on a Mac, you can still use the script I put together; you can find it on GitHub, it's called bbenz/microsweeper-demo. What I've done is clone it to my local machine and set up some environment variables, so it runs with Cosmos DB when you put it on the cloud; I'll show you how to do that in a bit. So I fire up this window here, and this is a command window that sends commands down to here. Now, one of the interesting things I can do with this — the reason I asked about Windows or Mac — is that the script I'm running is written as a bash shell script, so I can run all of it in bash.
It runs on Windows, Mac, or Linux, and we have something called the Azure CLI, which also runs on Windows, Mac, or Linux, so you can do all of this on any platform you want. In my case, I'm using Windows and writing to a bash shell. There are some interesting options here: I can select my default shell, and I can use Command Prompt, PowerShell, WSL bash, or Git bash. In this case, I'm going to use WSL bash. WSL, for those of you not using Windows — or even if you are, you may not have this — is the Windows Subsystem for Linux: a full implementation of Ubuntu Linux running inside your Windows machine, and that's how I can make bash commands work properly. You could also use Git bash. To do that, I just set up the shell here; this right here is the shell. I have another interesting option here, which is to open bash in Cloud Shell. Cloud Shell is something kind of cool: if I go over here to shell.azure.com, this is a shell in the cloud that you can use, so I could actually run this script in the cloud as well. Let me talk a little about what the script is going to do. It's going to create some environment variables, then set up what we call a resource group, then set up a container registry, which is like Docker Hub — I'll explain what that is in a bit. Then it's going to create an instance of this thing we call Azure Container Instances and use the registry to deploy the containers out to Azure. And I'm going to set all of that up through the command line, which, like I say, I could do on Windows, Mac, or Linux. In this case, I'm in a browser, and I just opened a window here which is running in the cloud, so I can access this from anywhere.
And if I type az, I get the Azure CLI, and there are all kinds of commands here, basically for telling Azure to create things, edit things, destroy things, all kinds of stuff. It's a powerful tool. Also, somebody was mentioning Cloud Foundry: the Cloud Foundry CLI is built in here, I didn't have to install it. Terraform is here too. There's a whole bunch of tooling built in, so if you're already invested in bash scripts, you can run them on our cloud. Oh, this is a cool one. Visual Studio Code, I mentioned, is open source; it's on my local machine, and I've downloaded a bunch of extensions for it. There's a basic version of Visual Studio Code in the Cloud Shell if I just type code — I'm still in the Cloud Shell, and this is a version of the editor. Here's a markdown file; does it read it? Here's a Terraform file, and it reads Terraform. That's kind of cool. If I go to Java, I believe it does Java now; let's find a Java file: source, main, java. My point here is that it handles a lot of different programming languages straight out of the box. So if you have access to a browser — that dodgy hotel lobby browser, or one of these little game places I saw here where people play games — you can use the cloud. If you get a call, you're like: yes, I'm at work, I'll get into my Cloud Shell now and do what you need me to do. Also, there's an app — the Azure app on your phone — that has access to the Cloud Shell as well. You can manage things, and if you have a script, you can run it. I actually used that on vacation last summer, when, for the first time in years, I didn't take my laptop with me.
So I was far away from home, and I got an email saying that some of the Kubernetes implementations I have were unsafe: a bug had been discovered. So I was able to go into Cloud Shell on my phone and disable them until I got back from vacation. That was kind of cool. But anyway, that's enough about Cloud Shell; I'm not going to use it today, but I could. And one of the cool things is that from Visual Studio Code I can send commands to the Cloud Shell, through that Cloud Shell window I just opened. In this case, though, I'm just going to use the Windows Subsystem for Linux. So let's do this. I'm going to set up a bunch of environment variables first, and those will get used later. The next thing: every time you do something in Azure, it goes in a resource group. You create one or reuse one you already have; in this case, I'm going to create a totally new one. So this is what it looks like when I'm running things in the cloud: az group create. This one takes a couple of seconds, and then I'll explain what it's doing. Whenever it's done, it sends back a response in JSON that you can use in applications or scripts later; you can grab values out of it, and I'll show some examples of that later. For now, it just says that I created a new resource group called 19122singapore, which I'm going to use for this. Now, the next one takes a while, so I'm going to set it off and then tell you what it's doing afterwards. Basically, inside that resource group, it's going to create this thing called a container registry. How many people are familiar with Docker Hub here? A few of you? Okay, cool. So Docker Hub is like... let me see here. Where's my... I have too many windows open.
That's a problem. Let's see. I'll open another one. So, Docker Hub: hub.docker.com. If I search for Jaeger, these are the Jaeger images. Docker Hub is a place to store container images. Any time you're running a container locally, instead of creating one from scratch, most people grab a base image from Docker Hub and build other things into it; sometimes they just use it as is. In the case of Jaeger, you can do that: there's a Jaeger all-in-one image, and that's the one I'm using from Docker Hub. Oh, here it is — actually, I can get it from my code. And let me check back in with the command prompt. Yeah, it's taking a while. So what this thing does... oh, actually, no. It created the registry quickly: az acr create is the one that doesn't take very long. az acr build, that's the one that takes a while, so let me fire that one up. Okay. So I created the registry, which is similar to Docker Hub, but it's a private registry. Let me go back into my code here. If I look up here, the image name is jaegertracing/all-in-one. So let's go back to Docker Hub and search for jaegertracing/all-in-one — there it is. It has a bunch of images. That image is what I used to run Jaeger on my local machine before: I used the Docker command to go grab it, docker run, with all the ports that it uses. So I tell Docker to run with these ports, which are the standard ones — basically the OpenTracing/Jaeger defaults. And this Jaeger implementation says: anything that comes in via UDP on these ports, I want to track it and I want to trace it.
It stores everything as well — the all-in-one image keeps its traces in in-memory storage. So that's what's happening when I run that docker run, and the image comes from Docker Hub. The Azure Container Registry is something similar, but it's a private registry. You can create your own private registry using the open source Docker registry code, or you can use our service. To use our service, you just run this command here, az acr create: that creates an Azure Container Registry, you name it, and there are different SKUs. We charge a service fee to host it, keep it updated, and keep it secure, so no one else can access your builds. The idea here is that I've got Cosmos DB and a few other things with connection strings in my application, and I don't want to expose those to the world. Even if they're secure and hidden away, if you could run my application, you'd get access to my Cosmos DB, and I might get a bill for that. You don't want that to happen; you want your own private registry, and most companies building things do. Now, az acr build is what's actually running right now. az acr build says: go get some code and build it in my container registry, because I have a private registry. It's like the docker build command, and it's the same thing Docker Hub does with automated builds: it builds things not on your local machine, but in a container that gets spun up on the registry itself. So that's kind of cool: it's not using any of my resources, it's using the Azure Container Registry. So it's gone out, used a Dockerfile — here's the Dockerfile itself — and found the files it needs to build this application. It runs mvn clean package, and the POM tells it which files to pull in.
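Spelled out, the cloud-side steps so far look something like this — the group name, registry name, location, and SKU are all illustrative:

```shell
# Create the resource group everything will live in
az group create --name 19122singapore --location southeastasia

# Create a private container registry (Basic is the cheapest SKU tier)
az acr create --resource-group 19122singapore --name microsweeperacr \
  --sku Basic --admin-enabled true

# Build the image on the registry's side from the local Dockerfile --
# the cloud equivalent of docker build, using none of your local resources
az acr build --registry microsweeperacr --image microsweeper:latest .
```

Each command returns JSON, so in a script you can pull values out with the CLI's `--query` option instead of parsing the output by hand.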
And it's doing a two-pass build, because this is Docker. The two-pass (multi-stage) build is kind of handy: the first stage uses all the resources of a full JDK 8 image, I believe from Oracle, and the second uses OpenJDK 8 Alpine, which is a much more lightweight version of Java. So it does the build with the full Docker Java image, and then any deployment of that code uses the Alpine image, which is much smaller and takes fewer resources — because you don't need as much to run the application as you do to build it. So, did this actually finish? Looks like it might have. Let's see. Yep, okay. It took two minutes to run, and I successfully talked about that stuff for two minutes, so that's good; I didn't have to spend time showing you too much of the build output. It's basically the same code as when I showed it running locally with JBoss and WildFly and everything; it just builds it rather than runs it. So now we've got an application image sitting in our Azure Container Registry, ready to be deployed somewhere. It could go to Kubernetes, OpenShift, Cloud Foundry — anything that takes a Docker image. In this case, I'm going to use something called Azure Container Instances, and let me show you what that is. Oh, first, before I do that, I've got to get the password for the container registry itself, so I go and grab that. Yep, it ran that behind the scenes, so you don't actually see it; it sets it in an environment variable. So the next thing: I could create a single instance. To deploy this somewhere, I've got a Docker image sitting there — the Microsweeper application and the Jaeger OpenTracing backend — ready to be deployed.
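The two-pass build just described looks roughly like this as a Dockerfile — the image tags and artifact name are illustrative, not copied from the demo repo:

```dockerfile
# Stage 1: full JDK + Maven image, used only to compile and package the app
FROM maven:3-jdk-8 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn clean package

# Stage 2: slim Alpine JRE image that only runs the packaged app;
# nothing from the build stage ships except the runnable jar
FROM openjdk:8-jre-alpine
COPY --from=build /app/target/demo-thorntail.jar /app.jar
EXPOSE 8080
CMD ["java", "-jar", "/app.jar"]
```

Only the final stage becomes the deployed image, which is why the runtime footprint stays small even though the build needed the whole JDK and Maven.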
To actually deploy this somewhere, you have to create a YAML file. For those of you who aren't familiar: if you do this with Kubernetes, or pretty much anything else, you have to create a YAML file; there's no getting around it. There are these cool things called Helm charts, which help you build and/or leverage YAML files. Microsoft bought a company called Deis a few years ago, and they're building these Helm charts; they have two products, Helm and Draft, with some cool stuff for deploying this. But let me do this first. What I'm going to do is build a YAML file. I've got a base YAML file already, and I'm going to copy it, that's what this does down here, and then do a bunch of editing of that YAML file with some of the things we just created. And I do not want you to have to sit through me trying to type all that, so we put this together to do it. So then I can cat that file. Actually, I'll just look at it here. So, deploy ACI... yeah, File, Open File... where did I put this? It's in GitHub... Documents... sorry, you've got to look at my messy machine here... GitHub, and then it's going to be in the microsweeper application. I have too many of these running. There it is: the microsweeper demo with the local Jaeger. So it created a file here. Or it should have. All right, whatever, I'll show it to you here; it's going to be easier at this point. Sorry. So this is the actual file that got created; when I do a cat, it shows it. Basically it's got a Jaeger service and a Jaeger agent host, this is the Jaeger all-in-one agent that I showed you on Docker Hub, plus a bunch of other information for the ports you want to open and a few other things. Now, what's interesting here? We have this thing called Azure Container Instances, which is sort of like a Kubernetes pod by itself.
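The generated file would have looked something like the sketch below. This is a hedged reconstruction of an ACI container-group YAML, not the demo's actual file; the registry name, image tags, ports, and resource sizes are all placeholders:

```yaml
# Sketch: an Azure Container Instances container group with two containers,
# the app plus the Jaeger all-in-one agent, sharing one public IP.
apiVersion: 2018-10-01
name: microsweeper-demo
location: southeastasia
properties:
  containers:
    - name: microsweeper
      properties:
        image: myregistry.azurecr.io/microsweeper:latest   # placeholder registry
        ports:
          - port: 8080
        resources:
          requests:
            cpu: 1
            memoryInGB: 1.5
    - name: jaeger
      properties:
        image: jaegertracing/all-in-one:latest
        ports:
          - port: 16686          # Jaeger query UI
        resources:
          requests:
            cpu: 1
            memoryInGB: 1.5
  osType: Linux
  ipAddress:
    type: Public
    ports:
      - protocol: tcp
        port: 8080
      - protocol: tcp
        port: 16686
type: Microsoft.ContainerInstance/containerGroups
```

A file like this is what gets handed to `az container create --resource-group <rg> --file deploy-aci.yaml` in the next step.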
So the Azure Container Instance itself is a single container group, basically, and it allows you to put more than one Docker application, more than one container, in that group. It's basically a Kubernetes pod, but it's not scalable, so it's just an easy and quick way to test out your Dockerfiles and your deployment YAMLs and all that stuff. So I'm going to do that right now. This file I created could actually be used for Kubernetes with a few small changes, things like setting up load balancers and front ends and a few other pieces, but basically this is the base you need to make it run. So we've got that, and then we're going to run `az container create`, and this is going to be the ACI container that we actually want to use. That should run down here. There we go. That's going to take a couple of minutes. Basically what it's going to do is say: go into the ACR instance I just created and deploy the Jaeger instance and the Microsweeper instance, following the instructions in the YAML file. So we've got the Dockerfile we used to create the containers, and we've got the YAML file that gives the instructions for deploying this out to our Azure container instance. And when that's all done, we'll actually have something running on the cloud, the same application we ran locally a couple of minutes ago. One thing I want to show you while that's running... oh, hey, it worked. Is it working? No, I'm not surprised it worked; I'm surprised it worked so quickly. All right, I was going to show you something else while that was running, but let's do this then. So now we can actually test out what's happening here. Let me show you this. There we go. This is actually going to give us a log; it shows us the actual... let's just make this a little bigger. So it created one container.
It created two containers. Both of them are running, everything's good. So then let's go down here... let's not delete that... and look at the log files, and make sure they're up and running. Yep, looks like Thorntail is ready. All right. And this is checking the status of Jaeger; looks like Jaeger is up and running too. OK. So the next thing we want to do is actually access these. To access them, we have to get some IP addresses. They're in one container group right now, but you can access them individually through IP addresses. So the first thing we do is use this command here, container instance IP: we're setting an environment variable, which basically goes into the resource group and runs some bash grep that grabs the container instance's `.fqdn`, which is the one for the Microsweeper application. So let's go and run that. OK, that sets an environment variable, and if this isn't clear yet, it will be in a second. Then we can curl those applications and make sure everything's working. We didn't get an error, good. Then we echo that address with the 8080 port. There it is. So that's... I believe this is the Microsweeper application; let's just see. There we go, we got that. All right, the next thing, I can go up here. This is the actual Jaeger implementation. You can see the IP address up here; this is actually a container now, at 104.45.189.237, just created. Now, a couple of other things we can do here. Let's play a couple of games... oh, that was quick. All right, play a couple of games. And this is actually writing to Cosmos DB, so when it tracks this this time, it's doing something slightly different, with Cosmos DB. Now let's go into our Jaeger and refresh this. Now, instead of just jaeger-query, we have two things.
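The grep step being described can be sketched like this. Note this pipes a canned sample of `az container show`-style JSON instead of calling the real CLI (so it runs anywhere), and the resource names are made up:

```shell
# Simulated output of something like:
#   az container show --resource-group myRG --name microsweeper-demo
SAMPLE='{"ipAddress": {"fqdn": "microsweeper-demo.southeastasia.azurecontainer.io", "ip": "20.195.0.1"}}'

# Grab the .fqdn field with grep/cut, as the demo script does behind the scenes
FQDN=$(echo "$SAMPLE" | grep -o '"fqdn": "[^"]*"' | cut -d'"' -f4)

# Then you can hit the app on port 8080, e.g. curl "http://$FQDN:8080"
echo "http://$FQDN:8080"
```

The real script sets the same kind of environment variable from live `az` output, which is why nothing appears on screen when it runs.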
We can do jaeger-query, which is the base, but I've also set some environment variables in that container called microsweeper, so we can actually track the traces for Microsweeper this time. And in this case there are several traces, and once again the performance is pretty good, except for this guy. The circle is a good size; if it were bigger, that would mean the trace, the microservice itself, is too slow. If it's higher up here, like this one, that means the overall process is too slow as well. So this one probably needs some work. Normally you'd see the microsweeper span here; in this case it's just like that. So that's deployed out to ACI, and it's running both things on one container group, which is kind of cool. So what can you do with this? Now that we've got the container running and working, we could deploy this to Kubernetes. We have an Azure Kubernetes Service, or you could set up your own VM and install open-source Kubernetes from scratch, or use some third-party tools to do that. Pivotal just announced that Pivotal Cloud Foundry's Kubernetes service runs on Azure now, so you could set that up too. But the demo I'm going to show you today is actually OpenShift. OpenShift is a partner product: it's a Kubernetes implementation built by Red Hat, with Thorntail functionality built into it, and it's easy to use with Jaeger as well, because OpenTracing is part of the MicroProfile standard. So let's go ahead and see what that looks like. Here's the OpenShift instance I've already created, and let me show you how we actually did this. Yeah, so last September some guys from Red Hat and I got together and put this implementation together. And like I say, we have our own Azure Kubernetes Service you can use, or you can create an OpenShift service on top of an Azure virtual machine.
Azure runs virtual machines, Windows and Linux; we have all the flavors of Linux you can imagine. So if I go into my Azure portal, by the way, and say New, and go to Compute, let me make this bigger: we have Red Hat Enterprise Linux, we have Ubuntu Server, we have things like Service Fabric and Web App for Containers, et cetera, et cetera. I can create an Ubuntu server really easily running on top of Azure. We also have SUSE, all the major flavors of Linux. And basically I just answer some questions here and it creates a virtual machine for me based on that. The virtual machine sits on Azure and you can use it just like any other Linux box; you do have to manage the operating system yourself. We have multiple versions, of course. This is Ubuntu Server 18.04, but you can use all kinds of versions, and of course we have Windows Server too. And basically what we did when we worked with Red Hat, let me see if I can make this bigger, yeah: we created the virtual machine. First we built the app, we did exactly what I just showed you, and we used container instances and Kubernetes to test it out and make sure everything works. And then we just set up an implementation of OpenShift to run this as well, because if it runs on Kubernetes, it will run on OpenShift. The OpenShift implementation is very similar to Kubernetes, with some added things: they have a built-in Jenkins server and some other pieces for managing code, dependencies, CI/CD, and so on, including MicroProfile. So we built that. We used something called OpenShift Origin as the main image, which you can find on GitHub; that's the source, and it's open source. That's the demo. Let's see, I want to show you this one part of it. The source code is here, so you can see it; I've actually cloned the source code and added some things to it for myself.
Whoops, what happened there? Yeah, so that's James, the guy I was working with at Red Hat; that's his version of it. So the next thing that's kind of interesting: if you want to deploy OpenShift, we have a button here, a Deploy to Azure button that we created for this. Basically it uses an ARM template and something called OKD, the OpenShift Kubernetes distribution. With those working together, when I click that button it goes into my portal again and gives me a bunch of options: my subscription; the resource group, I mentioned you have to create one all the time; the location, and we have multiple locations you can use for this. You put in an admin username and password, SSH data, you have to have an SSH key to actually run this, and then a VM size; we have dozens if not hundreds of VM sizes these days. And when you say Purchase, it actually goes ahead and builds a VM, or in this case a cluster of VMs, and deploys OpenShift onto it using OKD. So we did that, and that's what you're looking at over here. This is the OpenShift deployment we used that process for. I literally hit the button, it takes about 35 minutes, it creates a bunch of VMs, creates Kubernetes clusters on top of those VMs, and puts the code out there; when you're done, you get something like this, except you don't have Microsweeper built into it yet. Actually, they might have added that since I last looked, but when we did it, we had to implement that whole thing separately. One thing I want to show you here as well, kind of an interesting thing on the front page of OpenShift Origin: this is an Open Service Broker implementation. In this case we wanted to connect to Cosmos DB using the MongoDB API, and to do that we used Open Service Broker implementations.
That's basically what you're looking at here. It's something that Microsoft and Red Hat and some other companies have been working on, and it's an easy way of literally just clicking through one, two, three, four to set up your Open Service Broker connection to Cosmos DB. I didn't have to write any code for that; it generated the code behind the scenes for me, and then I could use that code to access the Cosmos DB server. Now, Cosmos DB itself, I'd better talk about that a little. Cosmos DB is a globally distributed database. What does that mean? It's actually multi-model as well. Let's see. When you create a Cosmos DB instance, it creates it in three different regions around the world by default, and you don't actually have control over which ones, though you can pay for additional regions you want to put it in. We have 54 regions in 140 countries. I still get people once in a while who ask me, aren't you guys just renting space from AWS like everyone else? No. We've spent literally billions of dollars on these multi-football-field-sized data centers, huge things, all over the world. As you can see, the available regions are the blue circles, the announced regions are the little dots here, and availability zones are something you can use for redundancy within a region. So, let's do a little test: can Brian find Singapore on this map? Australia... we have Southeast Asia; I think that one's in Malaysia right now, I don't know. But yeah, we also have several other locations. As you can see, 54 regions, and a region is actually three data centers, each with redundant capability across them. We also have these things called paired regions, where, for example, North Europe can be paired with France Central or West Europe.
And those give you fast updates between the different regions, for disaster recovery, for redundancy, all kinds of things like that. But anyway, so we created a Cosmos DB instance and it automatically created three locations for it, and like I say, you can pay for more of those if you want. Then I want to talk about the multi-model part. Multi-model means we have app models for several different things. The bottom line is you can use this to access Cassandra, MongoDB... if you're using MongoDB, Cassandra, SQL, Gremlin, or a few other platforms out there, you just change the connection string in your application and you can use this database on the back end automatically. So that's what we've done, and we've implemented this all over the world. It was started... for those of you who know OneNote, the Microsoft note-taking product that's part of Office: this database started as the back end for OneNote many years ago. And they said, wow, this is a really solid document store, it handles structured, unstructured, and semi-structured data, why don't we use it for other things? So the first thing they did was connect it with MongoDB, so you can use it for the JSON documents that MongoDB handles, and then we added Gremlin, Cassandra, SQL, and a few other things as well. So it's kind of a cool multi-model, multi-location database you can use. Anyway, back to the application I wanted to tell you about, and how it actually works and runs. So we created a Cosmos DB implementation for it, and we go into our Microsweeper implementation. Microsweeper itself, here are the links, and then we've got Jenkins, which basically handles our CI/CD for this. And if I go into the application deployments, we've set some environment variables for this as well.
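As an illustration of the "just change the connection string" point: a MongoDB application pointed at Cosmos DB only needs its URI swapped. The account name, key, and database below are placeholders, and the exact host format has varied over time, so treat this as a sketch rather than a copy-paste value:

```
# Self-hosted MongoDB (illustrative):
mongodb://user:pass@my-mongo-host:27017/scoresdb

# Same app against Cosmos DB's MongoDB API (placeholder account and key):
mongodb://myaccount:<primary-key>@myaccount.documents.azure.com:10255/scoresdb?ssl=true&replicaSet=globaldb
```

Nothing else in the application changes; the driver speaks the MongoDB wire protocol to Cosmos DB.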
So if I go in here, Microsweeper... whoops... Environment, there we go. This is where you can actually see the environment variables. I mentioned way back, you might remember, that you can set all these up in pom.xml, in code, or using environment variables. In this case, as environment variables, we set the Jaeger service name, log spans (so you actually log the spans), the sampler parameters, the name of the Jaeger agent host, and the port you want to use by default. And then we've got a configuration that we set up through the Open Service Broker to Cosmos DB, and we called the database Scores DB. Now, the cool thing about this is we didn't have to set that up by hand; it was all configured automatically inside OpenShift just by clicking through those four steps to create a Cosmos DB database. Then we just tell it which database we want for the actual application. So it's easy to set up, and the nice thing is it's all in the environment. This is actually the deployment, not the application itself, so I could set this up as multiple deployments and change the Jaeger service name or something else on each one. Each deployment gets managed separately; here's the latest, and basically it deploys things to a pod, and you can track and see the different stages along the line for that pod and the events that happen each time you deploy. There are no events, that's good. But it's kind of cool because it allows you to do that, it uses Jenkins to actually do the deployment itself, and you can manage your Jenkins setup as well. So anyway, enough about OpenShift. Basically all I have to do now is go back into my overview here and click here, and once again I've got my... oops, I go here because we didn't use a public SSH key... there we go.
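The variables being described map onto the standard Jaeger client environment variables. A sketch of what that section of the deployment config might look like; the values here are illustrative, not the demo's actual settings, and the database variable name is a placeholder:

```yaml
# Jaeger client configuration via environment variables
# (standard names understood by the Jaeger Java client)
env:
  - name: JAEGER_SERVICE_NAME
    value: microsweeper
  - name: JAEGER_REPORTER_LOG_SPANS
    value: "true"
  - name: JAEGER_SAMPLER_TYPE
    value: const
  - name: JAEGER_SAMPLER_PARAM
    value: "1"            # with a const sampler, 1 means sample every trace
  - name: JAEGER_AGENT_HOST
    value: jaeger-agent
  - name: JAEGER_AGENT_PORT
    value: "6831"         # default compact-thrift UDP port
  # Database name handed to the app, wired up by the Open Service Broker
  - name: MONGODB_DATABASE
    value: scoresdb
```

Because these live on the deployment rather than in the code, you can run several deployments of the same image with different service names, exactly as described above.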
So this is our Jaeger, so I can see the Jaeger there, and Microsweeper... here's the application, it's not failed, there you go, good. So I can see here that it's actually writing to Cosmos DB now, instead of the H2 database it was using locally, and if I change this, you can see that... I'll do one more. That was an easy one. So if I go back into my Jaeger search here, search for microsweeper, Find Traces: there's a bunch of traces. Let's have a look at this one; it doesn't seem to be very good. But the idea here is I could set all this up without doing anything extra: I set up a VM, then I set up OpenShift, then I deployed that Dockerized, YAML-ized file for deploying out to any Kubernetes platform, and in this case I set up Cosmos DB through that click interface, and basically I was done. One last thing I want to show you, if you want to use this on Kubernetes. I mentioned Helm charts before; this is very similar to Docker Hub, but in this case it's for Kubernetes. Anything you can imagine that you want to use for Kubernetes... let's see... Jaeger again, I did it again... so there's a Jaeger operator, and this Jaeger operator uses Helm. I mentioned Helm before: we have Draft, which allows you to create these YAML files, and then we have Helm, which allows you to run the YAML files really easily and deploy this out to Kubernetes. This is an actual Jaeger operator; instead of deploying it to Docker, this would deploy it in a multi-stage, multi-pod, scalable Kubernetes environment. And you can actually see the chart if you look in here... where's that chart... I can't remember now... Jaeger operator, there it is. So the chart itself is here too. Anyway, you can find the chart there, and it's a Helm chart that runs some scripts, a really complicated YAML file, that allows you to deploy this out to any Kubernetes cluster on any cloud platform. So that's basically the whole idea here. We started with an application running on my local machine using MicroProfile,
and then we added OpenTracing, we packaged all that up in Docker, and we deployed it out to an Azure container instance, which could actually run in Kubernetes as well. Then we took that and put it into OpenShift and ran the same thing on an OpenShift Origin installation. All of this is open source, it's all on GitHub, and that blog I showed you by James, my colleague at Red Hat, plus the GitHub repo I showed you with the actual microsweeper code, is pretty much all you need to try this out on your own. There will be a recording of this as well, so you can watch and follow along with what I did, and the script I was using to cut and paste things into the container instances is on the GitHub repo too. So that's it, and we'll take some questions. Any questions at all? All right, thanks for coming, guys. I'll be around afterwards if you have any questions.