My name is Radek Vokál. I'm one of the organizers here, and I'm glad that we have a full house — and not only in this room but, as I've heard, in every other room that we have in this building. So it's going to be a great event. I'm going to do a very quick introduction to this conference and some quick housekeeping, so you know what to expect here. First thing: this is DevConf a couple of years ago. I was actually looking yesterday at how long we've been doing DevConf. This is the ninth conference that we're organizing, but since we skipped one year, this is the tenth year of organizing DevConf. So it's sort of an anniversary this time. Thanks a lot — DevConf is getting bigger and greater. First things first: for people who have been here a couple of times already, you probably know this, but for those who are here for the first time, DevConf is organized for developers and by developers. It's a volunteer conference, not organized in a professional way; we have a lot of volunteers helping us out. So what I want to ask of you is: be open-minded, and help out where you can. Most of the speakers — or all of them — are not professional speakers. Some of them are speaking here for the first time in front of such a huge audience. So be patient, be open-minded, and respect that they'll probably be a little nervous; again, they'll be presenting for the first time. So what do we have for you? There will be three keynotes, and one keynote starts right after I'm done here. These keynotes are a little different — we didn't want to do an ordinary keynote, so these will be more like joint sessions of several engineers, several developers, telling you some interesting stories. I hope you will like that. And on top of that, we have more than 200 sessions. It was very hard this year to pick those 200 sessions because we had over 500 submissions.
So you can imagine it was a lot of work to come down to the schedule we created for you. We intentionally picked some new topics and new speakers, so you won't see the same old — you will see some new people, and hopefully you will like it. It's going to be a little different than last year; or that's the intention. For sure it's going to be different than last year. One thing — and you can probably see it already here — the rooms are getting full quite quickly. So please do respect the "full" sign. We have restrictions on the capacity of these rooms, so if a room is full, try to find another one; in the worst case, the sessions are always streamed. Yeah, hopefully people can hear me in the other rooms again. So respect the full sign, please. Food options: in the lobby here there's a huge whiteboard with all the food options. The food trucks are sort of hidden over there, so there are a couple of food options for you already, and if you ask any of the volunteers, there are a couple of good restaurants around here where you can always find something really good. Now let's go through ten things to do when there's no Wi-Fi. Seriously: Wi-Fi is working here in every room. It's actually right there, written in red — there's information on how to log into the university Wi-Fi. But there are some restrictions: no torrents, no porn. If you start downloading something like that, you will cut off everyone else in this room, so be careful about it. And don't run any mobile hotspots either — it messes up the local Wi-Fi setup, so just don't. You've probably heard about the party tonight, right? Not tonight, sorry — tomorrow night. The party is unfortunately limited: there are only 700 places available. At the registration desk you can still get a button that will be the entrance ticket to the party. And I would actually encourage you, if you can't get to the party: look around, grab some people, and do your own.
There are going to be a couple more events for sure happening that night. And I guess registration is still going on for a sightseeing tour tonight. So if you want to get to know the town a bit, if you have never been here before, and you're not afraid of the cold we have outside today, you can sign up for the sightseeing tour — it starts right after the last session. Of course, there's the Twitter hashtag, #DevConfCZ, and there's a Telegram room; there are links on the whiteboards in front of this room and the other rooms. If you need anything, if you want to talk to the organizers, if you want to help with something, follow those. Also, any changes to the schedule — if we drop some sessions or move some sessions — will be announced on both Twitter and Telegram. And if you look at YouTube, DevConfCZ is our official channel, and hopefully most of the sessions will be streamed almost live. Almost. So, one important thing: I've already mentioned that this conference is organized by a large group of volunteers. All the people in these nice black t-shirts with the volunteer sign on them are here to help you — if you need any advice, they'll give it to you. I would like to especially thank the University of Technology here, because without their great help we wouldn't have this nice venue. Please make sure that we leave this venue as we got it: don't leave any trash behind, don't break anything — be very careful with this nice venue that we got from the university. And as you might have realized from the schedule, DevConfCZ is becoming kind of an umbrella conference for other conferences: the folks from OpenShift and the folks from JBoss are pretty much organizing their own mini-conferences during this event. I would like to thank them as well, because they've been a huge help in organizing this whole thing.
And then the last thing: MojeFedora.cz, our official Fedora portal — all the sessions and probably a couple of interesting articles will appear there after DevConfCZ. So follow that, and look there for any sessions you miss here. And with that, thanks a lot. I'm going to hand it over to my friend, boss, and our VP of Engineering, Tim Burke. — Thank you, Radek. Dobrý den, Brno, welcome. How many people here saw the keynote I gave two years ago? We talked about DevOps and hybrid cloud — a lot of people. So I'm not giving the same talk. In fact, they told me I could only talk for 10 minutes, so I don't know what we learned from that. The main purpose of this talk is to refresh a little bit of the material we talked about two years ago, but then the real question is: is this stuff real? Two years ago I laid out this vision of a beautiful environment where you can dynamically provision hardware and where the DevOps world, the software developers, can seamlessly deploy on that infrastructure. It's a great story, but the whole point here is to see if it's real. And to do that, I've invited a lot of the crew here from Brno, and they're going to do a series of segments where we talk about these different things. Oh, I guess I need the clicker. Okay — Marc Andreessen has a quote, "software is eating the world," because it really is. All the stuff that used to be in hardware is now in software, whether it's cars, telco network gear, mobile phones — everything. And what a great time for us to be developers, right? There's so much cool stuff going on; there's never been a more interesting time. Let's look at a couple of examples. One such example is Volvo Cars, the automotive company, because now cars don't just drive you somewhere, right?
Look at the software in them: it does navigation, entertainment, diagnostics, it connects to cell networks and other things. And there are a lot of trends around autonomous cars and electric and hybrid vehicles. So all this stuff is really pushing software down into the low-level hardware. What this means is that in companies where the IT department used to be pure overhead — you'd run your accounting software or your mail services and things like that — software is now really adding value to the vehicles, or whatever product they're introducing. And it's not just automotive; here are a few examples. One is Paddy Power Betfair. They're from the UK, they're an online gambling site, and they do millions of transactions a second. Their software tie-in is that they want to be able to dynamically scale out and provision based on when there are World Cup events or other spikes — to be able to burst up and down. Another example is Target department stores. They've made a huge effort to build a DevOps environment in order to have a consistent experience between online shopping and how they operate in their stores. And the last one we show is Amadeus, a travel service; what they have been doing is constantly trying to get new apps rapidly deployed, to be able to add new services really quickly. So it's a very wide diversity of industries. But the challenge for DevOps is that we have developers who want the latest and greatest bleeding-edge stuff — they just want to grab things and ship them. And on the other side, on the operations side, that's chaos, right? Because how do we control it? Where did this stuff come from? How do I know there are no viruses in it? How do I know it's up to date? How do I know it works with other things?
And so the real challenge of DevOps is to find a world that can satisfy the needs of both developers and operations. What we're going to do in the series of demos in this session is talk about how we use operations to deploy our environment, and then show how developers can deploy their applications seamlessly onto that environment. So it's a really good way of meeting the needs of both developers and operations. But in order to do that, you can't just do the same old stuff you've always been doing. In the past, a lot of applications were monolithic — one big blob of an application — but that doesn't work anymore. That's where these new tools come in: to meet the needs of developers and operations, to enable agile processes, and to provide low-level cloud infrastructure that allows dynamic provisioning. Even on the application front, where applications were typically one piece, they're now breaking up into what's called microservices: the ability to decompose applications into discrete pieces so that you can update them independently. You don't have to deliver and deploy one whole huge application every time you want to add a little piece to it. And it's not just multiple pieces — it's also how we coordinate them across clusters of systems, because you don't have everything running on one box; you want to be able to run across multiple machines and even across multiple clouds. That's what we call hybrid cloud, and you need orchestration services to pull it all together. These are the things we're going to be talking about in the demonstrations that are coming up.
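The monolith-versus-microservices point can be sketched in a few container commands. This is a hypothetical illustration, not part of the demos: the image names are made up, and `run` only echoes each command (a dry run) instead of executing it.

```shell
# Dry-run sketch: each microservice ships and updates independently.
# Image names (shop/monolith, shop/cart, ...) are hypothetical placeholders.
run() { echo "+ $*"; }            # swap 'echo' for real execution

# Monolith: one big deliverable, redeployed wholesale for any change.
run docker run shop/monolith:v1

# Microservices: update only the piece that changed.
run docker run shop/frontend:v1
run docker run shop/cart:v2       # only the cart service was rebuilt
run docker run shop/payments:v1
```

The point of the sketch: bumping the cart service to v2 leaves the frontend and payments containers untouched, which is exactly the independent-update property the monolith can't offer.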
But it's also important to have a flexible infrastructure, because people want to be able to build and deploy these applications on bare-metal hardware, on virtual machines, on on-premise cloud — meaning cloud running on hardware right in your data center — but also on public cloud. And you want a set of tools that lets you use the same development tooling and deploy to all of those environments, because that's where a lot of errors came from in the past: if you had to build your application differently for every environment, it became difficult to test, and it required a lot of knowledge of each environment. What we're going to show is how we can abstract all these different deployment models and make it really easy to deploy anywhere. So here's the outline of the demos we're going to do today. We're going to go from bottom to top, starting with the hardware. Our first demo is about how we discover the hardware and provision it as part of our environment — basically, pulling the hardware resources into the pool. The next demo takes this pool of hardware and creates virtual machines on it; that way we get more scale out of the systems and better shared resource utilization. The next step is to create identities, accounts, and policies, because when you have clusters of systems, or large numbers of systems, and you want to be able to migrate workloads to different places, you want a common set of accounts — you don't want to have to create the same accounts everywhere. That's where identities become important in a cluster of systems.
Then we're going to create a pool of dynamic virtual guests, because what you typically want in these environments is developer self-service: you don't want developers to have to submit a ticket to the IT department and wait several days until a VM is created. What we want are self-service portals that let people dynamically spin up virtual machines; we'll be demoing that as part of OpenStack Platform. The next thing we're going to do is provision a containerized application. This is where we transition from the ops side to the dev side: now that we've got all our resources and virtualized guests, we're going to use OpenShift to create a series of containers on top of them. Then we're going to show CloudForms. CloudForms is a high-level systems management tool that lets us monitor the resources in our environment — we can see which systems are up to date, which ones have the latest versions of the software — and we're going to do scale-out and bursting through CloudForms. The last part is deploying applications with middleware services via JBoss EAP. This is a really cool demo from the developer side because it ties it all together: it shows the developer deploying and running their applications on the infrastructure we created in the preceding steps. So it's really an end-to-end use case, from ops to the developer world. Through the course of the keynote, we're going to keep referring to this architectural diagram. As each speaker does their piece, you'll see through animation that the parts which haven't been discussed yet are grayed out. So as we go through, the diagram is going to become increasingly populated and filled in, and if we've done this successfully, you should really be able to see the end-to-end flow.
You don't need to follow the diagram right now, because we're going to get into it in each of the upcoming segments. Okay, and then at the end, we're just going to talk about: what did we see? Is this stuff real? Okay, so with that, we're going to turn this over to our first speaker — and that is Ivan Nečas, right? I'm sorry. — Thanks, Tim. Hello, everyone. The first demo is about Satellite. I will play a nice animation by Nenad Perich showing that we just bought a couple of boxes that we want to give some meaning in life. We'll use Satellite for the first part, where we deploy Satellite and use the QuickStart Cloud Installer, which helps us bootstrap this process a bit. We'll deploy Red Hat Virtualization so that we can actually start using VMs instead of bare metal — that scales a bit better. And on top of Red Hat Virtualization, or RHV, we'll deploy CloudForms, which will help us manage the cloud infrastructure we'll build later. So without further ado, this is the Satellite UI. We deployed it into our environment using the ISO image, and we let it discover the hosts — the hardware that we're going to provision. You can see here that in the discovered hosts we have two boxes right now: one smaller one, and one a bit bigger. We use Puppet facts to know what's actually there. And we'll switch to the QuickStart Cloud Installer, which we can use to install various Red Hat products such as Red Hat Virtualization, OpenStack, CloudForms, or OpenShift. In this demo we will install RHV, and CloudForms on top of that. We give a name to this deployment and fill in some defaults, such as the default password we're going to use. We are next asked how we want to treat updates. This is functionality of Satellite that lets you control which updates you actually want in your infrastructure, on every node.
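The update control being described here is done in Satellite with content views and lifecycle environments. As a hedged sketch using `hammer`, Satellite's CLI: the organization, content-view, and environment names below are made-up examples, and `run` echoes the commands instead of executing them — you would need a real Satellite server to run them for real.

```shell
# Dry-run sketch of controlling which updates reach your nodes with
# Satellite content views. Names ("DevConf", "Base OS", "Production")
# are illustrative assumptions, not from the demo.
run() { echo "+ $*"; }   # replace echo with real execution on a Satellite box

# Create and publish a content view (a frozen, versioned snapshot of content):
run hammer content-view create --organization "DevConf" --name "Base OS"
run hammer content-view publish --organization "DevConf" --name "Base OS"

# Promote a tested snapshot so only vetted updates reach production hosts:
run hammer content-view version promote --organization "DevConf" \
    --content-view "Base OS" --to-lifecycle-environment "Production"
```

Hosts subscribed to the "Production" environment then only ever see the content-view version that was promoted there, which is the "which updates do you actually want" control mentioned above.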
For now, let's give the systems all the updates that are available. Then we are asked how we want to deploy RHV; we'll go for engine plus hypervisor. This setup is better for scaling up the infrastructure later. We choose the smaller machine for the management console — the engine of RHV — and the more beefy machine for the hypervisor host. This will be the machine actually running the VMs. After that we need to fill in some settings such as the type of CPU or storage; Tomáš will talk about that later. In the next step we are asked where we want to deploy CloudForms. If we had used QCI to install OpenStack, we could put it there, but for now we'll put it on RHV. After leaving some other defaults, we are given an overview of what the deployment will look like, and with the Next button the thing just starts happening. What actually happens is that Satellite synchronizes the RPMs from the internet so that we have them available locally. Then Foreman, which is part of Satellite, configures DHCP, DNS, and all those records on your services so that unattended provisioning works and we can start installing the base OS. After that finishes, Foreman hands over to config management — in this case Puppet — which finishes the installation of RHV, and then we put CloudForms inside this RHV. While we're waiting for the deployment to finish, I would like to mention that the main part of Satellite consists of many upstream projects that make this really nice UI work in the end: there is Foreman for provisioning and configuration management, there is Candlepin for subscription management, Pulp for handling content such as RPMs or Docker images, and we integrate with OpenSCAP and provide remote execution capabilities. We have really nice coverage of these projects at this conference.
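The remote execution capability just mentioned also has a `hammer` interface. Another dry-run sketch — the host search pattern is an assumption, while "Run Command - SSH Default" is the stock job template name in Satellite 6:

```shell
run() { echo "+ $*"; }   # dry run; replace echo with real execution

# Day-to-day operations: list managed hosts (and, in the UI, their
# last config-management report status):
run hammer host list

# Remote execution: run a command across all hosts matching a search query.
# The host filter "name ~ rhv" is a made-up example.
run hammer job-invocation create \
    --job-template "Run Command - SSH Default" \
    --search-query "name ~ rhv" \
    --inputs command="uptime"
```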
First of all, there are lots of features that Foreman and Satellite have, and there are several talks: there is a workshop about Foreman today in the afternoon — it's an introduction, so if you haven't seen Foreman, I recommend going there. We have talks about remote execution today, we have talks about Pulp and OSTree, we have talks about Foreman and OpenStack tomorrow in the OpenStack room, and there is also a Spacewalk talk on Sunday — Spacewalk is what Satellite 5 was based on. So, after the deployment finished, we see RHV running on one machine and CloudForms on the other. And just to prove to you that Satellite is not just this nice UI that can get you started with your deployment — you can use it in day-to-day operations — these are the reports we got from config management, showing what actually happened on one of those hosts. So that's my demo: we now have Satellite and RHV running together with CloudForms, and I'll hand over to the next speaker. — Thank you, Ivan Nečas. Well done. Okay, so we've discovered and provisioned our hardware. The next step, now that we have that, is to create virtual machines on it, and for that we have Tomáš Jelínek to describe it. — So this talk is RHV hosting CloudForms. From the previous talk I already have an instance of RHV running, with one hypervisor configured and one virtual machine hosting CloudForms. In this talk I will use an external Satellite instance to deploy a new host and live migrate this CloudForms virtual machine onto it. When I leave, RHV will still have CloudForms, and it will have enough resources to be utilized by the upcoming talks to deploy some more load on it. So, this is RHV. RHV is a traditional virtualization solution which organizes KVM hosts in clusters and manages virtual machines on them. This is the dashboard, which shows you a nice overview; in the top left corner you can see that there is one host up and running. So I will deploy a new one. I could use
our internal tooling, but in this case I will offload it to Satellite. All I have to do is fill out the name of the hypervisor and its root password, and when I submit this dialog, Satellite will discover the host, install the operating system on it, install the repositories and all the packages needed for it to act as an RHV hypervisor, and do all the configuration, both on the RHV side and on the bare-metal side, until it is done. I will just quickly mention that RHV provides lots of different services for its virtual machines — for example, storage and networking. For networking, we have our own networking, or you can offload it to OpenStack Neutron, the very same way I used Satellite here. For storage, we support various back ends: for example GlusterFS, NFS, iSCSI, Fibre Channel, or any POSIX-compliant storage. In the bottom part you can see how the operating system and all the packages are being installed, and the storage being set up — it's almost done. When it finishes, the host will join the cluster, become operational, and be able to host some load. All this happens in an unattended way — as you can see, I haven't clicked anywhere. So now it's green. Let's talk about the virtual machines. Virtual machines are the core of RHV. RHV is great if you want to manage large virtual machines which demand long uptimes: one virtual machine can have up to 288 virtual processors and 4 terabytes of RAM. If you need to do some maintenance on the underlying hypervisor, you can just take the virtual machine and live migrate it to another one with zero downtime. I'm mentioning migrations because there was a large effort to enhance them, and I had the pleasure to be part of it — so I'm proud of them, and I'm showing them now. Even large and heavily loaded virtual machines migrate blazingly fast and reliably. To conclude: RHV is a mature solution; it has been out for many years and will be around for many more. But this doesn't mean the development is standing still — we
keep exploring new options and experimenting on every single layer of our stack to make sure that we stay competitive and cutting edge. Thank you. — Thank you, Tomáš Jelínek. Next we have Martin Kosek, who is going to describe how identities and policies are created cluster-wide. — Okay, thank you for the introduction. In the next part of our story, we will deploy an identity management service for the whole infrastructure: FreeIPA. In my part I will first introduce what identity management is, because it may not be well known to everyone, and then continue with demoing how we set up the IdM server, FreeIPA, and the client, SSSD, on a Fedora Atomic Host — a container-focused operating system — deployed on the RHV that you saw in the previous demo. FreeIPA is a service for your infrastructure that provides centralized storage for identities, their security credentials, and security policies for these entities, and based on this it provides authentication and authorization services for the whole infrastructure. What this means in practice: the entities are typically users and their passwords, and the policies may be, for example, how long before these passwords expire, or whether a user needs to use something more advanced like two-factor authentication or a smart card. The whole goal of having this centralized is to provide a unified authentication and authorization experience to the users, so that you don't have to set it up separately on each of the nodes and applications. As for some of the other benefits of using FreeIPA: first, it's integrated with the other applications and parts of the infrastructure, and second, it can collaborate with Active Directory, so you have this unified experience not just for Linux users but also for Windows users — they are part of the AD realm, the AD realm can have a trust with IdM, and they can log in, for example, to Satellite or to other parts of the infrastructure. Okay, let's move to the actual demo. As I mentioned,
we have set up a Fedora Atomic Host on the RHV from the previous demo. So first we log in to the host and install a container containing the FreeIPA server. The installer of the container asks for some basic data about the FreeIPA setup, and when it has the data, it sets up all the components of the FreeIPA solution — the typical services you would expect from IdM, like an LDAP server, a Kerberos server for authentication, and so on. When the container is set up, it's available for the whole infrastructure; we'll demonstrate that just by showing that the web dashboard is reachable. Now, to actually demonstrate that FreeIPA works, in this part we will set up the client on the Atomic Host — SSSD — which binds the actual operating system to FreeIPA. As you could see, we simply installed an SSSD container, which joined the system to the FreeIPA server, and suddenly all the interfaces on this local host are bound to the FreeIPA server, and any user operation is resolved against FreeIPA. Now we can show how we use this infrastructure by opening the FreeIPA web management console, where we'll first click around a bit so you can see what other services it can offer — for example, you can see that it can do certificates for the users, or other policies. And we will add a user in this FreeIPA dashboard, just by filling in a couple of fields for the user. Once this data is there, the user can log in to any part of the infrastructure where that user is allowed — in this case, the Atomic Host, of course. As you can see, now we can SSH in, enter the password, and the user is allowed; and it will work the same way with all other parts of the system that are joined to this management. Before I conclude, let me just mention that there will be other presentations related to security or IdM, where you can learn more about what we do. Thank you. — Thank you, Martin Kosek. So we have our identities created. The next thing we're going to do is create a dynamic pool of virtual guests with OpenStack, and
for that we have Jirka Tomášek. — Hey. Okay, so as a next step we're going to deploy OpenStack, so we can later install OpenShift on it, and we'll manage it with CloudForms. We're using an OpenStack cloud because it gives us the ability to scale the environment as our requirements grow. For the OpenStack deployment, we're using quite an interesting concept: deploying OpenStack on OpenStack, which basically means that we are using OpenStack's own facilities to deploy it. For our deployment we're going to use four bare-metal nodes: one node for the OSP director itself, which drives the deployment, and three nodes for the OpenStack infrastructure itself, which consists of one controller and two compute nodes. As part of OSP director we get the brand-new director UI, which I'm going to use to drive the deployment. I'm going to log in using the credentials from IdM, from the previous presentation, and once we log in, we're presented with a deployment plan page, which guides us nicely through the deployment process. As you can see, the first step is to prepare our hardware. We're going to deploy three nodes, so I'm going to register those; to be able to deploy them, we first need to introspect them to get all the necessary hardware information, and then we'll just provide the nodes for deployment, marking them as ready to be deployed. Once this is done, we can move on to configuring the deployment. The director UI provides a list of predefined high-level features which we can enable, and it's also pretty easy to add a custom one, because it's all driven by Heat templates. So in addition to the basic configuration, I'm doing the network configuration, where I'm going to enable network isolation, which takes care of creating several isolated networks, and I'm further specifying this with the single-NIC-with-VLANs option. In addition to those, I'm enabling my custom network environment, which holds the parameters specific to my
setup. OSP director introduces a concept of roles: each role represents a node type which runs a set of OpenStack services. We currently have three nodes ready to be deployed, so I'm going to assign one node as a controller, and the two remaining nodes are going to be compute. Once this is done, we are almost ready to start the deployment. As you can see, we're getting a warning — and as you may have noticed, we have a validations sidebar. This section provides a set of pre-deployment and post-deployment system checks, which give instant feedback during the deployment design process. Each validation is an Ansible script, and we made it fairly easy to create custom ones, so you can add your own to support specific user requirements. Once all of our pre-deployment validations are passing, we are safe to start the deployment. Once the deployment starts, we can track its progress, as well as see all the resources as they are being installed. In the meantime, I'd like to invite you all to the OpenStack sessions which we have here at DevConf, so you can get more information about the topic. Also, the director UI is a modern JavaScript application written in React.js using the Redux architecture, and we're putting a lot of effort into gathering feedback from end users, so we make sure this tool is actually usable. Once the deployment finishes, we are presented with the credentials, and we can start using our OpenStack. So now we're ready to start installing OpenShift. Thank you. — You know what's cool with this stuff? Most of it is really guided GUI deployments. It used to be the case that to install OpenStack, you had to be a rocket scientist. Maybe even I could install it now — what do you think, Hugh?
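The director UI workflow described above maps onto TripleO's command line. A minimal dry-run sketch with the node counts from the demo, assuming the stock tripleo-heat-templates environment-file paths; `run` echoes the commands rather than executing them against a real undercloud:

```shell
run() { echo "+ $*"; }   # dry run; remove echo to execute on an undercloud

# Register, introspect, and provide the bare-metal nodes
# (instackenv.json is the usual node-inventory file name):
run openstack overcloud node import instackenv.json
run openstack overcloud node introspect --all-manageable --provide

# Deploy: network isolation plus single-NIC-with-VLANs, one controller
# and two computes, plus a custom environment file (name is made up):
run openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
    -e my-custom-env.yaml \
    --control-scale 1 --compute-scale 2
```

The UI's "features" checkboxes correspond to exactly these `-e` environment files, which is why custom features are easy to add: you just point the plan at another Heat environment.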
There we go. So next up we have Jan Provazník, and he's going to talk about OpenShift for us. — So in the next step we will deploy OpenShift in the OpenStack environment, we will take a closer look at the created resources, and finally we will log in to the deployed OpenShift. For the deployment we will use the OpenShift-on-OpenStack project. It's a collection of Heat templates; these templates are used by the OpenStack orchestration engine to create infrastructure resources, and it also runs the OpenShift Ansible playbooks, which set up the OpenShift cluster itself. For the demo we will create 8 VMs: 3 for running the OpenShift master services, 2 for running OpenShift compute services, an infrastructure node for running the registry and router, a bastion node for running the OpenShift Ansible playbooks, and a load balancer that balances across the master nodes. Let's start with the demo. We start the deployment by running the stack create command, and we pass it a custom configuration and also an LDAP configuration for using the IdM server which was prepared by Martin, and we tell it to create 3 master nodes and 2 compute nodes for the OpenShift itself. The stack creation takes around 30 to 60 minutes, so we'll skip to the end of the creation. At this point we have a stack in the complete state, which means that the OpenShift cluster is ready to use; we can check the list of created Nova instances and see that all of them are running. Now we switch to the OpenStack UI and take a closer look at the network topology. The colored lines are the defined networks: the blue one is the public network, which is pre-existing; the orange one is the infrastructure network, used by all nodes to talk to each other; and the green one is the containers network, used by containers running in the OpenShift cluster — only OpenShift master and compute nodes are connected to this one. In the next step we can look at the details of a single instance, a single master node instance:
We can see that it has a set of firewall rules defined, and also a set of IP addresses, one for each network. At the bottom of the page, we can see that it also has a Cinder volume attached to the instance, which is used for Docker storage. In the next step, we can look at the Stack Overview page, where you find all the information on how to access the OpenShift cluster. There is also a console URL which points us to the OpenShift cluster, and now we can use the employee user to log in to the running OpenShift. OpenShift on OpenStack is purely an integration project; it doesn't contain much logic itself, rather it uses OpenStack and OpenShift tools. If you are interested in any of the OpenStack or OpenShift projects, you are lucky today, because there are many talks you can choose from. Thank you very much.

Thank you, Jan Pravaznik. OK, so now we've got our OpenShift environment created, and now we're going to provision containerized applications within OpenShift. So we're transitioning the boundary from OpenStack to the OpenShift cloud platform. That was kind of the ops side, and this begins the dev side of the presentation. So next up we have Marek Ofart.

Thank you. Hello everyone, my name is Marek. Now that OpenStack is deployed and running, and CloudForms is running as well, we can go to the day-2 management that CloudForms can control. So what I'm going to show you is, first, the OpenStack, both overcloud and undercloud, from the CloudForms point of view. Then we will set up a compliance check to validate host CPU usage. And then we will perform a scale-down action on the OpenStack undercloud. In order to do that, we first need to look into CloudForms. In CloudForms, there is a dashboard with a basic overview of the cloud and the most important charts and statistics, including resource usage. On the left side, there are categories of providers, which is what CloudForms calls the services it manages.
And there is also a bunch of powerful features for automating and controlling these things. In order to see the OpenStack undercloud, we need to open the infrastructure providers. There is an RHV provider and also the OpenStack one, which we are going to focus on. There is basic information about the capacity and resources of the OpenStack, including relationships to the OpenStack services. To see the undercloud in a graphical representation, there is a cool feature, the topology view, which shows us the most important inventory: the provider, the compute deployment role, and the controller deployment role. So we can see there is one controller and two compute nodes, as Yirka Tomashev deployed in one of the previous presentations. Now we are going to focus on one of the compute nodes. We can see there is some information from the OpenStack APIs, but there is also information about the file system and system services, which was captured by Smart State Analysis, a process that connects directly to the VM and gets information from there. On this compute node, there is the OpenStack Nova service, which runs the overcloud instances. The OpenStack overcloud is a cloud provider here, and again it shows basic information about the stack and the relations to various OpenStack objects, including instances, volumes, and so on. Now I'm going to show you how to create a compliance check which will ensure that the CPU usage is not too high, so the node is not overloaded. This is the compliance policy feature, which allows CloudForms to check it automatically. We have the compliance policy, so we are going to assign it to one of the hosts; it will be the compute node. These policies can be grouped in policy groups, and we are going to assign it. Now we can initiate the policy check manually and see what the result is. Everything looks OK, so the compliance check passed; we can see it here. Now let's go to the last step of this presentation.
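Before that, a quick aside on what a check like this amounts to. CloudForms drives it through its policy engine rather than a script you write, but as a rough stand-alone sketch of the same idea (the threshold and the commands are my own, not what CloudForms actually runs), "CPU usage is not too high" can be approximated by comparing the 1-minute load average against the CPU count:

```shell
#!/bin/sh
# Stand-alone sketch of a CPU-usage compliance check (arbitrary threshold;
# CloudForms implements this internally, not via a script like this).
cpus=$(nproc)
load=$(cut -d ' ' -f 1 /proc/loadavg)

# awk does the floating-point comparison: compliant while the 1-minute
# load average stays below the number of CPUs.
if awk -v l="$load" -v c="$cpus" 'BEGIN { exit !(l < c) }'; then
  echo "compliance: passed"
else
  echo "compliance: failed"
fi
```

The real feature is of course richer: it evaluates conditions against collected metrics and can be scheduled, grouped, and enforced across hosts.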
Let's say we want to remove one of the compute nodes from the OpenStack undercloud deployment. We are going to choose this undercloud compute node. In order to remove it from the deployment, first we need to switch it to maintenance mode, and we also need to ensure there are no running instances. Switching to maintenance mode can take some time, but when it's finished, we can go to the OpenStack undercloud page and initiate the scale-down action. We need to choose the right node, which we can see is in maintenance mode and has no running VMs, so it's safe to remove it. When the scale-down action is initiated, there is a background job on the OpenStack Heat service which runs the real scale-down. Again, this might take a few minutes or maybe even longer, but when it's finished, we can check the hosts of the undercloud, and we are able to see that the host we chose was stopped. And also, in the topology view of the provider, we can see that the current configuration contains only one compute and one controller node. That's all from me, and I would like to invite you to one of the OpenStack talks in Room 104, or to visit the ManageIQ booth here in the hall. Thank you.

Thank you. That's Marek Ofart. The cool thing about CloudForms is that it's a single pane of glass where you can see the resources for Red Hat Virtualization and OpenStack, and we didn't get into it here, but it also depicts containers as well, so it really helps to ease the learning curve for system administrators. OK, next up we have Josef Karaschek.

So hello. Now I represent the developer here, and I have all of this dynamically provisioned hardware that I can use. So I will deploy an application on it. It will be written in Java, and I will actually deploy a whole cluster of application containers. And I want to demonstrate that these traditional Java middleware services work very well in the OpenShift environment.
The way we will demonstrate it: I will create a cluster of my application servers, and they will all share the stateful information, which is basically the information about the session. And on the go, I will run a rolling update in the environment. This means I will kill every single instance of my cluster, and as I get to a new version of my application, this will be totally transparent for the client. They will see no downtime; the application will still be responsive and operational. The architecture of my application is (OK, I was going to have one more slide there) three nodes that talk to each other in a cluster, and they use PostgreSQL as a back end. So here you can see that I've started the deployment of my application in OpenShift, and I also start a build of my source code. So at first, it goes to the Git repository, downloads the code, and runs the build. What I'm going to do is add a webhook on my Git repository, so any time I push to the Git repository, a trigger is initiated and a new build is started, which I will do in a few moments. Now we can see that the application is still building; it's downloading all the dependencies of the source code that I use. Once I have the application compiled, it will be baked into a Docker image, and the Docker image will be pushed to the Docker registry, which is part of the OpenShift infrastructure. Here we can see that the layers of the image have been pushed to the registry, and now the deployment is started, which means that the image is downloaded on the nodes where the application is scheduled. Once this is done, the application will be available and accessible. So here we go. I can access my very simple web form, and the data that I input here is stored in PostgreSQL. It's very simple: just the username, first name, last name.
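For readers who want to reproduce the webhook-and-build flow above from the command line, here is a rough sketch with oc. The application name myapp is a placeholder; the demo's real object names weren't shown:

```shell
# Show the BuildConfig, including its webhook URLs -- the GitHub one is what
# gets pasted into the repository's webhook settings so every push starts a build.
oc describe bc/myapp

# The GitHub trigger can also be managed directly:
oc set triggers bc/myapp --from-github

# After a push, a new build starts automatically; follow its logs with:
oc logs -f bc/myapp

# And watch the rolling deployment replace old pods with new ones:
oc rollout status dc/myapp
```

These commands need a running cluster and a logged-in session, so treat them as a map of the flow rather than a copy-paste script.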
And I'm using a browser, which automatically sends the session information to the cluster here. Another important piece of information is the node where we landed first. This will be useful as we go to a new version of my application: we will see that I'm talking to another node. So let's change the source code. Just add some change to the Java file, commit it, and push it to the GitHub repository. And as I said, a new build will be automatically triggered. This will pull down the updated source code and compile it again. So in the UI, we can see that a new build has already started, and now it is a lot faster, because it uses the artifacts, the dependencies that were downloaded in the first build. And this is the UI for the rolling update. We slowly go from version 1 to version 2 of our application, so you can see that the nodes on the left are slowly dying, and the nodes on the right are becoming ready. And all of the stateful information was also copied to the new server. So you can see that I didn't receive a server error status; this is an HTTP 200 success. So that's all. Thank you.

Thank you, Josef Karaschek. OK, next we're going to talk a little about JBoss, heavily used in the application development space. For that, we have Dmitry Bokorov. He got in a barroom fight, if you're wondering what's going on here.

Hello. I'm going to speak about the top-level corner of the diagram, and that's JBoss tools. Red Hat JBoss Developer Studio is a certified Eclipse-based IDE for developing, testing, and deploying your applications. Developer Studio includes a wide range of technologies and tools for OpenShift, Hibernate, Java EE, and much more. So let's start with installing Developer Studio. Since recently, it is possible to install Developer Studio with a simple DNF install command, as an RPM. And while it is installing, I want to say that I'm going to demonstrate how to use the OpenShift tools in Developer Studio.
It is great that you can do lots of stuff right in your IDE without using the terminal. I will show you how to connect to the OpenShift instance, how to build your application, how to manage OpenShift, and some other things. First, we need to open the OpenShift Explorer view and create an OpenShift connection. We'll connect to the OpenShift instance which was just shown by the previous speaker, Josef. To do this, we just need to type in the IP address of the OpenShift instance, the username, and the password. As soon as we've done this and clicked Finish, we can see that a new item has appeared in the OpenShift Explorer view, and we can see Josef's project cluster, all the services, and the containers. This is exactly what we saw in the OpenShift Console view in an external browser, where we can go easily with just a right click and Show In Browser. The next thing we need is to get the source code of the application. To do this, we just click on the service and import the application. We see that a new project has appeared. Let's open it in the Package Explorer and go to the class that was just changed by Josef, and we'll see that his changes are there. Next, we want to run this application, and we can do this right in the internal browser of Developer Studio: we right-click on the service and choose Show In Internal Web Browser, and here's the application which we saw recently. Now, let's create a server adapter for the service. We saw that the changes made by Josef were applied after a Git push operation. In Developer Studio, you don't need to do a Git push. You just need to change the file and save it, and it will be deployed by the server adapter right into the deployed pods of OpenShift. We can see that the changed file is already in OpenShift. Let's try it, and we'll see that the changes are applied. One more thing that you can do with the server adapter is debug your application. You just need to restart the server adapter in debug mode.
Then we put a breakpoint and try our application once again. Now we can see that debug mode has started, and we can easily debug our application. Next, I want to show you how you can manage the number of containers your application is running on. We can easily scale our service up to the number of pods we want, right in the IDE, for example, 5. And we can see that the number of containers has increased in the OpenShift Explorer, and the same in the OpenShift Web Console view in the external browser. You can also build your application right in OpenShift, but through Developer Studio. To do this, you just go to the properties of the service, choose the build config, and with a right click, you start a new build. At the top, under builds, you can see that a new build is running, and you can see the logs right in the Console view of Developer Studio. These are exactly the logs that you can find in the OpenShift Web Console in the external browser. So I want to invite you to the developer tools sessions, which will be on Saturday evening and on Sunday during the whole day. You can find them in the timetable with the label DevTools. Thank you.

Thank you, Dmitry Bokorov. I think you can really see here how integrated the tool sets are, because that was right within the development GUI; it was directly going behind the scenes and creating the containers in OpenShift. So that's really the whole point: this should be a seamless end-to-end experience. Last up, we're going to complete the developer experience with Nenud Pettich. No, we're not. All right, so we'll wrap it up then.

So was this stuff real? Hell, yes, it was real, right? Are these guys real or what? I think the whole point is that this is all open source software, and we're just showing some of the cool stuff that we've been doing here at Red Hat. I hope there's a lot of community people here in the audience, who are welcome to join us and participate. It's a great way to learn.
For people in universities, for students, it's a great way to get into the business: to see the source code for this stuff and mess around with it yourself. If there are customers or business users in the audience, a lot of our customers directly add features or tailor these programs to meet their needs, and that's really the value and the power of open source. So I hope that this has been useful and that you've gotten an overview of a lot of the different technologies we have, so you can learn more as the sessions go on. Just as a wrap-up of what we've done, we started from bottom to top. The first thing we did was provision the hardware through Satellite. The next thing we did was create virtual machines using RHV. Then we created identities, policies, and accounts to be used cluster-wide. After that, we used OpenStack to create a pool of dynamic virtual guests, and then we brought in OpenShift to create a containerized application environment, which we later saw in the development slides was well integrated with the Eclipse-based tools as well as the JBoss services. And along the way, we showed, using CloudForms, how we can manage, set policies, scale and grow our clusters, and do things like accounting and compliance checking. So it's a really full end-to-end solution. With that, I hope this has been a good session for you. I want to invite the guys back up on the stage and thank them for a job well done, and I hope you enjoy DevConf. Well done. Good job, guys. All right, thanks again. I might need to switch it from a VGA to a VMI.