So, hello everybody. Nice to have you here, while we have strong competition out there. I hope you will see SUSE's point of view on OpenStack and how we deal with the deployment of OpenStack. We are an enterprise company, so we are focused on production environments. So I will not do a standard slide presentation; instead I will try, in my setup, installing SUSE OpenStack Cloud within this half an hour. And while I install the stuff in the background, I have different phases where I will tell you a bit about the stack and our vision, and where we explain a bit how we deal with control and compute nodes, and how we realize the ability to serve a stable and reliable platform, even for production applications. Further, I will also show you how we configure the OpenStack services in a very easy way. To give you a short overview of what I'm going to do and how the setup is done: in SUSE OpenStack Cloud we have a minimum requirement of nodes, at least when you initially set up the infrastructure. So we are providing an administration server that provides a tool called Crowbar, comparable to the tools you saw before, as one example of this kind of deployment tool. The approach is that we discover nodes in the network environment, and then we allocate these nodes and put roles on them. As you see, we have a control node here, where the OpenStack services run, and then we have compute and storage nodes. In my simple setup I have one of each: a control node and a KVM compute node at the moment, but you will see we have other options as well. Chef is our configuration management at the moment, comparable with SaltStack, right? So it is using cookbooks that describe how to install and configure the components: services, databases, HA, all that stuff. And we are using a PXE environment for the deployment, you could say.
We are providing our repositories either from a local source, like the Subscription Management Tool, or from the SUSE Customer Center; all of that is based on the administration server. So the first phase that I will do now is deploying this administration node. And I will start now because it will take some minutes; I have a colleague who helps me with the timing. It's a triggered installation, and I will explain all the stuff that happens during it. So let me interrupt and show you how our tool looks. We are choosing installation from scratch, and I will tell you what it does. So now we can start the timing, and I jump back and show you what it's doing. Actually, it does all the steps that are showing up here: it's doing the initial jobs, providing all the OpenStack repositories, providing the underlying operating system. It's providing the Crowbar settings and the proper tools to use. It's providing the Chef client, which is required on almost every node; the Chef server will be on the admin node. And you can also watch the progress in the log files, so you always know where the installation is. If you have a problem, you can always get the debug information as well, to see what's going on in the installation. So we let it run now; it will take just a couple of minutes, and I continue with my presentation. So, we go back to the first point. As you know, OpenStack is a rich tool, right? We have a big community that is providing a lot of nice tools, and the code is available to everyone. But when you go into it and want to install it, it can get a bit complicated. How many of you have used commercial tools? Have you used any of them? Yeah, exactly. So of course, every tool works a bit differently than our approach, but that's one of those things.
And that's the point from an enterprise-class perspective: we want to provide you with the possibility to get a platform where you can not only develop applications, but also run production workloads on it, mission-critical workloads. The other thing is, when you use open source, then you have to find support when you run it in the enterprise, right? That's an enterprise solution. So SUSE is providing SUSE Linux Enterprise Server as the default operating system, and on top you are getting the OpenStack solution and the High Availability Extension, all packed in one solution. So you get good value: a pre-built solution where you only have to focus on your actual work. You don't have to spend a lot of effort and knowledge on the preparation work of getting the OpenStack installation done, right? And in former times we have proved that we can do that quite fast. We saw today quite nice solutions which were really fast, but they maybe have a different approach, in terms of, let's say, being compatible with different hardware vendors, because you get an underlying operating system that is reliable and, with the hardware, certified for certain enterprise-class applications. So that's a different approach than ours. But yeah, we provide this platform, and we do it fast, but fast in a way that works for enterprises. And that was proved at different OpenStack events, mostly OpenStack events. So we had some bets: these guys deployed the stack and got the stuff done in quite a short time, you could say. In the beginning, when the OpenStack deployment competitions showed up in 2012, it took just a couple of minutes, and using Heat earned some bonus points. They have done fast deployment in Atlanta as well, when they did a high-availability installation where they really pulled cables, and the stuff still worked; services did properly fail over, and so on.
And the last time it was a full stack, and there weren't even two teams entering that competition. I don't know why they don't do it any more; probably because nowadays it comes down to seconds, so they skipped this kind of competition. But at that time, we were quite fast. So, now I have to continue, because the base installation, I heard, is done. Now we can actually go on in our presentation. Our Crowbar is properly prepared. I will just start two instances: one of them will be a control node, and the other one will be a compute node. The controller is hosting all of the OpenStack services; the compute node is the one where you can put your workload on top. In a few minutes they will show up in this dashboard. So, getting back to the presentation, we talked about the roles in the stack. How did we get there? That's what we are doing right now. You have two options to install the whole stack from SUSE. One is that you do the stuff more or less manually, by downloading the operating system and configuring the stuff yourself; the thing that takes most time is getting the mandatory stuff ready that you have to provide, for example the Subscription Management Tool or SUSE Manager. And then you also have to configure your network setup. For my installation here, I actually didn't prepare much; I used the admin appliance that you can download yourself on susestudio.com. Currently you have version 5 there, which is the Kilo version; the Liberty version is on my machine, so it's Cloud 6, or almost Cloud 6. And when you have downloaded it, the only thing you have to do is create a little instance, for example if you want to try the cloud, and you have to take care of the network JSON setup. I just used the default, because it's just a demo environment. But that's how you get there very easily.
When I started to install the cloud, the next step was to deploy the services on these nodes; so when the nodes are done, we can do that. The network JSON is very important, because you don't want to lose your network connection if you do it in a different way. That file is the definition of your network setup. There are different modes, depending on how many network ports you have in your machines and how much load goes through your network. We have single, dual and teaming modes. Roughly, single means everything runs over one port; dual means you have the administration network on a separate port; and teaming means you bond ports, so you can run all the different networks, including your instance network, over the same bond, or keep your admin network separate from the public network. The physical network devices and of course the IP ranges you have to adapt to your setup. There are a lot of explanations in our deployment guide, where you can read how to do these things in more detail. Good, let's get back to it. Let's see, so here are the services. I want to tell Crowbar that one of the nodes should be taking care of the OpenStack services, and the other one should be the one I can use for all the KVM workloads, or where I could have native Docker workloads. So I go into the nodes view, I have a bulk edit, and here I can allocate them in place. I give these guys a name here. Let's see, I have to make it a bit smaller so that I get all the fields in here. So I have an administration node, and then you can check in your records which node is which. This one, for example, is the control node, this is the compute node, so I give them some names and I put them in groups, so we have a little better overview of them: what they are doing, what they are for.
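To make the network definition a bit more concrete: here is a simplified, illustrative fragment in the style of Crowbar's network JSON. The field names and addresses are approximations from memory, not taken from the talk, so treat this as a sketch and check the deployment guide for the real schema:

```json
{
  "mode": "single",
  "networks": {
    "admin": {
      "use_vlan": false,
      "subnet": "192.168.124.0",
      "netmask": "255.255.255.0",
      "ranges": {
        "admin": { "start": "192.168.124.10", "end": "192.168.124.11" },
        "dhcp":  { "start": "192.168.124.21", "end": "192.168.124.80" },
        "host":  { "start": "192.168.124.81", "end": "192.168.124.160" }
      }
    },
    "public": {
      "use_vlan": true,
      "vlan": 300,
      "subnet": "192.168.126.0",
      "netmask": "255.255.255.0",
      "ranges": {
        "host": { "start": "192.168.126.2", "end": "192.168.126.127" }
      }
    }
  }
}
```

The point is that each named network (admin, public, storage, and so on) gets its own subnet, VLAN settings and IP ranges, which is why you adapt the ranges to your environment before the first node is deployed.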
So this is my admin node, this is my control node; I have given them names. This one could also be, let's say, a compute node, and here I could add more compute nodes, depending on my hardware, with separate network devices, and if I'm about to have Swift, for example, or Ceph, I can have storage nodes here as well. So I will define this one as a control node and this one as a compute node, and so far we haven't done anything more on the nodes than preload a kernel and so on. Now that I have allocated them and hit save, it will start to install the operating system, it will configure the stuff, it will install all the services, fully automated, so you don't have to care about it. Okay? So this will also take a couple of minutes, and when these guys are allocated and properly up and running, we will have a green light for them. So all the stuff that you have seen in the keynotes, the nice presentations, all the pieces and how they are interconnected: you don't have to care about that, because it is all done automatically here in the background. Any questions so far? Yeah? Yeah, I mean, in a recommended production environment you wouldn't do it like this; you would run a lot more nodes, where you define that this is a control node, these are the compute nodes, and so on. You will see later that we also have high availability, where we cluster these nodes, but you shouldn't mix the roles on all of these nodes then, because it's installing the services as defined, and then you have it there. I mean, you could go in, there is a raw view as well, and you could configure certain parameters, but then you are on your own. And what happened in the background was: we have a PXE environment that just started the two nodes, they got assigned all the necessary stuff to get booted and bootstrapped, and then we could allocate them. So a more complex scenario is possible, and that is a bit of an answer to your question as well.
So our focus is that you really provision the nodes with their own roles, and you specify control nodes, and there can be several of them; I will tell you about high availability later. So you can put the control nodes in high availability, and then you have compute nodes, and this is also interesting, because here you have the possibility to have all types of hypervisors for running your workloads. We have KVM, we have Hyper-V and Xen, and we are even bringing z/VM into the game, so you can use the old legacy hardware, the mainframes. And then several kinds of storage nodes. Via the APIs you can also interconnect to the bigger vendors, EMC, Dell, NetApp; there is VMware with vCenter, for example; and on top there is Ceph, which is the base of our enterprise storage product. So these are the components for a more complex setup scenario. Here is the core of the stack: the SQL database we are providing, and Glance for images. The nice thing is that SUSE Studio, the appliance creator, let's say, is freely available for you. You can build your own images very easily; it's a graphical, browser-based UI, and then you can upload all these images into your Glance storage. It's a very nice function. Then you have the identity service, Keystone, up here; you can, for example, connect it to an LDAP. Of course the dashboard is there, which we will get to. Then you have the APIs, the scheduler, and the message queuing. Then the compute nodes; this is where your work actually gets processed, so it's important, of course, to have proper CPU and RAM on these machines rather than running them on low-end servers. For Docker workloads we have compute nodes as well. You can even do live migration of VMs if required. And what about high availability? If we talk about business continuity, that is required, right? If you see that picture of a failure, you don't want to be the one in the IT management position. What are you doing in such a situation?
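Since connecting Keystone to LDAP came up: as a rough sketch, this is typically a matter of a few settings in keystone.conf. The option names below are standard Keystone LDAP options of that era, but the server URL, suffix and account names are invented placeholders, not values from the talk:

```ini
[identity]
driver = ldap

[ldap]
; Placeholder values -- adapt to your directory layout
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
password = secret
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
```

With something like this in place, Keystone looks up users in the existing directory instead of its own SQL backend, which is usually what an enterprise with a central LDAP or Active Directory wants.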
OpenStack is nice, but it's maybe not natively providing you the tools to have such an enterprise environment as would be required for mission-critical applications. SUSE has supported the Linux HA project for many, many years. The good thing for us was: OpenStack requires high availability, so what we did was just integrate it into the OpenStack solution. So now you can really build a high-availability platform for your services when you set it up. It's very easy: you set up, for example, three control nodes running Pacemaker, and the nice thing is it's just one more barclamp. We'll see what that is; it's probably our own term. So it's just one more barclamp where you define what has to be done to set up all the services in the cluster. That's the one side, and on the other side, what is new: we have also integrated new functionality with pacemaker_remote to keep your compute hosts highly available. So as soon as one of these nodes goes down, you can migrate the instances that are up and running. This works not only for cloud-aware applications; you can use it for legacy applications too, so we get a much wider range of use cases for the OpenStack platform. Another good thing with this HA solution, especially on the infrastructure side, is that you can maintain the environment very easily. You can deal with planned downtime by just taking nodes out, and for the unplanned downtime there is of course the Pacemaker functionality integrated. And if you are after disaster recovery, we also have a tool coming with the HA extension which is called Relax-and-Recover; maybe someone knows it. That's also an integrated tool: you make an image with which you can very easily bring back a lost node, and you have very little downtime. Then we have the different kinds of hypervisors, and that's probably also quite interesting when you look at it from the investment perspective.
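For context on Relax-and-Recover: it is driven by a small configuration file plus two commands. The variable names below are standard ReaR options; the backup target URL is an invented example:

```shell
# /etc/rear/local.conf -- minimal ReaR setup (illustrative values)
OUTPUT=ISO                # produce a bootable rescue ISO
BACKUP=NETFS              # back the system up over the network
BACKUP_URL=nfs://backup.example.com/export/rear   # invented NFS target

# Then, on the node to protect:
#   rear mkbackup    # create the rescue image and the backup
# And after booting the rescue image on replacement hardware:
#   rear recover     # restore the node
```

That is roughly how a lost node can be brought back with very little downtime, as mentioned above.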
So normally, I would say, maybe 80% in this country are using VMware; others might use a lot of Microsoft or IBM environments. So we thought we don't want to force you to re-architect a whole new platform where everybody has to learn Linux and everybody has to use KVM as the hypervisor. So we keep it open. You can keep your resources and your knowledge, and via the APIs you get the possibility to integrate existing solutions and use them for self-service management and for the workloads that are based in the cloud. So you do more or less the same as before with your existing hypervisors, and you get a better overview with the service interface that OpenStack provides. And the other thing is: if you of course need a certain hypervisor, if you have certain applications that have to run on one particular hypervisor, let them run there, because the investment is probably good for that; and you don't struggle, because you can very easily integrate it here into the cloud environment like any other. And that's the main purpose for us. Everybody is behind the OpenStack code. There are a lot of contributors, I mean more than 500 companies nowadays and over 30,000 people contributing to it. We are part of that as well, and as I said, we don't want lock-in. We won't take certain products or buy a whole product range from somewhere just to get it done. We keep it open so that it can work with partners. Here are some on the screen that we work with, in different areas: if you want to have platform as a service, we work with Cloud Foundry and integrate that, and then you get a proper platform to do your work on. Partner vendors too, of course, because that's important in the enterprise: that the software stack is certified and working with the features that the vendors are providing.
We want to give you the option of which storage you want to use, and the same for networking; and then there are tools like Cloud Cruiser for billing, or, in the other areas you see, interfaces to manage your users, your instances and all that kind of stuff. I also pointed to three SUSE products. We have SUSE Studio, that's for building images that you can then deploy in your environment. You have SUSE Manager for lifecycle management, so you can easily patch the systems in your environment, you can patch your infrastructure; it's very useful, and nowadays it's also using the configuration management Salt. And last but not least we also have a storage product, Ceph-based, which is SUSE Open... yeah, that should be SUSE Enterprise Storage. So that's how we work and how we want to provide you an OpenStack solution.

So let's jump back to our setup, and we can see the guys are ready, so we can deploy. You can go into one of the nodes and see some descriptions, and we can also see which barclamps and roles are deployed; these are the services running here that are required for providing the infrastructure. And if you know that you normally have to configure something like 7000 parameters in OpenStack when you do it vanilla, we bring you a somewhat easier way to configure all these parameters, and for that we have the so-called barclamps, which are recipes for how to set up the different OpenStack services. I could now go through each of them, configure them and show our setup, but what I will do instead is use something I have prepared that does it automatically, so I don't have to do all the typing. It's just a couple of minutes, so it goes a bit faster; I will just start it so we hopefully can get this done on time. I just copy a file, which is a batch file; OK, here we are: this is a YAML file which has all the configuration for my OpenStack services in it. Because this is a Crowbar function, crowbar batch, once I have installed the whole setup I can export it and reuse it for deploying on another machine; it's very easy to re-deploy this kind of settings, and I can modify the settings, of course. When I have these things, I import the file into the current setup and it runs the job automatically. You can see how it looks: the command is called crowbar batch build, followed by the file. It looks up the client nodes, the control and the compute node, and then it deploys all the different proposals on the control node.

So I can show you one of the barclamps I wanted to show you; let's take Nova, for example. We go to the OpenStack barclamps now, we go to Nova; it wants us to do things in the right order, OK, so the database proposal gets created first. As you see, you already know how it works: here you set the parameters, and if these parameters are not enough or you want to add stuff, you can always get the raw view and configure it manually. And the nice thing is you can then use the nodes that are available and just drag and drop them onto the roles for running the service. The really nice thing is, if you have a cluster, you just drag for example a cluster here and put it there, and then you will have three nodes in the back end taking care of that service; that's all you have to do for HA. Then you have the whole interface where you can easily maintain your services. And I will show you the last step: when you have everything deployed, you can go into the dashboard; with all the barclamps deployed you have your Horizon dashboard, you can log in, and I guess this kind of picture everybody has seen. That's where you want to end up.
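To give a feeling for what such a batch file can look like, here is a simplified, hypothetical excerpt of a crowbar batch YAML. The alias names and attribute values are invented for illustration, and the exact role names vary between releases; the real schema is in the SUSE OpenStack Cloud deployment guide:

```yaml
# Hypothetical input for "crowbar batch build" (names and values are examples)
proposals:
  - barclamp: database
    attributes:
      sql_engine: postgresql
    deployment:
      elements:
        database-server:
          - "@@controller1@@"     # alias resolved to the control node
  - barclamp: rabbitmq
    deployment:
      elements:
        rabbitmq-server:
          - "@@controller1@@"
  - barclamp: nova
    attributes:
      use_migration: true
    deployment:
      elements:
        nova-controller:
          - "@@controller1@@"
        nova-compute-kvm:
          - "@@compute1@@"
```

The proposals are applied in order, which matches the point above that the services have to be deployed in the right order, database and message queue before Nova.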
Good, so that was all from our side. I hope you got a bit of insight; feel free to test the stuff from our web page, and if you have questions, we are up there and you can discuss your use cases with us. Thank you very much for your time and your attention.