So, hello everybody. This is probably the last session for today. I'd like to introduce myself briefly: my name is Markus Gürtler, I'm from B1 Systems, and I'm here with my valued colleague Michel. I work for B1 Systems as a cloud architect. The title of our session today is "Build Your Own Hyperscaler", and what that actually means I will explain later.

First, a short overview of our agenda. We'll start with a little introduction of our company and what we do. Then I want to explain the vision behind "build your own hyperscaler" and what it really means for us. After that I'll give a short introduction to the OpenStack solution OSISM and explain what the Sovereign Cloud Stack is, which may be particularly interesting for the European colleagues here. We'll also introduce SAP Gardener and show how everything fits together. And at the end, Michel will share some field experience from customer projects that we have done in the past or are currently doing.

So, let's start with B1 Systems. B1 Systems is a consulting company, mainly active in the DACH region, meaning Germany, Austria and Switzerland, but we also do international projects. It was founded in 2004 and has around 150 employees, mostly developers, trainers and consultants. We actually started with Linux consulting, doing consulting, development, training and support, and since 2011 we have also been active in the cloud space. So we have been working with OpenStack since 2011, and OpenStack is one of our main focus areas right now, along with everything around it. We do projects really from beginning to end, covering the whole OpenStack life cycle: the planning, the architecture, the implementation.
And afterwards, when the systems are live, we offer managed services where we provide operations for our customers. We have several models here: one model is to operate permanently, the other is to operate for a certain amount of time and then hand over to the customer, including training for the customer and things like that. I'd also like to mention that we don't just do OpenStack; we do other cloud solutions as well, namely Kubernetes, for example, but also more commercial open-source solutions like SUSE Rancher and OpenShift. And we also have people who are certified for the big hyperscalers. We have a lot of partnerships as well, for example with Red Hat, Canonical, SUSE and a couple of others.

And of course, since we have been in the market for so long, we have a broad portfolio of customers in various industries. I just want to name a couple that are internationally known, because some of them, or many of them, are known only in the DACH region: for example Airbus, Deutsche Telekom, Audi and SAP. These are the most important customers on that slide that have an international brand as well.

With that, I'd like to dive into a little architecture comparison. This is a very simplified slide, of course, but it already shows that there are similarities between the big hyperscalers, like AWS, Azure or Google Cloud Platform, and OpenStack. On the left side of this slide you see the hyperscaler architecture. You typically have your self-service portal and APIs. Then you have at least two regions (most hyperscalers have more than two), with availability zones inside them. And within the availability zones you always have compute, network and storage. All of this is designed for massive scale-out. And with OpenStack, it's basically the same.
If you build it like that, you have shared services with the OpenStack APIs and the OpenStack dashboard. You can have one or more data centers, which you can refer to as regions, and within the regions we have availability zones; in the availability zones, again, we have compute, storage and network. But the cool thing is that with OpenStack we have much greater flexibility and versatility. We don't have just one hypervisor: we can choose basically any hypervisor we want. KVM is the default, but we can also use VMware, of course, and other open-source hypervisors. The same goes for storage: I've mentioned Ceph and NetApp here, but all other storage solutions supported by OpenStack work as well. And the same for networking; here I just named the most prominent options, SONiC and OVN. This is really flexible because you can basically tie everything together as you like.

This versatility, of course, also creates challenges, and Michel will talk later about some of the challenges we usually face when we puzzle everything together for our customers. And again, everything is designed for massive scale-out.

What we also see is that we now have customers who already have experience with these clouds: they went to the hyperscalers first and are now moving back, for several reasons. One is data protection; another is that they figured out the hyperscalers' promises weren't fulfilled. So they build up their own private cloud deployments in their own data centers. And some of these customers are actually coming from VMware, because they want, for whatever reason, to move to open-source software and give it a chance. OpenStack is a very good starting point for that.

This next slide is the same slide as you have seen before, just zoomed in one level. So it's basically the same.
We again have the two data centers, which you can refer to as regions. We have our availability zones, and within the availability zones we have our compute nodes and also host aggregates. We have local services, for example Cinder, Neutron or Glance, and we have shared services, for example Keystone or the OpenStack dashboards, Horizon or Skyline. By the way, I've seen very similar architectures in other presentations here at the Open Infra Summit: the speaker from Samsung had a slide showing a very similar architecture, and I think the people from StackIT did too. So this is basically the way to go. Of course, it's very simplified, and in reality it's much more complex; this is just to give you an overview of how it can be designed.

Now I'd like to introduce a solution called OSISM. It's an OpenStack distribution, and I have to say that it's not a B1 solution; OSISM is a separate company, and B1 Systems is an implementation partner for OSISM. I've chosen this solution for the talk because it fits very well into our "build your own hyperscaler" story: the solution is really built for massive scale. It can be used for medium-sized deployments, but it can also be used for really large OpenStack deployments. It's 100% open-source software, the deployment framework is completely containerized and based on Kolla Ansible, and it provides basically all OpenStack services: compute, network, storage, and all the variations that OpenStack supports. It's also the reference implementation of the Sovereign Cloud Stack, and what the Sovereign Cloud Stack is I will explain later.
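The region, availability-zone and host-aggregate hierarchy from the architecture slides can be made a bit more concrete with a toy placement walk. This is purely a hypothetical illustration: all region, zone, aggregate and node names, and the capacity numbers, are invented, and real Nova scheduling is far more involved than a first-fit search.

```python
# Toy model of the hierarchy from the slides: regions contain
# availability zones, which contain host aggregates with compute nodes.
# All names and free-vCPU counts here are invented for illustration.
CLOUD = {
    "region-a": {
        "az-1": {"ssd-aggregate": {"node-1": 8, "node-2": 2}},
        "az-2": {"hdd-aggregate": {"node-3": 16}},
    },
    "region-b": {
        "az-1": {"ssd-aggregate": {"node-4": 4}},
    },
}

def place(vcpus, region=None, az=None):
    """Return (region, az, aggregate, host) of the first node with
    enough free vCPUs, optionally pinned to a region and/or AZ,
    roughly what you express in OpenStack via an availability zone
    and host-aggregate metadata."""
    for r, azs in CLOUD.items():
        if region and r != region:
            continue
        for a, aggregates in azs.items():
            if az and a != az:
                continue
            for agg, hosts in aggregates.items():
                for host, free in hosts.items():
                    if free >= vcpus:
                        return (r, a, agg, host)
    return None  # no capacity anywhere

print(place(10))                    # only node-3 has >= 10 free vCPUs
print(place(4, region="region-b"))  # placement pinned to region-b
```

The point of the sketch is only the shape of the decision: first narrow down by region, then by availability zone, then pick a host from a matching aggregate.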
When we talk about massive scale, we very often also reach the point where one cloud is not enough: customers want to span workloads across multiple clouds. These are typical hybrid-cloud scenarios, and this is not directly possible with OpenStack alone, but with a Kubernetes layer on top, in this case the Gardener managed Kubernetes platform, it is possible to do exactly that. So we have a layer sitting on top that provides Kubernetes, and the container workloads running there can span multiple clouds. On the left side we have a private cloud deployment, and on the right side the hyperscalers.

To show in a bit more detail how this basically works, again very simplified, of course: at the top we have the Gardener control cluster. This control cluster spawns so-called seed clusters, one seed cluster per cloud, and these seed clusters in turn spawn so-called shoot clusters. Within the shoot clusters you run your actual cloud workload. The cool thing is that this can all be managed centrally via API and web interface, and it really expands like a cloud: you can do a massive scale-out simply by starting more and more clusters. And the good thing is that you can also shrink the deployment again by deleting clusters, which is very useful for workloads with peak demands.
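The control, seed and shoot hierarchy, including scaling out for a peak and shrinking back afterwards, can be sketched roughly as follows. This is a toy model: the cloud and cluster names are invented, and real Gardener manages shoots as Kubernetes resources through its API rather than as in-memory lists.

```python
# Toy sketch of the Gardener hierarchy described above: one control
# cluster, one seed cluster per underlying cloud, and shoot clusters
# created (and deleted again) on demand. All names are invented.
class ControlCluster:
    def __init__(self):
        self.seeds = {}  # cloud name -> list of shoot cluster names

    def add_seed(self, cloud):
        # one seed cluster per cloud the control cluster manages
        self.seeds[cloud] = []

    def create_shoot(self, cloud, name):
        # in real Gardener this would be a Shoot resource applied centrally
        self.seeds[cloud].append(name)

    def delete_shoot(self, cloud, name):
        # shrinking again: peak-demand clusters can simply be removed
        self.seeds[cloud].remove(name)

control = ControlCluster()
control.add_seed("private-openstack")
control.add_seed("public-hyperscaler")

# end-of-month peak: burst extra shoot clusters into the public cloud
control.create_shoot("private-openstack", "baseline")
for i in range(3):
    control.create_shoot("public-hyperscaler", f"burst-{i}")
print(sum(len(s) for s in control.seeds.values()))  # 4 shoots running

# peak is over: throw the burst clusters away again
for i in range(3):
    control.delete_shoot("public-hyperscaler", f"burst-{i}")
print(sum(len(s) for s in control.seeds.values()))  # back to 1
```

The shape is the important part: the control cluster is the single point of management, while capacity comes and goes underneath it per cloud.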
Many workloads have, say at the end of the month, a demand for much more compute resources, and with this solution you can really go from your private cloud into the hyperscalers, use them temporarily, pay some money for it of course, and once you no longer need the capacity, throw it away again.

Just to bring everything together: OSISM plus the Gardener Kubernetes platform is basically a full stack. It can be used for traditional workloads, meaning old "pet" workloads, for workloads somewhere between pet and cattle, and for cloud-native workloads. Typical pet workloads are, for example, old Oracle databases or a Microsoft Active Directory. Workloads somewhere in the middle are, for example, SAP workloads such as SAP NetWeaver or SAP HANA. And cloud-native workloads usually run containerized, which is possible with the Kubernetes layer, as I already said.

As mentioned, OSISM is the reference implementation of the Sovereign Cloud Stack, so let me give you a short introduction to what the Sovereign Cloud Stack actually is. There is a project from the European Union called Gaia-X. The idea is to build an alternative to the big hyperscalers, but not by creating one new big organization that is just another big hyperscaler; the idea is rather to connect many small providers together. This is done by the so-called Gaia-X Federation Services. But as you can imagine, with such a complex architecture you need a lot of standardization, so everything is standardized, basically across all layers. For the lower layers, the platform-as-a-service layer and the infrastructure-as-a-service layer, this is combined, and the architecture for those two layers is called the Sovereign Cloud Stack. So the Sovereign Cloud Stack is an architecture that a
reference implementation, the OSISM OpenStack distribution, implements. If this Gaia-X effort really lifts off (the SCS summit in Berlin was just last week), I think this will be a really great thing; at the moment I would say it looks really promising. With that, I'm at the end of my part and will hand over to Michel, who will do the field-experience part.

Thank you, Markus, for the overview. As you already said, when we go to a customer who wants OpenStack, we face several problems, and I've tried to draw them as an unsolved architecture puzzle. As you see, two pieces are already solved and two are unsolved.

The first unsolved piece is the network, and you would probably agree that deploying the network on the customer side is sometimes quite challenging. We have several options. There is what I would call the default network: floating IP addresses, external networks, and an overlay network. Then we can have multiple provider networks connected to our hypervisors, so that instances are directly connected to, let's say, a corporate LAN. There is the BGP option, where the network node, as a BGP speaker, announces the networks configured by the customer in OpenStack to an upstream layer-3 switch, security gateway or router. And of course there are third-party options, SDN software, whatever it may be. So the network is always challenging; it's always a puzzle piece that we need to solve with the customer directly, and we don't have a master plan for the network, let's say.

The other unsolved piece is storage. Who here runs OpenStack with Ceph? Just three, two? Who runs OpenStack with NetApp, for example? One, two... okay, at least three. So we have several options here as well. We can go with Ceph, as I said. If a customer decides to go with Ceph because he doesn't have a storage that fits the
OpenStack cloud, we have to figure out what kind of hardware he has; if there is none, suitable hardware has to be configured and procured, which is a process of its own. If the customer wants to integrate his existing storage, for example a NetApp: do we get credentials to access the NetApp storage, or do we only get an NFS share, or some other kind of storage? So storage is another piece that we need to solve with the customer on site; no master plan for that either.

We do have a master plan for our deployment and our images, though. For deployment, as Markus already mentioned, we use OSISM, the Open Source Infrastructure and Service Manager, which is a wrapper around multiple open-source projects: Kolla Ansible, ceph-ansible, and some internal APIs. We always start with a configuration repository that is stored in Git, on the customer side or on our Git (and public is not a bad option either). Kolla Ansible has configuration options, ceph-ansible has configuration options, and all of these options are stored in that Git repository.

With that we can start with the seed node. The seed node can be a laptop, a virtual machine, a piece of hardware, or in some cases the controller itself, but more on that later. The seed node simply powers up a bunch of containers; the manager is, I would say, 10 to 12 containers: Ansible, Kolla Ansible, ceph-ansible, some internal components, and Bifrost for the deploy mechanism.

Once the manager is up and running, we can go to the next step: deploying the controllers. To deploy the hardware we have two, or by now almost three, options. The first one is, of course, to do it manually: install an Ubuntu system by hand over the management interface, and that's it, then we can roll out the next step. The second option is to use Bifrost to bootstrap, that is, to install the controllers and computes over PXE with the Ubuntu OS. And there is a new option, let's say a technical preview: a pre-built image that we
can fire up over Redfish, for example, and use to install the controllers and computes; the node then reports the installation progress and status back to a NetBox. We use NetBox as the inventory for our Ansible rollout, for the Ansible playbooks; that's one option, the other being a traditional Ansible inventory in text files. With NetBox we get several benefits. The first is that we can generate the inventory for our environment. The second is that you can model the whole environment in NetBox: compute and storage nodes with MAC addresses, IP addresses, switch ports, serial numbers, DNS entries, and everything else you need or want to store there. NetBox is more or less a config DB, and from this information we are also able to generate the switch configuration.

With that, we can move a host dynamically from one role to another. As in this example here: we can use this node as a compute node in the beginning, and if we see that demand is shifting, say we need bare-metal workloads via Ironic, we can reassign it in NetBox with a simple tag saying this node is now an Ironic node. Then we start the rollout process from the manager, the node gets cleaned up and re-registered in OpenStack itself, and this node is now an Ironic node, for example.

Once the OS is installed, the last step is of course the Kolla Ansible rollout of all the OpenStack services, and after that we are able to start workloads, for example bare-metal workloads on an Ironic node, or normal instances.

The next topic is how to get images into our OpenStack environment, and for that we have a build pipeline. It also starts with a Git repository, which stores the configuration for the specific image. The easiest way is to go upstream to ubuntu.com, download the OpenStack images, upload them, and that's it. I would say that fits 60
to 80 percent of our customers. The rest need custom images, for security reasons, or because they want to add specific software, or because they want nightly builds of an image with all patches included. For that we can add custom scripts and custom configuration parts to the Git repository, and via CI/CD we can trigger the build process. We can use KIWI for the build, for example, or mkosi, which is part of the systemd collection, or MDT; but MDT is a completely different story, as it is only for Windows build services. For MDT you need a dedicated Windows host and all that, which makes it a bit tricky. When the build process is done, we have the images, and the images are uploaded via CI/CD into our OpenStack, of course. And with that, I'd like to hand back to Markus. Thank you.

Yeah, I think with that all the pieces are connected, so the puzzle is closed in the end. As you can see, there is still a little hole in the puzzle; this stands for all the things that are still not working afterwards, which is of course usually the case. In our projects, before go-live there is extensive testing, and there are also extensive trainings for the people who will operate it or use it as end users. As always with IT technology, nothing works perfectly, but in the end, when everything is running stably, we usually hand over operations to the actual local teams to continue with.

With that, we are at the end of our talk. Thank you for your participation. Are there any questions? No questions? Okay, then thank you very much. Any other question from anybody? Yes? Oh, I'm sorry.

Thanks for the talk. I just wanted to hear how modular the different parts are. For example, you said that the reference implementation uses Kolla for actually deploying OpenStack. How easy would it be to run, say, the Bifrost part and the NetBox part, but then utilize
for example, OpenStack-Ansible or something else for actually deploying the OpenStack parts?

That's part of the SCS, so, yeah, exactly: this is basically part of the architecture, so it's basically a given in the end. Of course, technically it would be possible, but the architecture is really pre-defined; that's the only option. But it's an open-source project, so feel free.

Okay, thanks.

Any other question? Okay, then have a great evening. Thank you very much. Bye.
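To round off the transcript, the NetBox-driven inventory and role-reassignment workflow Michel described can be illustrated with a minimal sketch. This is hypothetical throughout: the device records, role names and inventory shape are invented, and a real setup would query the NetBox REST API and then run the Ansible rollout from the manager.

```python
# Minimal sketch of the NetBox-driven workflow: devices carry a role,
# an Ansible-style inventory is generated from that data, and changing
# the role moves a node from the "compute" group to the "ironic" group
# on the next rollout. All device data here is invented.
devices = [
    {"name": "node-1", "role": "controller", "ip": "10.0.0.11"},
    {"name": "node-2", "role": "compute",    "ip": "10.0.0.12"},
    {"name": "node-3", "role": "compute",    "ip": "10.0.0.13"},
]

def generate_inventory(devices):
    """Group hosts by role, as an inventory source would for Ansible."""
    inventory = {}
    for d in devices:
        inventory.setdefault(d["role"], []).append(d["name"])
    return inventory

print(generate_inventory(devices))
# e.g. {'controller': ['node-1'], 'compute': ['node-2', 'node-3']}

# Reassign node-3 to bare-metal duty with a simple role change; in the
# real workflow the next manager rollout would then clean the node and
# re-enrol it as an Ironic node.
for d in devices:
    if d["name"] == "node-3":
        d["role"] = "ironic"

print(generate_inventory(devices))
# e.g. {'controller': ['node-1'], 'compute': ['node-2'], 'ironic': ['node-3']}
```

The design point is that the inventory is derived, not hand-maintained: change one field in the source of truth, and the next rollout reshapes the environment.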