Yeah, good afternoon. Thanks for attending this session. Infrastructure is something we in the OpenStack community know very well and are familiar with. But when it comes to platform, that's something a lot of users demand and ask for. My name is Nils Magnus, and I'm here with my colleague Artem Goncharov. We'd like to share some insights from our work as architects at the Open Telekom Cloud, where we are confronted with users asking for platform services. This is a work-in-progress report, not the final, ultimate experience; we've just started our journey here. If any of you have comments or questions on what we describe, please raise your hand directly. I'm not sure I can actually see you because of all the stage lighting. And to make it a little more interesting to ask questions, I brought some popsicles with me. So if you're hungry: ask a question, give a comment, or something.

As I said, my name is Nils Magnus. I'm a cloud architect at the Open Telekom Cloud. I have more or less two roles at Deutsche Telekom: one is architecture, further developing our platform, and the other is reaching out to technical communities like you. I did a lot of consulting, security, a little bit of operations, and so on. I organized LinuxTag a couple of times; actually, in this very venue, next door in hall seven, we had a number of LinuxTags. I'm also a member of the German Unix User Group.

Originally, we planned to have Alex with us, also a colleague of ours. Unfortunately, Alex is sick and is at home trying to get rid of his cold. Best wishes to you, Alex, if you can see us; I'm actually not sure whether we are being recorded or streamed. As a very competent replacement, I brought Artem with me. He supported us on some of the platforms and the insights, and he stepped in last minute. Thanks a lot, Artem, for that.
Well, this is our agenda; I'd like to give you a rough overview of what we are talking about. The first part is about what a platform actually is and how to deal with this term. The second is how to integrate platforms into what we know best, which is infrastructure. Originally, we planned to show you a number of use cases in the third part of the presentation, with some real-life demos. But unfortunately, that was Alex's part, so we are not able to show you all the nitty-gritty details of the use cases. We prepared a very brief overview instead, and I hope you can follow the examples there.

Let's take a step back and look at the early days of the term cloud computing. NIST made a number of definitions: infrastructure as a service, platform as a service, software as a service, and so on, a scheme that is widely adopted today. Looking back at this definition, NIST defined the service model platform as a service, or PaaS, as the capability provided to the consumer to deploy onto cloud infrastructure. Not to read everything here, but I highlighted a number of the terms. So it's something that is consumer-created. That is very important for us as a public cloud provider to keep in mind. It is very tempting to try to solve each and every request for our customers directly or beforehand, but that usually does not really make sense. It is a better idea to enable our users, clients, and customers to help themselves. That is what is meant by consumer-created and consumer-driven. And obviously, we're talking about applications. As for the platform itself, the PaaS platforms started with environments for specific programming languages. I gave an example here: Heroku. I'm not really sure, I think they started with Ruby on Rails. Is that correct? Can someone confirm or correct me?
But this is no longer so much of an issue, especially with the rise of the container world: once you packaged applications in containers, it became much easier to deploy those applications to a platform. Still, it's in the definition: it's not only the programming languages and the code itself, but also libraries, services, and tools. And the platform takes care of managing these applications, preferably over the whole software lifecycle. As said, one of the first examples I know of, and the concept may be even older, but one of the widely adopted and implemented versions was Heroku. Today, most modern software development takes place inside containers, because they are so neat and easy to use. But once you start to understand how to work with containers, you quickly realize that containers themselves are not the whole solution; you need something to manage those containers. So actually, you're looking for some kind of platform that is able to connect and manage containers. That is more or less the common ground of the platform term today. Since it's not exactly the same as the original definition, there is sometimes a mixture of terms: platform as a service, container as a service, container management platform, something like that. I mention this because platform itself is a complicated term, and it's hard to get a real, agreed-upon definition. But if you look at the details, you see at least three aspects. One is the programming language itself, or the framework. And in general, you also have a layer of abstraction, or a convention-based framework, inside your platform. So, for example, you are asked to provide your code following conventions for entry points, for how to structure your code base, and so on.
But even with these attributes, there are still a number of very different platforms out there. I listed a few of them, chosen somewhat arbitrarily. There's Cloud Foundry, which is aimed more at larger enterprise applications. Nutanix is very infrastructure-centered, an abstraction of the virtualization layer, but also provides a kind of platform. Mesos was very successful at a certain time, and I think it's still very successful for certain types of workloads. And then there's Kubernetes, which is probably the most successful foundation of a platform that exists today. Can we call Kubernetes itself a platform? That's also a difficult question, but if we look at the landscape that the CNCF, the Cloud Native Computing Foundation, publishes, we see the vast space that the Kubernetes and cloud-native environment spans. I tried to count all the sub-projects there; I think there are several hundred projects and components on top of or augmenting Kubernetes, so I'm pretty sure we can call Kubernetes a kind of common denominator for a platform today, even if it doesn't exactly align with the official definition.

So now let's look at how to integrate these platforms into infrastructure from an operator's point of view. It makes sense to look at the subject from two different angles. The first is the user's perspective. As a user of a platform, you focus on your applications. You're interested in how to deploy your code onto this platform and make sure everything is working. Users are generally not interested in the underlying infrastructure; it should be completely transparent, and, well, many users simply don't care about the lower levels of the infrastructure. The perspective is different from an operator's point of view, however.
So if I talk to my colleagues, especially the colleagues who want to offer our cloud services to customers, they're interested in a good integration and an offering with a number of features that can easily be used as a differentiator from other cloud offerings. On the other hand, as we've seen on the previous slide, there are so many sub-projects and new developments in the platform space; we see very fast-paced development. And it's very, very difficult to cater for all needs, because workloads are different, applications are different, and the problems are different, so we need different components. A one-size-fits-all solution might not be the right way here. How we try to address this issue, we'll see in a minute.

But we are here at the OpenStack Summit. So one of the first questions is probably: if we want to go for a platform, don't we have something native in the OpenStack universe? Well, actually, we do. Magnum is the best-fitting project; it takes care of setting up platform environments on top of an OpenStack cluster. On the other hand, Magnum is currently only a little more than a unified installer. Setting up a Kubernetes cluster with Magnum is comparatively easy, but that is just the installation. What about day-two operations? What about updates? What about resizing? Well, you can usually resize the number of nodes, but the actual management of the platform is not covered by Magnum itself; it is covered by the platform. And we'd like to see a little more integration of those features into OpenStack, since the platform components themselves usually make use of the underlying infrastructure anyway. For example, you most often need a load balancer, you need storage, you need network facilities, and so on, to leverage the actual platform.
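To illustrate the "comparatively easy" Magnum path, here is a minimal sketch using the standard `openstack coe` commands. All names, images, and flavors are illustrative assumptions; substitute whatever your cloud provides:

```shell
# Define a cluster template describing the desired Kubernetes setup
# (image, keypair, and flavor names here are placeholders)
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --keypair mykey \
  --external-network public \
  --flavor m1.medium \
  --master-flavor m1.medium \
  --docker-volume-size 10 \
  --network-driver flannel

# Instantiate a cluster from that template
openstack coe cluster create my-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 3

# Fetch the kubeconfig once the cluster reports CREATE_COMPLETE
openstack coe cluster config my-cluster
```

This covers exactly the installation part discussed above; resizing is possible with `openstack coe cluster resize`, but anything beyond that (upgrades, platform management) is left to the platform itself.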
So the main question is how to integrate that in a seamless way, and not to reinvent on the platform level everything that is already there on the infrastructure level, or vice versa. Well, our current approach is a strategy with at least two facets. The first is providing managed services, to some degree, for agreed-upon services like a Kubernetes cluster. An example is our service called Cloud Container Engine, a managed version of Kubernetes on top of the Open Telekom Cloud, so on top of an OpenStack installation. But it does not make sense to offer a managed service for each and every single detail. For those cases, we agreed on setting up a set of technical blueprints. A blueprint is a kind of architecture description, or reference architecture, that can more or less be used as a template to create the resources you need with the means provided by OpenStack. It comes with best-effort support, what we call "somewhat supported": my colleagues and I are available for requests. We run a blog, for example, and some other channels to get in touch and stay in touch with our users and customers, so that we can iteratively, in an agile way, enhance our blueprints and recommendations. They should be easy for our customers to adopt. And one aspect is important for us: we try to stay as close as possible to upstream, not making up complex solutions that work only on this specific version of the Open Telekom Cloud, but solutions that work on an OpenStack platform in general and can be shared and distributed to others as well. And a blueprint should not implement a demo setup, but should have a real production-ready, or production-like, setup in mind. For example, setting up a Kubernetes cluster with Minikube is just a matter of a few command lines, but extending a Minikube installation for production use is very tedious and probably not a good idea.
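For contrast, the Minikube path really is just a couple of commands, which is exactly why it is a demo setup rather than a production one:

```shell
# Single-node, single-binary demo cluster on the local machine;
# fine for trying things out, not a basis for production
minikube start
kubectl get nodes
```

Everything a production cluster needs on top of this (multiple nodes, HA control plane, real storage and load balancing) is what the blueprints are about.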
So we decided to go for the production-ready setup directly. That takes me to the third part. And, well, as I said, my apologies that Alex is not here; he prepared some of the original demos. But to give you a brief overview: we picked, more or less arbitrarily, but maybe not completely randomly, three platform systems, the first being Rancher. Rancher is, in my opinion, container management in a nutshell. If you just have a number of containers that you don't want to organize yourself with Docker commands, spinning up containers on different hosts, so Nova instances or virtual machines, then Rancher might be a good choice for you. The current version 2.0, with a major redesign of the back end, was released this summer, and with version 2.0, Rancher more or less manages Kubernetes clusters. That makes it a very easy first step into this field if you haven't investigated it before. Provisioning is quite easy; there are YouTube videos and so on, and you can be done in 15 minutes if you're quick and have all the resources ready. Rancher provides an installation Docker container, so if you already have a Docker runtime on your virtual machine, you can get started very quickly and launch the management platform. There are multiple cloud adapters available, and obviously we prefer to use an OpenStack one. I think there is even a specific adapter available for the Open Telekom Cloud with some extra features, but basically it's the vanilla OpenStack one.

Our second case study is about OpenShift. And, as a lucky coincidence, Artem was here. He's way more familiar with OpenShift than I am, so I pass the mic to you, Artem. Thanks. Well, basically, what is OpenShift? OpenShift is, I wouldn't say nothing else than, but essentially an enterprise version of Kubernetes created by Red Hat. What's funny for me is that OpenShift is, to some extent, developing even faster than Kubernetes itself.
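Going back to the Rancher quick start for a moment: the "installation Docker container" mentioned above boils down to a single documented command on any VM that already runs Docker (the exposed ports follow the Rancher 2.x quick-start; the tag is illustrative):

```shell
# Launch the Rancher 2.x management server as a single container;
# the web UI then becomes available on ports 80/443 of the host
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest
```

From the UI you can then register or create downstream Kubernetes clusters, which is where the cloud adapters come into play.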
They have packed in additional features, and those features are evolving even faster than Kubernetes. They do a really great job, of course, of contributing those features back to Kubernetes, but that's not the point. So basically, let's consider OpenShift as one flavor of Kubernetes. What do we have here? An enterprise platform for software development. We can also consider OpenShift.io, for example, as an example of how platform as a service could be created based on OpenShift, which is really a great example, too. We also have a managed version of OpenShift, a product at T-Systems called AppAgile. Basically, it's exactly a managed OpenShift working on top of the Open Telekom Cloud, which means working on top of OpenStack. But even in this case, we decided to go both ways, managed as well as creating blueprints, since in some cases a fully managed OpenShift would be really costly. Can you please switch to the next slide? Yes, of course. Sorry.

Here is just a small reference architecture, how it can look, or how it should look, in a more or less production environment. So we are definitely not talking about MiniShift or some really small, tiny example usage, but something that can be used in production. On top we see load balancers. We see masters, which, as with Kubernetes, should definitely be distributed across different availability zones. We see worker nodes. And all of those are just normal Nova instances. In some cases it could even be a bare-metal server, if, for example, you want to try using Kata Containers. We might also have additional persistent storage, which can again be Nova instances running, for example, GlusterFS, or Ceph, or whatever.
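That reference architecture maps fairly directly onto an openshift-ansible inventory. The sketch below is an assumption of how such an inventory could look for OpenShift 3.x (hostnames are placeholders; the group names and `openshift_node_group_name` values are the standard openshift-ansible ones):

```shell
# Sketch: inventory reflecting the slide: a load balancer entry point,
# three masters spread over availability zones, and compute nodes
cat > inventory.ini <<'EOF'
[OSEv3:children]
masters
nodes
etcd
lb

[masters]
master-az1.example.com
master-az2.example.com
master-az3.example.com

[etcd]
master-az1.example.com
master-az2.example.com
master-az3.example.com

[lb]
lb.example.com

[nodes]
master-az[1:3].example.com openshift_node_group_name='node-config-master'
worker-[1:3].example.com   openshift_node_group_name='node-config-compute'
EOF
```

Whether etcd lives on the masters, as here, or on dedicated nodes is exactly one of the contested choices discussed next.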
So basically, what I'm saying is that even with OpenShift, you have so many different options for how to install it, and customers might expect you to provide a managed service for any of them, that it's simply nearly impossible to cover everything. One customer comes to you saying, OK, I'd like to have just two masters, I don't need more. Another says, no, I definitely need three. But we have two availability zones; how would you place three masters into two availability zones? That might be a question. Some would say, I'd like to see etcd co-located on the masters; others would say, no, please don't ever do this. And so on and so on. Some would say, OK, I'm fine with GlusterFS, and with having the registry on top of it. So yeah, there are way too many possibilities for installing OpenShift, and basically the same holds for Kubernetes; it's simply impossible to follow all of them. Neither can you offer a managed service that covers all the possibilities, nor can you provide a blueprint that covers them all. So at the end of the day, you make some assumptions, some decisions based on your own practice. But even this is tricky, since your customer comes to you and asks, what is really the best practice for installing or using this or that tool? It's impossible even for us to follow all the development; with each new release, you would install it in a totally different way.

But let's focus on the typical installation of OpenShift or Kubernetes. I have chatted with some people who run Zuul on top of OpenShift, and so on; it's basically quite the same for a lot of them. How would you do this? You would create a bastion host somewhere in the cloud, and you either use this bastion host or you keep using your local host with a proxy into the cloud.
What you would normally do next is create a couple of subnets, or even networks: one for the DNS service and one for the cluster itself. Basically, it's a requirement coming from Kubernetes that all instances need to know each other quite well. And while OpenStack is moving and improving in this regard, in our case we haven't found that it really works properly yet. So basically, you create one network, put your own DNS servers there, and manage your zones yourself. Then you create an additional subnet and put the actual Kubernetes or OpenShift instances there. And then, not trying to reinvent the wheel, we just use the openshift-ansible installer, or Kubespray in the case of Kubernetes. Yeah, I think that's all. So basically, that's how we try to make life easier for our customers who are asking how to install Kubernetes or OpenShift in a more or less production-grade way.

Yeah, so as Artem just explained, a one-size-fits-all solution is simply not applicable to a framework like that. However, we published a few examples and work-in-progress reports on our blog; I think I put the link on one of the reference resources pages. So if you want to get started at a certain point, that might be an option, if you're looking for something on an OpenStack environment.

More or less the same also applies to Cloud Foundry, but one level further up the scale. Cloud Foundry is a kind of abstraction platform for really large enterprise organizations. It has a number of abstraction components for databases and queues, actually covering the whole software development lifecycle, with integrated CI/CD, which, strictly speaking, is not the case with OpenShift, although it can quite easily be added to OpenShift. Cloud Foundry was originally created by VMware, as far as I remember, and donated to the community a couple of years ago, almost 10 years now.
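The steps Artem describes (DNS network, cluster subnet, bastion, then the community installers) can be sketched with the OpenStack CLI. All names, address ranges, images, and paths below are illustrative assumptions:

```shell
# One network; a subnet for self-managed DNS and one for cluster nodes
openstack network create cluster-net
openstack subnet create dns-subnet \
  --network cluster-net --subnet-range 192.168.1.0/24
openstack subnet create nodes-subnet \
  --network cluster-net --subnet-range 192.168.2.0/24 \
  --dns-nameserver 192.168.1.10   # address of our own DNS server

# Bastion host used to drive the installation from inside the cloud
openstack server create bastion \
  --image centos-7 --flavor m1.small \
  --network cluster-net --key-name mykey

# From the bastion: hand over to the community installers,
# with inventories prepared beforehand
ansible-playbook -i inventory.ini \
  openshift-ansible/playbooks/deploy_cluster.yml       # OpenShift 3.x
# ...or, for plain Kubernetes:
# ansible-playbook -i inventory/mycluster/hosts.ini kubespray/cluster.yml
```

The point of the blueprint is exactly this division of labor: OpenStack primitives for the infrastructure, upstream installers for the platform.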
The project itself is now under the custody of the Cloud Foundry Foundation, which is in turn part of the Linux Foundation. Cloud Foundry has also adjusted its path, and especially the execution, or deployment, platform is nowadays more or less a Kubernetes platform as well. That's not mandatory, but usually users do it this way. A very, very brief view of the architecture diagram as well; Alex could explain this way better. The installation itself is more or less similar to what Artem explained for OpenShift: setting up some foundation, setting up an initial virtual machine that acts as a bastion or management installer node, creating and configuring the infrastructure so that you can access the OpenStack components and service endpoints, and then creating the necessary cloud infrastructure. In our blueprint, we used Terraform for that. After that, you can deploy the so-called BOSH. BOSH is the deployment tool that is part of Cloud Foundry, and once it is available, BOSH is able to bootstrap itself and then bootstrap the Cloud Foundry setup. And that's more or less it. It looks easy, but in terms of actual resources, virtual machines as well as days spent to install, it is obviously different.

We made a very rough comparison chart to show you a little bit of our experience with the different platforms, their targets, and the resources we spent on them. This is not set in stone, as it obviously depends very much on your specific needs, requirements, and workloads, but after quite some discussion, we agreed on these example numbers. You can run a Rancher setup with two compute nodes and just a network in between, whereas you need at least 8 to 10 nodes for a Kubernetes cluster, and more than 30 to implement the recommendations of the Cloud Foundry reference architecture. The typical workloads I think I already explained.
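The BOSH bootstrap step mentioned above, in a hedged sketch: the manifests and ops files come from the public `bosh-deployment` repository, and every variable value here is a placeholder, not a real credential or endpoint:

```shell
# Bootstrap a BOSH director on OpenStack using the upstream
# bosh-deployment manifests; BOSH then deploys Cloud Foundry itself
git clone https://github.com/cloudfoundry/bosh-deployment.git

bosh create-env bosh-deployment/bosh.yml \
  --state=state.json \
  --vars-store=creds.yml \
  -o bosh-deployment/openstack/cpi.yml \
  -v director_name=bosh-director \
  -v internal_cidr=10.0.0.0/24 \
  -v internal_gw=10.0.0.1 \
  -v internal_ip=10.0.0.6 \
  -v auth_url=https://keystone.example.com:5000/v3 \
  -v az=eu-de-01 \
  -v default_key_name=mykey \
  -v default_security_groups=[bosh] \
  -v net_id=NETWORK_UUID \
  -v openstack_project=myproject \
  -v openstack_domain=mydomain \
  -v openstack_username=admin \
  -v openstack_password=secret \
  -v region=eu-de \
  --var-file private_key=bosh.pem
```

In the blueprint, the network UUID and endpoints fed in here are outputs of the preceding Terraform run, which keeps the two stages cleanly separated.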
Today, the programming languages are not an issue anymore; you can use virtually any programming language you can package inside a container. How much effort it takes to actually set up and configure everything in a way that is at least similar to a production environment, that differs a lot, and that is also due to the fact that the sheer number of services differs greatly between the three platform software components.

That brings me to a final slide where we collected a few of our resources. Maybe you can look that up in the video or in the conference material. And I'd like to conclude with a few thoughts that are our takeaway from our journey into platforms. The first is to offer flexibility to our customers and users, to enable them to adapt their platform to their specific needs. We are here to provide support: we hear you, we try our best to answer your questions and to provide solutions if something is missing. We have to constantly monitor developments in the platform area, and once we see specific platforms become a quasi-standard, we also consider optionally providing managed services on top of our platform, as we already did, and do, with Kubernetes and the Cloud Container Engine. With that takeaway and wrap-up, I say thank you, and have a great last day of the OpenStack Summit. Thank you very much. Any questions or comments, or just interest in the popsicles? Otherwise, I need to take them back again. So if you are interested, just come by and grab one. Oh, yeah, interesting, yeah. But this is actually from the next presentation. Where does it come from? OK, sorry. But it's still true, yeah. Thank you very much.