Hello everybody, my name is Kurt Garloff. Together with Christian Berndt I'm going to present our project, Sovereign Cloud Stack, which is about building a large federated infrastructure for sovereignty. So what is the problem we're trying to solve? Looking at Europe and our past digitalization, there are a few challenges that developers meet when they're trying to build software. Very often we still see in Europe rather classical development and production environments. These are typically bare-metal or sometimes virtualized systems that are managed by the IT department. They're well protected, they are in-house, so you have no trouble with data being submitted to unwanted places. And if you're lucky, your IT department still has enough staff to really provide good support to you as an engineer. However, you do have to work with proprietary tooling or sometimes home-grown tooling to access such infrastructure. It typically comes with slow and manual deployment processes, so it's not really following modern infrastructure-as-code principles. And very often you need to actually procure new hardware or get approvals even for getting a virtual machine deployed. And those processes are even slower; they really slow you down in your development process. On the other hand, a lot of companies obviously have made the move to an agile CI/CD style of development. They typically work on hyperscalers. You can use the full automation of such hyperscaler infrastructure. It's API-driven and you can really use best-in-class DevOps or DevSecOps principles. Typically, these infrastructures also come with a number of ready-to-use building blocks. And there's a significant set of open-source tooling around them. Sometimes it's not really open source.
The disadvantage of this is obviously that once you're starting to use these hyperscalers and you're not very careful in managing what technologies you use and how you use them, you easily end up with very hard dependencies that are very difficult to get out of. These can turn out to be expensive, so it's really disadvantageous in an economic sense. It can also be a problem in a strategic sense or a legal sense because, of course, in Europe you have to obey GDPR data protection rules. And this is not easy; just recently the highest European court has ruled that the Privacy Shield is certainly not enough to protect personal data that is transmitted to the US from being spied on. Fulfilling the GDPR regulation requires more effort there. A number of European companies actually have gone down the road of using smaller European providers or building their own infrastructure-as-code platforms. So you can deliver to your engineers, to your software developers, a fully API-driven automation, and you're rather flexible in how you do that. The trouble with that is you end up with a non-standardized niche solution. And this is not very efficient in the long term, because you need really expensive and scarce skills to operate such infrastructure well, in a very reliable, high-quality manner. In addition, all the building blocks you want, you need to build yourself, because you're not on top of a standard platform. You cannot do some cloud bursting and basically consume resources from some other cloud, because you're on your own; you've built your own platform. So Gaia-X wants to support the digitalization of Europe. It's a project that has been started by the German Federal Ministry for Economic Affairs and Energy, and it's meanwhile a European initiative. The idea behind this is that the digital sovereignty of business, science, and government should be strengthened by supporting digital innovation. And it really is about leveraging the value that is in data.
One of the things that we want to do differently in Europe compared to what's happening in other geographies in this world is that there's this idea that there should be an ability to leverage and use data without losing complete control over that data. So there should be an opportunity for people providing data and people using data to do that in a way that still allows you to control who has access to it. Sovereign Cloud Stack is there to build infrastructure that supports such sovereign data usage, such sovereign data processing. And the way we look at this is that we really want the IT landscape to be under the control of the people that provide the data and the people that use it. And this can be achieved by really having a broad set of providers that deliver such modern, agile infrastructure and data services, and that are interoperable, where this interoperability can be certified so you can prove it's interoperable. Data protection and security are built into the platform, and this really supports innovation. So Sovereign Cloud Stack really wants to empower developers and users to have access to modern platforms without becoming dependent on a single large provider or a small set of them. So here's what SCS is delivering. It's really about three things: software, standards, ecosystem. The software that we want to provide as part of what we're doing in Sovereign Cloud Stack is a complete stack of software that allows you to deliver cloud infrastructure. That includes infrastructure that virtualizes your resources and manages your hardware resources to provide an infrastructure-as-a-service platform and a Kubernetes-as-a-service platform to your users. And I'm intentionally saying Kubernetes as a service and not just containers as a service, because in the end we think that engineers and developers need control over a Kubernetes cluster, not just access to it.
And then, optionally and certainly growing over time, there will be certain platform service components that you can use as building blocks to be more efficient in delivering higher-level services. One thing we include, and that's maybe different from what some of the existing OpenStack distributions, for example, attempt to do, is a very strong focus on operations. So lifecycle management tooling is key to what we're doing; delivering CI components that allow the providers to validate updates and upgrades, and infrastructure monitoring, are all part of our platform. Then, importantly, we believe if you want to have choice, those operators need to be able to federate with each other. So users that want to use several cloud providers can do so without having to manage users and access rights separately on each of these platforms. This is core to the Gaia-X concept, to have federation in there. Then, the software we're delivering is modular. So we acknowledge there are existing cloud providers out there, sovereign cloud providers even here in Europe, that want to reuse some of the components they already have. And we want to make that possible. We will look at those components, and if they are compatible with the way we look at the software stack and the architecture, they can be used; there's no need to use the complete SCS software stack, but of course we still want to deliver a complete stack. Then, no surprise probably, presenting to this audience: we fully subscribe to the Four Opens of the OpenStack and Open Infrastructure community. So we are working with open source, we're developing that open-source code with open development processes, with open design processes, with an open community. Very important to us is standards. So we know there's a lot of great open-source software out there that can be combined in many, many, many different ways.
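The federation idea above can be illustrated with a small sketch: a provider accepts an identity assertion issued by an external, trusted identity provider and maps it to local roles, so users never need separate accounts on each cloud. All names here (`map_federated_identity`, the claim and role names) are hypothetical illustrations, not actual SCS code.

```python
# Hypothetical sketch of federated identity mapping: a cloud accepts an
# assertion from a trusted external identity provider and derives local
# roles from it, instead of maintaining its own user database.

TRUSTED_ISSUERS = {"https://idp.example.org"}  # federated IdPs this cloud trusts

# Mapping rules (illustrative): (issuer, group claim) -> local role.
ROLE_MAPPING = {
    ("https://idp.example.org", "developers"): "member",
    ("https://idp.example.org", "ops"): "admin",
}

def map_federated_identity(assertion: dict) -> dict:
    """Turn an external identity assertion into a local, ephemeral user."""
    issuer = assertion["iss"]
    if issuer not in TRUSTED_ISSUERS:
        raise PermissionError(f"untrusted identity provider: {issuer}")
    roles = {
        role
        for (iss, group), role in ROLE_MAPPING.items()
        if iss == issuer and group in assertion.get("groups", [])
    }
    if not roles:
        raise PermissionError("no mapping rule matched; access denied")
    return {"user": assertion["sub"], "roles": sorted(roles)}

user = map_federated_identity(
    {"iss": "https://idp.example.org", "sub": "alice", "groups": ["developers"]}
)
print(user)  # {'user': 'alice', 'roles': ['member']}
```

In a real deployment this mapping would sit behind an OIDC or SAML flow, but the essential point is the same: each cloud only stores mapping rules, not user accounts.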
And in the end, the fact that each party can combine things in non-standard ways actually hurts the overall success of the open-source cloud providers. Because in the end, without a certain set of compatibility and standards, each provider ends up delivering a different platform. So we believe there needs to be a strong standardization effort here. And that really covers all the layers we have in the stack: the infrastructure-as-a-service layer, the Kubernetes layer, the Kubernetes cluster management. And we are not just looking at API compatibility, but also at the behavior of a platform. So for example, what an availability zone means exactly is something that needs to be defined here. We also want to work with providers to provide operational standards. So if providers work with us and want to be SCS-certified, we will have some rules in place that define how long it might take to deploy updates. We don't want to have outdated platforms out there, which, just because of being very old, become incompatible with other, newer platforms. Things like SLA definitions need to be defined and standardized across the ecosystem. And in the end, of course, we want to make sure that the platform services that are delivered, the infrastructure, the Kubernetes, are discoverable. And one of the very useful tools of the Gaia-X project we are using here is the Gaia-X self-descriptions, which help users to select, find, and identify the platforms and characterize them. Ecosystem building, very important to us. We don't just want to deliver a standard and software and then say, well, hopefully some people will use that and create something useful out of it. We are coming out of the cloud service provider community. And we want to work very, very closely with that community and create a culture of sharing that is currently not yet common practice in that space.
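The operational rule mentioned above, limiting how long a provider may run an outdated release, could be expressed as a simple check like the following. The 26-week window is an invented example value, not an actual SCS certification rule.

```python
# Hypothetical sketch of an operational-standard check: an SCS-style
# certification rule might require that a provider's deployed release is
# no older than some maximum age, so platforms don't drift apart over time.
# The 26-week window is an assumed example value, not a real SCS rule.
from datetime import date, timedelta

MAX_RELEASE_AGE = timedelta(weeks=26)  # assumed policy window

def release_is_current(release_date: date, today: date) -> bool:
    """True if the deployed release still satisfies the freshness rule."""
    return today - release_date <= MAX_RELEASE_AGE

print(release_is_current(date(2021, 3, 1), today=date(2021, 6, 1)))  # True
print(release_is_current(date(2020, 1, 1), today=date(2021, 6, 1)))  # False
```

A real certification process would of course cover more than release age (SLAs, update procedures, incident transparency), but automated checks of this shape are what make such rules enforceable across many providers.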
So we want to make sure that not every cloud service provider needs to learn alone all those operational challenges, those best practices you need to become good at operating such a complex, difficult cloud platform. Instead, we really create an open community of cloud providers that share best practices, that share root-cause analyses in case of trouble. And we have some transparency on those challenges, so we can actually create some joint learning for that community. Obviously, that standardization and that ecosystem in the end also really become a platform for the software vendors, for people that learn how to operate this, for consultants that want to help customers create software for such a platform. Because if those platforms are all very similar, it's a lot easier to really learn how these platforms work and specialize your knowledge and your skills to support that. So this is just a different view on the design criteria, just a quick run-through: standardization, certification, transparency, sustainability, federation. This is really required, in our opinion, to create a relevant platform. This is how we believe the ecosystem should look. It's a busy picture and I apologize for that. We have an ecosystem of seven different cloud providers here that collaborate to different degrees. One thing they all have in common is that they all provide a certain set of compatible APIs. If you look at the legend of this picture, we see the APIs for identity and access management in white. We see the Kubernetes-as-a-service APIs; those are mandatory in the SCS standard. We see the S3 object storage API, which is a mandatory part of the standard. This is all provided by all providers, and this makes sure that applications that build on top of these standard APIs really can run without any change and can be automated and deployed without any change on all of those platforms.
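The compatibility rule just described can be sketched as a tiny check: every SCS provider must expose the mandatory API set (identity and access management, Kubernetes as a service, S3 object storage), while other APIs are optional. The tags and function names below are illustrative, not real SCS identifiers.

```python
# Hypothetical sketch of the mandatory-API rule: a provider is SCS-compatible
# only if it advertises all mandatory APIs; OpenStack is an optional extra.
# The API tags are invented labels for illustration.

MANDATORY_APIS = {"iam", "kubernetes", "s3"}
OPTIONAL_APIS = {"openstack"}

def check_compatibility(advertised: set) -> tuple:
    """Return (is_compatible, missing mandatory APIs)."""
    missing = MANDATORY_APIS - advertised
    return not missing, missing

provider_one = {"iam", "kubernetes", "s3"}               # no OpenStack layer
provider_two = {"iam", "kubernetes", "s3", "openstack"}  # full stack

print(check_compatibility(provider_one))   # (True, set())
print(check_compatibility(provider_two))   # (True, set())
print(check_compatibility({"iam", "s3"}))  # (False, {'kubernetes'})
```

This is why both provider one (without OpenStack) and provider two (with it) count as compatible in the picture: the mandatory set is the common denominator that lets applications run unchanged everywhere.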
This applies, for example, to what I've depicted here as the third-party SaaS application number one on the apps layer, because it really consumes the standard interfaces only. We also have a set of optional standards. So we believe there will be providers that want to expose the OpenStack APIs, having OpenStack as one of the building blocks that we put below the Kubernetes-as-a-service platform. That's an optional piece, but if it is provided, it can be standardized. So applications that want to consume those interfaces can be built in such a way that they run on all of the SCS providers that provide OpenStack APIs. Over time we will also have a growing ecosystem of platform-as-a-service applications, and we will slowly standardize these as well. As soon as those are provided as a standard, they are optional standards. So some cloud providers may decide to deliver them; some may not. And of course, the self-descriptions will make it easy to discover which ones are available. So applications like the red one, the SaaS application number two here in this picture, that build on top of these optional standards will run on a subset of the SCS providers, but this can be found out without testing: just by looking at the self-descriptions and discovering the infrastructure, it can be determined, and that's important to us. If I look at the various offerings from cloud providers here, you will see some differences in the details. For example, look at provider number one on the left side. This is actually a provider that has a lot of pre-existing infrastructure. So they build, using the standard SCS service pieces, on top of their non-standard infrastructure-as-a-service layer. And based on that, they of course cannot expose the standard OpenStack APIs, but they can still deliver standard S3 and Kubernetes or container platform APIs, and in this way still be a compatible platform.
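Discovery via self-descriptions, as described above, boils down to matching an application's requirements against what each provider publishes about itself. The schema below is invented for illustration; real Gaia-X self-descriptions are richer, signed JSON-LD documents.

```python
# Hypothetical sketch of discovery through self-descriptions: each provider
# publishes a machine-readable description of the APIs it offers, and an
# application selects compatible providers without any trial deployment.
# The catalog structure and API tags are invented for illustration.

catalog = [
    {"name": "provider-1", "apis": {"iam", "kubernetes", "s3"}},
    {"name": "provider-2",
     "apis": {"iam", "kubernetes", "s3", "openstack", "paas-1", "paas-2"}},
    {"name": "provider-5", "apis": {"iam", "kubernetes", "s3", "openstack"}},
]

def find_providers(catalog: list, required: set) -> list:
    """Select providers whose self-description covers all required APIs."""
    return [p["name"] for p in catalog if required <= p["apis"]]

# An app using only mandatory APIs runs on every provider in the catalog:
print(find_providers(catalog, {"iam", "kubernetes", "s3"}))
# An app that also needs OpenStack runs on a subset:
print(find_providers(catalog, {"openstack"}))
```

This is the practical value of discoverability: whether an application that depends on optional standards can run somewhere is determined by reading descriptions, not by deploying and testing on each cloud.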
So this can be certified and can be run and consumed by users in a fully compatible way. Provider two is the ideal case from an SCS point of view. This is a provider that really consumes all of the SCS standard software, also exposing the OpenStack pieces and exposing standard APIs for the two platform services that we already have standardized in this example. Provider number three has a few differences: not exposing OpenStack, not exposing the second PaaS building block. This is also a provider that's actually not a public cloud provider, but one that builds and provides services just for a limited community, accepting maybe only identities from a specific, selected, vetted identity provider, number two in this picture, and in this way limiting access to certain trusted customers. We also have examples of providers that are using non-standard pieces in their infrastructure, like provider number five. Interestingly, provider number five has a compatible OpenStack implementation in place. This can be certified using tests from the SCS community, thus actually providing a standard interface here. So applications that rely on standard OpenStack interfaces actually can run on this platform. Provider number six actually is not a cloud service provider, but the IT department of a large company, building on standard SCS technology but protecting the environment very well, because it's only meant to be used internally, so external access is very limited. Provider number seven is even more protected. This is really something that we imagine to exist in a government environment or maybe even the military. It's not connected to the internet, at least not without going through many, many difficult hoops. So from a practical perspective it looks like a completely separated environment.
In that environment, the IT department running it will still benefit from SCS, because it allows standard applications that are built for SCS to work without any change and without any special work that needs to be done.