So let me talk about the project status. We are a small team that only came together less than a year ago, and what this small team is really doing is orchestrating a larger community of people contributing to the project. We currently have more than a dozen people who spend most of their time on the project, essentially funded by the companies that are working with us and supporting the SCS project. The three of us who work centrally are supported by funding from the German agency for disruptive innovation, which we're really grateful for, and we are trying to secure additional funding from the German ministry for economy and energy. In the end we believe there will be some association or maybe a foundation that will be the home for the central coordination work, but that is still being worked out.

The ecosystem of partners working with us is growing: we are in discussions with interested companies and are adding a few companies every month, which is really great to see. We already have SCS installations at some providers, most of them virtual, and I'll explain what that means in a second. We are also part of the GAIA-X project as a work package, and we collaborate intensely with several other groups there, one example being the identity and access management group, with which we share some infrastructure to move forward, providing a test environment and some technology. There will be a GAIA-X summit on the 19th and 20th of November, and I would like to invite everyone to join; it's a virtual event, so it's easy to attend. We also have a bit of public coverage: there are web pages from the people who support us, of course, but also coverage in the German computer magazine c't and a feature in a podcast from a German radio station. We have listed some of those on our page, https://scs.community, and of course we're looking for more contributions. You can see a snapshot of our page here, and we also have quite a bit of code out there, available on GitHub.

On the technical side, since this is a technical conference, let me share some of the technology work we've already been doing. Right now we have automation in place to implement part of the stack we're working on: infrastructure like compute virtualization, storage virtualization using Ceph, network virtualization using Open vSwitch (we're currently switching over to OVN), plus tooling for database management, monitoring, automation using Ansible, and message queuing; it's all there. We can do a bare-metal install using MaaS, collect the inventory using NetBox, and have Zabbix in place as the monitoring solution. On top of that we start rolling out with Ansible: we roll out the Docker containers that provide the various services such as Ceph and the core OpenStack services, so we're really building on top of Kolla-Ansible here.

We can also do a virtual deployment: instead of starting with bare-metal installations we use virtual machines and deploy the Docker containers in there. That's what we call the testbed. It's great for testing, for our CI tests, and also for demos. If you're using nested virtualization the performance is decent, so you can actually do some reasonable work there.
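As a small illustration of the kind of check you might do before running the testbed on a nested setup, here is a minimal Python sketch (not part of the SCS tooling) that reads the usual KVM module parameters on a Linux hypervisor host:

```python
# Minimal sketch (not part of the SCS tooling): check whether the KVM kernel
# module on a Linux host reports nested virtualization support, which the
# virtual testbed benefits from.
from pathlib import Path

def nested_virtualization_enabled() -> bool:
    """Return True if kvm_intel or kvm_amd reports nested support ('Y' or '1')."""
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            return param.read_text().strip() in ("Y", "1")
    return False

if __name__ == "__main__":
    state = "enabled" if nested_virtualization_enabled() else "disabled or unavailable"
    print(f"nested virtualization: {state}")
```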
We're using Terraform to do the virtual deployment with the testbed. Currently we have physical deployments with two providers: one at Betacloud, which is actually in production, and one with plusserver, which is going to be launched early next year. The virtual deployments we have running on top of OpenStack environments at a number of our partners, and it's fairly easy to get working, so we expect to add companies to that list very soon. We have not yet done the work to use Terraform to deploy on top of Amazon, or maybe just libvirt on a single machine, so currently we deploy the SCS stack, which includes OpenStack, on top of a pre-existing OpenStack infrastructure. We also have Keycloak as an identity proxy in there, and we've carved that out to provide a test environment used by the IAM project in the GAIA-X space, to support our colleagues in making progress with the technology there.

If you're asking what you need to do to be SCS compliant and compatible: we have not written all of that down yet. Right now I would say, look at what SCS deploys and consider that the standard. Obviously we need to get better than that and really write it down and create tests that allow you to determine what compatibility means.

On the container layer, which is of course a very important piece because that's what most modern cloud-native applications are built against, we're currently looking at projects like SAP Gardener, Kubermatic and Rancher, and we're also talking to the Giant Swarm people. One of the challenges we have in that space right now is that Kubernetes cluster management does not yet appear to be standardized as well as it should be, so we're fighting that a bit. Our goal is to provide standard interfaces so we don't have to live with a lot of churn going forward as the technology changes, and that's something that still needs to be worked on; we're currently looking at how we can overcome that challenge. We do have proof-of-concept work with Gardener, Kubermatic and Rancher running, so you can deploy a Kubernetes cluster management solution on top of the base SCS stack, but it's not yet working in a way we consider standardizable. We're currently considering shipping with the OpenStack Kubernetes Cluster API provider, because that seems to provide a minimal set of standardized APIs, and then building on top of that, hopefully working with one of the companies mentioned here going forward, so that we have a Kubernetes-as-a-service layer that goes beyond just OpenStack environments.

Here's a short view of the workflow if you do an SCS deployment. There are two different ways to deploy. The first one you see here is a physical deployment: you obviously need to do some manual work like putting servers in racks and doing the cabling, then you start a deployment using MaaS and NetBox and deploy Zabbix for hardware monitoring, and then a fully automated process starts, using Ansible to deploy all the software in a standardized way with a standard configuration. You can do the same thing in the virtual testbed. Here the bootstrap is much easier because you just use Terraform to deploy VMs, and the Ansible-driven installation starts from there. The whole thing takes less than 90 minutes, depending a bit on the network connectivity and the performance of your infrastructure.
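To make the two phases concrete, here is a minimal sketch of driving the Terraform bootstrap and the subsequent Ansible run from a small wrapper; this is not the actual SCS tooling, and the directory, inventory and playbook names are placeholders:

```python
# Hypothetical wrapper around the two deployment phases described above:
# phase 1, Terraform provisions the testbed VMs; phase 2, Ansible installs the
# SCS software. Paths and playbook names are placeholders, not the real layout.
import subprocess

def run(cmd, cwd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def deploy_testbed(terraform_dir="terraform", ansible_dir="ansible"):
    # Phase 1: bootstrap the virtual machines on the underlying OpenStack cloud.
    run(["terraform", "init"], cwd=terraform_dir)
    run(["terraform", "apply", "-auto-approve"], cwd=terraform_dir)
    # Phase 2: the Ansible-driven installation rolls out the containerized services.
    run(["ansible-playbook", "-i", "inventory", "site.yml"], cwd=ansible_dir)

if __name__ == "__main__":
    deploy_testbed()
```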
We have this running on a number of systems, and we are fortunate to have great partners supporting us by giving us access to their infrastructure at OTC, OVH and City Network, where we have it running. I'll not do a demo here. The infrastructure that is set up in the testbed deployment uses just four nodes: three hyperconverged nodes that provide block storage virtualized using Ceph and also run all the OpenStack services you need, along with a number of infrastructure services, plus one management node that you can access as an administrator to control the whole environment.

Getting it to run is fairly easy. If you have access to an OpenStack environment that is standard, you will not have any trouble. You really just fill in a handful of pieces of infrastructure-dependent information, such as the flavors of the VMs you want to use and the name of the Ubuntu image you want to deploy with. That's basically it. If you try to do this on OVH or on the Open Telecom Cloud you will need a few workarounds, because some aspects of those two platforms are not fully standard. Once you have done the deployment successfully, you can access the web interfaces and see how the SCS environment works. We haven't done the work yet to port the Terraform recipes to also work on libvirt or maybe even on Amazon, because it has not been a priority, but if you are interested in that, let us know and we will work with you to get it done.

Once you have done the deployment you will have access to a number of dashboards, and I have shown just one of them here, Netdata, which is a neat and nice-looking tool for live observation of your system. It's not what I expect operations teams to use most in practice, because in practice you want to collect monitoring data, store it in a time-series database, analyze it later and watch for trends and alarms. Netdata really gives you a live view, which is nice if you want to, for example, look into some performance challenge or debug a live issue.

Here's the big view of the architecture. On the bottom layer we have virtualization for compute, storage and network using standard technologies that you would expect from the open source universe: the Linux kernel and operating system, KVM for virtualization with libvirt to control and drive it, Ceph and RADOS on the storage side, and Open vSwitch and OVN on the networking side. On top of that we have the OpenStack services: basically just the core services plus a handful that we have chosen because we need them for specific reasons. It is important to note that we consider the OpenStack services an optional standard. We know that some of the providers we're working with do not want to expose these services and not necessarily standardize on them, because all they want to offer their customers is a container and Kubernetes layer on top; in such a case this layer will not be exposed. If it is exposed, it can be standardized and certified, really tested against the standard. There is one piece at the infrastructure layer that we do want as a standard, which is the S3 protocol; it is the most commonly used object storage interface, so that is something we want to provide as a standard there.
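To illustrate what standardizing on the S3 protocol means for application developers, here is a minimal sketch using boto3 against an S3-compatible endpoint; the endpoint URL, credentials and bucket name are placeholders for illustration, and on an SCS cloud such an endpoint could for example be served by Ceph's RADOS Gateway:

```python
# Minimal sketch: talking to an S3-compatible object storage endpoint with boto3.
# Endpoint, credentials and bucket name are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.org",  # placeholder S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",                    # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"Hello, SCS!")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

Because the protocol is the standard rather than any particular implementation, the same client code should keep working regardless of which backend a provider runs behind that endpoint.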
On the container layer you see several Kubernetes clusters here, because we believe that Kubernetes clusters should be deployable as self-service infrastructure. Kubernetes is not just Kubernetes itself; it is also tied into the underlying infrastructure via the storage and networking interfaces, and there is some standard tooling around Kubernetes that you expect to be there. All these pieces are part of the standard SCS platform, so developers can rely on them being available.

If you look at the left side of this graph you will see a selection of operational tools. Those are rolled out when you deploy the SCS stack, and they really help operators run such a platform. We have monitoring pieces such as Prometheus, Netdata, Skydive and Cortex in there, we have CI tooling building on top of Zuul, for logging we have the ELK stack, and for automation Ansible. The whole thing provides automation to deploy, but also, at least as importantly and more difficult, automation for updating. Lifecycle management is one of those things that a number of operators struggle with, and it's really one of the things we want to provide solutions for.

On the right-hand side you see the identity and access management technologies we are currently looking at. In the end we expect customers to want to do federation using the SAML and OpenID Connect protocols; those are the standard protocols in the identity federation space, and they are also the ones currently being used in the GAIA-X demonstrator that is being built, so we are helping there to provide technology to make that work. We have Keystone in place to drive the OpenStack services, and we also deploy Keycloak so we have an identity proxy we can use for federation and for identity mapping. Very often in identity federation the challenge is that you have certain attributes stored in your identity provider and you need to map those attributes to certain roles or rights in your infrastructure; that can be done nicely with Keycloak. UCS is technology from Univention that we use to store and manage identities that need to be managed locally, and it has nice features for things like password rotation or password policies.

We have a lot of work ahead of us, so here's a quick look at the roadmap. Obviously, the further into the future I look on that roadmap, the less well defined things are, which basically documents that there are a lot of things on our radar that we want to work on without necessarily having a very detailed plan for exactly when we will be able to deliver certain pieces. What we're currently focusing on most is the Kubernetes cluster management standardization challenge. This is the thing I mentioned: we lack standards, or at least maturity and adoption of standards, in that space, so we're trying to work with the community to see whether we can make some progress. The second very important piece we're looking at this year is that we want to strengthen the test coverage in our CI, because the vision we have is to enable providers to do daily updates of their production environments, and that obviously requires a high level of testing so you can build the confidence that these daily updates don't break anything.
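To give a flavor of what such CI coverage can look like, here is a minimal smoke-test sketch using openstacksdk and pytest; the cloud name "scs-testbed" and the tests themselves are illustrative assumptions, not the actual SCS test suite:

```python
# Illustrative smoke test, not the actual SCS CI suite: verify that the core
# OpenStack APIs of a freshly deployed environment respond and that a tiny
# resource can be created and cleaned up again.
import openstack
import pytest

@pytest.fixture(scope="module")
def conn():
    # "scs-testbed" is a placeholder entry in clouds.yaml.
    return openstack.connect(cloud="scs-testbed")

def test_services_respond(conn):
    # Listing flavors and images exercises the Nova and Glance APIs.
    assert list(conn.compute.flavors())
    list(conn.image.images())  # succeeds only if the image API responds

def test_network_roundtrip(conn):
    # Create and delete a network to exercise Neutron end to end.
    net = conn.network.create_network(name="ci-smoke-net")
    try:
        assert net.id
    finally:
        conn.network.delete_network(net)
```

Tests like these, run on every change and before every rollout, are the kind of safety net that makes a daily update cadence realistic.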
We are aware that not every provider will follow that policy, but we really consider it best practice, so we want to at least enable that model and make sure the processes are in place so it can be done.

So let me summarize what we have. We believe that for data sovereignty it is very important to have control over your infrastructure. We are building a network of providers, because we really believe the target picture is a large choice of providers offering infrastructure services that are interoperable and federatable. It's not about building one European hyperscaler; it's about building a very healthy ecosystem. To achieve that we need to make sure that providing such infrastructure becomes a lot easier than it is today, and we do that by delivering a standard software stack and, more importantly, by helping providers overcome the challenges of operating that infrastructure. That is really the main work we're doing: working with the provider ecosystem to share best practices, to build better operations and to become better at providing high-quality infrastructure.

The current status is that we have successfully built automation to deploy infrastructure as a service with efficient tooling, and that is actually being used: we are doing daily rollouts, running our CI over it, and it is already in production with one cloud provider, Betacloud. The next piece we really need to work on is the Kubernetes cluster management interfaces, to get some standardization there, so we can hopefully deliver something with one or maybe all three of the companies listed here, but that is still being discussed.

So this is where we are. I hope this was interesting to you, I hope I could entice some of you to join us and work with us, and hopefully I have also generated enough interest that there are some questions. Christian and I will be available to answer questions now.