Okay, so good afternoon, everyone, and welcome to this session, where we will speak about OpenStack usage in the European Open Science Cloud. I'm Enol Fernández, working for the EGI Foundation, and I will give you a bit of context on what the European Open Science Cloud is and how OpenStack fits into the picture. With me are Jérôme Pansanel and Boris Parák, two actual resource providers in this initiative, and they will give you more details on their, say, day-to-day activities and support for OpenStack. So, as I said, I will start by describing the European Open Science Cloud, or the EOSC, as we call it, and EOSC-hub, which is one of the projects developing this idea and putting it into practice. The European Open Science Cloud is an initiative from the European Commission that tries to address the fragmentation in the current landscape of European digital infrastructures for research. The idea is that by 2020, European researchers, innovators, companies and citizens will have a federated and globally accessible environment where they can publish, find, use and reuse each other's data and tools in a way that is secure and under well-defined, trusted conditions. This should build on existing infrastructures and have a lightweight governance, so it should allow a large degree of freedom in the practical implementation. With this initiative, the European Commission wants to give a big push to the FAIR management of research data. FAIR means having data that is findable, accessible, interoperable and reusable, which will help us do better data-driven science. So this idea started in 2016, and, as I said, it should be fully operational by 2020, and the European Commission is funding a series of projects that will further develop the idea and make it real.
And one of these projects is EOSC-hub, a project that mobilizes providers from the EGI Federation, the EUDAT CDI and INDIGO-DataCloud, which are major e-infrastructures already existing in Europe, and these three, together with other major research infrastructures, come together to offer services for advanced data-driven research and innovation. The idea of this project is to create the Hub, the integration and management system of the European Open Science Cloud, which will be the single entry point for researchers to access the EOSC. This is a rather large project: we have 100 partners, it will last for three years, it started in January this year, and it has 33 million euros of funding. This is one of the biggest projects developing the EOSC; new ones are coming and will start soon, but right now this is, I would say, the largest one in place. So, as I said, this project will create the Hub, the federated integration and management system of the EOSC, and this Hub has four main pillars. The first one is the services, which are what the actual researchers will consume. These services come from the existing partners of the project, but the idea is that we will have external contributors bringing in new services that can help our researchers do their job. Then we have the federation services, which are the glue that actually makes the federation. Here we have things like the Marketplace, a place where researchers can browse and order services for their consumption; we have the authentication and authorization infrastructure; and we have accounting, monitoring, the helpdesk, these kinds of things. Then we have the day-to-day federated operations activities, like the certification of providers, the negotiation of service-level agreements and customer relationship management, and all of these are based on the FitSM IT service management standard.
This is a lightweight IT service management standard that is compatible with the ISO/IEC 20000 family of standards, but it's lighter and easier to adopt for organizations that are not used to working with IT service management. And last, we have the processes and policies, which include things like security regulations, compliance with standards and terms of use; so, everything we need to take care of to make things work in a legal way. The EOSC-hub services, the current ones, are listed here. We have separated them into several areas: compute platforms, identity and security, and data management and storage. These are what we call the baseline services, the ones that allow research communities to build domain-specific services, which would be the thematic services. So these baseline services are the ones that provide the raw computing and storage resources, basically. Then we have the professional services, which are basically human-based, so training, consultancy and this kind of stuff. And then we have aggregators, which right now are basically catalogs of software. Today we are focusing on the compute area, where we have the services coming from EGI: EGI High-Throughput Compute, EGI Cloud Compute and EGI Cloud Container Compute. This is where OpenStack is playing the main role right now. So this was EOSC-hub, and now we'll dig deeper into EGI. EGI is, as I said before, the provider of the basic computing services of EOSC-hub. It is a federation that was established in 2010 after years of investment by national governments and the European Commission, a Europe-wide federation of national computing and data storage resources, and the idea is that we provide services for research in Europe. EGI federates resources from national infrastructures; we currently have 22 of these national infrastructures, plus one European intergovernmental research organization, which is CERN.
We bring them all together, and we have the EGI Foundation, which is where I work; that is the coordination body, and it's located in Amsterdam. Some numbers: the EGI infrastructure could be considered the largest distributed computing infrastructure in the world. We have more than 260 research computing and data centers spread all over the world; most of them are in Europe, but we have collaborations with North America, South America, the Asia-Pacific region and the African-Arabian region. There we have more than 70,000 computing cores, more than 300 petabytes of disk and 300 petabytes of tape storage. And just this year it has enabled more than 600 open research publications. We are giving service to more than 40,000 researchers, and in this plot you can see the evolution of the CPU consumption over the last few years; just in the last six months, we have 26 billion CPU hours being used by researchers to perform their day-to-day work. This is the service catalog of EGI, and, as I said, we are basically focusing on the cloud compute part, and these compute services run on top of a federation of cloud service providers, which is what we call the FedCloud. This federation is a multi-cloud infrastructure as a service with single sign-on, based on virtual organizations. A virtual organization is what we call a group of researchers with a common interest. And what we do in EGI is act as a broker between these virtual organizations and the providers: we find the right providers for each community and formalize service-level agreements between the communities and the providers, and in these SLAs we define exactly how many resources will be given to the community and under which conditions; for example, the community will pay this much for each core per hour, and it will have 99% availability, et cetera.
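To make those SLA terms concrete, here is a toy calculation in Python. The price, core count and hours-per-month figure are made-up illustrations, not values from any real EGI agreement; it just shows what a per-core-hour price and a 99% availability target translate to over a month:

```python
# Hypothetical SLA arithmetic. All numbers are illustrative only,
# not taken from any real EGI service-level agreement.

def monthly_core_cost(cores, price_per_core_hour, hours=730):
    """Cost of reserving `cores` for roughly one month (~730 hours)."""
    return cores * price_per_core_hour * hours

def allowed_downtime_hours(availability, hours=730):
    """Downtime budget per month implied by an availability target."""
    return (1.0 - availability) * hours

cost = monthly_core_cost(cores=100, price_per_core_hour=0.02)
downtime = allowed_downtime_hours(availability=0.99)
print(f"100 cores for a month: {cost:.2f} EUR")
print(f"99% availability allows {downtime:.1f} h of downtime per month")
```

The point of the monitoring mentioned later is precisely to check that the measured downtime stays within that second number.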
And the idea is that with this federation we make it easy for research communities to access computing near where the data is, so they can run their analysis there, and we also make it easy for the providers to support international communities. The federation also has some extra features; it's not just single sign-on. We have a virtual machine image catalog, so a community can select a set of images and they will be distributed automatically across all the providers. We have centralized usage accounting, which allows us to produce the plots I showed before, so we are able to collect the usage across all the providers. We have resource discovery, so the communities can discover where to run their virtual machines. We have monitoring of the availability and reliability of the providers, meaning that we are able to verify that the SLAs are fulfilled correctly. And we have a unified graphical user interface, a dashboard, of which I will show a screenshot on the next slide. And we support different technologies, OpenStack and OpenNebula, but, as I will also show later, we are mainly converging on OpenStack. So this is the screenshot of our dashboard, and here you see myself running virtual machines across different providers, in Spain, Italy, the Czech Republic and Belgium; from this single page I can manage them all, so it's like the basic Horizon features in a federated world. The current infrastructure is shown on this map as the green spots. We have 20 providers supporting 11 of these virtual organizations; 15 of them are OpenStack. Then we have four OpenNebula, but two of them are moving to OpenStack, and Boris will tell you about his experience with that. We have another one, which is Synnefo, a Greek technology, that is also moving to OpenStack. And with the EOSC-hub project and other EOSC-related activities, we are seeing new providers using OpenStack coming in, and we have six new ones being integrated right now.
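Conceptually, the centralized accounting just collects per-provider usage records and aggregates them, for example per virtual organization. A minimal sketch with invented record fields and numbers (the real system works with cASO/APEL accounting records, which are richer than this):

```python
from collections import defaultdict

# Toy usage records, as a central accounting service might collect them
# from several providers. Field names and values are invented.
records = [
    {"provider": "CESNET", "vo": "biomed",  "cpu_hours": 1200.0},
    {"provider": "IPHC",   "vo": "biomed",  "cpu_hours": 800.0},
    {"provider": "IPHC",   "vo": "enmr.eu", "cpu_hours": 300.0},
]

def usage_per_vo(records):
    """Sum CPU hours across all providers, grouped by virtual organization."""
    totals = defaultdict(float)
    for r in records:
        totals[r["vo"]] += r["cpu_hours"]
    return dict(totals)

print(usage_per_vo(records))
# {'biomed': 2000.0, 'enmr.eu': 300.0}
```

Aggregations like this, over the real records, are what produce the federation-wide CPU consumption plots shown earlier.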
So what we have seen is that OpenStack is becoming the de facto standard in the EGI Cloud Federation, so it is mainly becoming the cloud technology in the European Open Science Cloud at the moment. In the past, in order to deal with the heterogeneity, we enforced the use of a standard API called OCCI, but it didn't bring the results that we wanted, because even with this standard API, when you moved from one provider to another, you had to tweak your usage of the API. You didn't have access to the advanced features, because having a common API means you only get the common features, and the tool ecosystem was rather poor, so users were not very happy. So now what we say is, basically, you need to use OpenStack, and sooner or later everyone will be there anyway. So this was the overview of EOSC-hub, of what EGI is and of our cloud federation, and now Jérôme will talk about his experience at his data center. So, Jérôme, please. Thank you. So, after this overview of the EGI FedCloud, I will now present our experience of operating all the components required by the EGI FedCloud at our scale. First I will quickly present our institute, IPHC, a multidisciplinary research institute based in Strasbourg, near the border with Germany. The institute is composed of around 100 people working on several scientific domains; it covers a large range, from particle physics to ecology, passing through analytical chemistry and medical imaging. And to give researchers access to up-to-date computation facilities, we have developed and deployed a scientific computing platform called SCIGNE. This platform gives researchers access to several services: to containers as a service, and to several types of storage, rapid storage based on SSDs or something more common. All of this is powered by Ceph; I will speak a bit more about this later.
We have several collaborations: for example, at the local scale with the University of Strasbourg, at the national scale with CNRS, France Grilles and IFB, the French Bioinformatics Institute, and at the European and international scale with CERN and EGI, of course. One of the core services of this platform is the cloud computing service, which is based on OpenStack. We started with OpenStack in 2013, with the Grizzly release, and we are now running Pike, using the RDO packages to deploy the cloud. It's a small infrastructure: we have only 520 cores and we provide around 300 terabytes of storage to the users, all of it powered by Ceph, with the Luminous version. This infrastructure is designed for scientific computing, so, so far, we are not doing any CPU overallocation. We are also providing some GPUs for certain use cases. The infrastructure is configured and maintained with Quattor; it is not as well known as Puppet or Ansible, but it works pretty well for this type of usage. And we also have an availability above 99%, which is sufficient for our use case. So, now some words about the integration of the cloud tools. The infrastructure was certified in 2015, and for the certification we used both the extensive documentation provided by EGI and the experts from EGI, who helped us deploy the tools. At that time, most of the tools were available as source code, and you had to compile them and install them on your infrastructure. Now it's much simpler, because all of them are available as RPM or Debian packages, and the documentation has migrated from the wiki to a Read the Docs website, so you can take a look if you want. We are using all the components proposed by EGI: EGI Check-in for the single sign-on, cASO and APEL for the centralized usage accounting, CloudKeeper and CloudKeeper-OS for keeping images synchronized with the central catalog hosted on the AppDB, and also the cloud-info-provider for the resource discovery.
So, what can I say about the integration of these other tools? As I told you before, they are now very simple to install with the RPMs. There is some configuration to complete, but everything behaves like an OpenStack service, because these services use the OpenStack libraries, for example PBR, the setuptools, or the oslo logging facilities from OpenStack, so that's very convenient. But there are some things to take care of when you are upgrading your OpenStack, because the APIs can change, or some behavior of OpenStack can change, so before upgrading OpenStack you have to check that the modules still work as expected. We currently have some hacks in place in Nova to support the ooi component, which provides the OCCI interface, but that's really a short-term issue, because OCCI is not mandatory anymore and will be removed from our production platform. So, to finish the part about IPHC, some words about the use cases. Most of the use cases coming from EGI are related to life science and health science communities, for example ELIXIR and Biomed, and all these communities are using access provided through SLAs; beyond the SLAs, we also give opportunistic access. Several projects are ongoing, for example the deployment of EGI Notebooks, a service based on Jupyter, for the communities. We are also working, for example, on the development of the new version of CloudKeeper-OS, which will make it easier to maintain and synchronize images, and on developing services such as containers as a service at the EGI scale, so multi-site, and on the creation of on-demand services. So I will now hand over to Boris. Thank you. So, good afternoon, everyone. I'm Boris Parák, I'm from CESNET, and I will give you a brief overview of what we are doing right now, because we are migrating from OpenNebula to OpenStack. We are part of the EGI Federated Cloud, and have been for years; that's our connection to EOSC-hub and the EGI Federated Cloud.
As you can see, we have two logos on this slide. I work for CESNET, which is an association created by universities and the Academy of Sciences in the Czech Republic. We are an e-infrastructure, basically providing resources, whether network, compute or storage, for research and academic use cases within the Czech Republic. We are also involved in major projects and communities such as GÉANT, EGI, EOSC-hub and ELIXIR. The other partner in this cloud endeavor is CERIT Scientific Cloud, which is based at Masaryk University in Brno, in the Czech Republic. That's a national center for computing and data storage, again offering resources to research communities and scientists, also involved in ELIXIR and BBMRI and heavily engaged with the life sciences. And this center places emphasis on working with researchers and tries to come up with creative uses for infrastructure, so it's not just for researchers; it also performs research at the level of the infrastructure and experiments with the infrastructure and its uses. We have a legacy infrastructure, which has been connected to the EGI Federated Cloud for the last seven years and has been used within the Czech Republic and in various international projects. And, of course, it's mainly HPC, since we are providing compute services for researchers who want to do heavy computation, so there is not a lot of overcommitment happening in our infrastructure. We have some 6,000 CPU cores, hundreds of hypervisors in multiple cities in the Czech Republic, we offer capabilities such as general-purpose GPUs, SR-IOV, InfiniBand, and we have provider and overlay networks. This, for the past seven years approximately, has been running on OpenNebula, and right now we are migrating to OpenStack. So I hear you ask: why? What's the motivation? Over the last few years, OpenStack and its APIs have basically become the de facto standard.
So, if someone asks for a private cloud or a community cloud, they basically mean OpenStack. There is no other choice, and we have to go with the flow; that's a major part of our decision. The second major part is having the support of the community, and the tooling and the ecosystem that OpenStack has: there are a lot of tools already prepared, and portals, and whatever the users may choose to use to manipulate the infrastructure, it's simply ready for OpenStack, and we would have to modify it otherwise. So: popular demand. Everyone is asking for OpenStack, so why struggle, just switch. And we also have growing demands for diversifying our portfolio; you have heard a lot about containers these days, so we have to go where the user communities go, and we have to change. So we established a few rules when trying to figure out how to switch to OpenStack. First and foremost, we have to learn as much as possible, so we are not trying to deploy some ready-made solution; we simply have to understand the platform underneath, so we need hands-on experience. We also have to train a completely new team of people, and hopefully grow the team responsible for handling the platform, because it's mission-critical. We would like to avoid vendor lock-in as much as possible; of course, that's not as easy with hardware, but definitely on the software level we will do as much as possible to avoid ready-made solutions. Since we are experimenting a lot, we are expecting dead ends, but if we can keep them within reason and have just a few of them, that's completely fine. We are also looking at experimental features, which I will mention in a little bit; that's part of our design of OpenStack and our work with it. We want to try new things: since we are switching and starting from scratch, this is a good time to look at what's available and perhaps try something experimental.
For us, uptime and reliability are also not the primary issue we need to deal with. Most of our users perhaps need something a bit specific, or are happy to try new features even if the platform doesn't have five nines, so some outages are okay if we can bring new functionality; we are not focusing on high availability in the beginning. We are pushing for production as soon as possible: we started this year, and hopefully at the beginning of next year we will be able to go into production with some form of the platform. Of course, it will not be finished in any way, shape or form, but we will be able to offer it to end users and get some feedback. And at the end, of course, we will try our best not to get murdered by angry users, because, of course, we are changing the platform; it will be buggy in the beginning, and there will be a lot of issues, so at least half of the team needs to survive. Okay, I promised some technical tidbits. We are deploying in containers, so we are heavily relying on the Kolla project. We are also trying to orchestrate the whole installation, the whole platform, in a minimalistic way, so we are using a lot of custom Puppet and Ansible. In the future, we will perhaps switch from something more or less homegrown to something more widely used, but right now it's heavily customized and exactly as we need it. We are experimenting with OVN, an SDN based on Open vSwitch; if there is anyone who has experience with that, or would like to know more, please come talk to me. We need partners in crime when dealing with SDNs. And, of course, we have some fun and games with federated identity. As you heard from Enol, we are part of the EGI Federated Cloud, which means we need to connect to their IdPs. We are part of the national infrastructure, so we have our own identity providers. Then we have some external communities, which also come with their own identity providers.
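For context, wiring an external OpenID Connect provider into Keystone involves, among other things, a federation mapping from remote claims to local users and groups. The sketch below builds one such mapping document as a Python dictionary; the group name, domain and issuer URL are placeholders for illustration, not CESNET's actual configuration:

```python
import json

# A minimal Keystone federation mapping: remote OIDC users whose issuer
# matches are mapped to an ephemeral local user (named after the OIDC
# subject claim) in a placeholder group. Values are illustrative only.
mapping = {
    "rules": [
        {
            "local": [
                # "{0}" is substituted with the first remote match (OIDC-sub).
                {"user": {"name": "{0}"}},
                {"group": {"name": "federated-users",
                           "domain": {"name": "Default"}}},
            ],
            "remote": [
                {"type": "OIDC-sub"},
                {"type": "OIDC-iss",
                 "any_one_of": ["https://idp.example.org/"]},
            ],
        }
    ]
}

# This JSON is what you would register with
# `openstack mapping create --rules <file>`.
print(json.dumps(mapping, indent=2))
```

With three separate OIDC providers, you end up maintaining several such mappings (or one with several rules), which is part of the complexity described next.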
So at this point we have three separate OpenID Connect domains configured in our Keystone, so that's fun as well. Okay, so the main message is: we are trying to switch to OpenStack. If you are interested in any of these topics, or have advice, or would in any way, shape or form like to know more about what we are doing, please come talk to me, and hopefully we will figure it out. So I will hand it back to Enol. Thank you. Just a short slide. We were thinking together about what we are missing in OpenStack, and the message is that, overall, we are quite happy: it works as we expect, and of course we want to have the federation in place. But we would appreciate some improvements, mostly in Keystone, let's say, to improve the federation support. We learned yesterday from the Keystone project that some of these are already on the roadmap, so we are quite happy about that. But just to list them: we would like to have hierarchical projects and automated provisioning to deal with our users, to help Boris with this OpenID Connect nightmare, and to have an easy way to support more than one provider and manage that in a sane way. Deprovisioning is something that we really miss: our communities can be large, and users come and go, and when they go, we need to do something about them. Right now it's quite manual, and we would like to automate it as much as possible. Also, one thing that we would like is better documentation on how the OpenStack services interact; especially for new people, it's quite hard to understand what's going on. Another thing is that we would like to see better tracing of user actions, especially for security: when you have an incident in these kinds of research institutions, you want to know exactly who was doing the wrong thing and isolate the issue, and you have the security team chasing you and saying, hey, what happened? When did it happen? What do we know? What do we do now? So having a nice way of tracing user actions from beginning to end would be really nice.
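The deprovisioning gap can be sketched as a simple reconciliation step: compare the virtual organization's current membership list against locally known users and flag the leavers. The names and data here are invented; a real integration would pull the membership from the AAI and disable the flagged accounts through the Keystone API:

```python
def users_to_deprovision(local_users, vo_members):
    """Users that exist locally but are no longer in the VO membership list."""
    return sorted(set(local_users) - set(vo_members))

# Illustrative data: bob has left the VO, dave has joined but has no
# local account yet (so he is a provisioning case, not a deprovisioning one).
local_users = {"alice", "bob", "carol"}
vo_members = {"alice", "carol", "dave"}

print(users_to_deprovision(local_users, vo_members))
# ['bob']
```

The hard part in practice is not this set difference but doing it continuously and automatically across many providers, which is exactly the automation being asked for.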
And also, as we are in a federation, policy can be tricky, so having a nicer way of managing policies would also be good. And just two slides to say: you can become a provider too. In EOSC-hub, we have this form where you just fill in a bit of information about yourself, and you can become part of the EOSC-hub Marketplace. There are different levels of integration. The low one, which is just filling in that form, is basically saying, hey, I'm here, and I can offer my services to the research communities in the EOSC. The high one would be what Boris and Jérôme were talking about, so really deep integration with the AAI, accounting, monitoring, et cetera. And the medium one is that you can have the AAI but ignore the others. So, if you're interested, just go to the pointers in the slides and you can join us. And just to finish, some conclusions: EOSC-hub is establishing the key elements of the European Open Science Cloud. We have the first set of services, including the EGI Cloud Federation, where OpenStack is mainly used. We already have the service request, provisioning and management processes in place, so we are already able to serve users. And, as I said before, OpenStack is basically the main infrastructure-as-a-service technology in the current landscape, and it will keep growing in the near future. And we are open to new providers, so if you want to join us, just do that. And I don't know if we have some time for questions; you are very much welcome. So thank you.