Okay, let's get started. Welcome to the last session of the last day of this exciting summit. I hope we can add some exciting stories from our customers to conclude this summit and this session. My name is Bernd Herth. I'm a technical marketing engineer working for NetApp in our competency center in Walldorf, which is SAP's headquarters, supporting SAP. With me here is Marc Koderer. Maybe, Marc, you can start and tell us a bit about what we've done.

Yeah, thank you. Basically, we have a really good relationship with NetApp around OpenStack contribution and feature development, and we wanted to use this session to show how we run our project, how we share our requirements, and how we get them implemented. In the end, this is beneficial for the whole community. We will walk through one example of a feature that we developed together, and we'll show a demo at the end. First, I will present what we are doing with OpenStack at SAP, what our aim is, and which projects we are focusing on. Then we will go into more detail about the work that we do together.

Basically, SAP has a lot of internal clouds, more than 20, somehow self-written, self-maintained, and all that. And there was a decision to move all these little, small clouds to OpenStack as an API contract. All these tiny clouds, and some of them are really huge, have special requirements and orchestrate different kinds of virtualization, so OpenStack needs to support a lot of those workloads for us. Underneath OpenStack we support KVM as a hypervisor; I think that's something everybody expects. But we also support VMware as a hypervisor, and we are very interested in bare metal, because our HANA workloads in particular have a high demand on performance. As an overview, we're also spending effort in the community and developing features, mainly on components where we see a strategic reason and the need to be involved, so basically projects that are not yet mature enough for an enterprise cloud. Manila is one example where we spend a lot of development effort together, also with NetApp, to get things implemented for our cloud in the end. With that, I will hand over to Bernd. He will tell us a bit more about why SAP needs shared file systems and Manila.

That's a good point, and I should explain that. Almost every SAP system has a need for shared file systems. That's true for a HANA system like you see here, where in a scale-out environment the executables and trace files are located on a shared file system like /hana/shared. On classic SAP systems, whether on Oracle or other databases, /sapmnt is the well-known place for all the trace files. But in addition to those typical shared file system use cases, using NetApp in a classical environment also allows you to use NFS as a base for the database files themselves, so the data and the log files, and this has additional benefits. Since you have an NFS server underneath, it's easier to manage, easier to relocate, easier to scale. And a tool such as SAP's Landscape Virtualization Management has built-in NFS features to relocate and move things around. So there is a need for shared file systems, and Manila is a good place to position them. When we look at typical enterprise requirements, we look in this example at a massively scalable cloud with 10,000 tenants.
So there are a lot of additional or specific demands in an enterprise. We use three areas out of many to point this out. Security: when we look at the tenants, they should be isolated on all the different layers, including the network. Performance: if you look at classical SAP applications, such as an in-memory database, the demand on memory, CPU, and I/O performance towards the storage is very high. And at that huge scale, automation is important: you can imagine you don't want to set up dedicated, isolated storage configurations for each of those tenants by hand; that should be automated.

Mapping those requirements to Manila, in the security area we want to use secure share networks and secure access. For massive scaling, we have to solve the VLAN, VXLAN, and other limitations. In the performance area, we want maximum throughput for our data and log files sitting on NFS, so we look at jumbo frames, enabled on all the layers from the network down to the storage. We look at the selection or pre-selection of protocol types, whether it's NFS v3 or v4, and we use share types for selecting volumes or storage backends depending on the database requirements. In the automation area, we switch to managed share servers instead of manual configuration. And of course we want to use features that are built into Manila, like snapshots or volume clones, but we want to use them at a level where the storage backend does all the work and gets it done fast and reliably.

With those requirements, we built this collaboration, as mentioned, with the goal of putting the work actively into the community. So we file Manila blueprints and feature requests, and we identify, report, and fix bugs in the different areas throughout our journey. We also set up this partnership with NetApp development, so we are constantly sharing with the NetApp developers and the core developers of the Manila team. We have a local support team, and SAP itself did a lot of the coding. That was a perfect way of collaborating. To give you a small example, we have a public wiki page where we collect all the enterprise requirements on shared file systems: the issue we identified, a description, a priority from our perspective, an assignee, and references to Launchpad or to blueprints, whatever came out of it. We have accomplished quite a few over time; some are still outstanding or planned for Ocata or even later. So that is basically the list where we collaborate. One of the examples we want to show you is hierarchical port binding, and that's your part, Marc.

Yeah, sure. As Bernd mentioned, we wanted a fully automated cloud. This means that when Manila creates a share, we don't want to care about the networking in between; we want full automation so that the virtual machine connects to the storage and to its share on the storage. So this is what we developed, and it is already part of the Newton release, so it's completely landed and fully ready to be used. Before Newton, Manila created Neutron ports but didn't actually bind these ports to the network fabric. This changed: now Manila creates a Neutron port and waits until the network fabric does a real binding, creating the network connectivity from the storage cluster to the virtual machine.
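Before going on to the next step, here is a minimal sketch of what the "managed share servers" mode mentioned above can look like in a manila.conf backend section, with driver_handles_share_servers enabled for the NetApp driver. The hostname, credentials, and aggregate name are placeholders, and the exact network-plugin class used for the port-binding behaviour should be checked against the Manila release in use; this is not the literal configuration from the talk.

```bash
# Illustrative manila.conf fragment for a NetApp backend that manages its
# own share servers (one SVM per share network). All values are placeholders.
cat >> /etc/manila/manila.conf <<'EOF'
[DEFAULT]
enabled_share_backends = netapp1

[netapp1]
share_backend_name = netapp1
share_driver = manila.share.drivers.netapp.common.NetAppDriver
driver_handles_share_servers = True
netapp_storage_family = ontap_cluster
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = secret
netapp_root_volume_aggregate = aggr1
EOF
```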
So this is, let's say, the first step, the binding step that we implemented. The next step is massive scale. The thing is, the NetApp storage and other storage boxes in particular have the issue that they don't support overlay networks like VXLAN, so at the storage end we need VLAN segments. And then there was the other question: we will have more than 4,000 networks, so how can we support that? The next step of the binding work was to also support hierarchical port binding. This means all the magic is done in the network fabric: at the edge of the switches there are just VLAN segments, but in between, under the hood of the network fabric, there is VXLAN. This may sound easy, and the question may be why Manila is concerned at all, but Neutron has a different way of binding for multi-segment networks, so we needed to take care of that as well.

So what we did is we created a test lab to reproduce all these features, and we will now show you a small demo of how this works. What we have is really a small set of pieces: a compute node, an x86 server, a Cisco switch, and a NetApp cluster, all connected with two ports each. The compute node has the Cisco Neutron driver activated, which means the Cisco switch provisioning is fully automated. So let's have a look at the recording.

As I said, we have this Neutron driver active, so within the Neutron configuration you will see the IP address used to SSH to the switch; this is the Cisco configuration. This means that if you create a network, and DevStack in particular creates a default network, it will assign a VLAN segmentation ID to it, and if we then look at the switch, it will have been configured automatically. So now we're SSHing to the switch to have a look. Automatically, by creating a network, this segmentation ID is put on the switch port. You don't see any VLAN range or anything; it's just this particular VLAN that is assigned. If you create a new network, there will be another assignment.

And now, how is Manila concerned here? You need to give Manila the information where the storage cluster is connected to the switch. If Manila creates a port, it adds this information to the Neutron port create, and the switch is reconfigured automatically. What we see here now is that the connectivity to the NetApp system is completely blank: we don't have any Manila share, nothing, the ports have no VLAN configured at all. So what we're doing now is creating a network in Neutron; this will get a new segmentation ID, and after that we will create a subnet. What we see here is the segmentation ID 2339 assigned to the network. We created a new network, storage2, and now we create a subnet with an IP range. This triggers the Cisco driver, and the compute node ports are reconfigured: you see we now have two VLANs automatically provisioned on the switch. That's the usual Cisco driver magic. But now we create a share, and the right port channels for the NetApp cluster need to be reconfigured automatically as well. So we create a share network with the storage2 network, and then we create a share on this network. Under the hood, Manila will create Neutron ports with the given information about how the NetApp filer is connected to the switch.
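For readers following along without the recording, the demo steps roughly correspond to the CLI sequence below. The network name matches the narration, but the IP range and the placeholder IDs are illustrative, not the exact values from the lab.

```bash
# Create the tenant network; the Cisco ML2 driver assigns a VLAN
# segmentation ID and programs the compute-node switch ports.
neutron net-create storage2

# Create a subnet on it; the IP range here is just an example.
neutron subnet-create storage2 10.10.20.0/24 --name storage2-subnet

# Tell Manila to use this network for its share servers.
manila share-network-create --name sharenet-storage2 \
    --neutron-net-id <storage2-net-id> \
    --neutron-subnet-id <storage2-subnet-id>

# Create an NFS share; Manila spawns an SVM on the NetApp cluster,
# creates bound Neutron ports, and the switch ports towards the
# filer get the storage VLAN configured automatically.
manila create NFS 1 --name share1 --share-network sharenet-storage2
```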
So it will take a while until the storage is available. If we now look at the switch again, we will see that not only the ports for the compute node are configured, but also the two ports for the NetApp, and with just the one VLAN that is used for the storage network of the Manila share. If we now look at the NetApp filer, we see that an OpenStack SVM has been spawned automatically, to keep it secure, and we also see on the networking side that this segmentation ID has been configured automatically in the NetApp filer. So we have complete end-to-end automation of storage and networking: Manila does everything needed to connect a virtual machine to the storage. And this was, let's say, a big part of our work within the last cycle. So, you wanted to give an overview of the next steps?

Yeah, next steps. For Ocata we will focus on using share migration, which is of course important at large scale: you must be able to add resources and to relocate shares, so that part has a pretty high priority for us. We will of course continue the partnership, working with the community together with SAP and the NetApp development team, and keep working on the wiki; we invite you to have a look there and even add other enterprise requirements to it. With that, just a little hint: there are blog posts out there that explain other requirements, like getting the MTU size handled all the way from Neutron down to the storage. And with that, we'd like to hand over to the next speakers.

Good afternoon, everybody, and thanks for attending this session. It has been a long week, so we really appreciate that you are still here in the last slot of the last day of this summit. My name is Lourdes Peñas, I'm the technical account manager for BBVA, and the goal of this presentation is to explain the challenges that the BBVA storage team is facing in building an agile technology platform to meet their digital transformation goals, and how NetApp is helping them address those challenges. We also have a challenge of our own, which is to deliver this presentation with three speakers and almost 30 slides in only 20 minutes, so let's do it. This is the agenda for today, and the first two topics will be covered by Luis Sánchez Vidal. Luis is the Head of Storage Architecture and Global Deployment at BBVA. Luis, it's your time.

Thank you. First of all, I would like to explain why a bank like ours, and especially a storage department, is on this journey to OpenStack. BBVA is a global financial institution providing financial services in 35 countries to 67 million customers. We are one of the biggest banks here in Spain, and the biggest financial institution in Mexico with our bank there, which is called Bancomer. We also have a presence in South America, as well as in the Sunbelt region of the United States, and we are one of the biggest shareholders in Garanti Bank in Turkey. So the world has changed, the rules have changed, and therefore the way we do business, especially in our sector, the financial sector, has changed as well. On the one hand, post-crisis, we have a very regulated environment. On the other hand, due to technology, we now have other companies, startups and fintechs, that make us evolve, because we operate in a very complex environment.
So the bank is aware of that and is anticipating it. Because we consider technology to be our key factor, we are addressing this: starting in 2007, we began to deploy our own platform, and now we are adapting it to the new exponential growth of the market. We consider the cloud a key factor, so we started our journey to the cloud. As the architecture and global deployment unit, we are in charge of envisioning, designing, and implementing the future global platform that will support our core banking infrastructure, our business units, and our platform services. Inside that organization, IaaS and Open Systems is in charge of designing and implementing that architecture. We also consider governance and processes important, as well as talent management. We are building a new platform based on these key principles: we want to create an infrastructure that is global at birth, low cost, based on commodity infrastructure, software-defined and open source, and also reliable and data centric. This led us to OpenStack. Before Lourdes explains how we address the challenges of storage inside OpenStack, we would like to highlight something we consider very important, a key factor, which is aligning our organization with this new model: we have created multidisciplinary groups that work as one team, in a more collaborative and agile way. So please, Lourdes, go ahead.

Thank you very much, Luis, for sharing this information with us. Based on the key principles that Luis just explained, the BBVA storage team has translated those design principles into the following storage decision factors to build the proper storage platform within the OpenStack deployment. The fact of having multiple deployments with a wide geographic distribution implies that the storage must provide a mechanism to propagate the storage configuration design to all the countries, in order to standardize the international deployments. They will also have multiple tenants, multiple applications, and multiple countries, so the storage must provide multi-tenancy capabilities, first to isolate resources and also to design deployments that physically and logically separate resources. They will also build a catalog of storage features to make intelligent provisioning decisions and be able to map the workloads and services onto the backend storage technology. Automation is key, so they will have an automation tool not only to control and maintain the configuration of the cloud infrastructure, but also to be able to launch deployments and applications very fast. As part of the design fundamentals, they need enterprise storage features, such as eliminating single points of failure, minimizing the possibility of data loss, seamless scalability, and a highly available architecture that ensures data integrity, data availability, and integrated data protection for backup and DR purposes. And finally, storage efficiency, to reduce cost by consuming less space, taking less time, and reducing the data traffic during storage operations. Now Peter will explain how NetApp is addressing those requirements with our technology.

Hi, my name is Peter Helcombe. I work for professional services at NetApp Spain, and we're helping BBVA transition to OpenStack as part of their digital transformation. Now, BBVA is a global entity, and they want to have a global OpenStack. So how did they go about doing this?
They wanted to take a building block approach: a fully automated, homogeneous OpenStack region that they could then deploy in different locations. So they first built their OpenStack region in Madrid, Spain, fine-tuned the building block, and then deployed it to Mexico. Mexico has the caveat that it actually covers several countries within one single OpenStack region. And now they're also deploying towards Turkey, and in the future Argentina and other countries. This building block approach not only allowed them to create regions in different countries, it also allowed them to create multiple regions within one country: they created a production OpenStack region and a pre-production OpenStack region. Now, all of this actually runs on Data ONTAP, and since it runs on Data ONTAP, with simple replication they were able to build a full DR solution for their building blocks: the production region and the pre-production region are connected with SnapMirror, and that gives them their DR solution.

Now, how is this all set up? The control nodes run on a storage virtual machine (SVM) within Data ONTAP, and there is another SVM for their Cinder, Glance, and ephemeral storage. So what are we doing? We're decoupling the data from the physical storage running underneath, and that gives us data flexibility: if you want to move data from one place to another, we just have to move that SVM, and as we saw before, with a simple SnapMirror relationship we can move it to a DR site. This also gives us seamless scalability: we can scale out, scale up, and even scale down, because clustered Data ONTAP gives us that flexibility. On top of that, in Mexico we are creating a separate SVM per country. So let's say Mexico covers Peru and other countries, and imagine they set up another building block in Peru and want to move their data there; it's a simple SnapMirror relationship to move the data, and we're only talking about the data. They set up their building block, create a SnapMirror relationship, and move the data over. If they want to go to Amazon or to Azure, we can do the same thing: we connect to the cloud version of Data ONTAP and move the data up there.

So how do we do this in Mexico, how do we keep the data separated across the different SVMs? What we're proposing is the following: we create only private volume types in OpenStack. Each country has its own project, its own tenant, and these tenants have access to their own private volume types. This means volumes will always be created within that country's own SVM, so the data is always kept separate, and again, this allows for data flexibility. Now, one of the other things we ran into is that we're working with different backends, with different exports, and we found that when we create a Nova instance with an attached volume, say in Nova availability zone one, the volume would be created on any export, depending on the filters, anywhere within the Data ONTAP cluster. But what they also want is to incorporate some kind of storage availability zone, too.
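Before getting to availability zones, here is a small sketch of the private volume type setup just described. The project name, backend name, and volume type name are made up for illustration, not BBVA's actual values.

```bash
# Create a private volume type pinned to the per-country SVM backend
# (hypothetical backend name "svm_mexico").
openstack volume type create --private \
    --property volume_backend_name=svm_mexico mexico-standard

# Grant only the Mexico project access to that private type.
openstack volume type set --project mexico mexico-standard

# Volumes created by that project with this type always land on its own SVM,
# keeping the country's data separate.
openstack volume create --type mexico-standard --size 100 demo-volume
```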
So the thing we're working on now is to actually create different storage availability zones. How are we doing this? Well, as you know, you can run several cinder-volume services, and one of the propositions is: why don't we create three different cinder-volume services, each with its own cinder.conf and each with its own backend, in effect creating different storage availability zones, and then assign a default availability zone per Nova node. What happens when we do this? When we create a new Nova instance with an attached persistent volume, the volume is automatically created in the storage availability zone underneath that Nova availability zone. If you create the instance in a different availability zone, the new volume is created in the corresponding storage availability zone. But this still allows cross access: a volume from any storage availability zone can be attached to a Nova instance running in any availability zone, so if we lose one storage availability zone, instances can still access volumes in the others. Okay?

Now, let's talk a bit more about why Data ONTAP, why we're using NFS. One of the advantages of using NetApp and Data ONTAP is how much of the volume creation it can offload to the storage. Let's see how it normally works with the generic NFS driver. When the generic NFS driver creates a volume from an image, the image has to be copied out of Glance, passed over to the Cinder control node, and the Cinder node then has to write it down onto the NFS share, okay? Now, what happens when you do this with the NetApp driver? We're able to shortcut this process: we intercept the creation of the volume and avoid copying the data through Glance, using CPU, passing it to the Cinder node, using CPU, and writing it down to the export. With the NetApp NFS driver we copy the image straight to the NFS mount point underneath, we cache that image there, creating an NFS image cache, and then we clone it within the storage, okay? That clone is then attached to the Nova instance. So we're not occupying any space except for one copy of the image, and we're not using any extra network or CPU on the Glance and Cinder control nodes, okay? Once this cache is created, any future volume will automatically be cloned from it, and again, it won't use any extra space. Space only starts being consumed once you start using these volumes and writing changes, but not before, okay? And not only that: the driver is also able to catch the creation of a volume on a different export, say in a different availability zone, and copy from the already cached image to that different export. So even if Glance doesn't live on that part of the cluster, you can still use the copy offload feature, and it copies to a different availability zone or a different export, as long as it's in the same cluster, and we can clone from there, okay? So now, let's put this into numbers, okay?
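Before the numbers, here is a rough sketch of the per-availability-zone Cinder layout described above. The backend names, hostname, credentials, and zone names are placeholders, not BBVA's actual configuration; the NetApp options shown are the standard ones for an NFS backend on clustered Data ONTAP.

```bash
# One cinder-volume service per storage availability zone, each with its
# own configuration file (illustrative values only).
cat > /etc/cinder/cinder-az1.conf <<'EOF'
[DEFAULT]
storage_availability_zone = az1
enabled_backends = netapp-az1

[netapp-az1]
volume_backend_name = netapp-az1
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = admin
netapp_password = secret
nfs_shares_config = /etc/cinder/nfs_shares_az1
EOF

# Start an additional cinder-volume service for this zone
# (in practice it would be managed by systemd or similar).
cinder-volume --config-file /etc/cinder/cinder-az1.conf &

# How a given Nova availability zone is mapped to its default storage
# availability zone is handled on the Nova/scheduler side and not shown here.
```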
Now, these numbers actually come from a technical report that some of the folks at NetApp corporate did. What they did is create a single bootable volume of 60 gigabytes, and they did this 100 times with a concurrency of one, okay? That means they created one, then another one, then another one, and so on. The average time to create the volume with the generic NFS driver was 743 seconds, while with the NetApp NFS driver it took only 32 seconds. That's about 95% faster than the generic driver. Now, the footprint: how much data did they copy? They made 100 copies of something 60 gigabytes in size, which is about six terabytes of space on the backend, but because we're cloning, the actual footprint on the clustered Data ONTAP system was 87 gigabytes, okay? That's about 98% less storage. And that's at creation time, when the clones still share the same blocks; once you start using the system, you'll start writing new blocks, but again, there's a Data ONTAP system running behind it, which means we're deduplicating the new data as well, okay? So again, roughly 98% savings.

Now, how was this all done? What they are doing at BBVA is automating everything using Ansible. How did we help them on the Data ONTAP side? We went ahead and documented every single step we did, and we created Ansible playbooks for each of those steps. So we delivered Ansible playbooks that help them create SVMs and LIFs, and, for the DR part, execute the DR failover automatically, okay? As for next steps, we don't have time to look at everything here, but these are the things we're working on with them at the moment. Almost done, and we did it in, let's say, 19... 20 minutes. So we covered the challenges and explained how NetApp is helping BBVA on its digital transformation journey. Before we finish, we would like to thank the NetApp OpenStack team for their support, and especially BBVA for counting on NetApp. Thank you for your attention.
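As a footnote to the automation part above, here is a minimal sketch of what one of those SVM-provisioning playbooks might look like, assuming the netapp.ontap Ansible collection is available. The SVM name, hostname, and credentials are placeholders; this is not BBVA's actual playbook.

```bash
# Write an illustrative playbook and run it. All values are placeholders.
cat > create_svm.yml <<'EOF'
- hosts: localhost
  gather_facts: false
  collections:
    - netapp.ontap
  tasks:
    - name: Create the per-country storage virtual machine
      na_ontap_svm:
        state: present
        name: svm_mexico                     # placeholder SVM name
        hostname: cluster-mgmt.example.com   # placeholder cluster address
        username: admin
        password: "{{ ontap_password }}"
        https: true
        validate_certs: false
    # Further tasks would create the data LIFs and the SnapMirror
    # relationships for DR, following the same pattern.
EOF

ansible-playbook create_svm.yml -e ontap_password=secret
```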