OK, we're ready to rock and roll. Excellent. Good afternoon. Thanks for joining us. I hope everyone's had a great day. It's been a fun first day at the summit. My name's David Safaii. I'm the CEO of Trilio Data, and I have the honor of moderating this panel of cloud experts. We're going to have a fun discussion, and you'll note that none of these folks are shy at all.

Before we dive in, a couple of quick thoughts. I'm sure many of you are cloud purists, and some of you started your OpenStack journey thinking the world would be stateless. So can I get a quick show of hands from the crowd? When you first started your OpenStack journey, did you believe that all your workloads would be stateless? Anyone? A couple of people. Let me ask you another question. Where you are today, stateful has become important and part of your business. Does backup come to mind? Does backup come to mind today versus before? Are people thinking about backup? I mean, it's a pretty full crowd, so I assume so. I assume it's becoming an important everyday conversation, and that's why we're here for this day-two conversation, right?

So what I'd like to do is introduce my fellow colleagues here, and then we'll dive right in. At the end, we have Michele Bello, cloud engineer with CSI Piemonte. Michele, like many of you, is going through a transformation, a metamorphosis of his cloud, and he provides and deploys cloud services for the Italian government. Next to him, we have Stefan Krull. Stefan is a longtime OpenStack operator; he architects, deploys, and manages infrastructure and key elements of OpenStack within Volkswagen Financial Services. And lastly, we have Sean Cohen. Some of you may know Sean; this is his 11th summit. Excellent, a seasoned veteran. Sean has 15 years of experience in senior product management and delivery of private clouds in the enterprise market.
He drives the enterprise cloud infrastructure product management strategy for Red Hat's OpenStack cloud offering. And again, my name's David Safaii, and I'll be taking you through today's journey. So Michele, let's start with you. Talk to us about your background and your experience with OpenStack, and why you're part of this conversation today.

Hi, everyone, and thank you to David for giving me the opportunity to join this panel and talk about our new cloud infrastructure, based on the community OpenStack Ocata release, and how we solved the problem of backup and data protection for our legacy workloads. My name, as David said, is Michele Bello, and I am a cloud engineer at CSI Piemonte. Unfortunately, this is not the CSI you have probably thought of, the one from television. In Italy, CSI stands for Consorzio per il Sistema Informativo, which in English sounds something like "consortium for information technology." CSI Piemonte, because we are located in the northwest region of Italy called Piedmont. What we do: we develop, implement, and run information technology services for public administration, like hospitals, municipalities, local health agencies, and many other public authorities. Something about me: I have dealt with virtual infrastructure since 2005, ranging from pure virtualization and physical-to-virtual migration to cloud infrastructure. In fact, in 2011 CSI implemented its first cloud infrastructure, based on VMware vCloud Director. But after five or six years, our business management asked us to implement something new, something different from the past: a new cloud infrastructure to which we could migrate our legacy workloads, about 1,030 virtual machines. So we have a very big number of virtual machines to migrate, and some new requirements.
The requirements were to be a software-defined data center, to have no vendor lock-in, to have full RESTful APIs, and to have multiple regions. And that is why we chose OpenStack as our solution. In fact, two years ago a new project called Nivola was born. It is an ambitious project, first of all because we would like to deliver business services, platform services, infrastructure services, and software services in a simplified manner to our end users. We want to hide the internal complexity of the system from the user. So we developed a CMP, a cloud management platform, in Python that integrates all the infrastructure underneath: OpenStack, vSphere, Trilio, Veeam, the storage, local and global load balancing, and so on. This is why we want to give our end users a simplified model, a user portal, a single pane of glass. So that's what we do and what we would like to do, in short.

Stefan, why don't you provide us with some of your background? Yeah, my name is Stefan, as David just said. My background is that I was a security and network architect for the last decade, and now we are facing some challenges in the cloud context. One of the biggest use cases for our cloud solution is transforming legacy applications. Not so much migrating them; in most cases we have to transform them, because only a small percentage of our applications are cloud native or even cloud ready. We have hundreds of applications, legacy stuff, monolithic stuff. So we have to deal with that. And there's a funny story. Last year I was at the OpenStack Summit in Boston, and it was my first year in the cloud business. I didn't care at all about backup and that cloud stuff, because everything should be shiny and cloud native, so I didn't care about it.
And one year later, I'm sitting here, so we have to talk about it. The reality looks completely different. That's the biggest challenge for us: the shiny way is nice, but we have to deal with reality first, and then maybe in some years we'll be cloud native. Nirvana. Yeah, welcome to reality. You're right, great.

So thanks for joining us. And Sean, thank you for joining us, and congratulations on the transaction as well. He's referring to the blue elephant in the room. Thank you. It's okay, you can talk about it. I think it's a great testimony for open source, and for the open way of doing things. I've been working with this community for the last five years. In fact, some of the blogs I've read over the years address the same principles we're talking about today, and even one that goes back to 2013 talks about the need for hybrid cloud and how we back up hybrid cloud, because that's going to be a thing. And guess what, here in 2018, it's here. So the main thing I want you to take away from this is that everyone in this room is in the cloud business, right? You have a reputation to maintain, you have SLAs to your customers, and the worst thing that can happen can happen to you too. That's the biggest thing, I think. I totally share Stefan's understanding and realization. Red Hat is the largest distribution of OpenStack; we serve hundreds and hundreds of customers in production. And guess what, this was there on day one. One of the things you asked in the beginning was whether this was an afterthought, because we designed OpenStack to be cloud native, basically the open-source AWS, right? I don't think so. I think this was actually in our original design. I mean, if you look at Cinder, Cinder backup as a service was introduced in Grizzly, right?
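As a concrete illustration of the Cinder backup service Sean refers to, enabling it mostly comes down to pointing cinder.conf at a backup driver. This is a minimal sketch, with placeholder bucket, project, and credential values, using the Google Cloud Storage driver that comes up later in the conversation; the exact backup_driver string varies by OpenStack release, so check the configuration reference for your version:

```ini
[DEFAULT]
# Store Cinder volume backups in a Google Cloud Storage bucket.
# All values below are placeholders for illustration only.
backup_driver = cinder.backup.drivers.gcs.GoogleBackupDriver
backup_gcs_bucket = my-cinder-backups
backup_gcs_project_id = my-gcp-project
backup_gcs_credential_file = /etc/cinder/gcs-credentials.json
```

With the backup service running, a tenant can then take a backup with something like `openstack volume backup create <volume>`.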
And then over the years we extended it, and we added replication, because you need data protection across sites as well, right? It's not enough to have a backup at the block level. And then somewhere around Mitaka we introduced the concept of backing up your Cinder workloads into a hybrid cloud; the first example was Google Cloud Storage. So you can actually do backup across clouds today. So this was already on our plate. But I want to go back to what you said. Digital transformation is a journey, right? And for some of our customers, I'll give you two examples, we serve both enterprises and telcos, having a backup or data protection plan is not a luxury. It's a must-have, for many reasons. It's part of your SLA, and it's part of your reputation, because things can go south. Oh yeah, they can go south very fast.

All right, so you dovetail right into our slide pretty well, because we talk about backup and redefining what backup is. In our eyes, it's not just that anyone can replicate your Cinder volume. That's fine, but that's just the data itself. We think about moments in time, right? There's a lot more to backup: it also includes configurations and everything, on a per-tenant basis. So you start thinking about, one, how do I back up, and you may start with scripts. Good luck to you when you want to scale your environment. But you also start talking about, and I think this is an important part of the conversation, recovery. Backup is one thing, but speed of recovery is really critical, and I think we can touch on that later on. Unless you've got a solution that's automated, it's, as we joke around, a wing and a prayer. Then you think about recovering 500 or 1,000 tenants, right? All individuals. So my first question to the panel is why, right? Why? Why do you need data protection?
In fact, why have you chosen OpenStack for serving stateful workloads, against OpenStack's original mindset? That seems counterintuitive, right? But why? So who wants to begin? Dive right in, seriously.

As I just mentioned, not everything becomes cloud native with a finger snap. We are a very big company. We have hundreds of applications, and just because our cloud is now in production, that doesn't change anything. We have to care about the applications we currently have, and about 90% of it is legacy stuff. We want the benefits of the cloud, for all the fancy reasons, but we have to deal with the regulations, as Sean said. We have regulations, we have compliance, we have security, we have data, we have GDPR; we have to know where our backups are, and we should have some backups. That's why we don't talk about cloud-native stuff. Actually, we talk about backup for our legacy applications on our cloud. That's very important.

So, Sean, I want to hear what you have to say about this, but I'm curious, and I want to go back to the crowd again. Folks out there, show of hands: for how many of you are compliance or regulations driving your decisions? Yeah, it's driving you guys, right?

I can probably add more color. I mentioned the two footprints, right? And it's not a coincidence that we have a public-sector and a financial institution sitting on stage with me. So I'm going to try to represent the rest of the segments; they're not all enterprise, we also have telco. If you look at telco, you mentioned compliance. Here in Europe there is ETSI; there is a telco compliance grade you need to meet. This is not a nice-to-have. You have to have data encryption, you have to have data integrity, and I'm not even talking about where you need to keep the data, and how long you need to keep it.
If you look at financial customers, large banks and banking institutions, sometimes they need to keep the data somewhere else, right? An additional site, not on site, for at least seven years. I'm not sure what you're seeing in your world. Is it seven or more? Up to 10. Up to 10 years, right? So I need to have backup because I need to restore things fast, but I also need to keep the backup, because guess what, there's regulation coming up. And you mentioned GDPR. How many of you have heard that buzzword before? We're sitting in Europe. This is the General Data Protection Regulation, which came into force this year, the biggest new data protection law in Europe since 1995. And it forces everyone in this industry that provides cloud services to align with this regulation. So this is not a nice-to-have for a lot of segments.

Well, on the global impact, talking about GDPR, it's a global initiative and phenomenon, and it's forcing other geographies to think the same way. I believe California, one of the largest economies in the world, is now starting to look at policies very similar to GDPR.

You mentioned California, and my thoughts go directly to the disaster happening right now, with all the fires taking place. So there are natural disasters, but it's not just about natural disasters, and I will come to you, right? Floods are one thing, hurricanes are one thing, but I would say that your worst nightmare is you. You're sitting in this room, right? And human error is probably the number one thing that brings our clouds down. If it happened to Amazon, it can happen to you. You're managing your private or public cloud using OpenStack; it only takes one script or command line to bring five regions and X number of customers down for five hours. That's all it takes. And the question you need to ask yourself is: how fast can I restore operations?
Because this is where the phone is going to ring. You get all the calls, the beepers, whatever you want, right? This is where things basically hit the fan. So don't think it cannot happen to you. As I mentioned before, think: what do I need to do in order to avoid it, and how can I maintain the service today, right? It's not just that I need to protect the workloads; I need to protect my business, and today my business is operated by the cloud. That's what it's all about.

So let's actually touch on some real-life lessons here. Now, once you have a data protection solution in place, you have point-in-time copies. Michele, talk to us about your experience in leveraging data protection, but also leveraging it for other initiatives you may have internally. There's a lot more you can do with a point-in-time copy, right?

In fact, in the earlier session, the speaker from CERN described a nightmare scenario, and one of those nightmares came true for us: we actually had a disaster on our previous cloud. We had a disaster on some instances. We lost some virtual machines; we had problems on some virtual machines but not on others. It was a very, very bad situation, one I do not want to be in again, because it is not comfortable at all. So we designed our new cloud with workload migration in mind, because we have three regions. We decided to design a solution that makes it easy to migrate workloads from one cloud to another, for any reason, from management needs up to a disaster recovery scenario where not all of your cloud goes down. You know, our main applications are traditional three-tier applications: legacy workloads, three-tier applications, old operating systems.
Sometimes we have a big database on the same virtual machine as the application server and the web server. So it's not cloud native, and we needed something different, something that can protect us from disaster in this case. So we designed this solution, and, just one note, it is not a live migration but a cold migration, because we have three different clouds. We don't have one OpenStack installation with three different regions; we have three different OpenStack installations that don't share any component with each other, each with a single region. So we cannot use Cinder replication, storage replication, or anything like that. We had to use something else to make this work for us, and we found that data protection can help us achieve this requirement. In this slide I show a four-step process. First, we decide which workloads to migrate. Second, we back up the workloads. Third, we replicate the backup data from one site to the other. Then we import the data into the new cloud and restore it. It's not easy, and it's not something that can be done live; we have to do it as a cold process, planned around our production work.

So what you're showing, right, you may have been talking about migration here, but really you have the semblance of a DR scenario for yourself, right? Even if they are two separate sites, you can end up moving copies over through replication, and you always have the ability to spin up a new cloud. So what are some important items within this environment for you, as far as a DR or backup solution? Without a doubt, agentless. For a solution to work well in this case, we don't want any agent on board the instance. We don't want to manage agent distribution or anything like that.
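As an illustration, the four-step cold-migration process Michele describes can be sketched as a small pipeline. This is a minimal, hypothetical sketch in plain Python: the `backup`, `replicate`, and `import_and_restore` callables stand in for whatever backup tooling actually performs each step, and none of the names come from a real product API.

```python
def select_workloads(inventory):
    """Step 1: pick which legacy workloads to migrate (here: all tagged ones)."""
    return [vm for vm in inventory if vm.get("migrate")]

def cold_migrate(inventory, backup, replicate, import_and_restore):
    """Drive steps 2-4 for every selected workload; returns an audit log."""
    log = []
    for vm in select_workloads(inventory):
        backup_id = backup(vm)            # step 2: point-in-time backup at the source
        log.append(("backup", vm["name"]))
        replicate(backup_id)              # step 3: copy the backup data cross-site
        log.append(("replicate", vm["name"]))
        import_and_restore(backup_id)     # step 4: import metadata, then restore
        log.append(("restore", vm["name"]))
    return log

# Demo with stub steps standing in for the real backup tooling:
inventory = [{"name": "db01", "migrate": True}, {"name": "test99", "migrate": False}]
audit = cold_migrate(
    inventory,
    backup=lambda vm: f"bkp-{vm['name']}",
    replicate=lambda backup_id: None,
    import_and_restore=lambda backup_id: None,
)
```

Because the whole thing runs cold, each real step can be retried or resumed per workload, which is what makes the process plannable around a production schedule.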
Also, as we saw with the data protection solution in our last cloud, with an agent the admin of the instance can remove the agent, stop the service, or disable the network interface. And then you have to handle calls like: why doesn't my backup work anymore? So we need a solution that is agentless.

So it sounds like agentless and, I guess, non-disruptive for your environment are the key elements. With that said, Stefan, I look at you here. What does the industry need as far as a data protection solution? We've heard agentless and non-disruptiveness are really important to Michele here. But what about yourself?

From my perspective, in our current status, it's very comfortable for our customers to have something very easy to use, maybe a GUI or CLI, so that people who may be using backup and restore scenarios, or the service, for the first time can use it very easily. A lot of people are using the cloud and managing their stuff, but they did not handle backup and restore in the past, because we had dedicated sub-departments for backup and restore, Linux, firewalls and so on, and now they have to cover all that themselves. So it makes sense to have it as easy as possible, with examples, with templates, maybe a few clicks, and make everything very fancy and easy. It's very important for us to introduce people to backup and restore, and especially to inform them about what is there and what the reasons are to do it, and maybe give them some procedures for how to back up so that you are safe as well. From a compliance and regulations perspective, it makes sense to have something like compliance rules. That means our customers can do whatever they want, and everything is compliant; everything not compliant is forbidden or prevented by the administrators of the backup solution.

There's a second good factor there, I guess. So backup is put in the hands of your tenants. You empower your tenants to do their own?
Yeah, normally it should be completely self-service, but they should be safe, without us investigating everything they did. They should do everything themselves and still be completely compliant with, for example, regulation by the European Central Bank. So if someone audits us, we can say: they can do whatever they want, but they are compliant. Backups are in a different location; they back up, I don't know, every day, every minute, no matter what. And maybe the last thing, as Michele just said, is that in the future the topic for us is: once you have backed up your solution, you can easily restart it in a completely different location in the worst-case scenario. That means BCM, business continuity management. Everything goes up in flames, and then you have to restore your very, very important applications in a different location. And the same way you easily back up, you should easily restore, because maybe that's the nightmare from the past. You know, everything is easy to back up, and once you have to restore it, it's: okay, that takes two weeks. Right. Have fun. That's real experience from the past. In fact, my old boss used to say: any data protection product can make a backup; it only counts if you can make a restore. There you go. I'm glad you can restore.

I'll just add a bit more color. I want to go back to what Stefan said about easy use and the tenant doing their own backups, as a service. Now, OpenStack is all about open services, and everything is a service, basically. But we don't have a backup-as-a-service service in OpenStack. Let's put that elephant on the table, right? We have APIs you can call, and you can write scripts, but going back to what Stefan said, from an operator perspective I do want to be able to put that plan into action and schedule it.
And as you said, sometimes you have a maintenance window; some of our big telco customers have very strict windows in which they are allowed to do maintenance, right? This should be part of your day-two operations. Your backup plan should be automated, right? This is not something you want to do manually. You want it automated, you want to test it, you want to run it. And again, even the migration process, which is painful because you need to stand up hardware for it and that is a cost for the organization: fine, don't do it every day, but do it at least once every three months, right? Otherwise, you'll find yourself in the same place again. So it has to be easy to use and it has to be automated.

And I want to talk about the cloud tenants. Our cloud tenants are not just the cloud as we know it today. I mean, you heard the keynotes this morning: we're talking about open infrastructure, Kubernetes, OpenStack, and Patricia is going to bring up the container question soon. If I'm a developer using Kubernetes, I have a news item for you: I don't care about infrastructure. I expect it to work, right? So when we get to data protection, that's my expectation. Don't even give me backup-as-a-service options. I just expect it to work, right? And we're going to see more of that as the ecosystem matures. So it's not just that it has to be automated, that's key, but it has to be transparent and it has to be there, right? And as I said, when we look at backup, it's not just the infrastructure, right? Especially if you're running containers, if you're running Kubernetes on OpenStack, guess what? I need to protect Kubernetes; I have data, persistent storage, persistent data; I need to take care of that. I have all my infrastructure. I have the control plane, right? OpenStack runs somewhere. I mean, we have all of these controllers and compute nodes and services, and stateful services.
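Sean's point that the backup plan should be automated, and driven by a target RPO rather than done by hand, can be reduced to a small piece of scheduling logic. A toy sketch, with illustrative names only, not any product's API:

```python
from datetime import datetime, timedelta

def backups_due(volumes, rpo, now):
    """Return the names of volumes whose last backup is older than the RPO."""
    return [v["name"] for v in volumes if now - v["last_backup"] > rpo]

# Example: two volumes, a 24-hour RPO, and a fixed "now" for reproducibility.
now = datetime(2018, 11, 14, 12, 0)
volumes = [
    {"name": "vol-a", "last_backup": now - timedelta(hours=30)},  # overdue
    {"name": "vol-b", "last_backup": now - timedelta(hours=2)},   # fresh
]
due = backups_due(volumes, rpo=timedelta(hours=24), now=now)
```

A real scheduler would run a check like this periodically, inside the allowed maintenance window, and fire the actual backup API calls for whatever it returns.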
For your backup, if you talk about DR, you basically need all of the layers backed up, right? As a consistent point of recovery. So it's not trivial, but the expectation from our end user is: I don't care, it should just work, and I expect you to deliver services. There are two acronyms that always come up in data protection: RPO and RTO. Our goal is to shrink them as much as possible, so I can restore as fast as possible and limit the downtime and disruption to my business, right? We'd like to be on the disrupting side of technology rather than be disrupted. I like it: protect the world. That's fine by me.

Well, so, Sean, we're diving into it: what about containers? So again, a quick show of hands. How many people are playing with containers right now in some way, shape or form? You're all dabbling, and we chuckle; this is kind of like OpenStack a number of years ago. You think they're cattle, but you're seeing stateful find its way in all over the place, whether it's databases, et cetera. Do you guys want to touch on the world of containers? Are you looking at it today? Sean, you're probably seeing this from a lot of your customers. Do you want to go first?

I'll try. My very limited experience with containers is that we saw some, let's call them creepy, applications running on our cloud. There are some applications running in containers, and it looks like some software company put the application into a box, like a Christmas present, and once you open it up, you see the same, let's call it challenging, application again. So no magic happens. You have to take care of backing up the boxes. Nicely packaged. Yeah, nicely packaged, but you have to take care of backup the same way as before. So it's normal: it was a legacy application put into a container, put on a cloud, and there's no magic.
So if you analyze such an application, you see: okay, this application needs backup the same way as 10 years before. There's no secret behind it. So we have to investigate that, and we have the same solution for it as for other applications.

So I mentioned the digital transformation journey, right? If you look at the different segments and verticals, I'll go back to telco. Telcos are basically going through this transition, right? Yes, there are some cloud-native, containerized VNF workloads that can now start to tackle things like 5G, which is great, because we're going there. However, there's a whole set of virtual functions that are not even virtual yet. Some of them are just boxes that need to become virtual machines or containers; some of them may stay bare-metal use cases. But it takes a long time. When I talk to my customers, some of them will take five years to complete this journey. So it means they still need the old orchestration alongside the new orchestration, the new way of doing things. They really like Kubernetes, because that's where we all want to go, but they also still need to drive the use case you just mentioned. And what I've seen, and I don't want to mention the big names of our telco customers, is some of them basically taking the old legacy workload, in a container, running in a VM, which is basically the same thing. Just like he said, you open that Christmas box and it's the same thing: I still need to do the same things to back it up and recover it. They're treating these containers as lightweight VMs.

Yes, so that's one side of the coin: how do we deal with the traditional legacy? The other side of the coin, as I mentioned, is the future, right? 5G is driving all the new use cases: augmented reality, smart cities. By the way, next week, here on the same stage, there's going to be a smart country conference. So everything's getting smarter.
Now, while we get smarter, we still need to support those workloads, but how do we deal with data protection in Kubernetes? Obviously we have the registry, we have the persistent data, and so on. But the lesson I've learned is that the same principles apply, right? Kubernetes has snapshots; you can actually take a snapshot today. That's, I'd call it, par for the course, right? So you have the data protection primitive. But what's missing is the things you mentioned. How can we expose it to the developer, who doesn't care about the infrastructure, and make it seamless? How can we expose it to the ISVs in the backup industry to actually automate it? Someone needs to take that snapshot, basically expose it, bind it to the workload, and use it as a way to recover. We need you guys to automate it so it will be the same thing, right? Yeah, the easy button. Now, as I mentioned, this is Kubernetes running on OpenStack. Obviously Kubernetes can run in any cloud, public or private. But here we are at an OpenStack event, and the majority in the room are probably running private clouds on OpenStack, right? You're going to run Kubernetes whether you want to or not, but you're going to be asking the same questions, and the same practices will apply. The good news: not a lot changes, right? In terms of the requirements, they're pretty much the same. The definition of insanity, right? We're seeing the same thing over and over and over again.

Okay, so we've heard the importance of agentless, non-disruptiveness, empowering the tenants, and making it easy for them to manage their own backup environments. We've heard how containers are following the same path that OpenStack blazed before. So let's open it up to the crowd. Questions from the crowd? You've got great panelists here. Anyone, anyone? Yes, sir? Use the microphone if you can.

Okay, more than a question, just a suggestion. When you think about backup, okay, but for operations it's more than backup. It's a data recovery solution.
So please create the environment, restore the backup, and make sure that it works. If you can automate everything together, it's fantastic. Automated recovery. There you go. To the point I made earlier, it's not just the data, right? It's the metadata as well, and doing that on a per-tenant basis is a very tough, tough thing to do. And one thing I want to relate to you: when you do the restore, it's not the same, right? This gentleman is doing data migration to another site. You know what the IP address is on the other site? Different, right? What are the security groups on the other site? Different. You know what the availability zone is on the other site? Different. So when you automate it, it has to apply the changes to your target site, right? They're not one-to-one.

Any other questions? Wow, we answered everything for you guys. An amazing panel. Well, so I guess the key takeaway here for everyone is: back up everything, whether it's your OpenStack or your container environment. Please rate this conversation. And if you want to continue it, you can either take it online, we have the hashtag here, or take it offline in the back, where we're all available and would love to answer any more questions you may have. So thank you for joining us, and enjoy the rest of the summit. Thank you guys.