Hi everyone, thanks for coming. My name is Henrik Blickst, and I'm a product manager in the NFV business unit at VMware. I'm going to talk a little bit about what I've been hearing from the service providers and customers I meet with: their feedback on using OpenStack, what they love about it, some of the challenges they're seeing, and what can be done to mitigate those. Just the standard disclaimer: I wasn't planning on presenting any roadmap, but sometimes I talk about futures anyway, and being a product manager, anything I say will be used against me. The only thing I can promise is that any roadmap items I mention will change, so please don't hold them against me.

So, quickly, some high-level challenges and the promise of OpenStack. I think most of you in here know OpenStack pretty well. How many of you are from a service provider, or work for or with a service provider? About half of you, maybe. Okay. I'll also talk a little bit about how I came back to loving OpenStack again.

So what are some of the concerns I hear when I meet with these service providers, you and your colleagues and companies out there? One is breaking free from legacy hardware. A lot of the hardware out there is legacy, purpose-built hardware for network services, and getting off it is one of the driving forces behind NFV: moving to something more agile and easier to manage, and away from boxes built for one specific thing. That specificity makes them inflexible, because if something is built into the hardware to solve one problem, making it do something else is pretty hard. It also makes upgrading hard. If a certain function or card is built into the box and a new feature needs a new card, you have to send someone out to replace the card or the whole thing. And once you get into distributed deployments, where you might have tens, hundreds, or thousands of these boxes, sending out a thousand people to swap cards gets tricky and expensive. It takes time, too. Those of you who have been sysadmins, and I started my days as a sysadmin, know that being in the data center pulling cables takes time. So the goal is to move to something that can be done by fewer people, more automated and faster. The hardware life cycles for a lot of these boxes are also pretty long, because they're expensive and hard to get out there. That means the skills needed to maintain a given box or card can be very legacy, and it can be hard to find people who still know a piece of hardware that was only sold for a few years a decade ago.

Something else there's a lot of concern about is innovating faster. Not just innovating, because innovating is fairly easy, but innovating faster is what they want to do.
And the networks they have, given the legacy hardware we just talked about, aren't really prepared for that, because of how slow it is to change anything in the network. They need to roll out features faster. Especially as they get into NFV and virtualization, they have newer competitors with more modern architectures who can roll out features a lot faster because they're virtualized rather than rolling out new hardware. So they need to get into that mindset of getting things out faster, and of innovating continuously. It can't be big-bang innovation: hey, we have a great idea, let's build a box for it and roll it out, and then five years later, hey, we have another great idea, let's roll out a new box. It needs to be continuous. You can't have innovation cycles of five years, or even two years, or even one. You need to get new things out all the time, maybe not daily, but as new ideas come up they need to get out quickly so you can get feedback. With a long innovation cycle it also gets very costly, because it takes years to build the box and a while to roll it out, so it takes a long time before you learn whether this is really the thing people want, the feature my customers are asking for. If you can innovate faster, you also get feedback faster, and you know whether to roll something back or focus more on it. That also shortens the time to revenue. There's a lot of risk in innovating slowly: it might be good revenue eventually, but the risk of waiting that long means it's often better to take a bit less revenue sooner, because that makes your planning much easier.

And anyone who has attended a session here this week has probably heard a lot about Kubernetes, cloud native, and containers. As the telecom service providers virtualize, they're also looking at containers, and they're realizing it's about more than just containerizing. Everyone who has done anything with containers, or tried to put containers into production, knows it's more than just taking an application, packaging it up in a container, and being done. It's a different way of thinking and a different tool set, so more than the packaging needs to change. If you have an organization with a more legacy mindset, actually changing the processes can be much harder than changing the packaging. You might need new tools: instead of learning how to deploy a VM, you need to figure out how to plug something into a CI/CD tool chain, which is a different way of thinking and working, and you need to learn a new set of technologies. And another issue, for those of you who have attended more than one summit, or maybe a KubeCon or something like that: these technologies change a lot. Three years ago it was all about Docker; a couple of years before that it was Mesos; now it's all Kubernetes. Trying to keep up with that, and figuring out which technologies to bet on and invest in, is another risk, and they're moving very fast.
Not only do you need to figure out what to bet your money on, but also who to hire and what to train those people for. That's an area of concern as well, because the telcos want to make sure that whatever they bet on is going to be in their networks for a while. They want something they can actually support, so they don't end up in the same legacy situation they have today with hardware, just with software instead. That would be almost as bad.

Then there's building a software-based, carrier-grade environment. A lot of these networks and network services are very mission critical. If you try to use your phone and it doesn't work, you're going to get pretty upset; you might even be in a life-and-death situation. You can't have someone trying to call 911 or 112 and getting an exception on their phone, or being unable to connect to the network. That's not going to be very popular. How many in here have seen, or written, software with bugs in it? Probably most of you, right? As software developers we just accept that if you write software, there will be bugs in it. But if you deploy a VNF, a virtualized network function, that has bugs in it, and people can't call 911 or 112 depending on where in the world they are, that's a little bit different. No offense to the software engineers in here, but it sometimes seems like hardware engineers are better at building stuff than software engineers, because there seem to be fewer bugs in those old purpose-built switches and boxes than in the software we build. So that's one of the challenges: how do we make sure that what we build doesn't have those bugs, and is as stable and mission critical, with the four, five, or six nines they have today on their hardware-based platforms?

As we build all this software and do microservices, we also need to stitch the pieces together. In some ways it was almost easier before: you pulled a cable out here, plugged it in over there, and your integration was done. Now we're figuring out the various payloads, whatever needs to be sent between the various systems, and how they interact with each other. Making sure the APIs are compatible, and that APIs aren't removed or changed, is also a bit of a challenge. With hardware, resource allocation was also pretty easy to figure out: here's a box, it has this many resources, and that's how much you have; if you need more, you put in another box. Once you start deploying VNFs and have multiple VNFs sharing a virtual environment, it gets harder, because the other thing you don't want is resource starvation. You might have a couple of VNFs on the same infrastructure, and you need to make sure the one handling emergency calls doesn't get starved. Someone watching Netflix or their favorite TV show on the subway probably won't be too upset if it pauses for a second or two; if you can't reach 112, you're going to be quite a bit more upset.
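To make that point about guaranteed resources a bit more concrete, here is a minimal sketch, not from the talk, of one way an operator might carve out dedicated resources for a latency-sensitive VNF on OpenStack: a Nova flavor with CPU-pinning and hugepage extra specs. The cloud name, flavor name, and sizes are placeholders, and the extra-specs call assumes a reasonably recent openstacksdk; older setups would apply the same keys via the CLI.

```python
import openstack

# "telco-cloud" is a placeholder entry in clouds.yaml.
conn = openstack.connect(cloud="telco-cloud")

# Illustrative flavor for a critical VNF (name and sizes are made up).
flavor = conn.compute.create_flavor(
    name="vnf.critical.8c32g",
    vcpus=8,
    ram=32 * 1024,  # MB
    disk=40,        # GB
)

# Standard Nova extra specs: pin vCPUs to dedicated host cores and back the
# guest with hugepages, so a best-effort neighbor can't starve this VNF.
# (create_flavor_extra_specs exists in recent openstacksdk releases; with an
# older client, set the same keys using `openstack flavor set --property`.)
conn.compute.create_flavor_extra_specs(
    flavor,
    {
        "hw:cpu_policy": "dedicated",
        "hw:mem_page_size": "large",
    },
)
```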
So you need to make sure that the critical applications deployed there actually get the resources they need, and that we can carve those out in this virtualized environment. And not only that, you also need to make sure the software doesn't crash. For those of you who have seen or written a bug or two, it's similar to what I said before about the hardware: those boxes, for some reason, don't seem to crash as often as our software does. So whatever we build needs resilience and HA built in, because eventually we are going to hit one of those bugs, and when we do, whatever was running needs to survive and come back up again.

And for those of you who were here for the previous session, you heard Vanessa talk about distributed clouds and MEC. That brings in another set of challenges, because the old boxes were pretty autonomous, sitting there doing their thing, but they were not very distributed. You might have had a couple of data centers in the U.S., or a handful at most spread across the world. When you start looking at distributed architectures and MEC, you might have hundreds or thousands of deployments, and you can't have a sysadmin logging on to every box to run a ./update. So how do you automate this, and how do you handle these new environments at scale?

So with all of this, the service providers are scratching their heads: how do you figure this out, how do you do it, and what are the best tools for it? And there's been a lot of excitement around OpenStack. For those of you who have been in the community for a while, you've seen telcos and service providers coming out in larger and larger numbers to look at OpenStack. So what's so cool about OpenStack, and what do they love about it? One of the big things is the community, all of us working together. With specialized hardware, you had five vendors each building their own little box: this vendor sitting here building theirs, the next vendor sitting over there building theirs. Here we are, thousands of people, all working together building the infrastructure for all of this. Service providers are usually very nervous about sticking to a single vendor; they want multiple vendors, and here in the open source community, in the OpenStack community, we have all the vendors collaborating. Instead of each vendor sitting in its own lab building hardware boxes, all the software vendors are here, working together to figure out how to build the best platform that they, and you, can use. They talked about this in some of the keynotes as well: this collaboration fosters innovation, helps us innovate faster and innovate more, because we're sharing ideas across companies. Coming from VMware, I have one view of the world; someone coming from Red Hat has a slightly different view; but here we can sit together, talk it through, and figure out the best way to solve these problems in a way that works for all our customers. That also helps drive standardization, because we want to make sure that something that works on one distribution of OpenStack works on a different one.
Back in the day when most of us were doing Java, trying to deploy Java to different application servers, there was always this promise of build it here, deploy it everywhere. It didn't really work back then, but a few years down the road we've actually reached the point where the APIs in OpenStack are standardized: if you can deploy to one OpenStack, you're very likely to be able to deploy to another. That standardization gets telcos out of single-vendor lock-in. It's not only that they want to pit vendors against each other and push down prices; it's also that if a vendor goes out of business or starts focusing on something else, they still have options, choices, and flexibility.

We already touched on the simplified APIs. As they develop their applications and their VNFs, they do a lot of testing with them, and having the same APIs to the underlying platform makes that testing a lot easier, because they don't have to write a bunch of different tools to test different APIs. They can basically write it once and use it everywhere, which makes things easier and cheaper, and they don't have to build up separate skill sets for different APIs: here's one API, and we can use it for everything. It also helps the people who actually develop the VNFs, because they know there's one platform they'll be deployed on. It might come from different vendors, but they all have the same northbound and southbound APIs, so whatever network function I write, if it has resource needs, I know exactly how to ask for them, whether it's storage, compute, networking, or secret management. There's also a common architecture. What I mean by that is, when you write an application, it might have certain expectations about how distributed deployments are done and how HA is done. With OpenStack underneath, and the standardized APIs we just talked about, we have the same concepts of regions, availability zones, host aggregates, and so on, so if your application has expectations around those, they're the same across all the various platforms and vendors.

Another massive benefit is OpenStack itself. That's why we built OpenStack: to have standardized APIs detached from the underlying drivers. Whether I want a VM deployed onto KVM or ESXi or Xen or something else, it's basically the same API call, so what's actually running underneath is hidden from my VNF developers and my users. I can plug and play the various drivers and have a unified experience even if I decide to change something in the underlying platform. Portability may not be the best word, but it gives them the ability to deploy and move these VNFs between these various platforms, and to know that if it works here, it's going to work over there.
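To show what that single, standardized API looks like in practice, here is a minimal sketch using the Python openstacksdk. The cloud entry, image, flavor, and network names are placeholders; the point is that this code doesn't change whether the cloud underneath is backed by KVM, ESXi, or something else.

```python
import openstack

# "telco-cloud" is a placeholder clouds.yaml entry; it could point at any
# vendor's OpenStack distribution and the calls below stay the same.
conn = openstack.connect(cloud="telco-cloud")

image = conn.compute.find_image("vnf-base-image")    # placeholder names
flavor = conn.compute.find_flavor("vnf.medium")
network = conn.network.find_network("provider-net")

# Same Nova API, same region/availability-zone concepts, regardless of which
# hypervisor driver the operator plugged in underneath.
server = conn.compute.create_server(
    name="demo-vnf",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    availability_zone="nova",
)
server = conn.compute.wait_for_server(server)
print(server.status)
```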
And I don't really need to change anything in my application to make it run on a different platform, as long as that platform is OpenStack. Whether it runs on a Red Hat stack, a VMware stack, or maybe the Rackspace public cloud, it doesn't really matter anymore; it's all OpenStack.

There are also a lot of workloads running in these environments, and very few of them are truly cloud native. Even though there are a lot of talks and sessions about cloud native here, there are very few companies, except maybe some newer, younger startups, that have everything containerized. Is there anyone in here running an environment where everything is containerized, everything is cloud native? Maybe someone is thinking about raising a hand, but I don't see any hands in the air. The truth of the matter is that pretty much everyone has some mix of containers, VMs, and even bare metal. We've been virtualizing on the enterprise side for the last 15 or 20 years; how many in here still have bare metal running in a data center somewhere? Yes, quite a few. When I talk to customers, it feels like almost everyone still has something running on bare metal, and I think that's a reality we're going to live with, maybe even more so now that we're starting to deploy containers onto bare metal. We're going to have workloads with parts deployed on bare metal, parts on VMs, and parts in containers. How do we get all these workloads talking to each other? How do we stretch the network between them? And how do we manage and deploy them in a single, unified way? That's where OpenStack shines. If you want to do this outside of OpenStack today, you might have one API and one way of doing bare metal, one way of deploying VMs, and something completely different for your containers. That's one of the places where OpenStack helps: we can have the same tenancy model and the same networking. If you were listening to Marcus's talk earlier about NSX-T, it showed how you can share networks between VMs and containers, and you can take that a step further and stretch it onto bare metal as well. That's something OpenStack can help with.

Another thing that's cool with OpenStack is that, even as we work toward virtualizing the world, a lot of VNFs still require direct access to the hardware. It might be for performance reasons, or it might just be more efficient in some way to talk directly to the hardware. So even in this new virtualized world, we still have workloads that need to talk directly to the NICs, or to a GPU or something similar. We want to make sure that, even as we virtualize and containerize, those workloads can still talk directly to those cards. On the other hand, there's also a lot of work that has been going on for a while on enhanced performance with DPDK, improving network performance in a way that isn't tied to specific hardware. So we want to be able to do both of these, all of it together.
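As a sketch of how those two paths can coexist on the same cloud, here is one hedged example, not from the talk: a Neutron port created with vnic_type "direct", which is the usual way to request an SR-IOV virtual function, attached to a VNF, while other workloads on the same cloud keep using ordinary virtual (for example DPDK-backed) networks. Names are placeholders, and whether the request actually lands on a VF depends entirely on how the operator configured the NICs and the SR-IOV agent.

```python
import openstack

conn = openstack.connect(cloud="telco-cloud")  # placeholder cloud name

# Placeholder provider network that the operator mapped to SR-IOV-capable NICs.
net = conn.network.find_network("sriov-provider-net")

# vnic_type "direct" requests an SR-IOV virtual function; the default
# "normal" would give this port the software-switched path instead.
port = conn.network.create_port(
    network_id=net.id,
    name="vnf-sriov-port",
    binding_vnic_type="direct",
)

# Boot the VNF on that port; neighbors on the same cloud can keep using
# regular virtio ports on virtualized networks.
server = conn.compute.create_server(
    name="packet-gateway-vnf",  # illustrative name
    image_id=conn.compute.find_image("vnf-base-image").id,
    flavor_id=conn.compute.find_flavor("vnf.large").id,
    networks=[{"port": port.id}],
)
```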
So even if we're doing SR-IOV for one workload, for example, we can still share the same environment with virtualized networking and get high-performance networking with DPDK, without having to choose one or the other; they can all coexist. And there are projects like Cyborg within OpenStack looking at how to handle this more hardware-centric view of the world in an OpenStack environment and make it play nicely alongside all the virtualized stuff.

That all sounds awesome, right? If all of that just worked, life would be good. But it turns out that even moving from the hardware world into OpenStack, there are still some challenges. Anyone who has tried to deploy OpenStack in production knows that, even though we've come a long way, there are still some rough corners. One thing that is still a challenge, and is both good and bad, is that OpenStack has a lot of knobs to turn and things to configure. That's good, because we can cover all the use cases; we can do pretty much anything with OpenStack because there are so many things we can change. But it also means we're building a lot of snowflakes. If you look at the various deployments out there, if you have 100 deployments, you might have 100 different deployments. When something goes wrong over here, it's going to look very different from something going wrong over there, and it's really hard to troubleshoot what's going on because they're all slightly different. That makes it hard to manage all these deployments efficiently. Even within a single large company, like a telco or service provider, there might be a fair number of different OpenStack deployments, and if they're all different, configuration management, and making sure all the drivers are the same and configured the same way, gets really, really hard.

And even though OpenStack has come a long way in the last few years, it's never better than the platform it runs on. OpenStack is a framework: if the platform underneath, in the drivers and below, isn't stable and doesn't have the resiliency and high availability we talked about earlier, it doesn't matter how good OpenStack is; it can't do better than that. We also talked a little bit earlier about global deployments and scale, and OpenStack, though it's getting better and some projects are trying to address this, still has challenges there. If you have a global presence and want an OpenStack deployment that spans the globe, there are still issues. RabbitMQ, which we all know and love, still has its challenges. Everything may work fine while it's all deployed within the US, or all within Germany, but once you stretch it into other countries where you don't have the same bandwidth and latency is higher, you start running into problems. The sheer scale of things, back to the edge use case mentioned earlier, is also a challenge once you get into hundreds or even thousands of nodes. And even in smaller deployments, monitoring and operations for OpenStack is still hard.
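To give a feel for what that troubleshooting work looks like today, here is a small, purely illustrative script of the kind operators end up writing: take the request ID from a failure in one service's log and chase it through the other services' logs. The log paths are just common defaults, and real deployments put them elsewhere, which is part of the problem.

```python
import re
from pathlib import Path

# Common default locations; assumptions, since every deployment differs.
LOG_FILES = [
    "/var/log/nova/nova-api.log",
    "/var/log/nova/nova-conductor.log",
    "/var/log/nova/nova-compute.log",
    "/var/log/neutron/neutron-server.log",
    "/var/log/cinder/cinder-api.log",
]

def correlate(request_id: str) -> None:
    """Print every log line mentioning an OpenStack request ID (e.g. 'req-...'),
    so a single failure can be followed across services."""
    pattern = re.compile(re.escape(request_id))
    for path in LOG_FILES:
        log = Path(path)
        if not log.exists():
            continue
        for lineno, line in enumerate(log.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                print(f"{log.name}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    correlate("req-3f1c2a00-0000-0000-0000-000000000000")  # placeholder ID
```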
The tools have definitely gotten a lot better since I started working on OpenStack, but there are still challenges in doing things like root cause analysis: figuring out which of my log files I need to look at, how the error showing up in this log file relates to something in the log file over there, and how to correlate those to find the actual root cause. That makes it pretty expensive and costly to operate these OpenStack clouds, because it just takes too much time to figure out what the issue is, and every now and then we do hit one of those bugs that unfortunately are still there. We need to get a lot better at that. Even better, there's a lot of work going on around AI and predictive analysis: figuring out where something is going to crash, or which bug we're going to hit before we actually hit it, so we can take measures to prevent it before it happens.

And lastly, there are migrations and imports. I know that most workloads today don't run on OpenStack. We all want that to happen and we're trying to get there, but there's a lot of real estate, a lot of workloads, running on KVM, on ESX, on all kinds of platforms, and we want to get them into OpenStack. Importing those workloads into OpenStack can be tricky, depending on what they are. If they're all ephemeral workloads, it's not really a big deal. Once you get into workloads with advanced networking, persistence, and dependencies between them, it gets a lot harder. We might also want to consolidate data centers or workloads: if we have 25 data centers and want to move to two large ones, getting the workloads running in all those data centers into two, without taking a lot of downtime, is also tricky. Or we might want to move between OpenStack versions: maybe for some reason we're not happy with the vendor we have and want to switch, or we have a really old version of OpenStack, still running Grizzly because it works, and we want to get to Queens. Doing all those ten or so upgrades in between isn't really practical, so maybe we just do a migration instead, but how do we do that without killing all those workloads?

So, I've been working on OpenStack for a while now, and I've been with VMware for a year and a half. These are some of the things we've tried to address, working with some of you in here, your colleagues and your competitors: how do you solve this? Because it all sounds good, and the promise of OpenStack is really great, but how do we take that promise and overcome those challenges, and what do we need to do to get there? We have something called VMware Integrated OpenStack. I don't know how many of you know that VMware actually has an OpenStack distribution; we've had it for four years now. Even though the name might imply it's something proprietary, I can assure you it's not. We basically take upstream OpenStack, like all the other vendors, install it on Ubuntu, package it up in VMs, and deploy it on ESX.
So we have this OpenStack distribution, and the biggest difference from other vendors is that we run it on a VMware platform. One of the things we see is that a lot of telcos already run VMware somewhere; it might be on the IT side rather than the NFV side, but most companies today run something from VMware. So extending that environment to also cover OpenStack gets them the benefits of a virtualization platform we've been working on for two decades, plus all the things the service providers are looking for that we talked about earlier, on top of a stable platform underneath. That's essentially what we're saying: you get the best of both worlds, everything that's good about OpenStack, on a stable platform. That was the main driving force behind us doing OpenStack. It might confuse some people, why is VMware doing OpenStack, but it's actually a really good fit, because we get a lot of requests for open APIs and open standards combined with a solid platform underneath.

Some of the core things we've been working on: as I just said, the open APIs on that core platform. Being virtualized also means we've taken some of the pain out of installation. At least in the early OpenStack days, installing and especially upgrading were really big pain points. By doing everything with VMs, we can do some really nice things: when we upgrade, we basically reuse the install workflow and bring up new VMs on the same hosts we're already running on, and as the last step of the upgrade we sync the databases over to the new VMs, point the load balancer at them, and the upgrade is done. If one of those nasty bugs still shows up after we've upgraded, we just point the load balancer back at the old VMs, and we've rolled back to the previous version with zero downtime, without restoring machines or bringing anything back from tape.

VMware has also put a lot of work into a very comprehensive operations suite, and if you have admins who already know VMware and already manage the IT side of the house, extending that so they also manage the infrastructure running the VNFs comes naturally, and they can use all the root cause analysis and cost estimation tools we have and extend them into the NFV world. I'm starting to run a little short on time, so I might have to speed up for the last few items. We do some things a bit differently from a KVM-based OpenStack, for example. In the KVM world you have one Nova Compute per host, which means that if you deploy those Nova Compute hosts out at the edge, you get the latency issues we've heard about in several sessions here.
We slice things a bit differently: we map one Nova Compute to an ESX cluster, so instead of a one-to-one mapping we have a one-to-many mapping. Under the covers, OpenStack just sees an aggregated pool of resources, which means we can use some of the functionality and benefits of the virtualization platform, like DRS, and do automatic workload rebalancing underneath. How many here have tried to do workload rebalancing on KVM? A couple of hands. It's pretty hard, and it's not something that just comes out of the box. But since we aggregate that pool, it's transparent to OpenStack and we can do it underneath. If a host goes down, we can fail workloads over to different hosts automatically, without crashing OpenStack or making Nova really sad and unhappy.

We also have something called VIO in a box, which they mentioned in one of the keynotes; it's somewhat similar to what they're doing with StarlingX. We package up OpenStack on a single box, for a branch-office use case or, if you like, an opinionated edge use case, so you can deploy OpenStack to a single box and ship it as an appliance out to the edge. All the lifecycle management APIs, to patch it and extend capacity, are exposed through our OpenStack management service, so if you have an orchestrator or some automated tooling, you can use those APIs to manage the hundreds or thousands of nodes you have deployed across your infrastructure. We have a couple of different deployment modes, including one that's almost like DevStack, where we've put all the OpenStack services together into a single VM, which makes it really quick and easy to deploy, with a very small footprint, for these single-box deployments.

And lastly, we've shown some demos of something called HCX, short for Hybrid Cloud Extension, which is a way to extend the network fabric between on-prem and public cloud. It connects into what we do with VMware Cloud on AWS, where we can extend the network fabric from your on-prem data center into the cloud data center, and as we continue to build on it, we'll be able to do things like live migration from an on-prem data center into your cloud. It will also help with some of these migration use cases, basically doing a live migration of a VM from one environment to another. We've also plugged this into some of the IP we got from an acquisition we did some time back, so we can actually do a warm migration from a KVM-based OpenStack into a VMware-based OpenStack. So with that, I have about five seconds left for questions. I'll stay here for a few more minutes, and I'll be outside afterwards when they kick me out, if you want to ask more questions. Thanks everyone for coming, and have a great rest of your summit.