Good afternoon everyone. Thank you for coming to this afternoon's talk. We're here at the Open Infrastructure Summit, and this is where we discuss open infrastructure, which is, to summarize, using open source solutions for providing infrastructure. And today I wanted to talk to you about why. Why you should adopt open infrastructure today. Why you should choose it compared to, say, proprietary infrastructure, which is obviously about using proprietary software to provide infrastructure. There should be plenty of time for questions at the end, and I hope that if you spot any missing reason why you should adopt open infrastructure, you will share it with the rest of the room.

To kick off this talk, I wanted to first define what we mean by infrastructure. Why would you need infrastructure? If you look at the history of computing, it's all about piling up layers and abstractions. And this is primarily done for two reasons. There is market pressure on one side: if someone owns a certain level of the stack, the only way to displace the old king is basically to commoditize the lower layer and build a new layer on top of it, where innovation and differentiation happen. This is basically what happened in the 80s and the 90s, when the IBM PC platform was displaced by the Microsoft Windows platform. And on the other side, there is developer pressure. Developers are looking for more convenience, and so they're interested in adding new layers to mask and abstract the differences between the lower layers. This is basically what happened with the web browser being used as an application platform, because it masks the differences between the various operating systems.

This piling up of layers and abstractions is happening on all sides of our industry. If you take application delivery, for example, traditionally you would present applications like this, with users on one side, accessing applications on the other.
But the way you deliver those applications has been evolving. Twenty years ago, you would procure some physical hardware, and as an application deployer, you would deploy an operating system on top of that, then all the dependencies required to run your application, and then your application. But then we added more layers. The first layer we added was obviously hardware virtualization, which abstracts the server your application is running on from the actual physical hardware that runs it. Then we added cloud APIs, allowing you to programmatically access those virtualized resources. So you have programmable infrastructure on one side, and cloud native applications being deployed on top of that programmable infrastructure on the other. And then, more recently, we added a new layer: application deployment APIs, which is basically what Kubernetes provides, higher level abstractions and primitives that you can use to deploy complex applications on top of this programmable infrastructure.

So the infrastructure space has been evolving. We are seeing an evolution with developers and application deployers wanting to care less and less about infrastructure details. We're seeing an evolution towards commodity hardware: when your needs increase, rather than scaling up and buying a more specialized or more performant server, you scale out and spread the load across a number of simpler, identical machines. We are seeing an evolution towards commoditized runtime environments: rather than using a highly curated environment that is precisely fine tuned to your workload and has exactly the right amount of polish on it, you would rather use easy-to-recreate, disposable runtime environments that you can kill and recreate from scratch whenever something goes wrong. And finally, we are seeing an evolution towards lighter and lighter runtimes.
So from physical machines, to virtual machines, to containers, and now to functions. And there is really no reason why that evolution should stop there. So we are building more and more layers in the infrastructure space, and as we add those layers and abstractions, the share of complexity that is handled by software compared to hardware is increasing.

There are a number of factors driving that trend. We used to optimize purely for performance. We used to try to get the best performance out of our applications, and that's still a key concern today. But today, the key challenge is to optimize for utilization: making sure that you don't spend any money on unused computing power, avoiding paying for servers that do not help your workload. And so we add software abstraction layers to make sure that we extract every drop of that computing juice that we pay for. That's basically the drive behind Kubernetes trying to optimize packing as many applications as possible onto single machines.

In industries that would typically produce specialized hardware appliances, like telcos, using standard hardware and differentiating with specialized software on top of that allows them to dramatically reduce hardware development costs. You don't have to prototype hardware. You don't have to have long hardware production cycles. Software is also faster and cheaper to deploy, so that allows you to dramatically reduce deployment costs. And since it's cheaper and faster to deploy, it allows you to deploy more often, which enables you to react faster to changing market conditions. So you can adapt much faster to customer needs.

But the last and perhaps the most important lesson from this infrastructure evolution is that it's no longer just about developers or application deployers. As you pile up those abstractions, it's clear that there is this growing separate role of providing infrastructure for others to deploy their applications on.
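To make the utilization argument concrete, here is a toy sketch of the bin-packing idea that schedulers like Kubernetes build on. This is an illustration only, not the actual Kubernetes scheduler, which is far more sophisticated, and the capacity numbers are made up: first-fit decreasing places each workload on the first machine with room for it, largest workloads first.

```python
def pack_first_fit_decreasing(demands, capacity):
    """Place CPU demands onto machines of a given capacity,
    largest demand first, each on the first machine with room."""
    machines = []  # each entry: [remaining_capacity, placed_demands]
    for demand in sorted(demands, reverse=True):
        for machine in machines:
            if machine[0] >= demand:        # fits on an existing machine
                machine[0] -= demand
                machine[1].append(demand)
                break
        else:                               # no room anywhere: new machine
            machines.append([capacity - demand, [demand]])
    return [placed for _, placed in machines]

# Ten workloads that would naively get one server each fit on three
# hypothetical 8-core machines.
placement = pack_first_fit_decreasing([4, 3, 3, 2, 2, 2, 1, 1, 1, 1],
                                      capacity=8)
print(len(placement))  # → 3
```

Running ten small workloads on three machines instead of ten is exactly the kind of utilization win that justifies adding the extra software layer.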
And this is the area that we care about when we talk about providing open infrastructure solutions: those people who provide infrastructure. And this can be private infrastructure, to cover the needs of a given organization, or it can be public infrastructure, to offer a service for anyone around the world with a credit card to pay for resources. It is the same job. It is about providing infrastructure. And this is the new role, one that has appeared in the last 20 years, that we are looking to facilitate here.

So now that we have set the stage and explained what we mean by infrastructure and what this new role is that we're trying to help, why would you choose open infrastructure compared to proprietary infrastructure? The first benefits derive directly from the fact that you're using open source software: open infrastructure software is open source software, so obviously you get the same benefits. But sometimes it's easy to overlook those benefits, so I'll summarize them here.

The first benefit of using open source software is availability. The fact that there is no barrier, monetary, contractual or otherwise, to trying out the software with all of its functionality. The fact that you can easily evaluate it for future use, you can play with it, you can have fun with it, you can really use all of the software's functions for evaluation. The fact that there is no friction going from that experimentation to production. To me, that's a major benefit.

From a corporate perspective, an even more important benefit is sustainability. When an organization makes the choice of deploying software, it does not want to be left without maintenance just because the vendor it chose decided to change direction or just goes bust. Having the code available for anyone to take and modify means that you're not relying on a single vendor for long term maintenance.
It also means you avoid being locked in because you chose a vendor and the cost of switching software is so high that you end up on the vulnerable side of a deal negotiation. If you're ready to pay a lot of money to stay on the same software, the vendor will probably make sure that you pay that amount of money. So it's really a key benefit for corporations adopting open source.

Another key benefit is that open source development makes it easier to identify and attract talent. Organizations can easily identify potential candidates based on the open record of their contributions to the technologies they're interested in. And conversely, candidates can easily identify the open source technologies that a company is using, and they can join those companies with certainty that they will be able to capitalize on the software experience that they will grow there. For them, it's much better than working on proprietary software, where they're not allowed to share what they're working on and may never encounter that software again in their career.

From a technical standpoint, the fact that the source code is available means that you're able to look under the hood and understand for yourself how the software works or why it behaves the way it does. And that's critical, because sometimes it does not necessarily work as advertised or documented. Being able to see for yourself how the software works, or why it behaves the way it does, really gives you more technical insight into the solution you're running. This transparency also obviously allows independent security audits to find vulnerabilities. But one step beyond that, the ability for anyone to take and modify the source code means that you have the possibility to find and fix issues by yourself, without even depending on a vendor. And when speed is a key issue, having this extra reactivity in debugging issues and fixing weird behaviors in production is critical.
And finally, with open source, you have the possibility to engage in the community producing the software and influence its direction by contributing directly to it. Organizations that engage in upstream open source communities are more efficient. They are able to anticipate changes. They are able to voice concerns when there is a community decision that would adversely affect them. They are able to make sure that the software will grow the features they will need tomorrow, by participating directly in the development, so they can make sure the software adapts to their future needs.

So those are not philosophical or abstract benefits. Those are practical business benefits, and they are the main reasons why organizations everywhere adopt open source today, including open infrastructure. But using open source solutions for providing infrastructure gives you three additional benefits, which we call the three Cs: capabilities, compliance and cost.

First, capabilities. Back in 2010, when we started OpenStack, there were a bunch of people saying that there would be one standard cloud, based on standard sized VMs, and that basically competing against Amazon Web Services in delivering that standard service was clearly madness. And yet, nine years later, we're seeing clouds of all shapes and forms. We're seeing memory optimized instances. We're seeing I/O optimized instances. We're seeing CPU optimized instances. We're seeing GPU instances. We're seeing bare metal instances. Clearly, one size does not fit all. And sometimes, getting some of those features at scale means you are paying a lot if you're doing it in public clouds, because there is just no way they can provide them at that scale with the same margins as the basic standard sized VMs. And sometimes the feature you need is just not there.
If you need a specific piece of hardware present in your servers, like an atomic clock because you want to build a new Spanner-like thing, like Google did, well, there is no public cloud providing that. So how do you do that? Using open infrastructure gives you control over what ends up in your servers. It gives you the extra capabilities, the extra flexibility, the extensibility that you need to build the infrastructure with the exact features you want.

The second C is compliance. Open infrastructure helps with compliance with local legal requirements. It's especially true in Europe, where we have strong data privacy laws and where data locality is critical. But we're also seeing that concern with strategic companies or governments being wary of hosting their infrastructure with major public cloud providers that are not in the same country. And we're seeing that concern transfer to companies that are competing with public cloud providers, like Netflix running a competitor to Amazon Video while running their whole infrastructure on Amazon Web Services. The fact is, with open infrastructure, you decide who has physical access to your servers. In some cases it's compliance, in some cases it's security, but you can see how that can be interesting.

And finally, the last C is cost. If you're looking into providing private infrastructure, well, obviously there is a lot of proprietary software that you could choose from to build local infrastructure. But the per-seat software licensing costs can add up pretty quickly, and they will limit your growth if you choose that path. Whereas open infrastructure lets you control the cost of your private infrastructure, so that it does not explode if suddenly you want to make use of more of that infrastructure. And if you're a company interested in providing public infrastructure, well, there's just no proprietary software out there to do that.
So you could write your own proprietary software, and you would probably get to something working really fast. I mean, OpenStack Nova, when we started, was prototyped over a weekend of coding. So it's totally doable to have something working really fast. But there is a reason why, nine years in, we saw 1,300 changes in OpenStack Nova during the last six months of the Train cycle. 1,300 changes. That's because the devil is in the details. It's in the corner cases. And being able to rely on the same code as a community will save you massive amounts of money over the long run in development and maintenance.

So now I want to talk about how open infrastructure enables interoperability and helps with hybrid cloud scenarios. At that point, you may ask: why bother with hybrid clouds? Isn't it just an industry buzzword? Well, yes and no. If you look at how you traditionally choose between public and private cloud for infrastructure, the thinking looks a bit like this. The cost profile of a public cloud looks like this: you pay a certain price per CPU core up to a certain number of cores, then the price drops because you hit a new pricing tier and it goes flat again, and so on until you hit the last pricing tier, where it just goes flat and you pay the same for each additional CPU core that you add. The price profile for a private cloud, on the other hand, looks more like this: a high investment at the beginning, then you hit economies of scale up to the point of diminishing returns, and then it goes flat. So if you look at those graphs, the choice between public and private infrastructure looks simple: above a certain number of cores, private infrastructure makes more sense; below that number of cores, public infrastructure makes more sense.
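Those two cost curves can be sketched numerically. All tier boundaries and prices below are made-up illustration numbers, not any provider's real pricing; the point is just that a break-even core count falls out of comparing the two cost functions.

```python
def public_cost(cores, tiers):
    """Tiered per-core pricing: cores in each band are billed at that
    band's rate; tiers is a list of (upper_bound, price_per_core)."""
    total, previous_bound = 0.0, 0
    for bound, price in tiers:
        band = min(cores, bound) - previous_bound
        if band <= 0:
            break
        total += band * price
        previous_bound = bound
    return total

def private_cost(cores, upfront, price_per_core):
    """High investment at the beginning, then flat per-core cost."""
    return upfront + cores * price_per_core

# Hypothetical tiers: the per-core price drops at 100 and 1,000 cores.
tiers = [(100, 50.0), (1000, 40.0), (float("inf"), 30.0)]
break_even = next(n for n in range(1, 10_000)
                  if private_cost(n, upfront=20_000, price_per_core=20.0)
                  < public_cost(n, tiers))
print(break_even)  # → 951: above that size, private wins here
```

With these invented numbers, private infrastructure becomes cheaper from 951 cores onwards; the exact crossover obviously depends entirely on the real prices you plug in.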
But in reality, things are a bit more complex, because those previous graphs assume that usage is constant over time, whereas obviously it looks more like this: you have spikes and drops in usage, depending on the activity and the requirements of your workloads. And so, to avoid paying for unused servers, what makes the most sense economically is actually to use private infrastructure for the share of servers that you always run, and use public cloud elasticity to cover for that extra computing power whenever you need it. This is what is called hybrid usage.

And open infrastructure is really great for enabling those hybrid cloud scenarios, because with open infrastructure, you can actually run the same software in your private cloud and in your public infrastructure. That allows you to optimize cost, because you won't have to pay for servers that you don't use. It enables compliance and capabilities, because you can run those sensitive workloads, or those specialized workloads that require special hardware, on the private side of your infrastructure, while still being able to rely on public cloud elasticity to cover most of the standard workloads. And since you're using the same software and APIs on both sides, it allows you to save a lot of money in development and validation costs, because you don't have to develop or validate two separate versions of your applications. And you can seamlessly move workloads from the public to the private side of your infrastructure.

And I'm not just talking about OpenStack here. There are multiple interoperability examples in open infrastructure, and that includes Kubernetes, which promises interoperability at the application deployment layer for any infrastructure that provides it as an API, and OpenStack, which promises interoperability at the infrastructure-as-a-service layer. So open infrastructure is pretty great, right?
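The hybrid-usage arithmetic can be sketched with made-up flat rates, where the private side is cheaper per core-period but paid for whether used or not, and the public side is more expensive but only billed when used. The demand series and prices here are invented for illustration.

```python
def hybrid_cost(usage, base, private_rate, public_rate):
    """Serve a usage series with a fixed private base plus public burst:
    the base is paid every period; demand above it goes to public cloud."""
    private_part = base * private_rate * len(usage)
    burst_cores = sum(max(0, u - base) for u in usage)
    return private_part + burst_cores * public_rate

# Made-up demand: a steady ~100-core baseline with occasional spikes.
usage = [100, 100, 150, 300, 120, 100, 100, 250]

all_public = sum(usage) * 3.0                  # elastic, pay per core used
all_private = max(usage) * 1.0 * len(usage)    # must be sized for the peak
hybrid = hybrid_cost(usage, base=100, private_rate=1.0, public_rate=3.0)
print(all_public, all_private, hybrid)  # → 3660.0 2400.0 2060.0
```

The hybrid split comes out cheapest of the three, precisely because the private side never sits idle and the public side is only paid for during the spikes.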
It really makes sense for today, but the best thing is that it also makes sense for tomorrow, because open infrastructure is future proof. So what do I mean by that? Obviously, it's hard to know what the future holds, but we are pretty sure of a few things. We're pretty sure that abstractions will continue to be piled up. I mean, we've gone from virtualization to cloud APIs to application deployment APIs. This is not over. We'll continue to add new abstractions. We're also pretty sure that there's no technology that will end all technologies. We are using VMs. We're using containers. We are still using bare metal servers. We are starting to use functions. There is no reason to think that one of those technologies will replace all the others. There is no reason to think that containers are somehow the end of all innovation in computing. New things will be invented tomorrow. And we're also pretty sure that we'll need to provide infrastructure for those new technologies that will come tomorrow. Infrastructure will always have to be provided. Applications will always have to be deployed. Even serverless needs servers.

So, in that uncertain future, I think that open source helps. Because with open source, you are investing in a problem space rather than in a specific product. You're investing in a community that is going to tackle a problem together, rather than having only one technology in mind. And rather than producing one specific and narrow solution, that community is trying to solve a problem that its members have in common. For example, the OpenStack Foundation is not just about producing OpenStack. It's about taking the perspective of the infrastructure providers, getting them together to build open source solutions that solve problems for them. So investing in that community lets you share issues with other like-minded operators of infrastructure and build solutions for whatever problem you'll have as a group tomorrow.
Okay, so to summarize, we've seen several reasons why you should adopt open infrastructure today. Some of them are linked to the choice of open source software. We've seen the three Cs: capabilities, compliance and cost. We've seen how it enables interoperability and helps with hybrid cloud scenarios. We've seen that it better prepares you for whatever is to come next, even if we don't know what that is. But there is a deeper reason why you should adopt open infrastructure, and it is that open infrastructure enables innovation.

I love open infrastructure because I don't want a world where all infrastructure needs are served by a couple of internet giants, or worse, by a monopoly. First, monopolies are bad. Obviously. They don't make economic sense. They distort good market conditions. You end up paying more, and you end up having less innovation in the end. So monopolies are not sane. But what's even less sane, and I would argue borderline dangerous, is monocultures. Monocultures are vulnerable. If half of the internet is running on a single service provider, well, it's not resilient at all, and any class break can lead to catastrophic failure. What used to be mildly annoying, because you couldn't access Facebook, can now be life threatening, with more and more devices depending on the internet to function. So yes, having some diversity in your infrastructure providers is important, to avoid those monopolies and those monocultures.

But beyond that, giving everyone access to infrastructure-providing technologies allows everyone to play and participate. If you restrict innovation to a couple of big shops, you're actually restricting what the world can do in that space. So it's important that we have strong open infrastructure solutions for everyone to use. That allows us to distribute the future more evenly. And that is to me the last, but not the least, reason why you should adopt open infrastructure today.
Thank you for listening. So, do you have questions, suggestions of things I obviously missed? I'm pretty sure I forgot a very important thing in there, but nothing? Well, you know where to find me. You can email me; WeChat works too, same handle as the Twitter handle.

[Audience question] So, I had some interesting interactions with some of the people from a few vendors downstairs at the marketplace. I get a sense that even in the contributor space, in the OpenStack community, the vendors are shrinking and the majority of the contributors are sort of consolidating into a few major vendors. What are your thoughts on how we can tackle this as a community? Because we need diversity in the number of companies. OpenStack was red hot some years back, because there were so many different companies all actively participating; that has sort of shrunk a little bit. So I just wanted to know your thoughts.

That's a good question. The way I think about it is that, historically, people were complaining that OpenStack was just too difficult to operate, so you needed vendors to help. And we worked on improving that a lot, and that resulted in less need for vendors: users are using OpenStack directly much more today. That's the main reason why you're seeing this contraction in the vendor market: people don't necessarily need a vendor as much today. We've seen people operating very large clouds with a handful of people. The Adobe advertising cloud, I think, is run by two people, and it's like 100,000 CPU cores. I don't know how they do it, but it's doable. Some people will always need vendors, because they don't necessarily want to have the expertise in-house, but that means that the vendor market is not growing. It's more like consolidating around a number of key vendors that have clearly defined their market targets and can easily execute on them.
So I'm not surprised, and I like to see it more as a positive development. It means that we are actually making OpenStack easier to deploy, to upgrade, etc. As for distributions in general, we obviously still need them, because packaging the software and making it available for everyone is critical, and we're not really seeing that support go down. It's more the professional services and products that are built on top of OpenStack. You used to have to add a lot to OpenStack to make it usable, and that created a space for products; today, most people are running vanilla upstream code. So less room for products also means that the community managed to produce something directly usable.

Thanks again for coming, and have a great Open Infrastructure Summit.