So what I really like about that video is seeing the real-world impact that open source and community collaboration is having. This is something I think we should all be really proud of. I'm Rob McMunn, I work for Red Hat, and I'm a Director of Cloud there. What I want to spend a little bit of time on this morning, and I think it is still morning, just about, is sharing with you some of the experiences we've had helping our customers take OpenStack into production, and a little bit more about why we think we're seeing these changes. So up here we've got some of the stats that the community is gathering. We're not yet, I don't think, six years into this journey, and we've already gone from an early-stage technology exploration into delivering really significant industry impact. That is a pretty brilliant journey to have travelled. Now, the foundation is gathering this data in the OpenStack user survey, and I know you're probably all waiting on tenterhooks for it to be published. It's showing up here, and these are some of the numbers around production deployments: what we've seen is a steady increase over time. That tells us a couple of things. It tells us that the product is maturing, and it also tells us that the user community is understanding how best to take OpenStack and put it into production. In fact, if you look at these numbers in a little more detail, take into consideration the number of respondents actually completing the survey, and look behind the percentages, this 65% here represents a two-fold increase, a doubling, in the number of responding users who are putting OpenStack into production. So OpenStack is maturing, and we know the technology works. OpenStack is ready. But what about you? How do you know if you actually need OpenStack yet?
Will your application run on OpenStack, and how can it benefit you? This table helps you understand the differences between traditional virtualization, like VMware vSphere or RHEV, and the new cloud-enabled infrastructure of OpenStack. A couple of these points really stand out to me. Look at the lifecycle: how long do you expect the virtual machine to last? Are you thinking in terms of years, or in terms of hours, days, weeks, months? A second, very important one is to look at how the application is designed. Monty has just very eloquently explained cloud native to us. If your application is not designed to be fault tolerant, if you've got things like clustering and other high-availability technologies wrapped around your virtual machine, the chances are you are not running an application that is ready to be moved onto OpenStack. The reality is that the majority of users out there, the majority of customers in the industry, are really over on the left-hand side of the screen. The cutting-edge folks are always over on the right-hand side; yes, we've seen that. What that tends to mean, since OpenStack isn't designed to run legacy applications that require a single monolithic virtual machine, is that most customers will require a mixed environment: some form of traditional scale-up virtualization, and alongside it a new cloud-enabled scale-out infrastructure. It then becomes very important to understand which applications you move across, and which applications are ready for you to transfer and redesign into that new cloud-native format. And this, I believe, is one of the reasons we've seen that production trend move so well. There are two things happening: OpenStack is maturing, but at the same time so is customers' understanding of how best to use OpenStack and take it into production. One thing we've also noticed is that you cannot separate OpenStack from Linux.
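The lifecycle and design questions in that table come down to one pattern: traditional VMs are long-lived pets that get repaired in place, while cloud-enabled instances are disposable cattle that simply get replaced. Here's a minimal, purely illustrative sketch of that recovery model; none of these function names are part of any real OpenStack API.

```python
import itertools

# Hypothetical illustration of the "pets vs. cattle" distinction:
# cloud-native instances are disposable, so recovery means discarding
# failed instances and booting identical replacements from an image,
# rather than nursing any single VM back to health.

_ids = itertools.count(1)

def boot_instance(image):
    """Boot a fresh, identically-configured instance from an image."""
    return {"id": next(_ids), "image": image, "healthy": True}

def reconcile(fleet, image, desired):
    """Cloud-native recovery: drop failed instances, boot replacements."""
    survivors = [vm for vm in fleet if vm["healthy"]]
    while len(survivors) < desired:
        survivors.append(boot_instance(image))
    return survivors

fleet = [boot_instance("web-v1") for _ in range(3)]
fleet[1]["healthy"] = False              # simulate a host failure
fleet = reconcile(fleet, "web-v1", desired=3)
print(len(fleet))                        # fleet is back at full strength: 3
```

An application that tolerates this kind of replacement sits on the right-hand side of the table; one that needs clustering wrapped around a single long-lived VM sits on the left.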
If you take a closer look at the importance of integrating OpenStack with Linux, you'll see that in a typical OpenStack cloud you have to run at least nine core services, up to twelve if you're including the Infrastructure-as-a-Service-plus elements. Each service runs on top of Linux, requiring a complex set of user-space dependencies, and in addition all the necessary third-party drivers that support the OpenStack services run within that Linux operating system. So customers will be limited in functionality, support and driver certification by the Linux that they use. When does that really become important? It really becomes important when you're moving your production workloads from a traditional stack onto an OpenStack stack. You don't want to have to change your SLAs; you want to maintain in your new-world production the same level of functionality, support and certification that you had in your original production. Now, I'm not telling you that you need to go out and buy RHEL, but what I am asking is that when you look at moving OpenStack into production, you consider this, make sure you're aware of the consequences, and do what you need to do to ensure you're managing production the same way you always have. So I've talked about maturity and things moving into production. Let's look at that from another angle. We've seen what the OpenStack user survey is saying; now let's look at it from a logo-slide perspective, because this is my turn to put up logos. This is a small cross-section of some of our customers deploying Red Hat OpenStack Platform into production today. And rather than take you through a customer-facing deck about our stack, what I wanted to do was share some of the experiences and use cases our customers are having with OpenStack, so you can start to understand how that might map onto your own use cases and environments.
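To make the "nine core services" point above a little more concrete, here is a small sketch of a deployment readiness check. The service names reflect the commonly cited core set of that era (Keystone, Nova, Neutron, Glance, Cinder, Swift, Horizon, Heat, Ceilometer); the exact set varies by release, and this function is an illustration, not a real deployment tool.

```python
# Illustrative only: the core service set varies by OpenStack release.
CORE_SERVICES = {
    "keystone",    # identity
    "nova",        # compute
    "neutron",     # networking
    "glance",      # images
    "cinder",      # block storage
    "swift",       # object storage
    "horizon",     # dashboard
    "heat",        # orchestration
    "ceilometer",  # telemetry
}

def missing_core_services(running):
    """Return, sorted, the core services absent from the running set."""
    return sorted(CORE_SERVICES - set(running))

running = {"keystone", "nova", "neutron", "glance", "horizon"}
print(missing_core_services(running))
# → ['ceilometer', 'cinder', 'heat', 'swift']
```

Every one of those services, plus its user-space dependencies and third-party drivers, runs inside the Linux underneath it, which is the point of the paragraph above.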
We're helping hundreds of customers to deploy OpenStack. And what I find really compelling is that this customer base is very diverse, representing all sorts of different segments of the industry. Equally compelling are their use cases, which are incredibly diverse too. We're not just talking about test-and-dev or other non-production environments; these are deployments and use cases that span all sorts of different environments, including business-critical production workloads. And I know that's a bit of a business buzzword, but come on. This is really quite impressive: code developed right here by this community, solving big problems that real users care about. There's absolutely nothing wrong with that at all. Take FICO as an example. In the States, FICO is best known as a credit-score company. When you, as an individual, want to buy something and need a loan, whether you want to buy a new car or take advantage of one of those furniture deals and get the sofa today but not pay for it for 12 or 18 months, before the retailer will give you the car, the money or the sofa, they go and check your credit score, and when everything's okay, they let it go ahead. But that's not all FICO does. FICO is actually the software company behind all of that: they build predictive analytics and decision-making software. Traditionally, they've gone to large enterprises and helped them manage risk, fight fraud and generally make smarter decisions. And the way they've done that is by going into the customer's data centers, on-premise, and building out a solution. That's great if you're a large enterprise. But FICO wanted to make some smarter decisions of their own. FICO wanted to expand their coverage: they wanted to reach the SMB market and mid-market organizations.
And in order to do that, they realized these organizations couldn't afford for FICO to come in and build something out in their data centers. So they took a look at their product, decided to redevelop it, and decided to offer it as a service. We worked together with FICO, and we built out an elastic, scalable infrastructure on OpenStack and Ceph. This cloud now helps them reduce their time to market by 50%: they can now develop and deploy their analytics solution in just a few hours. At the same time, they lowered their infrastructure costs by 30%. And that was phase one. In phase two, they decided to put a Platform-as-a-Service on top of the cloud they had built, and they did that using OpenShift. So now FICO's customers can go in and develop their own analytics applications using FICO tools. They've managed to shave 70% off the time to value; it's a lot faster for these companies to access the data they need. You've got to admit that's a winning combination: it's faster, it's cheaper, and they've opened up additional revenue streams by opening up new markets. Here's one of those examples I was talking about of business-critical production workloads running on OpenStack. Betfair is one of the largest online gaming companies, and it is the largest online betting exchange in the world. A betting exchange is very similar to a financial exchange, with a couple of subtle differences. It doesn't track the financial markets; it tracks sporting events and other events people like to place bets on, like political elections and those types of things. What they do is find groups of people who want to make a bet, find other groups of people who want to lay against that bet, bring those two groups together, and take a small commission off the top.
To give you an idea of the size we're talking about for the world's largest betting exchange: they have 1.7 million active users; they handle 135 million daily transactions and 3.7 billion daily API calls, which puts them in the same league as Facebook and Twitter; they generate 2.5 terabytes of logs daily; and they run over 500 deployments weekly. That transaction volume is actually bigger than the New York Stock Exchange, the London Stock Exchange and the Nikkei all added together. They built their success by using technology as a strategic advantage to disrupt the betting industry. It's the story of modern business. I'm not going to put up a slide with Uber and Airbnb on it, just in case you were expecting one. But that's what we're seeing: IT as the source of competitive differentiation. In their i2 programme, Betfair have built their next-generation infrastructure, which is now running in production on OpenStack. They've done that to support continuous delivery of their applications, including hundreds of microservices. Within this new OpenStack-based private cloud and software-defined network solution, they've been able to simplify their network operations, so they can now run far more transactions: their transactions per second have massively increased. The developers can now create identical test, dev and production environments without even configuring the physical network. And as you can see from Paul's quote, this impacts millions of active customers a day. This is the kind of real-world impact the OpenStack community is driving. If you want to learn a little more about Betfair, they gave two great presentations at the summit in Austin. Last time I checked, a couple of days ago, the video of their technical presentation was trending as the second most popular video from the OpenStack Summit in Austin. They talk through exactly how they've done it and how they've built out their solution.
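The point about identical test, dev and production environments is the core promise of infrastructure-as-code on OpenStack: one template, launched several times with different parameters, yields structurally identical stacks. Here's a minimal, hypothetical Heat template sketch along those lines; the resource names, image name and parameters are illustrative, not Betfair's actual configuration.

```yaml
# Illustrative Heat template sketch: one definition, many identical stacks.
heat_template_version: 2015-10-15

parameters:
  env_name:
    type: string          # e.g. test, dev, prod
  flavor:
    type: string
    default: m1.small

resources:
  app_net:
    type: OS::Neutron::Net
    properties:
      name: { list_join: ['-', [{ get_param: env_name }, 'net']] }
  app_server:
    type: OS::Nova::Server
    properties:
      name: { list_join: ['-', [{ get_param: env_name }, 'app']] }
      image: web-v1       # hypothetical image name
      flavor: { get_param: flavor }
      networks: [{ network: { get_resource: app_net } }]
```

Launching the same file three times, for instance with `openstack stack create -t app.yaml --parameter env_name=test test-stack` and likewise for dev and prod, produces three stacks that differ only in name, with no hand-configuration of the network in between.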
The other interesting video is about how they got here. They didn't get here overnight; this has been an eight-year journey for them, and they take you through some of the decisions they've made over those eight years. One of the most interesting, I found, was when they put their developers on call. Rather than having the grey-haired IT ops guys on call, and I used to be one of those grey IT ops guys, they put the developers on call for their products. It's quite an interesting change, and they noticed almost overnight that the quality of production code went up. Developers don't like being woken up, it seems. Network functions virtualization: it's a fundamental change in how communication service providers build and deliver network services, and at its core is an API-driven virtualization infrastructure, a cloud. This industry does not do massive technology retooling very often. If you think about the scale, the cost, the size and the risk, you realize why, at best, they do this once every ten years. But here we are at this perfect juncture where OpenStack is helping telcos transform their networks, and I find this really quite exciting. It's not often you get to say we're transforming the global communications network, and we're doing it with open source software. Verizon has launched an industry-leading NFV OpenStack cloud deployment across five of its US data centers. This NFV project began a little before the middle of 2015, and it went from conception to large-scale deployment in just nine months. We worked with our partners at Dell and Big Switch to build a modern network fabric that is resilient, high-bandwidth, flexible, simple and secure. In their own words, they are automating everything, virtualizing applications, and software-defining the network that surrounds all of that, with the ultimate goal of making their environment 100% programmable. And why are they doing that?
Well, they're doing it to drive down costs, increase operational efficiency and really reduce time to market. Volvo IT. Volvo IT is the IT group of Volvo, looking after all the various different elements within the company and delivering IT services. They're one of the companies that realized that to make the transition from traditional to cloud native, they really needed a phased approach and that hybrid angle: the mixed environment I showed earlier, where they've built an OpenStack environment alongside their traditional virtualization environment. And the reason they did it is that they discovered a lot of their developers were using the public cloud, and with that came the demand for private infrastructure to mirror many of its capabilities, from dynamic provisioning to scalability. They recognized that OpenStack was created especially for this purpose. So Red Hat is not only offering enterprise-grade support for the OpenStack project; we've paired it with Platform as a Service. And what they've done at Volvo IT is put what we call CloudForms across the top of all that, which allows them to seamlessly move applications from the traditional space to the cloud-native space. And when I say seamless, I mean in terms of the user experience: the user is unaware of where the service is being delivered from. When an application is repurposed and moved across to cloud native, from an external user's perspective there is no change. And that's one of the key things we need to drive here: making sure that the change management of our people and our processes is as simple and easy as possible, because that is what fundamentally drives the success of OpenStack projects. The University of Cambridge is one of the world's foremost research universities, comprising 31 colleges and more than 150 departments, faculties, schools and other institutions. They have a mission statement.
If anyone is going to come up with a mission statement that challenges Monty's assertion about mission statements, it's probably the brainiacs at the University of Cambridge. Their mission statement is this: to contribute to society through the pursuit of education, learning and research at the highest international levels of excellence. The university is also a global leader in high-performance computing, including its work to create some of the UK's fastest supercomputers. The University of Cambridge recognized an opportunity to extend the reach of its high-performance and research computing by creating a service offering. Very similar to FICO: they realized that if they wanted to cast a broader net, they needed to create a service offering, and they wanted to make it available to a broader scientific and technology research community. So today the Cambridge Research Computing Service is responsible for the hosting, system support, scientific support and service delivery of a large supercomputing resource. Its principal facilities consist of a large CPU compute cluster called Darwin and a world-leading, energy-efficient GPU cluster called Wilkes. The university has identified OpenStack as the next-generation platform for HPC workloads. At the same time, they've also recognized that it's probably not quite ready yet. There are a number of gaps they've identified, and they'd like to work with Red Hat and the community to scope and implement fixes and push them back upstream. And that's the key bit here: they're going to push everything they fix back upstream, for the benefit of the whole high-performance computing community. They're currently hiring developers, and they have founded the Scientific Working Group to foster a community around using OpenStack for HPC and scientific workloads.
If you want to find out more about that Scientific Working Group, it's on the wiki pages of OpenStack.org, with the rather catchy scientific-underscore-working-underscore-group tag. Each of these production examples began with upstream developer collaboration, whether it was TripleO Heat templates or core IPv6 enablement. As a vendor, we work with our partners and our customers to understand the use cases and the requirements needed to bring OpenStack from POC into full production. As community members, we distill those requirements into blueprints and code and work with the community to merge the necessary changes. Ultimately, we're enabling downstream customers like FICO, Betfair, Verizon, Volvo IT and the University of Cambridge to improve their businesses by deploying OpenStack into production. Why does this matter? This is the heart of our community collaboration: the user community, directly or indirectly through vendors, collaborating with the developer community. We are not just here performing a technology thought experiment together; we are here working with industry to rethink and transform how we build and manage software-defined infrastructure. If I take a step back and look at the production examples I've shared, one thing really stands out to me, and that's diversity. These solutions are enabling different use cases for different industry segments. Diverse use cases stretch and grow the code base, with the necessary side effect of improving it, not just functionally but also architecturally. Look at what the University of Cambridge is doing: that will have architectural impacts going forward. And there's no way to sustain those continuous changes without ensuring they fit into a bigger picture. One way the success of Linux is measured is by its ubiquity. I'm not actually sure that's a word you hear every day, but Linux is very ubiquitous. It's running in the world's largest supercomputers. It's on your phone. It's in your car.
It's responding to your web queries. Soon it will be in your fridge. It's everywhere. But it wasn't always. Embracing diversity is going to help OpenStack achieve that very same ubiquity, and that ensures it will still be running ten years from now, solving use cases we don't even know about yet. But of course there's more to diversity. The code must be maintained by a community, and like the technology, the community improves with diversity. We need to ensure we're encouraging new community members regardless of their gender, their race, their sexuality or their age, because all of that broadens our perspective. Ultimately, the community is made up of individual human beings, and the strength of the community is the individual trust relationships we build with one another. It's the trust and respect, not the process, that will sustain this community. So what is OpenStack? OpenStack is a major industry endeavor. It's an impressive number of individual projects, all aimed at some facet of building open source cloud infrastructure. It's a large community rallying around those projects. It's a huge amount of collective experience. We're continually redefining how we define what OpenStack is, from core projects to integrated projects, from DefCore to the Big Tent. The community is evolving this definition, and we have a great opportunity here to establish definitions based on the same community collaboration we use to develop software. The playbooks, the TripleO Heat templates and, more generally, the common deployment patterns we already know: that is OpenStack. At least, it is today. OpenStack tomorrow is defined by our ability to grow and adapt to ever-changing requirements. It's defined by how we onboard new communities of users and developers. It's defined by how we engage with external communities.
You only needed to take a look at the collaboration summit, where it was hard to miss the desire other communities have to collaborate with OpenStack: OPNFV bringing use cases and code, and the Cloud Native Computing Foundation seeing the world through the eyes of a modern application developer. OpenStack is a solid foundation for building cloud infrastructure and for collaborating with industry. It was only a few years ago that OpenStack was just an idea between Rackspace and NASA: to build clouds that are massively scalable and simple to implement. Now that idea is empowering a community to change an industry. But technology alone is just technology. Technology alone does not make change; people do, and by developing technology openly, more people can participate. Thousands of engineers and hundreds of companies are working together to help our collective customers move forward, to help solve the unsolvable, at a rate of innovation that could never be accomplished by any one organization alone. Together we press forward. Together we change the game. Together, as a community, we will take OpenStack from being the best open source cloud solution to the best cloud solution that just happens to be open source. We're at the booth at the back if you'd like to discuss with Red Hat any of the things you've seen or heard today. And I just want to give a little plug to Angus, who's running a TripleO session, far more technical than anything you've seen from me this morning; that will be in the studio rather than here in the centre. Thank you very much for your attendance. It's been a pleasure to be here at the first OpenStack Day in Prague. Thank you.