All right, we'll start in two minutes. I just got set up. Good morning. Thanks for coming. Good morning. Good morning. So this can be as much of a conversation as you want it to be. So I kind of wanted to see a show of hands. How many of you already know what Cloud Foundry is, in some sense of the word? Awesome. And how many people here have heard the word and kind of know something about it and wanted to learn more about it? How many people have used it? How many people have deployed an app on it? You have not used it. I have. But you don't get all my experience unless you're going to give the talk.

So I'll start. Cloud Foundry is an application platform. You develop your application in whatever your favorite application development framework is, and then you use Cloud Foundry to do a CF push. You push it out. You deploy it. You can monitor it. You can manage it. You can scale it. You can restart it. You can kill it. So it's the platform that lets you deploy your application so the application developer doesn't have to worry about the runtime part.

And to start out with why Cloud Foundry, I actually wanted to start with my personal story about why I'm on Cloud Foundry. I started out in open source software because I think we can create much better things in the world if we all collaborate together. I think it was in college. We'd sit there, we'd have 100 people in the classroom, and we'd all go write the same spell checker. And I was like, this is really silly. Why am I writing a spell checker? And the guy next to me is writing a spell checker, and the guy next to him is writing a spell checker. So to me, the idea of open source, where we could all collaborate and build on each other's ideas and make things even bigger, was super appealing. And that's why I went into it. And then, as many of you may know, my real first start in open source software was with GNOME. How many people are familiar with GNOME? Awesome.
And our goal was to have a free and open source software desktop. At one point, I was executive director of GNOME, and I said, we have a free and open source software desktop. And I started talking about the fact that the web wasn't open. We were storing all of our data in these data stores that we didn't own. They weren't open formats. You couldn't easily pull your stuff out. You couldn't move it from one photo management tool to the other photo management tool.

And so I ended up at Mozilla. And at Mozilla, we were working on making sure that the web was open for everybody and that there were open formats and open standards. This is a group in Namibia, I think, if I have the country right. And they're working on Firefox OS phones. Because a lot of the phones that are going into developing countries aren't open. They're channeling the web through Facebook or through another platform. So we were working on making sure that the next generation, the next billion people in developing countries, had access to the entire web and had that freedom to create.

And so then I saw this new thing coming up. And it's really exciting. And it's a little bit crazy still, I think. And there's a lot of work to be done there. And that's the cloud. I think the way things are going now, if you're an individual app developer, you really can't create an app that just runs on the laptop. I mean, there are a few that still work. There are a few on the desktop. But most apps that are most meaningful in our world today run on the cloud, are available 24 by 7, scale, and are reliable. And so this cloud space needs to be open. It needs to be standard. And we need to make sure that application developers everywhere can use it. Or we're going to miss a whole lot of people and a whole lot of great ideas.

So how many people remember the year of the Linux desktop slogan? 2001, do you remember that one? And then it was the year of the Linux desktop in 2007.
And then it was the year of the Linux desktop in 2010. And I think I saw something written late last year that it's now the year of the Linux desktop. And we make a lot of fun of that. But I actually think, by making so much fun of the year of the Linux desktop, we miss some really key innovations that came out of all that effort.

So for example, One Laptop per Child. They were built on Linux and open-source software. They said they were going to bring a laptop to every kid in the world for $100. And they didn't exactly accomplish their mission. But I think along the way, they really influenced the whole netbook market. So all the netbooks that were created went way down in price. They actually came down to $100. So the Linux desktop changed the world. That's just one example. Another example is the Raspberry Pis, which were also built off the Linux desktop model. So now you can get a $35 little gadget that you can turn into a desktop, or you can turn into a monitoring system for your house. So there were a lot of innovations along the way that we missed by saying it wasn't the year of the Linux desktop.

And so this new thing that's coming, this new run-it-in-the-cloud, I think it's really huge. It's going to be the next wave. It's going to be something like electricity. It's going to be an innovation that means that much to our society, maybe like the internet. So it's the year of the cloud, but not like the year of the Linux desktop. And we've all heard the word cloud for about five years now. And probably we'll hear it for another five or ten. But it's really the time that we're starting to see a lot of people work together. And it's both individuals and companies. So the Cloud Foundry Foundation, I'm missing a few logos, but we have 53 companies. Most of whom are users, not actually developers. They're not making money off Cloud Foundry directly. They're not selling it. They're actually using it in their businesses.
And they want to be part of this open source movement. So I think this is the time we can bring individuals and companies together in a way that's reminiscent of the Linux kernel and when Linux actually got started. And at the same time, and the Linux kernel probably felt this way too, the cloud space and Cloud Foundry are kind of too big for individual minds to conceive of and to know all the pieces and parts. And so it's our job, those of us that are good at that or those of us that want to use it, to make sure that all the pieces work really well together. For example, all the software has to work together and it has to be scalable.

So this is a picture of Oktoberfest in Munich. Who wants to guess how many beers they serve during that weekend? How many? One billion? You're close, but not quite that much beer. Any other guesses? So it's 8 million beers that get served there that weekend. And so they've learned how to scale their beer pouring and their beer serving to 8 million.

It also has to be really reliable. It has to be up 24 by 7. So developers should be able to develop an app and deploy it without having to have a team of 20 people to keep their app up and running 24 by 7. There should be some service or some way that we can enable individuals to create. And it has to be able to be monitored. And then all those pieces have to fit together. So when you take all those pieces, the ones that schedule it, the ones that deploy it across VMs, the ones that monitor it, the ones that can kill it and start it, they all have to be glued together. And I'll get into it in a little bit, but companies often end up with teams of 20 people to do that work. And we want to make sure it's accessible to everybody.
So what Cloud Foundry does is take all those things that you would need to deploy an app: the thing to deploy it, the thing to monitor it, the thing to schedule it, the thing to scale it up. You know, it's Christmas time, and your app that refers people to Amazon for affiliate fees should be able to immediately scale to the size it needs to be, and you should be able to eventually kill it if you need to. And what we're seeing is both individuals and companies are seeing really huge exponential changes. So this is SAP TechEd, I was there, and they had these real examples of how much people had saved or how much faster they could scale their app. And these are real customers that have their names on the bottom. And I see this across the industry when I go to the vendors that sell Cloud Foundry and are touting this stuff.

But our vision is that there should be a cloud platform that can be private, like you should be able to deploy your own Cloud Foundry platform and run your own app, or public, where there are public instances of Cloud Foundry you can deploy your app to. And you should be able to move your app between them. So we don't want to create a monopoly where one vendor is the only place where you can deploy your app. We want to make sure that a lot of people can live in this ecosystem, and that there's a vibrant ecosystem of apps that can move between vendors, and various vendors that offer different services at different levels for different things.

And just as a couple of examples, has anyone heard of Comic Relief? It's a really big nonprofit in the UK. They do a number of good-works things. The one I always remember is they help babies in Africa, and they have really heart-rending stories. They do all their fundraising in one day. It's called Red Nose Day. Don't ask me why. And that day the public television service helps them advertise across all the UK, and everyone makes their donations, and they bring in a billion dollars that day.
And they're doing about 400 transactions a second. But they do all this during a seven-hour period. The rest of the year, the other 364 days, they don't raise much money at all. And then during that seven-hour window they have to bring in a billion dollars. And they really can't miss any part of that window, or they've just missed like one seventh of their fundraising. And a small development team of about five people from a company called Armakuni used Cloud Foundry and set up their platform. And so they run on Cloud Foundry all year round, but during this one day they manage to scale from a few transactions a second to like 400.

Another example is GE. They're bringing all of this technology to the industrial internet. So they're building big wind farms, airplanes, and all these airplanes have tons of data. So they're reporting back — like the wind farms report back, different parts of each windmill report back, every second or a couple of times a second. And they take all that data and they're able to make decisions about where the airplane flies, or what the wind speed is, or what the gas mileage is, or what the wind farms are doing, in real time, with all of this data. And it has to scale depending on what's happening in the moment.

And so how does Cloud Foundry fit in here? Since the year 2000, 52% of the Fortune 500 companies have dropped off the list. And I heard an even more startling number: since 1980, like 80% of them are different. So companies haven't learned how to ramp up, become number one in their business, and stay there. And so one of the things that they're missing right now — this is a book by Rita Gunther McGrath that's actually a good read. She says that up until probably 2000, what companies did was discover their niche, develop it, and dominate their niche. I pick on HP because I worked at HP.
So HP was the printer company, and they did printers better than anybody else, and they dominated the market. And if you wanted a printer, you'd buy HP, and that was just obvious. And they could kind of ride that wave, because if you're gonna buy a printer, you're gonna buy HP's printer. That's no longer the case. Now you have a customer, and that customer is loyal to you as long as you adapt to their needs.

So take a guess: how many times a day does Amazon adjust their prices, across their whole platform? Way more than 10 times. They adjust their prices two and a half million times a day. Yeah, across all their products, they're constantly adjusting their prices, fluctuating by what they're seeing in the market, what people are asking, what the providers are doing, what they're adding. They're also at the same time updating — I've never managed to catch one — but they're updating their carts or their buttons or their look and feel. You know, from one day to the next, you can kind of see some changes in Amazon. So they're constantly adjusting to their customers. So they're not staying in one market. They're dominating all markets, but they're constantly adjusting to keep those customers.

And so that continuous innovation cycle is what most companies and most businesses are gonna need to adopt: that ability to constantly know what their customers want next and to be able to change with it. And that's gonna require some culture changes. So you're no longer gonna have great big teams, you know, a lab of 100 people working on — picking on HP again — HP-UX. You're gonna have small teams. I'm getting ahead of myself, but the guess is you probably need a team that two pizzas can feed.
And those teams need to be able to work really quickly and communicate effectively with other teams without having to spend a lot of time in meetings discussing what my product does and what yours does and how they're gonna interface. So it's continuous integration. How many people have some kind of integration place at your work or your project where all of you can check in code in the same place? Probably most of you, awesome. And how many people have continuous deployment? So when you check in code, it actually goes to market, to your website or to your product, within say a month, within a week, within a day? Okay, the rest of you are stuck in water-scrum-fall. So you have the very agile development cycle, but it's not able to be released quickly because of whatever QA testing or release process you have. And so it kind of looks like this: you develop something really cool and nobody can see it, or you can't see what you need to see. And so we want to help people get to continuous integration and continuous deployment — that's kind of the area we're focused on — so that they can have continuous innovation and keep up with the market and keep up with their customers.

Another way to think about it: how many people used to work in an organization where software was broken up into front office and back office? And you had very clear roles, and you actually sat in different buildings or on different teams. This is a PricewaterhouseCoopers diagram, by the way. And then you moved to something like the enterprise service bus, where there's a little more interaction, a little more definition about what you are and how you interact with other people. And then we moved to this new model of APIs, where you shouldn't have to have a meeting with the group whose APIs you're using. They should say, here's what I do. Here's my API.
And then your application can call it with a very well-defined "here's how I call it, here's what I can expect back." And then you can fail gracefully or move on to the next thing if it doesn't work.

So getting back to the two-pizza team idea, this is Melvin Conway. And he said that any product will look like the organization that built it. So however you set up your teams, your product will end up looking like that. This kind of gives me nightmares at times, in certain places I've worked. Because however you set up your organization, your product is gonna mirror that in some way. Those are the people that talk together. Those are the people that work together. And so here's the two-pizza team. And I wanna point out, if Caleb is on your team, you probably have a two-person team, because he wants his whole pizza himself. I think there's somebody at Amazon that coined that idea.

And so — maybe I'm kinda saying the same thing over and over again, but it's a really important culture change and really important to get why we're doing what we're doing at Cloud Foundry, and I'm gonna give you a demo and talk about that in a minute — you used to have an application world where you probably had a technical architect or a chief technical architect who knew how everything worked. And everybody in the organization was a little afraid they might leave, and then you wouldn't know how something worked. Now you move to a world where everyone manages their own application or their own microservice, and they interact with other people through APIs. And so no one really knows how all the pieces work, but they all work well together, because there's a clear definition of what they do and how they interact. How many people feel like they're in the monolithic layered world these days? Cool, a lot of us still are. And how many people feel like they're at least transitioning to microservices or trying to make that leap? Awesome.
And so if anyone wants to chime in, feel free, if you have questions or you have stories you wanna share. I'm down here instead of up there so we can try to do that.

So microservices, this new world, is great. And actually, to be honest with you, I feel like microservices are the great new thing — like now we're not supposed to eat flour. It's better than what we had, but I don't think it's the end-all be-all. So I think we're still evolving, just like diets are evolving. But they're great. But you need to make sure that you can rapidly provision them, that you can create them very quickly, that you can deploy them quickly, that you can update them in place without messing with anything else around them. You need to make sure you can scale them quickly. And you need to make sure you have a culture that works really well with them.

And if you ask anybody about microservices — when I started in this area, they all pointed me to 12factor.net. They say that's the holy grail, the Bible that will tell you how to do microservices. I think it's a really awesome primer. I don't think it's gonna teach you how to do microservices by itself. Because there's a whole bunch more. I'm not gonna read through these, but if you create microservices, you need to make sure that what developers are developing is actually what's getting deployed, that those environments look very similar. You need to make sure you can scale up really quickly. So in addition to just creating this thing called a microservice, you have to make sure that the environment around it also works well with microservices. And that's kind of the whole DevOps culture that they talk about. How many people have a group in their company that's called DevOps? I'm just kind of curious, yeah? And how many people would consider themselves a DevOps person? Cool. You two? No. But you know that's not enough.
And you still need to know how you're gonna recover from failures, you need to be able to isolate resources, you need to be able to figure out how you're gonna deal with data — because these microservices don't carry their own data with them, so they have to have persistent data somewhere. So there's a lot of issues to be figured out. And a lot of technologies. So Borg was Google's cluster management software, and a lot of really cool technologies spun out of that, or around that time and around that space. And that's what we're seeing people play with now. So we're seeing people play with Kubernetes, with Mesos, with Docker. I heard the Docker talk here was so full you couldn't get into it. So these are the technologies that people are trying out as they move to microservices.

And I wanted to include this because I think it's really cool. How many people here are open source fans? We're at SCALE, so hopefully everybody. So if you look at this, we now have open source all the way from the hardware stack all the way up through the software stack. So the whole stack has open source software. But there's another change here that I think is really cool. These are not company names along the stack — these are project names. So now instead of having technologies focused around companies, we have technologies focused around projects, all the way up the stack. And I think that's pretty new. I mean, we've been working on it for a long time, but that ability is pretty new.

And we've also changed the way we measure things. You can see how the world's changed by looking at the units of measurement. So we've changed from infrastructure as a service to platform as a service, or to the app platform. And so we've gone from measuring how many VMs you have to how many microservices or apps you have out in the world.
And you don't have to worry about the number of VMs — the application platform should spin those up automatically and send your app where it needs to go. And the reason these application platforms can do this is because they have constraints. Because they have constraints, they can make promises. So because they say you have to use one of these 50 languages, they can say: if you're using one of these 50 languages, we can handle the build for you. So they have constraints so that they can make promises. And then they can make promises like this one. This is one of the Cloud Foundry developers, and he wrote a haiku, I'm pretty impressed: "Here is my source code, run it on the cloud for me, I do not care how." Like, just make the thing work.

And so what we're trying to do is turn that environment on the left, where you had to have all your libraries, your dependencies, your zip files, your jar files, into one where you just have your app with a load balancer and a database, and all you have to worry about is your app. And the platform should automatically deploy it, figure out what build environment you're on, push it out there, scale it, add more instances when it needs to, and kill it when you tell it to.

And so this is CF push. When you ask how you do Cloud Foundry, everyone says CF push — that's the command you run the most. And there's a whole bunch of variables you can add to it and stuff. I'm gonna give you a demo in a sec; I just wanted to walk you through what I'm gonna demo. So you do a CF target, where you say, here's my Cloud Foundry environment — I'm using IBM Bluemix, or I'm using Pivotal's, or I'm using the one I set up on my servers — that's my target environment where I wanna run my app. Then you do a CF push of your app, which is literally just the files that are part of your application.
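As a rough sketch, that workflow on the command line looks something like the session below. The API endpoint, app name, service name, and service plan are all made up for illustration, and nothing here runs without a real Cloud Foundry instance to target:

```shell
# Point the CLI at a Cloud Foundry instance and log in
cf login -a https://api.example-cf.com -u me@example.com

# Push the app -- just the files in the current directory
cf push hello-demo

# Described next: create a database service, bind it to the app, restage
cf create-service mysql free demo-db
cf bind-service hello-demo demo-db
cf restage hello-demo

# Scale out when traffic spikes
cf scale hello-demo -i 8
```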
And then you can create services. So if your application needs a database or a persistent data store, you create a service. Then you bind your app to the service so that they know to work with each other and talk to each other, and then you start your app. And if you want, you can scale it, or you can set it up to automatically scale.

And so what about containers? A lot of people get confused between Cloud Foundry and Docker and Kubernetes — how all those pieces work together. There are containers in Cloud Foundry. But what teams have done is go and play with all these technologies and put them together, and they're learning in the process: they learn about containers, they learn about schedulers, and then they end up with a team of like 20 people keeping those things running well together. And so what we've done is taken the best of those, given you as many choices as we can, packaged them all up, and done that for you. So it's a prescriptive environment instead of a self-assembled environment. So you're not trying to put them together like that.

And okay, for you really literal folks, this is not a one-to-one mapping, like scheduler to core services. I'm just trying to show a little bit of what's under the covers here. So Garden is the container management. There are buildpacks that have all the information about — you know, they can figure out if you're using Python or Go or Ruby and put your application together. Diego pushes your apps across VMs. Gorouter does a lot of the routing and communication. And what I wanted to show here is that there are these technologies on the right that we're playing with, that other people are playing with. And I'm gonna take Docker here as the example. So a while ago, you may have heard, Docker and a bunch of other organizations got together and created a standard called runC.
So when they did that, we took our container technology, Garden, and said, okay, now there's a standard there. We'll make sure that all of our users — you can't see it, but the runC logo is in there — all of our users also have access to the standard. So you can use Docker containers in Cloud Foundry, or any container that uses that standard format.

And this is basically what a container is. A container is a set of isolation rules. So it has a process ID, a user, a network ID. And then it also has a file system; we take that file system and a set of processes, and we call that a droplet in Cloud Foundry. And so that droplet is your application. It's enough of the operating system, plus the application and the files it runs on, to create that droplet, and that's what Cloud Foundry manages.

And so containers are awesome, but they're just really not enough, because you need a platform to run your container on. And even if you don't think you have a platform, you have a platform. So how many people have a platform that makes them feel like that, or makes some team in your organization feel like that? You do.

And so this is the CF push that I talked about earlier. These are all the things that happen when you do a CF push, and this is how it's bigger than just containers. So when you do a CF push, it creates your app, it uploads all the app files, it stores all the metadata, it stages the app, it creates that droplet that I talked about. I won't walk you through all of that; I'm gonna show you. I should have shown you first and then talked about it.

And then I wanna talk a little bit about buildpacks. So you create your application for Cloud Foundry in any language you like — and I say any language, and I mean just about any language; someone told me we have a COBOL buildpack. And when you do a CF push, we load all the buildpacks, and the buildpacks actually have three scripts to them.
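As a toy stand-in for the first of those hooks, the detect script, here is a sketch. The bin/ layout follows the buildpack convention; everything else (the file names, the requirements.txt check) is made up for illustration and is far simpler than a real buildpack:

```shell
# Create a fake app directory and a toy detect hook
mkdir -p myapp bin
touch myapp/requirements.txt

cat > bin/detect <<'EOF'
#!/bin/sh
# $1 is the app directory handed to the buildpack.
# Succeed, and print a name, only if the app looks like a Python app.
if [ -f "$1/requirements.txt" ]; then
  echo "python"
  exit 0
fi
exit 1
EOF
chmod +x bin/detect

./bin/detect myapp   # prints "python"
```

Each installed buildpack runs its own detect against your app, and the one that claims it goes on to build and run it.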
They try to detect what language it is. They — I just forgot the second one. Then they build the application, and then they execute it. So each buildpack knows how to do that. So they all run the detect: if you don't tell it what language you're using, each buildpack will check your application and try to detect if that's a language it should be involved in. So the Python buildpack will say, is this a Python file? Okay, then run me. And it'll put together all your dependencies. If it's a language that needs to be compiled, it will compile it with your dependencies. If it's a language like Python, it just takes the files it needs along with the dependencies. Oh, those are the three: it detects, it builds the thing if it needs building, and then it passes it on to Cloud Foundry to be deployed on a virtual machine.

And it's kind of the same thing with services. So if you wanted to add a database to Cloud Foundry, like a database service — not use one, but create one — you would create a manifest that tells it, I think, five pieces of information. It adds catalog information, so it's like, I'm a database, if someone wants to use me. It tells Cloud Foundry how to provision it if someone calls it — so how it would provision MySQL, if that's what it was. It tells Cloud Foundry how to bind it: if someone needs that database, how do they connect it to the app that's using it? And then it tells it how to unbind it and how to kill it, so that it can easily be scaled, put in, or removed from the running process. And so this is all part of turning that mess of files you have to worry about into one where you only worry about your app.

So I'm gonna do the demo. And I don't like live demos, so I apologize — I did it ahead of time, and here it's running. And so I'm just showing the directory that it's in. And I did this for real on my laptop.
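For reference, a hello-world Flask app like the one in this demo can be tiny. This is a sketch, not the demo's actual code; the route text is made up, and the PORT and CF_INSTANCE_INDEX environment variables are the ones Cloud Foundry sets for a running instance:

```python
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # CF_INSTANCE_INDEX shows which of the scaled copies answered
    index = os.environ.get("CF_INSTANCE_INDEX", "0")
    return "Hello, World! (instance %s)" % index

if __name__ == "__main__":
    # Cloud Foundry tells the app which port to listen on via $PORT
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Alongside that you'd have a requirements.txt whose only line is `Flask`, which is exactly what the Python buildpack's detect step looks for.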
So you can see I have a Python app, HelloPie. I just showed you what's in it. The only dependency is Flask. And then I show you the requirements file here. You can see the only requirement is Flask. And then I'm doing my CF push. I limited the memory — I don't know why, because it's like a Hello World app — and I called it demo for you. And so it's creating the app. It's binding the route where I had set it up to push; this was Pivotal's Cloud Foundry at the moment. It uploaded my files. And then it's gonna start my app and look for the buildpack. So it's creating a container to put it in. Now it's downloading all the buildpacks, because I didn't tell it I was using Python. So it's gonna download all the buildpacks and try to figure out for me what language I'm using. And so you can see it figured out that I needed the Python buildpack, and it installed the Python runtime. And then it's creating this droplet for me, which has my application, the operating system, and the files, just for my app. And then it uploaded it.

And now, for some reason, when I ran it this time it took a couple of tries to get my instance running. It's only a Hello World app, so I'm not quite sure what happened there. And so it started my app, and then it tells me that it's running. It's not using any memory yet. And then I always do a CF apps, because then it gives you a URL, and my Hello World app is actually live on the web as soon as I've done that. So I'm going to Firefox. I put that URL in, and you can see my Hello World app. Any of you could do this. You can set up a trial account in like five minutes, and you can get this done — download any demo app.

And so now I said, you know, it's Christmas time. I'm sure everyone's going to use my Hello World app. So I'm going to scale it up. I want eight versions of it. So I did the scale with the dash i for eight instances.
And you can see, if you start refreshing, there are actually eight instances of my Hello World app. So if everybody wanted to check it out, I could scale up to meet that. And so when I did this, this URL was live.

And then I said, what if we added a database? So I created a service, a ClearDB database. And I made some error there, so I'm going to do it again. So I'm just creating a service, and I just tell it what database service I want, and I tell it what I want it to be called. And it creates the database in a second. And there are lots of databases already built into Cloud Foundry, so you could use any of them. And databases are often used for persistent data, because your application itself doesn't store any data. And then I'm binding that database that I created to my app. So I'm saying bind my app to demo DB, which is what I called my database. And I didn't actually do the restage here, I think. And then I'm going to go in and turn on this get-environment-variable bit, just so I can show you that the database is actually live. My very complicated demo here.

And you can cheat. If you just want to try this out, you can write your own demo, or there's a whole bunch on GitHub. If you just Google Cloud Foundry demo, you can download somebody else's demo and play with it. And so then I do another CF push, because I've added another service. And it redeploys my app for me. Updates it in place. So it stops it and it starts it. Now here's where, since I know I'm just using Python, I probably should have just told it I was using Python, because then I wouldn't have to download all the buildpacks again. In this case, it's only a few seconds.

Any questions? Yeah. Where do you get the CLI? Yeah, you can download the command line interface to your laptop. And then you do the — sorry, on the spot, forgive me — you do a CF target. You tell it which server you're using, so which Cloud Foundry instance you're using.
Cloud Foundry won't actually install on your laptop. We've tried, and you can get it to work on your laptop, but it's kind of big for a laptop environment. So if you're just demoing it as an app developer... you can see there's a whole bunch of instances running now. And again, there's the URL; that's my actual live app. And if we go back there, you can see that we added the database instance there. So you can run Cloud Foundry on your laptop, though it's a little difficult. You can install Cloud Foundry on a server in your environment and then tell your Cloud Foundry command line interface to work with that one. Or you could sign up for a trial from, like, SAP or IBM or Pivotal and just use theirs. Could you deploy it on AWS? Yeah, so Cloud Foundry works on AWS, OpenStack, a number of different infrastructure platforms. Any other questions on this? Yeah. From the same source code, does everybody get the same binary? You never see your binaries. But for anybody doing the build? I think so, but I can verify that for you. What's the case that you're worried about? Just to know that, starting from this source code, the binary that's running came from it. Oh, so security. Yeah, yeah. Typically you would build it and then you would be providing this URL as a service to somebody else. So typically you wouldn't be providing your Cloud Foundry app to others and expecting them to run it in their environment; you would be hosting your own app for them, but... But if we build it one day, and then... Yes, making sure... And we rebuild it a month later, is that the exact same thing? I think so, from some conversations I've had, but I'm not 100% sure, so I will definitely follow up with you. Certainly you could specify the versions. Yeah, you could pin it all down. Deterministic builds. It's a tough problem. Very, very difficult. I can put you in touch with someone, or I can just find an answer and tell you. Do you use Cloud Foundry?
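The "tell your command line interface which instance to work with" step from the talk looks roughly like this. This is a hypothetical session: the API URL, user, org, and space names are made-up placeholders, not from the demo; substitute the values your provider gives you:

```shell
# Point the cf CLI at a Cloud Foundry endpoint (URL is an example).
cf api https://api.example-cf-provider.com
cf login -u me@example.com -o my-org -s development

# The demo's workflow, reconstructed:
cf push demo -m 128M     # push with a memory cap, as in the demo
cf apps                  # list apps and their routes (URLs)
cf scale demo -i 8       # scale out to eight instances
```

From that point on, the same CLI commands work whether the endpoint is a hosted trial account or a Cloud Foundry you installed on your own servers.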
Nobody here? I usually recruit all the Cloud Foundry users and make them sit in the front row and help me answer all my questions. Any other questions? Yeah, right, deploying Cloud Foundry. Yes, if you go to cloudfoundry.org/docs, there's lots of information there, and then we have a mailing list called cf-dev, and there's a Slack channel, slack.cloudfoundry.org. All those places. The documentation should all be there. If you run into holes, please file bug reports. And then those two, the mailing list and the Slack, have really active people that can help answer those questions. I'm curious, how many people would see themselves using their own instance of Cloud Foundry in their company, like installing it themselves on their own cloud environment or in their own company? So a couple of you, cool. And how many of you would see your company contracting with Pivotal or IBM or SAP or Swisscom? Both? Okay. You use Cloud Foundry already? Yep, cool, yeah. Is it really for enterprises, or what about if I have an individual website? Is it a good use case to use Cloud Foundry for that, or what would you recommend? It's a good use case for both individuals and companies. Right now I would say our most visible users are companies who are seeing these tremendous changes. Like they have this real problem, that they need to update their prices two and a half million times a day or something, and Cloud Foundry helps them scale to that level, and they replace 20-person teams, or more, with Cloud Foundry. But I think the technology is now also super useful to individuals, especially individuals who couldn't have a 20-person team, who can now deploy an app that's available 24 by 7, scales, restarts itself when it fails. Like that potential wasn't available to individuals before. So that's why I'm here at SCaLE. Like I think there's a huge message out there, right?
We want to empower those individual application developers as well. Any other questions? Anybody have anything to add? So, AWS provides a lot of services, and it provides infrastructure as a service. Cloud Foundry provides the platform as a service, that application management thing. So it offers a lot more assistance to you in managing your app, making sure it's up, redeploying it, and creating the VMs for you. So you actually put Cloud Foundry on top of AWS, for example. And if you want to add to that, anyone can add to that too. The databases do this as well. It's not quite that simple, because you want, like, one copy of your data, but they also scale. Yeah, you. If you deploy Cloud Foundry such that, you know, people who work for you are going to be able to deploy applications, how much infrastructure do you burn on that left side? Like, what is the actual... is that all going to be located on a single node, or is there like 15 of them? Like what's the overhead of just running Cloud Foundry before the work itself? So it can all run on a single node. Probably not how we'd do it. Yeah, probably not. So you don't need, like, a whole bunch of machines to run Cloud Foundry, if that's what you're asking. Like, you can run a single node. Is the overhead that little? Yeah, and that's why I think that's the power, right? Like once we can explain to people the promise that it makes: you don't have to figure out all those pieces. Like you probably still need to understand how they work, because that's the kind of job we're in, but you don't have to make sure they all work well together anymore. But you can run it on a single server and it'll work. How many nodes you add depends on, like, your redundancy and what your application needs. I mean, you can make it work on a laptop if you really tweak stuff. So I don't know at that level. Like, you're asking where it builds versus where it deploys.
So it's going to build it within a virtual machine; it's going to create a virtual machine and build it there for you. But you don't have to worry about which virtual machine it is. Like, if something happens, you're going to say restart my app, not restart my virtual machine. I don't know. Yeah, yeah. Can somebody write down my questions for me? So I can follow up with people. Thanks. Yeah, yeah. No, no, the old ones are gone. You would have to rebuild. Yeah, yeah. So it's all app based. So it has containers, we call them droplets, the container with your app information, and they run on virtual machines. But the only part that you interface with is your app, which gets built into a droplet and run in a container. Does that answer your question? Well, I guess. Yeah, so once you're using it, you're not going to think about it running in various virtual machines. You're going to think about how many instances you have running. And then BOSH and other parts of Cloud Foundry make sure that it's spread across different virtual machines or different systems and that it's redundant. Like, you don't have to worry about that. Even though you want to worry about that, I guess. Yeah. Are you still serving requests and switching them out at the load balancer? I don't know. The way I'd know how to do it would be to create new names, but I don't know if you can do that at runtime. Yes, well, the databases are your persistent storage. So the apps don't have dedicated persistent storage per app, but you store data to the databases. And I think that's a problem that's still being worked out in the space. So there are active conversations about persistent storage: where should it live and how should it work? So if it's a space you're interested in, now's the time to join those mailing lists and participate. Or you can wait and smart minds will figure it out, hopefully.
You mentioned OpenStack, right? So I'm not the OpenStack expert. So Cloud Foundry deploys on top of OpenStack, but I wouldn't be able to answer that question for you. And if you do deploy Cloud Foundry on OpenStack, Cloud Foundry is the one that requests those virtual machines; you work in your app space, not with the virtual machines from OpenStack. Any other questions or input from anyone? So, yeah. If you have, say, 10 spare servers, do you need to set up Cloud Foundry on each of the 10 servers? No, once, and you tell it about the 10 servers. It creates those VMs on those 10 servers? Yep. And deploys within them? Yep. Yeah. And it load balances too, so if you spin one up, it can just move them around. Yep. Those can't be bare servers; you need, like, a whole stack. Yes, you need OpenStack or something on them as well. So you need the whole stack, and Cloud Foundry is kind of the top layer of the stack. Yes. Okay, so, okay. You would have to install it, yes. You can do it first and then... Yeah. Yeah, you can set all sorts of different metrics to alert on for different things, so yeah. Yes. There are log files and there's messaging. I should probably repeat the questions. Has everybody been able to hear them? I'll start repeating them. All right, so we need the platform. We did a demo. So I think I've said this: Cloud Foundry can make a promise to you because it offers these constraints, but what we wanna do is be able to make a promise to the app developers that their app will run, that it'll scale, that it'll be monitored. And that allows us to create this ecosystem of apps that can be moved around, of people that can move between services. And what I wanted to point out again here: these are the Cloud Foundry Foundation members. The Cloud Foundry Foundation is a nonprofit that hosts the Cloud Foundry project, but there are a lot of companies here that are users. And so I think that's really key.
Like, it's a key moment, because these are companies that are willing to invest in the open source software that they're using. So these are companies that want to come and have a say in the requirements, and they want to help develop it. And so we're getting news articles like this, that these companies are joining the foundation because they need the software and they want to have vendors that provide it for them, but they also want to make sure they don't have vendor lock-in. And so what we're trying to do is make sure that we rebalance the system so that the providers can still make money and still provide a service, but that we're also making sure the voice of the user is heard, even before they really understand the technology or know how to actually work on it. And so what we're doing is building individual, we're calling them SIGs, which is kind of an overused word, I think, in the space, but special interest groups for the different vertical platforms. So like, how many people here are in the financial industry? Anybody in the telecom industry? Awesome. What other industries are people in? Healthcare. Nothing they're willing to share, CIA, secret government stuff. Government is, it's actually really cool. The US government, the South Korean government, the UK government, all of them are using Cloud Foundry, and they're willing to talk about it. The US government actually created, it's docs.cloud.gov, I think. Anyway, they have their own documentation site where they documented how they use it and how other US government organizations can use their instance of Cloud Foundry. And so we actually created these SIGs, and this was the very first one we held, last year, early last year, in New York City. Nobody told me when I went that, when you go to New York City and you go to Wall Street, you're supposed to wear a button-up blue shirt.
So I was not only the only woman in the room, I was the only one not wearing a button-up blue shirt. But it was really interesting. They all talked about what they're trying to do in the space. They were really open to sharing with each other what they had tried and what was working and not working, in a way that I really haven't seen in this kind of industry before. And if you're curious, my favorite example of what in the world banks need from Cloud Foundry that's different from anybody else: they needed much more specific role-based permissions. So, like, someone who can start a service is not the same person who can stop a service in the banking industry, due to all the financial regulations. And then another thing that we've done is we're creating dojos. They're usually hosted by a company, one of the Cloud Foundry Foundation companies. And these dojos are places where you can apply, and you come for six weeks. So Cloud Foundry uses the pair programming model. How many people have worked in a pair programming type environment or tried it? Okay, so all of our code is pair programmed. Usually the people sit in the same room if they're in the same company, but we've also managed to do it virtually. And so dojos are a place where you can come for six weeks, and you're paired up with someone, or maybe several someones, who are more expert in Cloud Foundry. And by the end of the six weeks, it's expected that you are now a Cloud Foundry contributor and know enough about a space or two to be able to contribute. And typically it's someone who works at a company that's gonna use Cloud Foundry. They go back to their company and take that knowledge with them and help spread it, pair with people in their company. So it's kind of a new model, hoping to bring more companies and organizations into this open source development model. It also has its challenges, like distributed work environments.
And then we also added certification, which is, I mean, certification isn't new, but it's kind of new in terms of open source software. So we have these large companies that want to use Cloud Foundry, and they wanna know that the version of Cloud Foundry they're using is the Cloud Foundry. And, like, if they switch providers or they downloaded the open source one, that it would be the same thing. And so we added certification, and a bunch of our members were at our launch in December and are now offering certified Cloud Foundry as part of their products. Like, if you get Swisscom's, it's Swisscom certified Cloud Foundry. Here I talk a little bit about what certification means. You have to certify the core. You have to decide what an extension is. So for people developing APIs, when you download those APIs, are they also certified? Do they run against the certified version? And then I wanted to end on, I think, the three primary ways you can try out Cloud Foundry. So, before I tell you the three ways, I'm curious: all of you are here to learn about Cloud Foundry, so with the little that I managed to explain, if you were gonna walk out the door and go try to use it, what's the first thing you would do? You would look for a Cloud Foundry GitHub, you would install Cloud Foundry itself? What would you do? You would install Python? Cool, write your own app. Anybody else? Create a demo account? Cool. Anybody else? How many people would go try to install Cloud, oh, yeah. So, another one on a trial account. How many people would go try to install it themselves first, and install all of Cloud Foundry? Okay, about maybe a third. How many people would go sign up for a trial account? Most people, half the people. How many people would, even if they're signing up for trial accounts, go to GitHub or the docs and read about Cloud Foundry there? Okay, awesome. Thank you. I'm gonna do my market research off of you guys.
So the three ways to try it out. One is to sign up for a trial account. We have a number of providers, probably almost two handfuls of providers: SAP, Swisscom, CenturyLink, Bluemix, Pivotal, and a number of them have trial accounts. So that's one way to try it out. That's how I started. It's really easy. Can I just throw in a comment on what you asked? Yeah. Install it and manage it myself; I'd wanna know that before. So I'm gonna include both sides of that. That's really good input. And I asked before, but just show me again: how many people would eventually, if you used it, install it in-house? Now that you know what it is, too. Yeah, cool. About 80% of you, my very rough guesstimate. You had a question? The way I'm picturing it, to install Cloud Foundry, would I have to install Cloud Foundry on an AWS instance? Yes. I would. So I would just do that first, and then I could deploy the apps. Yes. Or you could go to CenturyLink or Pivotal, and they've already done all that for you. You sign up for an account, and then on your command line interface you say, you know, link me to this Pivotal or CenturyLink or Bluemix instance, and then you can just do your cf push from your machine. And they all have some sort of trial. They all obviously have a business model off that. So if you do run your app that way, it's really easy, and then you'll pay them a monthly fee of some sort. Yeah, I would find that unlikely. But if you're a business, you might, because you wanna know exactly how it all works and have it on your own hardware or whatever, yeah. So that's one option: go sign up for the trial account. The second one is download it, play with it, contribute to it. So a number of you said you would maybe do both, but that's one way. And then the third way: they're no longer actively working on Lattice, but Lattice had a lot of the technologies that Cloud Foundry did. But it was kind of a standalone, install-on-your-laptop version. So it's still totally functional.
I mean, it's kind of a good way, if you're on your laptop this weekend here at the conference: you can download Lattice and play with it and kind of get a feel for Cloud Foundry in a smaller package, without having to install Cloud Foundry and all its dependencies and all that. It's all in the docs, yeah. And the other way is to come to the Cloud Foundry Summit. It's like 2,000 people who are all Cloud Foundry fans. That's awesome. Maybe not as awesome as SCaLE, but pretty awesome. You can also come there, and they're around the world if you're not based here. We have three of them: one in Asia, one in Europe, one in the US. So any other questions that I can help anybody with, or that I can follow up on afterwards? Yes, they'll be wherever SCaLE posts them, and also on my SlideShare account. Any other questions? And yeah. Does Cloud Foundry have a database system built in, or do you support certain database systems? For the apps, there are a number of databases that are already built in as services. So you can use MySQL, you can use that Spark database, you can use Redis, Postgres, you can use all the big ones that are there, and you can add your own if for some reason you wanted to. So if you're an app developer and you're willing to use one of the services that's out there, it's a really small learning curve. I'm not an active app developer anymore, like I play every once in a while, and I mean, I literally figured it out, asked a few questions on the mailing list, and got myself up and running with my first demo app within an hour. I mean, it was really quick. If you're installing it yourself in-house, then you're gonna need to train some people to be Cloud Foundry experts. There are a number of consultancy companies that help out with that. So I think Accenture and CenturyLink have consultants that help with the...
Okay, so the big companies do, like Pivotal and IBM, and also there's a bunch of smaller ones like Altoros and ECS and CloudCredo, and there are a number of smaller companies that help. And I was talking to the ECS guys, and they're like, when do you tell a client, okay, you now know enough, you can do it on your own? So we're looking at providing training, we have training, and we're looking at certifying individuals as Cloud Foundry experts, and we're kind of trying to define how many Cloud Foundry experts a company needs to feel like they're sufficient. I'd say it's measured in months, probably, if you're gonna install it yourself and use it in-house. Yeah, so like all the databases, there are a number of services built into it. Is that what you're asking? Yeah, there's been talk about that; there's nothing as part of the core Cloud Foundry right now that's like an app store type thing. That doesn't exist yet. All of the providers do add value, usually, so they have Cloud Foundry and then they have their own GUIs and their own tools around it. So if you're looking to use it, you should evaluate Bluemix versus Pivotal versus CenturyLink; you should take a look at them. But we've talked about it, because it would also be nice if there was an app store where individuals could put their apps, and it didn't matter which Cloud Foundry they ran on, but someplace to highlight them. Yeah. Is there a way I can add it? You can add your own languages, you can add your own databases, you can add your own services, yeah. If we have COBOL, we must have a lot of them, but you can add your own. Any other questions? Anything you were hoping to get from this presentation that you haven't yet? Okay, well, I'm around for the rest of the event. Can you guys hear me? All right, can you guys hear me? It is 1:30, so we'll get started. This is Cloud 2.0: how containers, microservices, and open source software are redefining cloud computing. I'm Mark Hinkle.
I work at the Linux Foundation. I have for about two weeks now, but before that I worked for the last five years in cloud computing. I worked for a company called Cloud.com. Cloud.com was bought by Citrix. We had a product called Apache CloudStack. When I was at Citrix, I was on the board of the Xen Project, the Xen hypervisor, and our product was XenServer, which we open sourced, and I spent a lot of time working with Apache. OpenDaylight, which is an SDN controller, is a collaborative project at the Linux Foundation, and that's where I am today. So while I said VP of marketing, I'm a committer on Apache CloudStack and I spend a lot of time with these technologies. I'm also sort of opinionated, so I will try and give you a good lay of the land and let you make your own choices, but keep in mind these slides are stuff that I've worked on, so I sort of have a propensity towards these technologies. I did put these slides up on SlideShare if you find them interesting and wanna get any of the links or anything. I usually try and put speaker's notes in my talks with links to the stuff I talk about, so you can go do your own research and find that stuff. So I've been doing cloud talks for the last five years, and during that time I've given the same talk probably every month for five years, as it's updated. So it's sort of like that Evolution of Dance video. You sort of start out slow, and I am not gonna recreate that video because nobody wants to see that, but we'll sort of talk about the evolution of cloud. So back when I used to give these talks, this is the diagram I'd always start with.
I'd be like, you know, we have this thing, it's called the cloud, and we have this public cloud, and that's cloud infrastructure hosted on other people's hardware, and then I have the private cloud, and that's hosted on our own hardware, in our own data centers, and then we have this hybrid cloud, which traverses the firewall, and we'd spend the first, like, 15 minutes talking about that. That's not really the case anymore. I mean, I don't think anybody... most people get that now. Like, how many people here use Amazon EC2? Okay, Google Compute, Azure. All right, yeah. How many people would identify as being a system administrator? Developer? I'm not gonna do the Ballmer "developers, developers, developers" thing either, but that's good. How about just IT support or generalist kind of people? Networking people? That sort of helps me in the way I talk about things. So this is what I talked about five years ago. Everybody was cloud, cloud, cloud, cloud, cloud. And this is sort of what's happened over those years, and as I am getting old, with my progressive lenses, I hopefully will be able to read the tiny little print. But the thing I think is interesting is this whole cloud thing started way before we started talking about cloud. I really think that in the mid-90s they had a movement around service-oriented architecture, and SOA was a big buzzword. And then later on the service-oriented architecture movement sort of faded a little bit, but we're gonna talk about that a little later. The thing that really kicked off this whole cloud rage was Amazon and their launch of EC2 back in 2006. So that was really sort of the landmark moment, when they started to host compute infrastructure, and it was in a very different way than what we did for virtual hosting, because it was elastic.
And then all of a sudden 2010 came around and we started seeing these knockoffs of Amazon from open source: OpenStack was launched, Eucalyptus, CloudStack were launched as open source, and we started seeing these open source platforms, sort of like Linux in the Solaris era. Then in 2013, a company called dotCloud, which was a platform-as-a-service provider, relaunched themselves as Docker, and now every time you go to a conference you hear everybody going Docker, Docker, Docker, Docker, Docker, Docker, Docker. How many people here use containers? Docker? Cool, which I think is cool. I'm not being pejorative, I just think that people are gaga over containers right now, for many reasons. Then in 2014 we started seeing the rise of PaaS, so Cloud Foundry, and actually Pivotal launched that as a foundation under the Linux Foundation and put it in a governance model that allows everybody to participate equally. Then Google did the same thing with Kubernetes and the Cloud Native Computing Foundation. Now, I'll point out I've been giving this slide for at least six months, and now I'm at the Linux Foundation, so I wasn't really realizing I was hawking all their collaborative projects, but it sort of makes sense why I ended up there. So we're in what I call the era of cloud abundance. We got cloud choices, we got competitive pricing, we have PaaS that's hosted, we have PaaS we can run in our own data centers, we have containers, we got microservices, we got all this stuff, and what I'm trying to do is take us on this little journey from what we have, how it's evolved, and where we're going. So in this era of cloud abundance, you can see, this is some, like, analyst chart, but basically the amount of cloud that people buy every year is increasing by 24%, and you can sort of see a breakdown that platform as a service has one of the highest growth rates, and we're gonna talk about that in a little while.
But before we do that, I wanna talk about what we did in the past, because if you don't learn from history you're doomed to repeat it. So let's talk about Cloud 1.0. So this is: Amazon has launched, Google has launched, and we have all these what I call copycat clouds, of which I was one. So copycat clouds were the idea that we wanted to bring Amazon into our own data center. This is actually something I was involved in; the first press release for CloudStack was "Amazon-style clouds." Then you had Eucalyptus. Eucalyptus is owned by HP now, but it was one of the first open source clouds to launch, and their claim to fame was they wanted to keep as close to 100% fidelity to the Amazon API as possible, so that all your tools would work. There's another one, back in 2012, where HP was talking about their HP Cloud Compute and undercutting Amazon. That was an OpenStack-based cloud, I believe. But is anybody familiar with the Netflix open source program? It's a pretty interesting program. It's a bunch of tooling from Netflix operations that they use to run stuff in the cloud. And they scale out their business on Amazon, which I find interesting, because they have some hyper-elite operations folks, they really have that DevOps culture. They're at the forefront of operations, and they use Amazon, and the way that they scale out is they build on top of Amazon and then they go into new geographies. It's dependent on whether Amazon has a good enough footprint for them to deliver their services there. So the guy who used to be the cloud architect for Netflix was a guy called Adrian Cockcroft. Has anybody seen him speak? He speaks at a lot of cloud conferences, a lot of open source conferences, and now he's a technical advisor at Battery Ventures. But he's a really smart guy, and he said that people would ask, well, why aren't you doing it on Google and Amazon? He said, well, using multiple clouds is like Roman riding. So that's Roman riding, where you stand astride two horses.
So having multiple clouds and trying to run your operations across them is really, really difficult. It's really hard to balance, and eventually something falls down. So he said they decided to go with Amazon and it made things a lot easier, and he had looked at things like CloudStack and Eucalyptus, but their choice was to scale out everything they could on Amazon, because they didn't want to have this Roman riding problem. Also, in this Cloud 1.0 era, people were very, very particular about the silos of how they defined cloud. So they're like, is it a public cloud? Is it a private cloud? Is it a hybrid cloud? And what that meant. So if you read InformationWeek or InfoWorld or one of the trades, every week they would have, like, a private cloud roundup and a public cloud roundup. I don't think that's really a good thing, and we're sort of getting past that in what I call the sort of Cloud 1.5 era. And this is what happens in the Cloud 1.5 era. There are lots of people, like the Twitters and Facebooks and Netflixes and PayPals, that are really interested in the cloud, and they're building the infrastructure and adopting it, while the people that were sort of enterprise operations were not really getting on the bus as quickly as them. And then all of a sudden they're like, hey, it's not just for the dot-com people, it's for everybody, and we're really missing out. So all of a sudden, probably around 2010, everybody's scrambling for their cloud strategy. Not 2010, maybe 2013, I think, is when it started to peak and take off. So enterprise IT started saying, hey, I gotta play catch-up here. And this graph is from a guy, Simon Wardley. Wardley is an analyst at CSC, a blogger, a really smart, really opinionated guy, who used to be the vice president of cloud at Canonical before he took that gig. Now, the difference from cloud in the Cloud 1.0 era: a lot of people used a tool called RightScale. Anybody here use RightScale in the past?
And I mean, RightScale took off because it was one of the few tools that you could use to manage cloud workloads effectively, and it was a hosted tool. Now, in this 1.5 era, in the last few years, we've seen a ton of new tooling come out that's really interesting. So, things like configuration management: we had Puppet, but now you have Chef and Puppet, you have Ansible and SaltStack, that are sort of helping to solve the same kind of automation and configuration problems. You've got... anybody here know HashiCorp? They do, like, Vault and Vagrant and a million other things that are really cool and awesome. You have Docker out there now, you have ManageIQ, which is a cloud management project that's Red Hat sponsored, and you go on and on. So we're in this era where the tools that we have today are becoming as useful as the tools that people used to spend millions and millions of dollars for back in the 90s. So back in the days of the Tivolis and the BMCs, which are still around, but not as abundant as we'd like, or not as cheap as we would like, I should say. And the other thing that I think is the catalyst for this Cloud 2.0, or 1.5, era is sort of this change in culture. So, did anybody go to DevOps Day this week? Okay, anybody here? How many people here have heard the word DevOps before? Okay, I just like to make sure. So there's just this cultural change that sort of reverses the conventional thinking on operations. My wife works at a very large, conservative pharma company in IT project management, and they do releases once a month, and they plan, and they're going through ITIL, and they have processes on top of processes, and they have silos of people in development and operations, and she's the person that bridges operations and development. Like, they don't really talk together. It's sort of these old-school kind of ways to manage IT. Now, do you guys know who the guy in the middle is?
His name is Patrick Debois. Patrick Debois is the guy who started the DevOps movement — a real low-key guy who lives outside Brussels in Belgium. He's probably never made a dime off of DevOps, but it's one of the biggest transformative movements in IT in as long as I've been around. Anybody here read The Phoenix Project? So Gene Kim — anybody who ever used Tripwire? Gene Kim is the guy who started Tripwire, and now he's an evangelist around IT trends whose stated mission is to help improve the lives of people in IT. He wrote this book, The Phoenix Project, which is a story that encapsulates the ideas behind DevOps in a fictional but true-to-life situation. And then you have the people that set the model, like Netflix, which I mentioned earlier. So you have this changing culture, you've got tools, and you've got an abundance of cloud stuff. Then you have one final thing, which I call the cloud industry shakeout. If we talk about public cloud, these days the way I look at it is there are only three vendors in what is truly the generic public cloud, and I'd say that's Amazon, Google, and Microsoft. What I mean by generic public cloud is they have a global footprint at massive scale and they move really fast. Then you have this other tier — and actually, I've recently seen that CenturyLink was talking about selling off their data center assets, so I'm not sure I would qualify them anymore — but the idea is that the people that own the pipes, the telecom bandwidth providers, the data pipes, the Level 3s, people like that, offer these managed services. Not only do they have the cloud, they have the end-to-end network, and a lot of people looking at security can get end-to-end security and an SLA covering not only their compute and storage but also the pipes that carry it back and forth.
They charge a premium over what you pay at Amazon. Amazon has tons and tons of services and a huge ecosystem; these managed service providers offer something above and beyond that justifies their premium. And finally, I have what I call the SP and SI clouds — service providers and system integrators. SoftLayer, which is IBM, I think is the best example: they have system integrator capabilities and they're enterprise-focused, so they're probably already selling you things — your mainframes, your software, your Java platforms, things like that. I have HP on there, but it seems like every time I check on HP it's not clear whether they're all in on their own cloud or whether they're going to be an arms dealer to everyone. So SoftLayer I think is the largest example of these SP/SI clouds. There's a company in Canada that I think does a really good job, called cloud.ca — they have a public cloud, but they're really a great services shop doing integration, and they'll build it for you. That kind of cloud I think is interesting. And then there's another one in the Netherlands I really like — and the other reason I like them is because they use CloudStack — called Schuberg Philis. They're in Amsterdam, and they do a lot of custom-built clouds and run clouds for their customers. It's a services offering, and they lease the lines. I think that's interesting. So those things are the shakeout, and then there's this final thing that is like the miracle that's saving cloud: containers. And I'm being a little facetious — I think it's really interesting, I think it's an important technology — but container availability is now helping with the one problem that kept cloud siloed, which was portability of workloads. Containers make it really easy to take a workload from cloud to cloud, from your data center, from development, from wherever, and run it somewhere else. That's why people have been calling containers the flux capacitor of cloud computing.
It's what makes cloud computing really possible. And all these things I list here are benefits. How many people do continuous deployment from development to production in containers? Anyone here? Cool. How many people use them just for production? And of open source software — anybody have proprietary products running in containers? Okay, cool. So as we finish this 1.5 era, I think the thing that's interesting is we have all these tools. We have this movement of Open Compute — if you went by the Facebook booth this week, they had their Open Compute gear out — so they're opening up the hardware, which I think is interesting. It's probably not affecting us now, but it's going to affect us a lot later on. I think it's going to drive a lot of standardization in the hardware we use, because variance is what makes things difficult for a lot of what we want to do. So that's Open Compute, with a more standardized footprint for your hardware. At the compute layer, the virtualization layer, we had things like KVM and Xen, and we've had VMware, but now containers are virtualizing our compute layer too. We have distributed storage from Ceph and Gluster. How many people here use Ceph? Okay, wow. How many people use OpenStack Swift — or OpenStack Object Storage, which is the correct term? Interesting. How many people use distributed storage from EMC or NetApp? Okay, interesting. Then we have networking. I listed OpenDaylight as the SDN layer around networking. I don't know that it's dominant, but it's certainly one of the leading controllers for virtualizing the network. And I think that's key, because as you have things moving around in your cloud, if you virtualize your compute and your storage but the network isn't virtualized, and you can't program it the way you can your other infrastructure, it's going to be a limiting factor. Above that, we have OpenStack. How many people here use OpenStack? Okay.
Anybody here use — Joyent is now SmartOS — anybody use SmartOS? Well, Joyent is the company, and SmartOS is the open source project. Anybody here use Apache CloudStack? One guy. That's good. And then in the public cloud you have EC2, Azure, Google. And on top of that you have this abstraction layer on top of the infrastructure: Docker's containers, Mesos — do you guys know what Apache Mesos is? Okay — and Kubernetes. Kubernetes is a scheduler that Google wrote to help manage containers. And then you have platform as a service. Honestly, I've seen Cloud Foundry and OpenShift and Gigaspaces all in production, but I think the one with the most momentum behind it lately is Cloud Foundry. How many people here use a platform as a service from one of those three? Anywhere? Okay, interesting. So all this stuff has sort of evolved, and now it's present day, and this is what I call cloud 2.0. That's where the magic happens. Cloud 2.0 is that magical place where everything works, our infrastructure is elastic, we have almost zero downtime, we do 20 releases a day and it doesn't break stuff, and our customers love us — our end users love us. And that's where we're at today. And here's what I see coming in cloud 2.0 — that slide just shouldn't be there. So first off, all the infrastructure is available in open source: every bit of that stack is available in open source, and I think that's really interesting from the open source angle that I look at it from. There's a lot of competition between companies over whose products are better. The reason I like open source is that it isn't a zero-sum game.
This is a quote that I use all the time from Alison Randal — she said it on a program I chaired, and I really thought it was interesting. In the old days, people like IBM and BMC and Computer Associates were all very, very competitive at this infrastructure layer, and they were all developing things that overlapped. Today, in open source, you see people like HP and IBM collaborating on the Linux kernel, and you see them collaborating on infrastructure stacks like OpenStack, and they're not wasting their resources on things that have low value. They're driving a lot of innovation at a lot faster speed than before, and I don't think that would be possible without open source. So earlier I talked about Simon Wardley and his theories, and his ideas were basically: develop only what doesn't exist — and that's so badly written, I had a little hangover this morning — develop what doesn't exist to meet your needs; leverage the growing body of high-quality software like Linux and OpenStack and Cloud Foundry, and virtualization like KVM and Xen; and then commoditize things like Linux. So everybody shares development of Linux across the industry, and a lot of people share development of OpenStack. You leverage what's there in OpenStack, but you also commoditize the stuff that isn't differentiating, like scheduling workloads. That in itself is not really interesting, but the logic on top — how you schedule the workloads — is pretty interesting. Earlier I had that little timeline, and back in the 90s there was this thing called SOA. They talked about componentization as a design pattern where individual components do a single thing via the web and are loosely coupled. That sounds very much like microservices today. How many people have heard the word microservices over and over and over again?
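To make that point concrete — raw scheduling is commodity, but the placement logic layered on top is where it gets interesting — here's a minimal sketch of one such policy, least-loaded placement. The node names, capacities, and workload costs are invented for illustration; this is not any real scheduler's algorithm.

```python
# Toy "least-loaded" placement: the kind of logic a cloud scheduler
# layers on top of commodity workload scheduling. Node names and
# capacities are hypothetical.

def place(workloads, nodes):
    """Assign each workload to the node with the most free capacity."""
    free = dict(nodes)          # node -> remaining capacity
    placement = {}
    # Place the biggest workloads first so they get first pick of capacity.
    for name, cost in sorted(workloads.items(), key=lambda w: -w[1]):
        node = max(free, key=free.get)   # least-loaded node right now
        if free[node] < cost:
            raise RuntimeError(f"no room for {name}")
        free[node] -= cost
        placement[name] = node
    return placement

nodes = {"node-a": 8, "node-b": 8}
workloads = {"encoder": 6, "web": 3, "db": 4}
print(place(workloads, nodes))
```

Swapping in a different policy (bin-packing to drain nodes, spreading for availability) means changing only the `max(...)` line — which is exactly why that logic, not the scheduling plumbing, is the differentiating part.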
So the thing that's interesting — and I mentioned Netflix earlier — is the way they build their infrastructure on microservices. They have a microservice that does one thing, and maybe that one thing is, I don't know, encoding a movie. It only encodes movies, and you can call it to encode movies for redisplay across their network. And the way they deploy this microservice is to do this one thing and one thing only, and if they make improvements to the service, they don't redeploy the service — they deploy microservice 2.0, and microservice 2.0 is backwards compatible with 1.0 but has all the improvements. So application developers actually consume those services as they're ready, without affecting everything in production. If North America is using this encoding service and they're ready to go to 2.0, but the South America operations region isn't, South America can just choose not to consume it until they're ready. And then you can leave those microservices in production forever, or until they're no longer used and you pare them down. So that sort of loosely coupled service infrastructure has actually been talked about since the mid-90s, but you're only starting to see it in practice now, in the cloud era — and part of that is because we're in this era of cloud abundance I was talking about earlier. So we talked about containers earlier. Now, did anybody see the Docker news today? What kind of company did Docker buy? A unikernel company. So now we're going to go to conferences, and instead of saying Docker, Docker, Docker, they're going to say unikernels, unikernels, unikernels. And actually, I think the Docker guys next door are talking about unikernels right now.
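That side-by-side versioning pattern can be sketched in a few lines. The service name, version registry, and payload fields here are all invented for illustration — this is the shape of the pattern, not Netflix's actual API:

```python
# Side-by-side microservice versions: v2 is deployed alongside v1,
# and each consumer opts into the version it is ready for.
# Service names and response fields are hypothetical.

def encode_v1(movie):
    return {"movie": movie, "codec": "h264"}

def encode_v2(movie):
    # Backwards compatible: every v1 field is still present, plus a new one.
    return {"movie": movie, "codec": "h264", "hdr": True}

REGISTRY = {
    ("encode", "1.0"): encode_v1,
    ("encode", "2.0"): encode_v2,
}

def call(service, version, *args):
    """Consumers pin the version they consume; old versions stay deployed."""
    return REGISTRY[(service, version)](*args)

# North America has moved to 2.0; South America still pins 1.0.
print(call("encode", "2.0", "some-movie"))
print(call("encode", "1.0", "some-movie"))
```

The key property is that deploying `encode_v2` never touches `encode_v1`: each region upgrades by changing the version it pins, on its own schedule.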
Now, earlier on I was talking about what I've done in my past, and one of those things was being on the board of the Xen Project, working on that for the last three or four years. One of the things that came out of the Xen Project was this thing called MirageOS, and MirageOS is — I don't know what the right term of art is — a unikernel runtime. Basically, it's a gutted operating system if you don't like what they're doing, or a fast and lean operating system if you do. What they're doing is packaging the configuration files, the binary, and just the libraries necessary to actually run as a single image. And you can run it — I know about Mirage because it was a Xen project — on the hypervisor, and you could probably also run these within containers and make your containers lightweight. So I'm not sure how that plays out exactly in the Docker container space, but I think it's an interesting technology that is really going to be a part of this cloud 2.0 era. Now, all this sounds well and good until you start thinking about the downside to all of it. Anybody here read The New Stack? It's a news site that covers cloud — the guy who runs it is pretty avant-garde, used to write for Gigaom — and he asked Mitchell Hashimoto of HashiCorp what he thinks the number one problem in cloud is, and it's service proliferation. So now we're in this era of abundance, but we also have service proliferation, especially as we start to consume microservices — now you're going to have all these microservices, and it's great to have abundance, but it's hard to track abundance. That's what I call the zombie problem. What's the worst thing you can do in a zombie movie? The guy who always dies runs into a house that has a lot of windows and doors, so you have a bigger attack surface.
So now we have all these microservices running, and you have all these windows and doors — you know, the guy in The Walking Dead who runs into the house and bolts the front door but leaves the back door open, and you see all the zombies run in because there are lots and lots of openings. That's the downside of this era. So today, as I said earlier in that shakeout, I think the public cloud is this. I haven't seen a big AWS price drop lately, but it has seemed like for the last five years, just when you think computing can't get any cheaper, it gets cheaper. Security used to be the big criticism of these guys; I think they're probably getting more secure. Now we have things like Docker — and Amazon supporting Docker, and maybe unikernels at some point — so the portability problem across clouds has changed. You want cloud computing to be like electricity — the initial terminology for cloud computing that IBM used to champion was utility computing — and you want to be able to plug in your workload just like you can plug your laptop into an outlet and have it be standard. And you're also going to see, I think, a proliferation of microservices from these guys. Amazon is masterful at coming up with supporting services for your EC2 instances: they have monitoring services, they have a suite of continuous integration tools, they've got NoSQL database services and Java container services and all these things to add on. I think you're going to see microservices around data streams and other kinds of things we haven't seen in the past. Then in the private cloud, you're going to see what I call the minimum viable cloud. Because in the early days, I think where we went wrong is we were just trying to keep up with Amazon. Well, Amazon has one architecture target, and when they make changes, they control all the variables.
If you're a software developer trying to develop software that runs on all these different infrastructures, with all these different variables, it's very, very difficult. So I think what's evolved, with OpenStack, is that OpenStack is an ecosystem where you consume the parts you need to make the minimum viable cloud — which a lot of the time will be assembled by vendors like Mirantis or Red Hat or numerous other people in that industry. The reason I've personally always been such a big fan of Apache CloudStack is simply that it always has been a minimum viable cloud. It does one thing really well, and that's scheduling workloads for cloud computing on your own hardware. It doesn't have a massive number of subprojects; for things like identity management it uses standard protocols, whereas OpenStack has a project for each one, with a lot of support — so they're legit too. And then SmartOS. SmartOS comes to you from the people that did Joyent, and the thing that's interesting about SmartOS is that it comes from the Solaris lineage — I think it's illumos; is that the open source Solaris kernel now? Joyent is really a pretty good cloud host, and they're really smart: they had the idea of containers and used them for quite some time, just in a little different format than what we're used to, leveraging the Solaris Zones model. The foil to that is that Linux sort of copied the idea with LXC and the container model — not exactly an original idea, just a very good implementation on an operating system that has wide appeal. Then we have what I call Public Cloud Plus, and probably the one that I think is most successful — though not a widely used cloud — is Salesforce. They bought a company way back called Heroku, and the brilliance of that was that Heroku was a platform as a service, so you can write your applications for Salesforce, hook them into your CRM, and host them.
It's pretty brilliant. They're not just virtualized infrastructure and elasticity: they've got high levels of service, they have tooling, they have continuous integration — they have a lot of things that make it Public Cloud Plus. And I don't even mean 'plus' as better features so much as more features — more specific ones, I should say. So in the early days we had those silos, and I think what's happened now is those silos are sort of gone. In the cloud 2.0 world there's just cloud. It may be in your own data center and it may be in someone else's data center, but you're weaving together all these different things into a single fabric, which is what cloud is. We're getting better and better at bridging between our data center and their data center, and with microservices I think we'll start consuming things across different providers and weaving them into a fabric spanning our data center and Amazon or Google. So that's how I think this all evolves. I have 40 minutes' worth of content, and I'd written down that this was a 45-minute slot, so I have plenty of time for Q&A. Right now I spend a lot of my days with this stuff — this week I spent a day with Cloud Foundry, I talked to the guys running the Cloud Native Computing Foundation and the Open Container Initiative, and I spend a lot of time on the Apache mailing lists for CloudStack — so if you have a question like 'what do you think' or 'where can you point me to get more information on what I'm trying to do,' go ahead. Like, how many people here are considering building their own cloud in their own data center? How many people here are just trying to figure out what all this cloud hoopla is about and how it all fits together?
Yeah, yeah — I got into IT right in '95, and here's the thing that was the biggest transformation for me. I worked at an internet service provider, and everything ran on Solaris. Now, if you know anything about internet service providers, they make very, very little money per customer; it's a razor-thin margin. And the Solaris hardware at the time was really, really expensive. I vaguely remember — I think the Pentium, the first post-486 processor, had just come out — so all these machines were getting decommissioned, these DX2 66-megahertz machines, while the Solaris boxes were really expensive, these pizza-box things, real funky-looking. And every couple of weeks you'd get CDs in the mail with your updates and you'd go around and pop the CD in — yeah, that guy there is shaking his head — it was pure hell. You were not doing PXE booting and automated updating; it was manual. So the thing that was transformative was that on those DX2 machines we could put this operating system we could get for free, Linux, and it had zero incremental cost on those boxes. Other than time — and in those days the limiting factor was often driver support, for storage arrays and things like that — but that was the start of the era of abundance in the operating system, and it changed everything: IT growth went gangbusters and started the dot-com era. I feel like all these cloud services and cloud tools are the next wave of things that are going to accelerate how effectively we use IT. So the question was, what do I know about cloud orchestration tools for Docker? I know there's a huge need for that. I mentioned RightScale before — RightScale in the early cloud days was going gangbusters because it was a good orchestration tool for infrastructure in the cloud.
Right now I think that's sort of the next thing. Docker has some of these tools; I think Kubernetes is a huge thing, and I think Kubernetes management is going to be the next set of tools. And I'm going to give a shout-out to an open source project called Skippbox. Skippbox is written by a guy named Sebastien Goasguen, who's in Geneva, Switzerland, and is pretty darn smart — he's the VP of Apache CloudStack, and he used to work for me. He's one of the smartest guys I know in this space, and he decided that what he wanted to do for his own personal project was write this thing called Skippbox. It's on GitHub; I would definitely go check it out. Tweet at him — he'd love to hear what you think. The idea is that you use containers, and then you use Kubernetes, but there's still a need for orchestration tools on top to get the most out of it. Kubernetes, if you guys don't know, came out of Google. Everything you touch at Google is part of this thing called Borg — they just add their workloads into this great big mesh, just like the Borg in Star Trek. And the ideas behind the software that runs Borg, they spun out as Kubernetes, because they want workloads to be easy to move between your infrastructure and their infrastructure, to make your world better. So Skippbox would be a place I would look — but I think there's going to be a lot of other stuff too; I'm rooting for Sebastien. So Rancher — coincidentally, I always forget about Rancher. Rancher was started by two guys, Sheng Liang and Shannon Williams, who were co-founders of cloud.com and worked with me at Citrix. So I do know about Rancher, and Rancher and RancherOS — and actually, they're smart. Sheng is the guy who wrote the Java virtual machine for James Gosling's team at Sun. He's crazy smart.
What I'm trying to figure out with the vendor-led projects — and Rancher is a venture-backed company with a lot of smart people, one of the best engineering teams I've ever worked with; two of their architects are just out of this world, and a lot of their engineers are amazingly smart — is whether that layer of tools can be commercialized, or whether it becomes part of the open source ecosystem. That's what I can't decide. Because Google, I feel, wants that to be a very democratic layer that everybody participates in, where there's not a lot of vendor control. So I think Rancher has some value and Skippbox has some value, but I think the open source tools have the better shot until we figure out what the Kubernetes management layer, or orchestration layer, looks like. She just said there's another one called DCHQ. Yeah — so that's my take. I'm fine with proprietary software; I'd love for it to be open source. But the thing is, if you go back to the strategy — develop what doesn't exist and commoditize what has little differentiating value — over time, I wonder how much those orchestration tools are going to be commoditized, because you see more and more participation by end users in the creation of software and infrastructure tools. And that's why I think what Rancher is doing there — well, Rancher has an open source component, so it might be commoditized, and maybe someday Rancher is in the business of, say, reporting on everything going on in your dashboards, or something like that. But it's moving so fast — and plus there's Docker themselves, and CoreOS. Anybody here know CoreOS? They're an operating system that uses systemd and schedules container workloads very, very effectively — I've probably not done them justice with that description — but I think they're also going to be heavily involved in creating management for Kubernetes.
I think they do a lot of services around that. So yeah, it's sort of the early days, and it's sort of tough, because if you start investing your time in a tool, you want it to be around for the next couple of years. It's great to be on the leading edge of technology; it stinks because you're the canary in the coal mine for everybody else to figure out how to get it done. Anyhow, I do think the way they talked about cloud 20 years ago — the SOA people; if you see a guy or a gal who worked at IBM who used to talk about SOA and they show up at a cloud event, they're just nodding their head like, yeah, this is what we talked about. But they were selling that tooling and infrastructure through IBM — and not just them, other companies too — and now there's this abundance of infrastructure and inexpensive access to public cloud, which is really driving a lot of cool things. That is pretty much all I had. My contact information is there — which apparently is really, really tiny because I changed the font — but I'm R. Hinkle on Twitter, and if you Google my name I'm usually one of the first hits; my blog is Socialized Software. I'm happy to chat with you or direct you to someone much smarter than me on cloud, but I hope this was a good overview of how it all fits together and what you should be considering. And there are tons of good talks on the tools I mentioned. All those HashiCorp tools are really interesting; one that I think is pretty cool and fairly new is Vault, which allows you to manage your public and private keys across clouds, which is otherwise sort of a pain in the butt — Amazon makes it easy for you to do within their cloud, and Google does too to some degree, but these are tools that work across all clouds. Just tons of good stuff. Same with Ansible — how many people here use Ansible?
So I think this is interesting — this is my little sidebar on tools. The thing that has really driven operational efficiency and automation, in my opinion, is this whole cottage industry around configuration management. It started with CFEngine. CFEngine is a management and orchestration tool for managing the configuration of infrastructure, started by a guy who's really, really, really smart named Mark Burgess. He just wrote a book — if you'd like to geek out on operational theory and why things end up working well, it's worth it, and I wish I could remember the name of it. But he started this movement, and then this guy came along, Luke Kanies. Luke Kanies liked what he did, but it didn't quite work for him — anybody use Puppet? He's the guy who wrote Puppet. He took it a step further and came up with some interesting things, like the idea of inheritance and some other capabilities, and he got a big following in the operations community. And then came Chef — anybody here use Chef? I feel like Chef is pretty popular in the cloud world. That was done by a guy named Adam Jacob. He liked what Luke did, but he had a ticket — actually, there might not be a Chef otherwise — he had a ticket, and Luke said, well, I'm not going to do that ticket, and Adam said, okay, I'll write my own configuration management thing. So he did, which is good, because I think diversity is good. And all along the way there was this guy at Red Hat named Michael DeHaan, and Michael DeHaan wrote something called Func — anybody here know Func? He also wrote the thing that automates Kickstart, which is used all over the place — you PXE-boot into this management thing. He wrote all this management glue at Red Hat, including a lot of stuff for Puppet, and Cobbler — that's what it is, Cobbler. Anybody here use Cobbler?
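The idea all of these tools share — CFEngine, Puppet, Chef, and later Ansible — is declarative, idempotent configuration: describe the state you want, and only the drift gets corrected. Here's a minimal sketch of that model; the resource keys and values are made up, and real tools obviously do far more (ordering, templating, remote execution):

```python
# Declarative configuration management in miniature: compare desired
# state to current state and emit only the changes needed. Running it
# twice is a no-op (idempotence), which is the core idea behind these
# tools. Resource names and values here are hypothetical.

def converge(current, desired):
    """Return the list of actions needed to reach the desired state."""
    actions = []
    for key, want in desired.items():
        have = current.get(key)
        if have != want:
            actions.append(("set", key, want))
            current[key] = want       # apply the change as we go
    return actions

server = {"nginx": "absent", "port": 80}
wanted = {"nginx": "installed", "port": 443}

print(converge(server, wanted))   # first run: actions to apply
print(converge(server, wanted))   # second run: already converged
```

The second call returning an empty list is the whole point: you can run the tool on a schedule, and it only acts when reality has drifted from the description.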
So he wrote Cobbler, and all along he was thinking, well, I like what the Puppet guys are doing — he integrated Cobbler and Puppet — but Func only sort of did what he wanted, so he took those ideas and wrote this thing called Ansible. Ironically, he moved to Durham — Red Hat's in Raleigh, North Carolina, where I live — so he moved up the road for a couple of years and had a couple of jobs, but wrote this thing called Ansible, which Red Hat recently bought for a hundred million dollars. When the guy used to work in the back room, they probably could have just given him a raise and some better coffee and everyone would have been happy. But I think his difference — and SaltStack falls into that same bucket too — was that he built this thing with the idea that he just wanted to execute things, and that execution environment is what was valuable. Then, by virtue of the fact that it was modular and you could execute scripts, you could execute these configuration scripts, which I think are called playbooks or something like that, and they're written in YAML. So now this is the way a lot of people are doing configuration management, and at the end of the day these tools are what make you able to automate and keep up with cloud infrastructure, because now you can spin up hundreds of instances in minutes. Before — back when they used to let me touch servers, which is not a good idea anymore — you'd call somebody on the telephone, because there was no online ordering for Dell, and they would send you these servers on a pallet, and they'd come to the shipping dock, and you'd take them off the pallet, and it was like the slowest server with 16 megs of RAM, and you'd put it in the rack and bust your knuckles, and then you'd have to wire it in, and then you'd have to go back and open up your firewalls and your Checkpoint software, and blah, blah, blah. So to provision one server was like a six
week or six month deal until everything came together, and now I can launch 1,000 instances in an hour with a credit card and an iPad, which is crazy. So that ability to keep up with the velocity of cloud stuff — these tools are what's really, really interesting to me. In my day — hey, I feel old now; yeah, you whippersnappers, back in my day I walked uphill both ways to the data center, blah, blah, blah — but back when I was still doing a lot of operational stuff, the limiting factor once we got all those servers was that if you were going to buy tools, they were really expensive, and they weren't exceptionally good at the time. I remember monitoring tools were expensive, so we had Perl scripts that would ping all of our infrastructure, and if they got a failure they would parse that and send an email. We had these old Compaq all-in-one desktop PCs running Slackware that would run these cron jobs pinging everything. And one day we got approved to get Netcool, and I was like, yes, this is awesome. So we hooked it up and started adding in the IP addresses — and we were an internet service provider, so we had tens of thousands of modem racks and servers and stuff to ping — and I remember the guy entering in the first part of the spreadsheet that had all this stuff in it, and he's like, it won't take any more addresses. What do you mean it won't take any more addresses? We got Netcool, but we'd only gotten a license for a hundred IP addresses, because more would have blown our budget. So I thought it was interesting to see Nagios sponsored here. Nagios is not an overly complex monitoring system — but how many people here use Nagios?
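Those homegrown ping-and-email scripts looked something like this sketch. The hostnames are invented, and the probe is injected as a function instead of actually shelling out to `ping`, so the logic is self-contained; a real cron job would run an ICMP ping or TCP connect per host and pipe the message to `mail`:

```python
# A miniature availability checker in the spirit of those old cron'd
# Perl scripts: probe every host, collect the failures, and build the
# alert message. Hostnames are hypothetical; `probe` stands in for a
# real ICMP ping or TCP connect check.

def check_hosts(hosts, probe):
    """Return the list of hosts whose probe failed."""
    return [h for h in hosts if not probe(h)]

def alert_message(down):
    """Format the email body the cron job would send out."""
    if not down:
        return "all hosts up"
    return "DOWN: " + ", ".join(sorted(down))

# Fake probe for the sketch: only hosts in `up` answer.
up = {"www1", "mail1"}
down = check_hosts(["www1", "modem-rack-7", "mail1"], lambda h: h in up)
print(alert_message(down))
```

Tools like Nagios are, at heart, this same check-and-alert loop, plus scheduling, escalation, and a plugin ecosystem so the probe can be anything.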
Yeah, everybody uses Nagios, because it has utility. It's not that hard to administer, it does what it needs to do for availability checks, and there's a huge ecosystem of plugins, so you can figure out how to monitor pretty much anything. Anybody here use Zenoss? I was one of the first five or six employees at Zenoss, which is open source monitoring software, a long time ago. And then you have OpenNMS, which has a booth here. So there's a lot of free and open source software that's really pretty good for monitoring, which used to be a big cost and hard to manage. So anyhow, any other questions? Thanks for your time, I appreciate it. I just started two weeks ago, so I don't even have a business card I could hand out. I brought up DCHQ. Yep, that's about right.

Let's wait a couple of minutes for stragglers. Thank you for turning up so late in the day; it's always a bit of a graveyard slot, the afternoon on a Sunday. All right, let's get started. Let me give a bit of background on why I put this talk together. I've been in this space for a long time, and I wanted to look at the question: how do we deal with security vulnerabilities in open source projects? What are the best practices, is there consistency, and so on and so forth? Particularly in light of the fact that software stacks are becoming more complex: in your traditional stack you have lots of different layers, and the idea is that if a particular project, one piece of your puzzle, doesn't do security very well, then the whole stack is going to be vulnerable. So we'll put that question up there, and you'll have to wait until the end to get the answer. A little bit about me: my name is Lars Kurth, and I've been contributing to a number of different projects: GCC, Eclipse, Xen, a bit of Linux as well. I've worked in a lot of different technologies, and today I do the community stuff for the Xen Project. I happen to work for Citrix; I'm a member of the group which develops XenServer, but I'm also the chairman of the Xen Project advisory board, and the Xen Project is a Linux Foundation collaborative project. So, does anybody disagree with the fact that software bugs happen? We can try to minimize them, but at the end of the day there will be some vulnerabilities. As an analogy, and I'm going to use this a little bit throughout the talk, there's this whole idea of vulnerabilities being like zombies, which can do a lot of bad things; we'll get back to that at times to make it a little bit more fun. So let's start, and by the way, just interrupt me with questions any time. We have an hour, so I can take one or two questions during the talk; that shouldn't be a big deal. I just wanted to start with a little bit of terminology. At some point when you write code, bugs get introduced. The way I look at vulnerabilities in this sense is that a vulnerability is like a broken window in your castle where somebody can get in. And coming back to this whole zombie analogy, at some point in time somebody actually discovers the bug. Now, if the vulnerability stays undiscovered, well, nobody cares, right?
Nobody's going to do anything with it. Now, the discoverer could be a bad guy who does something with it: sells it, exploits it, steals your data. Game over; not very much we can do at that point. But what we're going to look at is really the scenario where the discoverer isn't a bad guy: they go and report the issue to your project, for example via a security@yourproject.whatever email address where you report vulnerabilities. And that gets us to the point of: how do you actually manage vulnerabilities? How do you do it well? How do different projects do it? What are the patterns out there today which are in common use, and which trade-offs do they make? The way I look at vulnerability management processes in open source projects is that they're really a bit of a team effort, and so I wanted to draw back to that zombie analogy. I don't know who's seen The Walking Dead, or read the comic. Right, so there's this part where eventually they find a prison, and they manage to keep it quite secure; they deploy a lot of techniques to make sure the zombies can't get in, keeping watch, making sure that everything is fine. In many ways, that's what vulnerability management processes do. They're a team effort to make sure that you find all the open doors and weaknesses; if you have them, you close them as quickly as possible; you make sure that all your doors are locked, your windows are bolted up, your fences have no weaknesses, and so on and so forth. That's kind of how I look at a vulnerability process, and then it depends on how different projects live up to that. But before I go into this, let's just look at some of the common patterns which are around today, in a very basic way. The first pattern is full disclosure. So what does that mean? We have our bug being reported, probably to a guy or a small team, a security team. They look at it, they put a description together, and then they announce it, possibly on a mailing list, and work on a fix. Eventually a fix is available, and then they probably make a somewhat bigger announcement and get everyone else to deploy that fix. Now, the colour scheme there is important, because red means a large number of people know about the issue and could potentially exploit the vulnerability during that red time period. There is, and traditionally has been, a bit of a debate about responsible versus full disclosure, but I'm not going into that; just Google for it. There's been lots of controversy about which is better. I wanted to move on from that and push this a little bit further. So, I mentioned responsible disclosure already; that's the second pattern which is commonly used. What does it really mean, and how does it compare to full disclosure?
Well, basically, our bug gets discovered and somebody looks at it, and this all happens in a very small team, in private, behind that security@ list. The people on the security team will probably work together with the reporter to fix the issue, and at the end of the day that produces a patch. That patch then goes to something which is typically called a pre-disclosure list. That's a list of people who have certain privileges; it could be a number of key users. It's not the entire world; it's a select, smaller group of people who get to know about the vulnerability and can do something with it. I'm going to go into the different models there in more detail a little bit later, because what's really interesting is that there are large differences in how different projects deal with this. And then, of course, after the pre-disclosure period you announce the issue publicly, and everybody else gets to update their systems. So that gives us a little bit of the basic terminology. And just to remind you, this whole process is really about keeping your users safe, and to some degree also about minimizing risk in the bigger picture. Coming back to that zombie analogy: lots of zombies on the left, and you're the little user on the right, trying to fight them off. But I wanted to explore first what safety really means here; what are the key elements of it? Well, the first thing to realize is that for this whole process to work, you actually need to encourage the people who find vulnerabilities in your software to report them to your project. And it's really interesting, because they're totally in control. As an open source project, you can't force somebody to report a bug to you, and you can't force them not to do something bad with it. A robust process encourages people and companies to report things to you, and we'll get to some of this a little bit later, when I talk about some of the lessons I learned within the project I'm responsible for, and how we dealt with various crises, which then led to changes in how we manage vulnerabilities. The second key part is that once something is discovered, the issue has to be fixed as quickly as possible. You want to minimize the time from when something is known to it actually being fixed; really, you don't want any unfixed vulnerabilities lying around. That can be quite challenging operationally, depending on the size of the project, how many people you have on your security team, and so on, but we're not going to address that in much detail. And then, once a bug is fixed and you communicate it to the outside world, you want to make sure the exposure time is minimized and your users apply patches quickly. Now, there's actually not that much you can do as a project there. The only thing you can do is be transparent about the issues you have, make them public, and maybe in some cases PR helps to get users to update their systems. But ultimately you don't really have any control over that side of the cycle. We will actually talk a little bit later about things like PR, how the media sometimes colours vulnerabilities today, and what this
may mean for your project and what it may also mean for users. So, with that bit of terminology introduced, I wanted to start looking at how different projects which are important in cloud computing handle these kinds of things, just to set the scene a little bit. There are a few examples of projects who do full disclosure. There's a colour scheme in the table: yellow means there's a concern, white is usually okay, and red marks a real problem. If you look at the first two lines, it's actually really interesting: in Linux, if you discover an issue in the Linux kernel, it very much depends on how you report the issue; depending on how you report it, it gets handled differently. There are different groups who handle security vulnerabilities, and they have different processes. For example, if you report it via oss-security, which is a public list, it just gets published: full disclosure. If you report it via security@kernel.org, they will look at it for five days, a fixed period of time, and do a bit of triaging, but then it goes out for full disclosure. Other projects, like OpenStack and QEMU, use full disclosure for low-impact issues, but not necessarily for important issues. Then we have a whole list of projects which use responsible disclosure. Again the Linux kernel: if you report via the linux-distros list, that's another mailing list, it goes through the responsible disclosure approach. Most Linux distros, QEMU, OpenStack, OPNFV, OpenDaylight and some other projects all use that kind of pattern today. Then I added Docker into that category as well. Docker is kind of interesting, because they say they use responsible disclosure, and I actually know that they do, but you go to the website and try to find a policy document and you can't find anything. You have to ask, and if you're an important Docker user then you might get access to their policy document. And then we come to the area where we actually have problems, right? There's a lot of newer projects, and also other important projects, where it's not clear at all what they do. Cloud Foundry, for example: there's no clearly stated way of how they deal with security issues, and there are no published CVEs. CoreOS, same thing; Kubernetes, same thing. And that's true for a lot of the newer projects which have started more recently. I gave this talk for the first time in August last year, and it was actually even worse then: at that time there wasn't even a way to report a security issue to Kubernetes. There was a Kubernetes guy in the audience, and they added that afterwards. But coming back to the original question, this already tells us that maybe something is wrong; there's a lot of inconsistency and there are different approaches going on today. Part of my talk is also about saying: if you work on these projects, go back to your communities and get them to think about this and how to address it, because ultimately, in the long run, it's going to be bad for you and for your users. I just wanted to pick out an example, looking at why this orange bit at the bottom, with some of the new projects, happens. And there was
an example: at the end of 2014 there was a quite severe networking vulnerability in OpenDaylight, and it took the OpenDaylight team nearly six months to figure out how to fix it. They didn't have a process; they hadn't thought about all this up front, and then they realized that this was an issue for them. Once they fixed it, they made a bit of PR about it; if you Google for OpenDaylight and PC World, you'll find the whole set of articles around it. What they then decided was to just look at how OpenStack does things, and at the end of the day they simply copied that process. And I think we're seeing more examples of that. Now, there are a few good off-the-shelf processes out there; I like the ones which OpenStack and Xen use, and you can just copy them and customize them for your needs. So let's look at this whole responsible disclosure thing in a little more detail, because that's the predominant model today: in the table you saw before, most of the projects seem to fall into that bucket. One thing I didn't really cover: we had our three phases. The first phase was fixing the bug, the second pre-disclosure, the third publication. Now, what you really want to end up with is some sort of agreed, fixed period between a bug being reported to you and you publicly announcing it. If you don't fix that period, it might take months. A lot of proprietary vendors often sit on vulnerabilities for months: something gets discovered and it might get fixed six to nine months later, or even years. But we're talking only about open source projects here; typically this should be weeks, not months. Also, during that yellow phase, when you're actually fixing things, there's a lot of other stuff going on as well. You have to figure out exactly the conditions under which the vulnerability gets triggered, work out any workarounds and any relevant config options, get a CVE number allocated, and so on and so forth; quite a bit of work happens during this process. And then during the pre-disclosure period, that's where most projects really differ, quite widely. Fundamentally, what you can and can't do with the information which is shared with you has a big impact on the users of your project, at different levels, and we'll look at that in a lot more detail, because it's really interesting. But the key point is: small differences in how you handle aspects of this process can have very large consequences. There's your little butterfly effect. Let's just look at an example: disclosure time. If you have a long disclosure time, for example a few months, and you sit on an issue and it comes out afterwards, there's probably going to be a media story out there which says: well, you had a severe issue, it took you four months to fix it, and you left people vulnerable during this time. That discredits you as a project. Quite recently there was an article in The Register about Oracle where some accusations of this kind were made, that they were sitting on some vulnerabilities for a long time. There are other recent examples as well: it took Apple more than six months to fix a cross-app resource access vulnerability and a BIOS issue, which fundamentally left all their users at risk for a very long time. And besides discrediting the individual vendors or projects behind it, it also discredits the disclosure process itself. A lot of people don't like responsible disclosure because it's often associated with long times between a bug being identified and it being fixed; in fact, open source projects seem to be very good in this area, and we tend to do this in weeks rather than months. The other interesting thing is that long disclosure times also create a disincentive for reporters to report a bug to your project. Take security research firms, for example: they make their money from finding bugs, reporting them, and then getting some exposure and PR afterwards. They don't want to have to wait for months until they can publish a study or a paper; they want to be able to do this on their own schedule, not have to wait for you. They might do it once, but the next time they might just publish the paper without disclosing the issue to you. Another really interesting thing is that if you as a project say, we're going to publish this thing within three weeks, or whatever, you're setting expectations, and that helps you manage vendors in your ecosystem who might be using your software and putting pressure on you. I have some really interesting stories about this later, so I'm not going to drag on about it. And I already covered that most projects typically handle this whole process, from somebody reporting an issue to publishing it, within two or three weeks. So another little element where projects differ is whether a CVE number gets assigned to an issue you find. That's really best practice, particularly in established projects who are active in the
Linux and cloud ecosystem. However, this can have some unintended consequences, and not everybody assigns CVE numbers in the same way. Let's just look at this. CVE numbers are just unique identifiers for vulnerabilities; given one, you can look up information about the vulnerability, and there are some really good websites, such as CVE Details, that give you statistics, ratings, and so on. What happens in practice is that decision makers, people in big companies who might look at your technology, often use that kind of information to evaluate whether your project is secure or not. Now, I'm going to make the case that you shouldn't actually do that, because it's sort of bollocks, but let me get to that a little bit later. Here you see some of the stats around vulnerabilities which have CVE numbers, showing the history of Xen. What you see is that in the first few years there were very few vulnerabilities. You could conclude that Xen was a lot more secure around that time, but actually all this really means is that during that time we didn't have a process which required creating CVE numbers. Those kinds of assumptions behind the stats aren't tracked anywhere, so if you start comparing projects based on that kind of information, you might get it totally wrong. Also, if you look at some of the media coverage Xen got all of last year, there were so many media stories about one vulnerability after the other; but in reality, in absolute numbers, last year was probably quite a good year compared to the three years beforehand. So a lot of this kind of stuff has to be taken with a pinch of salt, and you need a lot of background information to make really informed decisions if you actually want to use CVE number information to find out how secure a project really is. There are other reasons, too, why we can't really use CVE databases to compare technologies fairly. First of all, you don't know when those numbers are assigned; we covered this already. Some projects hardly ever assign them, or only do it for the most severe issues; some projects don't assign CVE numbers at all. But then, if you find none and assume, oh, that project has no numbers, so they have no bugs, so it must be more secure, well, you might be totally wrong. Also, some technologies can't easily be mapped. For example, KVM and containers are well-known, branded technologies, but if we look at them from a component perspective, what really is KVM, and what is LXC? Most of the vulnerabilities which affect Linux, and are classified as Linux vulnerabilities, don't show up in KVM or LXC stats. And sometimes one CVE number affects several products. So this all has to be taken with a pinch of salt. Now I wanted to look at where the key differences are, and then I'll start telling you a few interesting stories. We covered a little bit about the overall length of the process, and a little bit about CVE numbers, but what I want to focus on now is what happens during that pre-disclosure period, what can happen, and where the key differences are. What's key here is that what you can and can't do, as a privileged user who gets access to security-related information before anybody else, really depends
on what the goals of the process are. So I wanted to look at some of the common goals which are out there. The first goal, and that's a very sensible one, is that you have a fix available before the information becomes public. Another common goal is that downstream projects or products, which take your open source project and repackage it in some way or form (if you look at the Linux kernel, all the distros do that), can take the patch, package it up through their package manager or distribution system, and test it in their environment. But more recently we've also seen a lot of other goals, and that's where the whole cloud angle comes in. One goal, for example: let's say we're talking about a service provider who operates at really big scale, such as Amazon or Rackspace or some of the others. A lot of them need to make sure that when they upgrade their entire fleet, they have staff in place; they have to put plans in place, and then they have to do some certification and testing, and that can take a week or two. So if you follow a process by which you treat them like any other user, then they, or their users, might be vulnerable for a week or two until they plan the whole thing, roll it out, and deploy it.
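To make that week-or-two window concrete, here is a toy sketch of the arithmetic; the dates, durations, and the function itself are invented purely for illustration. A provider who may only start its fleet rollout once the issue is public stays exposed for the length of the rollout, while one allowed to deploy during the embargo can finish before publication:

```python
from datetime import date, timedelta

def rollout_window(public, rollout_days, embargo_days, deploy_during_embargo):
    """Return (start, done, days_exposed_after_publication) for a
    service provider's fleet upgrade."""
    if deploy_during_embargo:
        start = public - timedelta(days=embargo_days)  # may patch early
    else:
        start = public  # treated like any other user
    done = start + timedelta(days=rollout_days)
    exposed = max((done - public).days, 0)
    return start, done, exposed

# Invented numbers: a 14-day embargo, a 10-day certify-and-roll-out cycle.
publication = date(2016, 1, 31)
print(rollout_window(publication, 10, 14, deploy_during_embargo=False))
print(rollout_window(publication, 10, 14, deploy_during_embargo=True))
```

With the distro-style treatment, the provider's users sit exposed for the full ten-day rollout after publication; with embargoed deployment, the fleet is done four days before the issue goes public.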
That can be a very big issue for their users and for their businesses. And the extreme is that you don't just allow those vendors to start planning an upgrade; you actually allow them to deploy the upgrade during the embargo period. What that means is that they get the information about the fix, they start planning, they deploy it, and by the time the information about the issue becomes public, their cloud or their service is already upgraded and all their users are safe. So these are some of those goals, and those goals really determine what can be done with the information which is shared, and who is privileged and can be on that pre-disclosure list. I wanted to classify this into two buckets. We had those four goals. The first two I call the distro model: that model was targeted towards Linux distros, who take open source code, package it up, and test it. It's typically a very small list of vendors; they get this stuff, and then they treat all of their users the same. As soon as you start doing something special for service providers, I call it the cloud model. Obviously the distro model predates cloud computing, and as I mentioned earlier, it does leave service providers vulnerable. So, for example, if I ran a KVM-based cloud today, as a KVM-based service provider I wouldn't get notified of a security vulnerability before anyone else. When a vulnerability is published, I have to upgrade my systems, and at some point I'm done, but during that time period my users might be vulnerable to that specific security issue. The same is not true for some other technologies. The cloud model only really emerged recently, in the last two years, and it's something which recognizes the needs of service providers, but it creates some specific, interesting challenges. Looking again at these models and mapping them to technologies: the distro model, the Linux kernel follows it if an issue is reported via the linux-distros list, Linux distros do that, and most other open source projects which are relevant in the cloud follow this model today, where the sole purpose of the whole process is to fix the bug, package it up, and test it in the environment of those projects and the vendors who use them. The cloud model is actually only used by very few projects today. OpenStack does it for intermediate- to high-impact issues. OPNFV and OpenDaylight do the same; those projects have a close relationship to OpenStack and basically just copied what OpenStack did. The Xen project does the same, and actually Docker does too. So, some trade-offs between those two models. The first one I covered somewhat indirectly already: it's risk for users. Why would you want to treat cloud service providers differently from somebody who just uses your software in a smaller environment? Well, the main reason is that they have a lot of users themselves; a lot more people are being put at risk if that vulnerability comes out or gets exploited. And that leads almost right into the next point: risk to your project's reputation. Now, I've been working with the Xen security team for a long time, and we got a lot of flak when there were security issues over the last year and a half, a lot of very negative press stories.
But what was really interesting about all these stories is that at no point were any users really put at risk; no real cloud users were put at risk. You may remember the stories: at the end of 2014, Amazon had to reboot a portion of their cloud, and then the same thing happened again at the beginning of last year, and it was this huge, huge news story. But actually, it was a story about Amazon's users, and other service providers' users, being inconvenienced rather than being put at risk. It damaged our reputation at the end of the day, but just imagine what the damage would have been if users really had been put at risk; that would have been a whole order of magnitude bigger. Another trade-off is the risk of information leakage. We were talking earlier about this process where a number of privileged companies, organizations, and open source projects get access to information about vulnerabilities before everyone else; that privileged list is what we call the pre-disclosure list membership. The more people or organizations you have on that list, the bigger the risk of somebody leaking the information before it's meant to go out. If you look at some of the numbers: in the distro model, there are typically fewer than ten organizations on that list. If you have to include service providers, it's a whole order of magnitude bigger. So, potentially, the likelihood of somebody leaking some information and something bad happening goes up. And the main argument today against the cloud model is around this whole risk of information leaking.
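The leakage argument is really just probability arithmetic. As a back-of-envelope sketch (the 1% per-organization chance per embargo is an invented number, purely for illustration): if each organization on the list independently leaks with some small probability, the chance of at least one leak grows quickly with list size.

```python
def leak_probability(n_orgs, p_per_org=0.01):
    """P(at least one leak) = 1 - P(nobody leaks)."""
    return 1 - (1 - p_per_org) ** n_orgs

# ~10 organizations: the traditional distro model
# ~75 organizations: a cloud-model list of the size Xen runs
print(round(leak_probability(10), 3))  # ~0.096
print(round(leak_probability(75), 3))  # ~0.529
```

That said, experience can undercut the arithmetic: as described next, years of running a 75-organization list produced no real leaks, which suggests the per-organization probability in practice is far below even this invented 1%.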
Now, interestingly enough, we've been following this model within the Xen Project for nearly two and a half years, and we have about 75 organizations on the pre-disclosure list. Nobody has ever leaked anything. Well, there have been a few hiccups, coordination issues where somebody disclosed some information two or three hours before the agreed time because they got their time zone conversion wrong, but that's manageable. So in my view, it's not really an issue. And of course, if somebody does leak something, you just fall back to the full disclosure model and publish straight away. And you have sanctions as a project as well: if somebody leaks information, you take them off the list, and they will never get that information again. Say a big vendor, let's just say Amazon, leaked information, which I don't think they will ever do: we would take them off the list, and that would be very bad for them in the long run. Another interesting aspect is fairness, and that's challenging: who do you allow onto a pre-disclosure list? When we did this originally for the Xen Project, we started out with what everybody else did, the distro model, and then we had Amazon on the list as well. But that created all these questions: if Amazon is on it, why shouldn't another service provider be on it? Why should one vendor be treated differently from the others? Eventually this created some challenges within the community, and we had a very big discussion about it. I'll cover that whole element in the war stories section, which is coming now.
Now, the fairness problem is actually a really hard one to solve, because at the end of the day, if you're a vendor or an open source project on a pre-disclosure list, you have an advantage over a bog-standard individual user. But at some point you have to draw a line, and the only way to reconcile that is to have an informed discussion within your community and come up with something that works for you. So, war stories. This section is really about how Xen came to the process we have now, and it gives you a few insights into how the process works. We didn't really have any process at all until 2011. Basically, we took what Debian had and replaced "Debian" with "Xen", pretty much. The goal was to allow us to fix issues in private and then let distros package them up. We did at the time allow service providers to prepare an upgrade: we would notify them that an issue was coming and they could test the fix in their environment, but they weren't allowed to deploy it during the embargo period. The people who could be on that pre-disclosure list were Linux distros, open source or commercial, and a couple of very large service providers. One thing that was very interesting is that we had no fixed disclosure time. We didn't say we were going to fix an issue within three weeks or anything; it was undefined. And that caused us some really interesting problems. The way these war stories work is: we have a starting point, we have a crisis, and we look at how we responded to it. The first crisis was in July 2012, with the Intel SYSRET vulnerability.
It affected Xen, Microsoft, NetBSD, FreeBSD and quite a few other projects. What happened at the time is the bug was reported to us, we prepared a fix, and we wanted to publish it. Two days before we were going to publish the information about the vulnerability, a very large pre-disclosure list member, and I can't tell you any names, otherwise somebody might get fired, their CEO rang pretty much every other CEO in the community and tried to put pressure on them to stop that information from being released. Because so many different organizations got involved, that caused a lot of friction and resentment within the community. At the end of the day we stood our ground and said we were going to release on that day. But what then happened is that the CEO of a very large organization managed to talk to the person who found the bug, and I think they may have paid them some money or something, and the date got pushed back, because the discoverer of the issue suddenly said: no, we now want this published a month later, so that everybody else has more time to fix the issue. That caused a lot of problems in the community, and the key issue was that we didn't have a fixed period. So we started a consultation about how we deal with security in the community. The process was very painful and took nearly a year, and it centered on a number of topics. One was the easy one, timing: we now have a fixed schedule where we try to fix an issue within a week, and then we have a two-week pre-disclosure period. If we don't manage to deal with the issue in time, it gets published anyway.
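The fixed schedule described here can be sketched as a small timeline calculation. This is a minimal illustration, assuming the window lengths just described (one week to fix, two-week pre-disclosure embargo); the function and field names are my own, not actual Xen Project tooling:

```python
from datetime import date, timedelta

# Illustrative window lengths, matching the schedule described in the talk.
FIX_WINDOW = timedelta(weeks=1)       # time budget to produce a fix
EMBARGO_WINDOW = timedelta(weeks=2)   # pre-disclosure period for list members

def disclosure_schedule(reported: date) -> dict:
    """Compute the key dates for a newly reported vulnerability."""
    fix_deadline = reported + FIX_WINDOW
    # Pre-disclosure members get the details once the fix window closes;
    # publication happens at the end of the embargo regardless of whether
    # the fix is ready.
    predisclosure_start = fix_deadline
    publication = predisclosure_start + EMBARGO_WINDOW
    return {
        "reported": reported,
        "fix_deadline": fix_deadline,
        "predisclosure_start": predisclosure_start,
        "publication": publication,
    }

sched = disclosure_schedule(date(2015, 5, 1))
print(sched["publication"])  # → 2015-05-22, three weeks after the report
```

The point of the hard `publication` date is that it cannot slip: every party on the list knows in advance exactly when the information becomes public.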
In fact, we've only had one instance where we didn't manage to keep to the timetable, and I'll get to that example a little later. The consultation also focused on other topics, like who should be allowed on the pre-disclosure list: if we allow XYZ on that list, how would that impact other people? So there was a lot of discussion around fairness. That was really interesting, because a lot of very small service providers felt that a security issue could put them out of business if somebody very big had the information and they didn't. So what we did is we looked at the whole thing again and clarified the process to make it a lot more watertight. But then we also said: why do we have this restriction at all? Why do we not also allow service providers to upgrade during the embargo period? The only reason you wouldn't want that is if somebody could reverse engineer the actual vulnerability from a deployed system. But in the majority of cases, if you run a service, people can't poke around in the actual code; they can't see the diffs between binaries, so they usually can't reverse engineer what the issue is. So why did we have this restriction in the first place? So we went ahead and relaxed the whole process. We also made the application process clear, and made it happen in public: there's a public list, you apply for membership, other companies and people can comment on it, and everything happens in public. Also, per vulnerability, we now state whether you can deploy during the embargo or not, and what exactly you are allowed to do in that situation.
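The per-vulnerability statement of what list members may do during the embargo can be modeled as a small record attached to each advisory. This is just a sketch of the idea, not the Xen Project's actual tooling; the class, field names and the "XSA-000" identifier are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EmbargoedAdvisory:
    advisory_id: str
    deploy_during_embargo: bool  # may members roll out the fix before publication?
    rationale: str               # e.g. whether the fix is observable from outside

def allowed_actions(adv: EmbargoedAdvisory) -> list[str]:
    # Every pre-disclosure member may always build and test privately;
    # production deployment during the embargo is granted per advisory.
    actions = ["build and test the fix privately"]
    if adv.deploy_during_embargo:
        actions.append("deploy to production during the embargo")
    return actions

adv = EmbargoedAdvisory("XSA-000", True, "fix not observable from a running service")
print(allowed_actions(adv))
```

The design choice is that the default stays conservative (test only), and the security team opts in to embargoed deployment only when the fix cannot be reverse engineered from a live system.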
Another really important thing is that we put a mechanism in place: a private mailing list for members of the pre-disclosure list to collaborate. That has actually helped us quite a lot, because sometimes somebody takes a patch, tests it against a very old version and shares the result with everybody else, which means less work for the security team. So anyway, that's pretty much where we are now. Then VENOM came along. VENOM was the first time Xen was affected by a branded bug, but it was also the first time we couldn't keep to our three weeks: we didn't have a week to fix the issue and two weeks for vendors to prepare. The reason was that this was a QEMU bug. The discoverer raised the QEMU bug through us because they didn't want to deal with the QEMU security team at the time, and we had to wait for the QEMU security team to fix the issue, so we ran out of time. What that led to is that vendors only had about three days to prepare and deploy the patch. To cut a long story short, we have a retrospective process, and what we now do is: if there's a chance we won't manage to provide a fix within a week, we warn everybody anyway that there's an issue, that we don't have a fix yet, and ask them to help. At least they can prepare and they know what's coming. So, what lessons have we learned from that? Well, the larger pre-disclosure list hasn't really caused an issue for the project. There have been no leaks, and we haven't had a single zero-day vulnerability.
Obviously there might be some we don't know about, but that applies to everyone. The other really interesting thing is that a well-run process builds trust. We've learned so much about the vendors in the ecosystem by working with them, and we now have the capability to collaboratively improve things as we go forward. Fairness remains a difficult issue. And there are always practical issues you can never get around, such as people interpreting policies in different ways; you don't always get it right. I'll skip over this quickly. Now, one really interesting thing is security and media hype. If you do things in a very transparent way, you're giving the media a chance to talk about it, and that isn't always very nice. If you're running a security vulnerability process for an open source project, you have to be very aware of this. Every single time there's a security vulnerability related to Xen now, there's probably going to be a news story about it. These are just some examples: every single time we have a vulnerability, there are two or three articles about it, and it doesn't matter whether it's critical or not; there's always going to be a story. So why is this the case? Well, first of all, security stories are hot. They're clickbait; people just click on them. And because Xen is widely used, and there's this indirect link to Amazon and other big users, it makes it even more interesting. I mentioned earlier that Docker, for example, doesn't publish their security policy, and they're also a hot project.
I have a feeling they're worried that some of that information might be exploited. It's also too easy for reporters to write a news story: whenever we work on a security vulnerability, there's a website that tells you something is coming and when we're publishing it. That means reporters can go to that website and know exactly when to look again to write a story. Now, in our community we have these values around transparency; everything has to be transparent, so we can't get away from this. Other projects like OpenStack are slightly less transparent about it. They largely follow the same process, but they don't advertise that on a certain date a security issue is coming, so they get picked on less by the press. So, coming back to the original question: are these security practices robust enough for cloud computing? Well, I think there's a very wide range of approaches. Software stacks are getting more and more complex, there are lots of layers, and from a process perspective you're only as strong as the weakest link in your stack. There are some good best practices around, like what we and OpenStack do, and more open source organizations are copying that model now, so I would encourage you to do the same going forward. What's slightly worrying is that new projects don't think about security management from the beginning. You should really think about this from very early on. And there's the whole post-Snowden media pressure, which will never go away.
We were hoping last year that The Register and other press outlets would get bored with Xen-related security stories, but that's just not going to happen. They will keep writing about it, and what this is doing is actually forcing us as a project to address some issues. An example: cloud reboots were a very big story, so what we're now doing as a community is implementing functionality that avoids cloud reboots, so that you can take a security fix and deploy it on a running system, and that whole class of issues goes away. We have a similar thing around QEMU: a lot of issues were caused by QEMU, so we're sandboxing QEMU within the platform. What all this media scrutiny is forcing us to do is tackle some of the common routes through which we get media coverage and remove those angles, so that the reputation of the project doesn't get damaged in the long run. And that's really it. If you have questions, just go ahead. Yeah, so I think this comes down to how many users are impacted at the end of the day. Traditionally, as an open source project, you often think about your direct users, and you might have a relationship with them. But with service providers, you might have one corporate user who runs a very large service, and they in turn have their own users, and that number might actually outnumber your direct users. So at the end of the day, you have to make the decision based on what creates the most damage to the most people.
And if you look at it the same way with AWS, Alibaba, Tencent and all those big cloud providers: if you take all their users together, that's a whole order of magnitude bigger than the individual users who are around. You have more impact, yes, exactly. So by one of those vendors not being able to update their systems in a timely manner, you might suddenly impact millions of users and magnify the problem a lot more. Well, some of them do, but some of them don't, right? And of course, even if you run something customized, you probably still use a common code base. Well, but they wouldn't necessarily tell us that, right? And why would they? So it's conceivable, but I don't know. There was actually a series of articles saying that what Amazon is doing is intentionally running very different configurations to de-risk their installations. I don't know what that means in practice; they might have ten slightly different versions of Xen running the same thing to de-risk it. That's what it sounds like, but I don't work for them, so I don't know. Any more questions? Okay. Thank you very much. I hope you enjoyed the talk, and thanks for sticking around for so long.