 to the stage for DevNation, San Francisco, 2016. I'm proud to be here this afternoon. Thanks for sticking around. You know, today was an amazing day for us. We did a lot of great things, a lot of great announcements this morning, and a lot of great sessions, and I want to thank everyone for sticking around. You know, this morning we talked a lot about Dev, right? We did a lot of developer tools, a lot of developer process, a lot of developer technology. But we didn't get to the DevOps part, and that's what this afternoon is all about. So before I get started, real quick, I want to say thanks for the sessions, too, for all the time and effort that went in today. I hope you all enjoyed the different sessions you went through. Did you guys have a good time? I do want to know. Did you have a good time? Yeah? All right, great. Okay, so without further ado, I'd like to welcome Rachel Laycock up to the stage from ThoughtWorks. Thank you. Thank you. Hi, everyone. So I'm from ThoughtWorks. ThoughtWorks is a software consultancy, and what we do a lot of, and have done a lot of over the past five or six years, is helping clients adopt continuous delivery. So what I'm going to do is tell you some funny horror stories of helping clients do that, and what the realities are of actually adopting these kinds of practices in the enterprise. I'm not going to explain what continuous delivery is, because a lot of people have done that before me; some of my colleagues wrote a book about it a couple of years ago. But what might surprise you to hear is that even though we'd been doing continuous delivery before the book, which was written in 2010, I actually said to a client one day: you can't have continuous delivery. So let me tell you a story about how I got to this sad state of affairs. And all good stories start with once upon a time.
So once upon a time, I was a software developer in .NET for over 10 years. I did a lot of continuous delivery stuff and I thought I knew what continuous delivery was. I'd read the book; I kind of felt like it was my duty as a ThoughtWorker. I was doing it on my projects, regularly releasing to production-like environments and automating build and deployment pipelines. And then one day a client, a big, large enterprise financial client, asked for continuous delivery, because they'd read the book and they'd heard the benefits and they were really struggling. They were on a very slow release cycle of every six months, plus a monthly hot fix, so effectively they were on a release cycle of every month. And by the time we rocked up, it was the third weekend of trying to get the hot fix out, and the hot fix required basically everyone, every skill, ops, dev, testing, the whole lot, to turn up to the office for the weekend. So by the time I got there, they were very happy to see us. So they said, can we have continuous delivery, please? And I said, sure, why not? Three months later, everyone was like, are we there yet? We spent a lot of money on this. Do we have continuous delivery yet? Actually, a client executive shouted at me: where is my continuous delivery? He didn't have it. So how did we fail so miserably to not even implement some of the most basic parts of continuous delivery? Well, they say you learn a lot from your failures, so I'm going to share what those were with you now. Because this was their biggest problem: their code base was huge and complex, 70 million lines of code with millions of dependencies. Their architecture was, to be frank, a complete mess. And Jez Humble, who wrote the continuous delivery book, didn't tell me about this. He didn't write this in the book. So we've all learned a lot in the post-continuous delivery world, and in particular, that operationalizing software is actually very hard.
And we also learned that continuous delivery is quite big. Lots of people think it's just environments and deployment and build pipelines, but it's also data management and configuration management and continuous integration and quality assurance and architecture. Ooh, and don't forget about release management. And don't forget that you also might need to change your organization. So it's pretty big. And in most enterprises, each one of these boxes is a different organization that is incentivized differently and measured differently. Sound familiar to anyone? Yeah, that's good. So I'm just going to talk about this bit right at the start that makes it really, really hard. This is the new world that everyone's after, right? All these cross-functional teams using self-service platforms to deploy value to customers. The old world of silos is supposed to break down into cross-functional teams, just like magic. And the slow infrastructure that you have to submit tickets to get hold of is supposed to just become low-friction and self-service. This is what clients keep asking us for. They're no longer asking for continuous delivery; they've also kind of stopped asking for microservices. Now they want containers and they want self-service platforms and they want PaaSes and they want everything. They want to run before they can walk. Because CD was always about building stuff right, and being able to deliver stuff rapidly that wasn't broken and didn't make operations people hate you, because they did hate us. And in fact, you work with them and you become partners and you call yourself DevOps. So what's in the way? In order to create platforms to deliver, you need not just technology, but also the right people and the right processes. So the things that are in your way are legacy technology, legacy processes, legacy people. No, not legacy people. Legacy organizational structures and management. I'm joking.
Replace the people. No, jokes. So let's start with the easy one: legacy technology. I didn't think I was ever going to say that was the easy one, but it is the easy one. And it still isn't easy. Anyone know what this is? You can shout if you want. Dependencies. It's the giant ball of mud, the most common software architecture pattern there is in the world. It's also the business systems of one of the largest financial organizations in the world. And in enterprises, you're likely to have hundreds of these. But even at startups, because I've also worked at startups, you have these too. Because at a startup, why would you build the perfect architecture for something when you don't even know if anyone's going to use it? It seems like a waste of time. Even though we as developers are in the habit of sometimes over-architecting things and trying to create something perfect, you don't know it's perfect if nobody's using it. So imagine the value that you want to deliver to your customers, or that your organization wants to deliver to its customers: it's right here. This has two key problems. Just two. It has loads, actually, but I'm just going to call out two. First, it's extremely hard to maneuver. It's slow and expensive to get business functionality out, one change can have far-reaching impact, and it's very inflexible. But also very important: it's really painful and demoralizing for people to work in this. You also can't get the benefits of things like scaling and multi-tenancy when you've got this thing. I mean, what are you going to do? Just dump it on loads and loads of servers and horizontally scale it all over the place? That's not quite what you're after. As Brian Foote and Joseph Yoder wrote, and summarized in three words, technical debt like this is caused by expediency over design. We're busy getting features out the door and not thinking about the design.
Or at least you would think it was that simple, but it's actually a little bit more complex than that, because technical debt can be caused by many things. First of all, it could be: we have to meet these certain compliance requirements, we have to ship now and just deal with the debt later. But you have to remember, this is debt. You have to pay it off at some point. That's the part of the metaphor that people can forget. Your debt can also be completely inadvertent. You might have teams or developers who don't really understand the way that the system is supposed to be designed, the way that you've architected the system. Layering is a very simple example, but I can give you a real one. It just happened to some developers on our teams last year, when we decided to implement an event-driven architecture. That was quite a paradigm shift for the developers, to go from their old way of developing software to a new way, and so they sometimes made mistakes and broke the paradigm. So technical debt can be prudent or it can be reckless, and it can be deliberate or inadvertent. If it's reckless and deliberate, you've decided to go down the agile path and throw out all the documentation, and you say: we don't have time to design, we'll do it later, when we're developing it, or not at all. So when is a good time to design? It's the question we always get asked. Because obviously, over time, if you don't properly architect and design your software for purpose, it gets harder and harder to add the features that you want. And the idea is that if you do good design, then it stays easier to add features over time. But people always ask: when is the good time to do good design? We should probably be thinking about it all the time. But this is the biggest "it depends" question in software consulting. I don't know when the right time is for you to change the design of your software. It takes experience.
It takes experience of going through lots and lots of different kinds of pain. But there is another kind of technical debt that gets created even when you're being prudent, and it's still inadvertent. Because now that we're doing things like continuous delivery and deploying software into production on a regular basis, we start to see how users are really using the software, and now we know how we should have designed it. You see, this is really tricky for engineers to talk about, because it means telling business people: sorry, we did it wrong. Not because we were rushed, not because the team needed upskilling, not because we didn't have time, but because we did it wrong, and we didn't know how the users were going to use it. So what do you do? Well, the good news is you now have an opportunity to do something about it, because every business is claiming that they're a technology business, and they no longer consider us to be a cost center. Woo, we finally won. But they want to leverage the scaling and efficiency benefits of cloud. They want self-service platforms. They want to be able to deliver value faster to their customers. This is an opportunity for us to tackle all this technical debt and get real about what that involves. And that technical debt actually comes down to two fundamental issues in software engineering and design that we just keep getting wrong, over and over and over again: coupling and cohesion. For those that know the Technology Radar that ThoughtWorks produces: when we put that radar together, we often have huge debates about many things, as you can imagine, and we recently had a debate about coupling. Literally half the room was like, coupling is bad, and the other half of the room was like, well, it's a little bit more complicated than that. The thing is, it's not actually good or bad. It always depends; it's a trade-off. You have to think about the life cycle of how the technology is going to be used.
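To make that coupling-versus-cohesion trade-off concrete, here is a minimal, hypothetical Python sketch (the class and function names are invented for illustration, not taken from the talk). The first version couples one component to another's internals; the second couples them only through one small published contract, so each side can change on its own life cycle.

```python
# Hypothetical illustration of the coupling/cohesion trade-off.
# Tightly coupled: this code reaches directly into another system's
# internal data structure, so any change to that structure ripples out.
class TightlyCoupledBoarding:
    def __init__(self, reservation_system):
        self.reservation_system = reservation_system

    def can_board(self, passenger):
        # Depends on a private attribute owned by another team. Fragile.
        return passenger in self.reservation_system._confirmed


# Loosely coupled: boarding depends only on one narrow, explicit
# contract; the reservation system is free to change internally.
class ReservationSystem:
    def __init__(self):
        self._confirmed = set()

    def confirm(self, passenger):
        self._confirmed.add(passenger)

    def is_confirmed(self, passenger):
        # The published contract: the only thing other systems may use.
        return passenger in self._confirmed


class Boarding:
    def __init__(self, is_confirmed):
        self._is_confirmed = is_confirmed  # the single coupling point

    def can_board(self, passenger):
        return self._is_confirmed(passenger)


reservations = ReservationSystem()
reservations.confirm("alice")
boarding = Boarding(reservations.is_confirmed)
print(boarding.can_board("alice"))  # True
print(boarding.can_board("bob"))    # False
```

The second design is the point about life cycles: because the coupling is a single small contract, the reservation system can be rebuilt, rescaled, or redeployed on its own schedule without breaking boarding.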
Now, to give you an example of coupling done wrong, I'll talk about an airline that, instead of being grounded for just a few hours because of an issue, was grounded for 24 hours because of its IT systems. What they'd done is they'd coupled their reservation system with their boarding system in the database, because database integration was all the rage at one point, but those two have very different life cycles. So while people at the airport were trying to board the plane, people at home were thinking, should I set off for the airport now?, hammering the website, just refreshing and refreshing, and eventually they brought the database down, which brought the boarding system down, which meant that nobody could board the plane. So the problem propagated. That coupling is actually bad coupling. Because when it comes to software architecture, it's really just the tension between coupling and cohesion: thinking about the life cycle of the functionality that you're trying to build, and the deployment life cycle of that functionality, and then getting your heads together and thinking about where you should couple and where you should not. Because you always have to couple at some point; most systems, most functionality, are bigger than just one tiny little service. And the answer isn't microservices, by the way, because that's our latest silver bullet. That was some British sarcasm there. Microservices are not the answer. They are a post-continuous-delivery architectural pattern, based on a lot of things we've learned about automating and deploying infrastructure, and on all the things that didn't go quite so well in the SOA world. The point is, it's something that we started using quite heavily at ThoughtWorks, because it allowed us to deploy things independently and treat things as their own mini applications, and then we had containers and everything was awesome. But they don't solve the coupling and cohesion problem. In fact, they just make it worse, because in a microservices architecture you've actually moved the coupling from in-process to external and distributed, which has a whole host of other problems associated with it, which I won't even go into, because this is not a talk on distributed architectures. The point is, what microservices actually provide is that they force you to think about coupling, and that's actually, I think, the real benefit that they bring. Because people have to start thinking: well, if I couple this to this service, then how do I deploy? And if I couple this to this service, will this break, or will this break? But you're adding all this extra complexity around testing and eventual consistency and event-driven architectures and whatever else you may come up with in order to figure out how to implement microservices. And in the microservices world, if you get the coupling bit wrong, it's a nightmare, because the coupling issues become integration issues, and integration has never been something we're very good at either; we've still got a lot of lessons to learn there. So they don't solve the problem for you. They simply force you to be thinking about it, because you have to be constantly vigilant against services that talk too much. You also now have to think about messaging and integration and communication and domain-driven design and maybe eventual consistency and a billion other things. So it's a trade-off. But if you want the independent scaling benefits and the independent deployment benefits of microservices, then this may be a route that you want to go down, and there are patterns for moving from a monolith to a microservices architecture. This is the strangler pattern; you can read up on it. But the one point I really want to make here is: if you go down this route and decide to start creating these mini
modules all over the place, and you stop halfway, halfway is actually much worse than where you started. Don't just get halfway and create an even bigger technical debt than you've already got. With the great power of creating distributed systems, which is what you've done when you've created a microservices architecture, comes a great deal of responsibility. Because the other thing that I like to remind people of is that refactoring across two separate applications is not super easy, and if you start duplicating services all over the place, you've spread that stuff all over your infrastructure. So, I don't know, be careful, because in five years' time we're going to be like, what the F was that, and why did we think that was a good idea? Because microservices are a choice. They're not an answer, they're not the solution. They are a potential solution, which is something we seem to forget a lot in our industry. But they are our current solution. So this was the easy bit. Focusing on coupling and cohesion, that's the most important bit. But you can create the most beautiful architecture diagram you want; once you involve these things, people and processes, it all goes to poop. Not good. So you have to create the environment that supports continuous delivery and the architectural discipline, because a process or structure change on its own doesn't create lasting change. People don't follow rules. Even technologists, even those very logical, rational people, are basically a very complex organism. Some days we wake up and we're happy, and some days we wake up and we're not happy, and sometimes we hate other teams, and sometimes we don't mind them. So the real elephant in the room is actually Conway's Law. I'm assuming most people have heard about this at some point, but even though we talk about it all the time now, I don't see many people really thinking about what it means for them. Because legacy organizational structures will destroy your beautiful architecture every single time. They'll also make your best people leave. And this is a quote from Michael Nygard's book Release It, and Release It is a great book if you haven't already read it, which talks about the real issues around releasing software and scaling software and how to implement circuit breakers and all this good stuff. Even he says you should probably design your teams around how you want your architecture to look. Now, one way to do this is to think about the platforms, products and services you provide and how they will be delivered. Because even with team structures, again, it's all about coupling. Sorry, it's just about coupling. Because actually, when you start thinking about, well, how do I design my teams around the architecture I want, you come back to coupling again: thinking about the life cycle of the platforms that you create, whether that's infrastructure platforms or implementation platforms, some of the underlying infrastructure that you have in your organization; the services that you create and the products that you provide to end users; and throughout the whole architecture, the different services you create in there, whether they're monoliths or microservices or whatever event-driven crazy model you come up with. It's all about the life cycle of how the customers use them and the life cycle of deployment. Just think about business capabilities. Whether you're a DBA or a .NET developer or a JavaScript developer, customers don't care about that. Boarding and reservations are two separate capabilities. They may be supported by the same infrastructure platforms, but they're likely to be supported by different services and different databases, and you have to structure the teams around that too, because you will always have some level of coupling; you simply cannot avoid it. And the next thing is that ownership is also really, really important. Projects die. I'd like to call this platforms, products and services, not projects. But projects die
and when things die, they get entropy, and when you get entropy, you get debt, and when you get debt, things are slow and expensive. And let's not forget all the other stuff that goes on in your organization: political fiefdoms and silos. When I've gone into a large enterprise and started implementing continuous delivery, let's say about 80% of the time I've ended up in front of HR. Why do I end up in front of HR? It's not because I'm getting fired. It's because people now have new roles and new responsibilities, and they need to be measured in new ways, otherwise they will subvert any beautiful picture we draw. So think about all of this other stuff as well. So remember these two things. Conway's Law is the law. And when people talk to me and say, I'm adopting continuous delivery, and they start talking about cloud technologies and containers and infrastructure and DevOps and all this fun stuff, I'm just sitting there shaking my head, because I'm like, what are you talking about? Because the only thing I need to know about your technology is how much technical debt you have, and where it is, and where it is in relation to the capabilities you want to build. And those aren't questions I ask business people; those are questions I ask the developers. Because what people and process issues you have, and the maturity of your organization to deal with them, is really what determines whether you're going to be successful in implementing any of this stuff. Because there is no silver bullet. There is no easy way to do continuous delivery retroactively. You have to focus on creating loosely coupled applications, and on creating teams and communication structures that support that in every part of your organization. So you probably shouldn't decide to do continuous delivery everywhere, all at once; good luck with that. So focus on coupling and cohesion, remember that Conway's Law is the law, and finally, hope is not a design method. You have to be very intentional about this stuff. Don't get caught up in microservices, don't get caught up in continuous delivery, don't get caught up in containers. None of that shit is going to solve your problems. Intentional design, thinking about how things are coupled together, thinking about where you need cohesion and where you can get away with not having it, and then you might be successful at implementing continuous delivery. Thank you. Thanks, Barbara. I mean Rachel, sorry. So as you take that journey to transform your environment into a more DevOps-focused, continuous integration environment, you know, we've been working in the developer group to think about how we bring that together, and one of the underlying technologies that we have at Red Hat is the work that we do around container orchestration, specifically Kubernetes, and the product that we have that supports it is called OpenShift. It's one of the products we've been working on a lot over the past few years, and it's really quite remarkable. And to talk more about that, I'd like to invite Ashesh and Matt up to speak. Thanks, Harry. So Rachel did a great presentation. I'm feeling so bad that I'm following her. She's got an accent; sounds like it's English. I don't have jokes, though, like she had. I could make jokes about the English today and soccer, but I won't. Football, right, that's what they call it. I won't, I said I won't. I could make jokes about the English and the European Union, right? But instead I'll go to less controversial topics and talk about OpenShift. So my name's Ashesh. Thanks very much for hanging around, staying. I did tell Harry that the next time we do this, we should serve beers to get more people to hang around. So, Harry, that's actually something the English would support, I'm guessing. Matt Hicks is up next; Matt's definitely funnier than I am. Matt runs engineering for OpenShift, management and developer, and a lot of other stuff, so he'll come up and talk some more, and then we'll of course have David from Google. So I don't have jokes, but I have
questions. So let me run through some questions for you. The question is: what is OpenShift? It's a strange question. Why ask it? Well, that's because we seem to be a lot of different things to a lot of different folks. A: is it a container-based cloud application platform, which sounds like a mouthful, that you can deploy on physical, virtual, private, and public clouds? Possibly. Does it support, as we had Rachel talk about, and I guess Harry and the rest of the folks have been talking about, microservice-based architectures, enabling middleware services to run on them? Can it be consumed as a public or dedicated cloud service, or privately administered and managed by folks in your organization? Is it a PaaS? Is it a CaaS? It's like a Superman movie now. Can it be used by developers or by enterprises? Or, E, is it all of the above? So obviously the answer is all of the above. It's being used as a platform as a service; lots of folks have been talking about CaaS, a container as a service; we've had developers use it at scale, and I'll talk about that as well; and large customers all around the world. I've listed some, but if you are sticking around for Red Hat Summit over the next few days, I encourage you to go visit: a lot of these customers are here talking about their use of OpenShift, as well as other customers like Airbus or Swiss Rail and what they've been doing with it. So definitely go check out their use cases. But like I said, let's talk about progress from a developer perspective. We've had over 3.1 million apps deployed on the platform since we've had OpenShift around, so that's since 2011. We're adding lots of users every week, over 6,000 users signing up, a couple thousand new applications landing on the platform every day, and over 4 billion requests a day that we serve out. We're also really proud of OpenShift Commons, which is essentially our free-to-join community for folks who want to share best practices about deploying containers, running containers at scale, and learning about the best practices for developing and deploying and managing a platform service offering. Really all kinds of organizations, developers, enterprises, not-for-profits, universities, have joined OpenShift Commons, so I encourage you to go check that out and sign up if that makes sense for you. So why are we here talking about all this? Harry said, look, we've started this conversation today talking about developers; we're going to talk about DevOps. We'll spend a little bit more time talking about DevOps and Ops, and really some best practices around how you think about application development and deployment, but also about how you manage your processes and run and manage the infrastructure you have. OpenShift, around 2014, had been adopted pretty significantly. We had lots of traction with customers and users, and our version 2 technology at that time was well received, but the space was fragmented. By that I mean, if you wanted to extend the platform and run software on it, you had to essentially build something called cartridges for OpenShift, and then there were buildpacks, and then there were Amazon Machine Images for the AWS platform, and so on and so forth. So the space was fragmented: when people wanted to use software, they had to package it up in different ways to use it and leverage it, and then of course as you go across platforms, that breaks. Docker had been out for a few months, and we could see the promise around it, and we said, well, this is an opportunity for convergence: a single container format and a runtime around it. But once we did that, well, how do we manage these containers at scale? How do we manage the clustering that happens with this? How do we manage the health of these containers that we run across different environments? That's where Kubernetes really came in. So that's the part of the journey that we've been on: thinking about, you know, if we want to run these, manage these, be able to manage state, be
able to figure out if we're going to put this in our own data centers or run this in a private or public cloud, whatever people want to call it, right? We've got to be able to solve this problem, and be able to solve it at pretty large scale, and then be able to continually iterate on that. Now, of course, it's hard to do that on our own, and obviously it's difficult to do that if you don't have a community around you. So we think about participating in a wider community, so that if you're a user or a customer, you don't feel like you're locked into a single-vendor solution; that's really important. Participating with several other organizations, adopting the Docker technology as part of the Open Container Initiative, is something that we stand by and invest a lot of our time and energy and engineering resources to help drive forward. The same is the case with the Kubernetes community, and we'll talk a little bit more about what that means for us, and obviously investing in the Cloud Native Computing Foundation. It's not just us, right? It's a lot of other companies. Definitely Red Hat, but it's also Cisco and Intel and IBM and Huawei, and of course Google has been leading the charge on this. But to tell you a little bit more about the journey we've been on, I want to bring up Matt Hicks. So this was a pretty good spot to be in, right? We had the industry adoption; a lot of people have probably heard about this technology. But I think one of the most important parts is actually to look at the problem that Kubernetes was there to solve. When you look at container orchestration, we really had to shift into this mindset of: how are you going to describe either a modern-day application, or one of the thousands of applications that you're running today? And Kubernetes introduced these building blocks: things like pods, to describe container co-location; services, to take a bunch of those pods and describe how you're going to access them at a network level; or replication controllers, to describe how many instances of those you wanted and how scaling was going to work. It really gave us that language, that vocabulary, where we could take this great, very exciting concept of containers and start stitching them together to actually form applications. And if that was Kubernetes 1.0, which we were really excited about, the best part is that the pace of the project and its capabilities has continued to increase. If you look at the latest release, Kubernetes 1.2, which OpenShift today is based on, you saw a humongous increase in scale and capabilities. You saw application configuration greatly improve, which meant more applications you could actually represent in this model. Then you saw things like the new scheduling features and extended schedulers, which meant different workloads, closer to Mesos or batch-like patterns, that you could run. So the project didn't only give us a great foundation to work with; it has been increasing and innovating at a tremendous pace. So with this combination of pieces, we now had what we needed to build the next generation of OpenShift on. We had the runtime, we had the container format, we had container orchestration. That got us back into our sweet spot. We started to get into those areas of: how do we give prescriptive patterns for building continuous integration and continuous deployment around Jenkins workflows, being able to establish those pipelines? Then, when we talk about deployment, how do we make people not re-solve the same problem for the thousandth time, and give them blue-green-style deployments, or canary deployments, or even A/B deployments where you can split network traffic across them? Then you go one level up and say, you know, all of my developers don't want to understand how to build layered containers, right? How do we build in that automation and bring that all the way to the developer tooling? And then lastly, when you're running this at scale, how do we give you operational management where you
can't just view the container layer but you can link that to your VMs link that every step down to the physical hardware so that to us really gave us this vision around our container platform that's where we started OpenShift 3.0 we've continued to advance we go forward when we talk about something like OpenShift it's not just integrating this technology we work differently we really get involved in the communities themselves so if you look great examples with Kubernetes one of our needs as Shesh mentioned OpenShift Online and running millions of applications multi-tenancy was pretty important for us we had to be able to separate customers from each other both in a functional area as well as the resources that they got this work allowed us to drive capabilities upstream in Kubernetes like the namespace capabilities as well as quotas and limits but then we also look to our enterprise customers who they were running applications, stateful applications against storage that they had on prim today that let us help drive the volume functionality and implement storage plugins ranging from NFS and iSCSI and fiber channel all the way to new you know sef cluster and even cloud-based storage but most of you probably know like this is how Red Hat works this is what we do we are also able to extend this model to customers probably one of my favorite things about being at Red Hat Amadeus has been a great use case of this because Amadeus last year when we announced OpenShift 3.0 they announced they were betting on the OpenShift 3 platform right when we first started talking about it and the reason was they had been working with us upstream in these communities before we'd even GA the product that collaboration led to capabilities like job support in Kubernetes this is forming the basis of scheduling and eventually batch and basis like workloads in Kubernetes that started with our first requirements with Amadeus OpenShift and OpenStack they're a huge open stack customer they run 
OpenShift on top of it. They didn't just take the products and the integration we provided; they continued to improve them, and naturally became one of our primary contributors to the installation integration we carry there. And then lastly, look at capabilities like sysctl. I'm not sure how many people know about this functionality, but if you're a C++ developer and you use shared memory, at the time, running a C++ application in Kubernetes was a little bit bleeding edge; there weren't a ton of people doing that, and Amadeus had that requirement. We worked with them for over a year to build this into all the various layers of the stack. When we were done, it was a huge benefit to Amadeus, because now they could run their application in this model, but it also ended up being a huge benefit to all the other communities, and a really, really exciting feature. So this was a neat way we were able to take our model, our way of working in the open and open innovation, and extend that to our customers as well.

But before people knew about Kubernetes, before we were talking about it and all excited about it, this really started as a bet for us. Back in history, this was sort of our gamble. When we knew we were going to bet on Docker, and we were having this conversation about what we should do in the orchestration space, a great technical partner of ours, Google, had this idea: they were going to take a lot of their knowledge, because they've been running on containers for a long time, and open source that project. And they really wanted us involved from the beginning, and to bet the next generation of our platform on that technology. Now, looking back, I'm extremely happy we made that gamble. But it doesn't stop at the technology being great; I think Google has built a world-class open source project and ecosystem around this as well. To talk with you a little bit more about that, I'd like to introduce David, who runs product management for Kubernetes at Google. David? Thank you very much, thank you for
the opportunity to come and talk. It's interesting, because we at Google, when we first got this going, had to make almost the exact same bet, but in reverse. When you're starting a project like this, the first people you bring into the project are some of the most critical: they're the ones who help architect it, who set the culture, who set the mentality for how you're going to go about the project, what's in, what's out, and so on. And while we were in the midst of sorting through that, we were also moving a million miles an hour, checking in massive blocks of code and things like that. One of the best early signals we saw when we were building Kubernetes was senior Red Hat engineers pointing out Go hygiene mistakes that we were making. These are people who actually cared not just about the project as a whole, that we were getting the right features in, but that we were also building something that would be maintained over its entire lifetime. Now, they haven't just made sure that we indented properly; they continue to add core features to the product and make huge improvements, whether it's fundamental things like persistent disk, the job support we already talked about, or the huge features coming up around improved support for stateful services that some of the Red Hat engineers are invested in. So it was a big bet at the beginning, but I think unequivocally we can say we made the right one.

As far as the Kubernetes project as a whole is concerned, for those that don't know, it is an open source, production-grade orchestration system built the same way Google does it internally. One of the most fundamental goals of Kubernetes is the ability to run anywhere. It's not just a Google project: it runs on bare metal, on VMware, on Vagrant, on AWS, on DigitalOcean, on Azure, you name it. And most importantly, it allows people to deploy it in the places that make
sense for them, where you can additionally run things on top of it, such as OpenShift. So basically, our goal is to have this very clean platform that lets other people go off and build huge businesses on top of it, exactly as Red Hat has done. We have broad industry support, and we already talked a lot about that; Red Hat is certainly a leader here. As a whole, we've counted up over 233 person-years' worth of contribution, and that is in just twelve months since we GA'd it, technically two years since the first check-in, but a very, very young project to have that much momentum behind it.

In the coming days we will be announcing Kubernetes 1.3, again with enormous help from Red Hat and many, many others in the community. Here are just a few highlights. The first is around scaling: we've doubled the number of nodes supported. On the Google Cloud, on premises, we see people going even higher, but with a 99.95% SLA on Google Container Engine, we support twice as many nodes as we did just three months ago. We've made massive improvements around stateful application support: one of the biggest problems people have when approaching containers today is how you migrate in and manage things like state. Built in as a first-class object is the new PetSet object, which will help you manage state for things like databases and key-value stores. We have a significantly improved local developer experience, so on your laptop, with one command, you can spin up a Kubernetes cluster and begin building and testing just like you would with any other container system. We have automated, integrated cluster scaling, so Kubernetes will watch your cluster, see whether you have pods that need scheduling, and go out and request more CPU; or, if you're not using that CPU anymore, it will naturally scale down as well, making sure you're staying within your efficiency and utilization guidelines. And we have brand new support for container standards: one of the most fundamental things about open source is allowing people
to choose. We'll be supporting Rocket, OCI, and CNI in the box, so you can choose the container system that makes sense for you. We have integrated support for cross-cluster federated services: again, one of the most common things is that people want to set up multiple clusters in multiple availability zones, and bridge on-premises and cloud. Integrated into the box will be federated services, so that without any additional work you'll be able to deploy services and spread your load across those different clusters. And finally, thanks to Red Hat and some amazing work they did porting back work that was in OpenShift, we have a significantly improved identity and authentication management system in the box, allowing you greater control over the way you interact with the cluster. So those are just a few of the things coming in Kubernetes 1.3, due out momentarily. And with that, let me pass it back.

So, it's been a great day, it's been a long day, but if you're interested in learning more, I know some of you have actually registered: we have a code starter tonight to actually get hands-on with some of the technology. If you registered for that, just a reminder, it starts right after this session in the room next door. I hear there are still a couple of seats available, so if you want to try to squeeze into that, you're more than welcome to try next door. There's another thing I want to announce, which is a hackathon the OpenShift team is doing. It's a 12-week online hackathon; it gives you $500 in OpenShift Online credits for the first 200 submitters, and there's over $40,000 in cash prizes you can potentially win. If you're interested in that, you can go to openshift.devpost.com and register for that today, right, today. Also, tomorrow begins the official Red Hat Summit, so everyone go out and have fun tonight, but don't have too much fun, because starting at 8:30 tomorrow morning we have the keynote in Hall D. See all of you there tomorrow, have another great day, and thanks everyone for
being here.