I'll go through some of the main themes that helped us really fund and focus on OpenShift. The first theme that hit this entire industry was this DevOps thing going on. And I'm not going to bore you with a speech on DevOps right now. For me, from a product point of view, we're looking for a programmable boundary, if you will, that still facilitates peace between these two warring camps. But when I see DevOps, I personally think of a person. I think of a guy named Steve Hendricks, who was in one of my development teams. Whenever the build server would break down, people would call Steve. And Steve seemed to know how to fix the build server for some reason. He seemed to know how to get to the servers, how to log in, how to increase the memory heap size on applications, and all this other stuff. So this was a developer that knew a little bit about operations. And that's what we wanted. That's what, moving forward as an industry, we want our engineers to be able to do: to blend between these two worlds. And we're looking for a product that helps establish that boundary.

The next theme was this willingness to give developers the keys to the castle. It started probably four years ago, when we as an industry began to allow more risk in what we were pushing to production. Because the markets were moving so quickly around software, around features, around touch points with customers, around CRM systems, around analytics. And we wanted to capitalize on that. We wanted to bring our own business and our own opportunity into that. And so we were looking for ways to allow developers to bring code to production a lot faster. And this became more of a mantra that we had in the departments.

Software is eating the world. We hear this one quite a bit. If you take a step back, there are a lot of companies that weren't traditionally software companies. Look at GE.
If GE were to rebrand itself as a software company, it would be the second largest software company in the world. There are a lot of people producing software. And it's because of a lot of things. It's because of a peer-to-peer economy. It's because of just where we have evolved to in global markets. It's that we have our mobile devices and we want to bring that into our business world. It's the bleeding together of all these technologies, and software is becoming something with which we can really protect ourselves and disrupt other markets, completely turn them on their head. Not only can we disrupt them directly in competition, but we can put something new into the pool and cause a ripple of effects. Look at chewing gum sales, of all things, right? A very classic example of a commodity product that you buy at the shop counter while you're waiting for your checkout. Now people are staring at their mobile phones, they're not looking up at the shelf at this chewing gum, and chewing gum sales go down. So here is software rippling into something that's completely unrelated, where nobody thought there would be an effect.

A lot of grandiose statements in those last three slides, right? So I'm more practical; I'm more of a Darwin believer. I think things evolve over time. And in my career, I've seen a lot of these things evolve. Who knows what an encapsulated root disk is? All right, did anybody ever have to break a mirror on a cluster? No? So when I started in my career, there were a lot of legacy vertical stacks. And what we did is we applied high availability to them. The applications themselves didn't have to have those qualities. And there were a lot of tricks in software around that. How about Y2K? Did anybody have to work midnight at Y2K? No? Nobody had to sit there and help people? So the biggest thing in operations with Y2K wasn't the fact that we had to install some libc patches and some kernel patches.
It was the fact that we had to bring down systems that had been up for the last five years, and this was the first time they were being rebooted. And about 80% of the time they didn't come back up. So you had to become an OS whisperer and go and debug why the thing didn't start back up. Again, this was because applications weren't spanning, they weren't horizontal; they just didn't have these driving factors.

Then we had a lot of big iron suddenly being consolidated and thrown away. I remember sitting in a management meeting where we figured out that our product at the time could be unplugged. It was a large server, not even used. And you would save money by just having it sit there, because it was cheaper to buy a bunch of commodity servers and just throw them away when they broke. Literally not even try to fix them, just throw them away. And this was this new era coming in.

And then we hit these large distributed-system clusters. I remember there was a big one going in in Texas, a big one going in in Alaska, and one in China. And they were so big that they were outstripping Intel's ability to ship servers, just these three locations on planet Earth buying that many servers at the time. Again, we needed this evolution of the programming for those particular applications. At this time, we were also collecting data. People were sitting in front of computers and forms, putting data into computers, and not necessarily doing the big data analytics side of things.

And next, we moved into virtualization, right? For me, the first virtualization to hit was actually kernel virtualization with containers. I flew all over the world and helped people partition a single kernel into many virtual environments; we called them zones at the time. But later in that same span, five years later, I went back to the same customers and moved them to a hardware hypervisor. I moved them to a Xen, KVM, or ESX implementation.
Because the applications themselves were not capable enough to span, and you had all your eggs in one basket on one kernel. DRS got popular. I was in Canary Wharf, and a customer was manipulating the financial market by bringing in a lot of resources at different times in the market. Another good use in that grid age. And then we went into converged infrastructure and software-defined everything, and assemblies, and big data. And at this time, everything else is happening, right? We have source control systems. We had the move from Ant to Maven, the XML explosion, infrastructure as code, all the runtimes and programming languages. So everything's evolving. Everything's taking advantage of lessons learned in the previous time frames.

And the result of what we have right now at Red Hat is an ability to really accelerate this next-generation architecture that was born from all those lessons learned. At any point in our stack, you can come in and you can consume. There's nothing blocking you from just using the Kubernetes and Docker implementation instead of getting into our build management services. There's nothing stopping you from using a just-enough operating system and maybe using somebody else's orchestration. So there are a lot of pieces that you can take advantage of. When you look at this slide, there are probably 30 startups that we compete against in this. This is just a very hot market right now. And Red Hat is offering this to you over the counter in a nice package.

So let's talk shop. Let's zoom in a little further and see where we are. This is our 2016 year. We're right around February, but at the end of January we had a pretty big release. We did some auto scaling. We made sure we were qualified to run on our just-enough operating system, Atomic Host. And we got some dynamic provisioning of storage in that timeframe.
We also revamped our dedicated offering, which now runs on Google Cloud and Amazon Cloud, for anybody interested in that. But right now in February, we're spending a lot of attention on our multi-user credit card purchase environment called OpenShift.com. This is where you log in as an individual or a corporation and you want to bill hourly on a monthly credit card bill. So we're bringing in a lot of features right now. We're going to code freeze that and we're going to launch a beta on it. Right now it's on the version 2 platform. We'll move it to version 3 and start a beta in that March timeframe. And then we'll take a cut of that code and we'll ship it into the data centers and we'll call it OpenShift version 3.2. Also the Atomic Platform comes out at the same time. And then we'll move into our Summit releases, which is typically our biggest release of the year, and we'll go through those features. At Summit this year, we're also going to introduce a standalone registry, a Docker registry, for customers that are interested in just consuming from a registry and talking to it from their laptops with the Docker daemon.

So when you look at that slide, with a lot of things on it, all you have to remember is that there are important dates around April and important dates around June. And since I'm a product manager, we kind of like to just use first half and second half in case we miss dates. So let's just go first half and second half. There are a couple of primary themes. We're very focused on developer experience; that's what you see in the blue. We always want to make sure that we're catering to services; that's what you see in the red. The yellow is really the container and Docker layers, and the green is the core pieces of the product. And I'll go through each one of those. So, has anybody used our product in 3.1? 3.1.1, great.
So we just released 3.1.1, and we added even more functionality to the user interface. When you do a revamp like we did moving from the 2.x product to the 3.x product, it takes a little time to bring the user interface, the browser user interface specifically, up to par. And 3.1.1 really brought it up to par. You have a really nice experience and everything that you want to see as a user of the platform. We're going to continue to build on that. We're going to allow you to load templates instead of just loading images. We're going to allow you to edit these templates. We're going to allow you to display a lot of information about persistent storage volumes. This is all being worked on right now in the user interface team. We also have a concept called service linking, which I'll talk about when we get to the Kubernetes section.

With the developer experience, probably the biggest lesson we've learned since we shipped the product is that as we start really penetrating into large development teams, they all carry with them large investments in lifecycle tooling. Whether that's the GitHub, Subversion, TeamQuest side of that story, or the Jenkins, Bamboo, Atlassian type of conversation. Or maybe it's the do-it-yourself thing that they've grown over the years of pushing software out. But with every customer, I'd say if I talk to 10 customers, I'll get three different integration situations that they want. Some people want to consume Dockerfiles. Some people want to consume source code. Some people still want to consume a WAR or JAR as they move across these life cycles. And we needed to really look at the product and become very versatile on our integration points. So what we're working on right now is to illustrate that to the developer.
So when you log in and you see your projects page and all your applications, we're adding a visualization that helps you see the logical groups that you've called your life cycle: dev, UAT, test, prod. And you can have different integration points. Maybe you pull from Jenkins in one. Maybe you do a source-to-image build in another. Maybe you pull from an Artifactory in the last one. This should all be displayed for you, and you should have different intersection points. That's the short term; we're gonna hit that this year. The second one is that after you do all that work, it'd be nice to step back and call that an alpha build, right? And then a month later, when I bring up another team, he or she can just select that alpha build. And that means it'll be those four life cycles, with Jenkins at the beginning and source-to-image in the middle. So you'll start to be able to play with that.

There are a lot of features around building. Like I said, people sometimes don't want to give us source code; they want to give us binary pushes. So we're really working in those areas. When you start getting into build services, there are a lot of pre-build and post-build actions that you can take. And we want to make sure, with Jenkins in particular, that we ship modules for Jenkins that you can select, that have OpenShift in the name, and that make that connection back to us pretty fluently. Making Jenkins S2I compatible with all the runtimes that we offer. Putting more XML stanzas and JSON stanzas into the templates so you know what you're looking at and what you're doing. Right now, when we build, we'll push that same image, with some of the build artifacts left in it, into production. People want us to separate those two out. So that's another thing that's happening. Another one here: when you give your credentials to something like GitHub or other services, you want to encrypt them.
You want to use the secret framework that's in Kubernetes, and we're not doing that today on the build pods. So we want to bring that into the product.

When you talk about a developer, the developer typically wants to have some experience local to him or herself, on their laptop. And so we have the CDK, the Container Development Kit, coming out in 2016. Right now it's in beta, so we'll come out of beta. This allows you to bring up a bundle that offers you, on a Windows or a Mac, a very clean startup of a Docker image that brings up OpenShift, that allows you to work with templates, and that connects directly to Eclipse for both the Dockerfile and the templates. So this is a great user experience for a single person that wants to test her code out.

So let's get into some of these application services. As you know, we have the bulk of the JBoss middleware product line on the platform today. This is a great competitive advantage in our market, as we can really produce some really complex applications. What's coming in 2016 is the mobile application platform. The first part of that integration is to simply run the Node.js and the Redis components of those applications out on OpenShift. And then you'll still turn out to the public cloud, our FeedHenry acquisition, and interact with it from that point of view. Then later in the 2016 year, we'll integrate more extensively with the user experience and the consoles, and we'll bring it into the on-premise product. EAP has a pretty big release coming up. The last one here, Apiman, is really exciting. As we have a lot of customers getting more and more into microservices, more and more of them find a need to authenticate APIs, to manage APIs in a more sophisticated manner than what the product provides. Bringing Apiman onto the platform and offering it as a service that runs on the platform for people developing is just natural. It's what you need to really exist in a microservices world.
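Going back to the build credentials for a moment: the idea of pulling source-repository credentials out of plain sight and into the secret framework can be pictured with a short manifest. This is only a sketch with hypothetical names (github-creds, myapp-build, the nodejs:4 tag); exact fields vary by release, but the shape of a source secret referenced from a build config looks roughly like this:

```yaml
# Hypothetical sketch: a git-credential Secret wired into an
# OpenShift BuildConfig, rather than credentials in the build spec.
apiVersion: v1
kind: Secret
metadata:
  name: github-creds
type: kubernetes.io/basic-auth
stringData:
  username: builder
  password: not-a-real-token     # placeholder, never commit real tokens
---
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-build
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git
    sourceSecret:
      name: github-creds         # the build pod reads the secret, not plain text
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:4
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```

The point is only the separation of concerns: the credential lives in a Secret object with its own access control, and the build refers to it by name.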
At the same time, this has always been a thorn in a lot of people's sides: how do I stand up an application and then offer authentication for the application that I stood up as a developer? Bringing in Keycloak, with its very easy APIs for developers, you can rapidly allow them to bring up a single sign-on solution that integrates with corporate identity management.

Software Collections is how we get our software, our runtimes, our application frameworks. And we do it this way because it allows our Platform as a Service to fully support everything that we offer. A lot of the competitors in this market will support maybe four runtimes and application frameworks, but then they'll show you a list of 20 and ask you to go out to those open source communities and maintain them and keep them up to date. We offer quite a breadth of application runtimes and frameworks, and this is a huge competitive advantage, as we can handle CVEs in a more fluent way. When you look at how we're offering updates, it's very easy for the platform to know what's deployed where. In the YUM days, you would have a YUM server, you would go and push out your updates, and you would have to make sure that your application instances could tolerate one of them being down. Now the platform takes care of this for you. So when there's a new update, when Red Hat comes out with a new Node.js image, maybe with a security fix in it, you put that image into the internal registry. There are policies that you manage, and you can turn them on or off, but what we'll do is figure out where Node.js is deployed, and we'll go ahead and roll it out in a rolling manner for all those applications, doing canaries, and if that deployment config fails, rolling it back. Now, you can very granularly decide whether you want to do that or not. It's the same trigger function that we're using for developers' code.
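That trigger mechanism can be sketched as a deployment config. All names here are hypothetical and field names vary by release; the point is that an image-change trigger watches an image stream tag and drives a rolling redeploy when a new image (say, one with a security fix) lands in the registry:

```yaml
# Hypothetical sketch of an OpenShift DeploymentConfig with an
# image-change trigger driving a rolling update.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 3
  strategy:
    type: Rolling              # replace pods gradually, canary-style
  triggers:
  - type: ImageChange          # fires when the tag below gets a new image
    imageChangeParams:
      automatic: true          # the on/off policy mentioned above
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: nodejs:latest
  - type: ConfigChange
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: ' '             # resolved by the trigger at deploy time
        ports:
        - containerPort: 8080
```

Whether the new image comes from a developer's code commit or from Red Hat shipping a patched base image, it is the same trigger path either way.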
So when a developer has a code commit and it goes back and builds a new image, a new layer, and pushes that out, it's the same trigger, the same concept that we're using. We're just reusing it in an operations use case in this manner. So those all will get updates. The big one here, because of the mobile platform, is that we're moving over to the fresher Node.js, the 4.x; right now we're on 0.10.

So let's go over the core components. This is where the bulk of the R&D is being spent in 2016. There are two major projects. The biggest one is our pipeline. It has some user interface instrumentation built around it, but there's also some build config stuff we're manipulating. That's taking a lot of R&D. The other R&D one is service linking, which I'll get to in the Kubernetes section.

So let's talk about the enterprise registry. People love some of the features that we're offering in the product with our internal registry. They like the fact that they can prune. They like the fact that they can say, every 30 days, go ahead and cut out some of the stuff that hasn't been deployed. If you're doing that, that means you can ask the registry what has not been deployed. And a lot of commercial standalone offerings don't have those features yet. We do; we have those in the product. So people have wanted us to expose that and push that out. We also have an image stream concept, where I can do a lot of code commits back, and it'll cause more and more Docker layers to be tracked. And then I can have somebody else partner with me. Grant Shipley, say, can be policy-connected to my image stream. Maybe he's in QE and I'm in development. So when I go and do a code commit, he gets the same layers for his images or his projects. These are all concepts that we had in our internal registry. And we wanna expose that. We wanna allow a customer to make that the first decision point: just install the registry, start playing around with Docker on laptops.
You need something to pull from, a single point at the customer site. To do that, we need a user interface. Somebody has to log into it. Somebody has to see his or her images. Somebody has to have quota displayed. Somebody has to see layer connections, little drop-downs and children displayed in that user interface. So there's a lot of work in those areas, around importing and exporting, the classic things that you would expect. Now, on the technical side, one of the hard things is that we wanna make sure that later you can make a decision to install Atomic Platform or OpenShift. So we have to be able to discover the registry and bring it into the fold in a very easy way.

Storage is one of our competitive advantages because we offer stateful applications. To do so, you need persistent storage. If you're doing that with containers, it has to be remote. And there are a lot of storage APIs out there, right? There's iSCSI, there's Fibre Channel, there's file, there's block, there's all sorts of fun stuff. We offer this as a company and it's one of our golden features, if you will. One of the things that we're bringing into it, though, is dynamic provisioning across the board. Right now, with the bulk of these storage providers, you have to create volumes as a storage admin and then register those with Kubernetes. It'd be nice if you didn't have to create those volumes beforehand, if there was just an API where you could ask for them and build them on the fly. So that's what comes in now. The other thing here is that a lot of labeling features are coming in, and there are, I'd say, two major ones. The first one is when you're using persistent storage, think of Amazon Elastic Block Storage. It'd be nice to log into the Amazon interface and see tags on the Elastic Block Storage volumes that make sense to the OpenShift environment. Then on the flip side, log into OpenShift and see tags on the PVs, the persistent volumes, that tie back to something in the Amazon infrastructure.
So we're starting to add that layering and that tagging in. Where we're using the same label concept is with the users. Sometimes the cloud provider could be NFS. So somebody raises their hand and says, I want an NFS volume. But right now there's no way to say, I want an NFS volume on the 10-gigabit network and not one on the 1-gigabit network. So we want to be able to bring those labels in: this is a gold, this is a silver, this is a bronze level of storage. When you attach these persistent volumes to the actual running kernel, there are different things people want to do. Some people want to fsck the volume first; some people don't. Some people want to grow a file system; some people don't. So we're doing an extensible interface down at that level.

Unusually, at Red Hat our storage group is also born from a lot of engineers from the high performance computing group that used to be at Red Hat. And we're asking them to really grab on to big data and to start looking at templates and concepts. And the first thing they chased out of the system was that when you do persistent volumes, you want different functionality, depending on the application, in how things get mounted when the application grows. If it's a Cassandra cluster, or MySQL or Postgres, maybe when a new instance is added I want a new persistent volume instead of sharing the same persistent volume. Or if it's an Oracle RAC database, maybe I want to share the same one. So there are a lot of different storage mechanisms that come into play here, and they're chasing those out of the system for us. This last one is a pretty exciting one. We're going to take GlusterFS and Ceph and start running them on Kubernetes. The first one's going to be GlusterFS. I think they put out a container this month, actually. So that's definitely in play. Now, when you start telling the platform to expose storage, that means you have to create a user experience around that.
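The gold/silver/bronze idea maps naturally onto labels and selectors. As a hedged sketch with made-up names (pv-gold-0001, nfs-fast.example.com), not the shipped feature: an admin labels volumes by tier, and a user's claim selects a tier by label rather than naming a specific volume.

```yaml
# Hypothetical sketch: tiered storage via labels.
# The admin pre-creates and labels the volume...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gold-0001
  labels:
    tier: gold                 # e.g. NFS on the 10-gigabit network
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs-fast.example.com
    path: /exports/vol0001
---
# ...and the user asks for "gold" without knowing which volume that is.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      tier: gold
```

Dynamic provisioning takes this one step further: instead of the admin pre-creating pv-gold-0001, the platform would create a matching volume on demand.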
There have to be commands to register this storage, to ask for the storage. So we have this group creating some of these interfaces. And this is hyperconverged: this is running storage on the PaaS itself instead of outside of the PaaS.

Networking. We're asking a lot from our networking team, and there are two main areas that they're working on. The first one is the router, right? In the 2.x concept, every application had its own HAProxy. People typically didn't like that implementation because it ended up with thousands of HAProxies. When we moved to 3.0, we decided on one routing tier, one HAProxy routing tier. It runs on Kubernetes, in pods and containers. It's highly available. It has a floating VIP. It's really, really sexy. Now, the problem is that in some industries, when you have a lot of tenants on the platform, due to regulatory requirements you might not be able to run them through the same proxy, through the same routing tier. Maybe you'd be forced legally to have different routing tiers for different tenants. So now we're back to this world of adding more of these HAProxies onto the platform. Right off the bat, you'd want to be able to have a proxy per project, and to decide for that project that it gets its own routing tier instead of going through the platform routing tier. So that's one of the things we're working on. In the same area, if you're in a project and you have different applications, which have different services, sometimes the timeout value is different. Sometimes how you want HAProxy to handle that particular application is different. So we're annotating services and having those annotations cause the HAProxy layer to behave differently. So that's another thing we're asking them to work on. Another big one is for our largest customers that have a lot of firewalls involved in their deployment.
When they have tenants in projects, they want that tenant in that project to use the same IP address, the source IP address, on their traffic outbound. This is so they can write very easy firewall rules to block that tenant or not block that tenant as they go out to Oracle databases and CRM systems and things of that nature. Right now we're exposing the source IP address of the virtual machine or the bare metal hardware that we're on. We want to stop doing that, and that means we have to work on our egress and our ingress implementation. We want to enable non-standard ports, non-80/443, off of the router for the projects and services that are running. This one is getting hot with the Internet of Things. Typically people are okay with us putting an SNI router on non-HTTP TCP-based traffic. But when you start talking to refrigerators and pencils and things of that nature, they don't typically understand SNI. And so we have to have a classic TCP router that can handle different protocols. This last one, CNI: CoreOS implemented CNI, and the Kubernetes community grabbed onto it. This is an interface for network plugins. When you're using Kubernetes, you're using a software-defined network, and whose software-defined network you use is up to you. We ship an Open vSwitch implementation right out of the box, as we can't force our customers to go buy a software-defined network. But we have partners that are writing to that plugin interface. We have Nuage, we have Cisco, we have Juniper. There's just a lot of energy in this area, even at VMware. But it's difficult, very painful, to write to the plugin interface that exists today. And these CNI APIs really make that an easier project.

Oh, idling. So V2 was very popular with idling. In fact, idling is how we're able to keep the lights on and give every man, woman, and dog on planet Earth three free gears on the platform.
Because when you start these gears, these containers, today, you can walk away, right? And we'll idle them. We'll shut them down. We'll save on our Amazon Web Services bill. We don't have that yet in the 3.x platform. Because when you take a step back, what we've done to an application in 3.x is we've sort of shattered it. We allowed you to have routes and services and pods and containers, and a lot more control. We do that so you can make more complex topologies with microservices. But now when we wanna go and idle something, we've got a lot of pieces to grab onto and idle and bring down at the same time. So it took a little while for us to get it. It'll come in in this timeframe. It'll be a great feature for us. I'd say that's the bulk of what they're working on.

Logs and metrics. So this was another exciting thing that came in around November of last year. We offer a full ELK stack; we use Fluentd instead of Logstash on that side of it. But what we're asking the engineers to do is move to a common logging installation. We have components of the ELK stack in OpenStack, we have it in OpenShift, and CloudForms wants to get into it. So we're pulling back as a company and creating a common logging solution that customers will be able to install. And then our platforms will be able to discover which one and pull from that consistent logging. In this area, we're gonna bring in a message bus instead of using Fluentd. So that's another evolution that you'll see. A lot of excitement in those areas.

On the metrics side, if you're unfamiliar with metrics in Kubernetes or OpenShift in general, what you end up doing is you start at the lowest level, cgroups, right? Cgroups have been around for a while, and you can get a lot of information from them. So you pull all this data from cgroups. That goes into cAdvisor, which is running on the node. cAdvisor feeds up to Heapster.
Heapster is an API that you can ask Kubernetes questions through, but it's a live API; it doesn't store anything. So we had to implement a storage solution, and we're using Hawkular with Cassandra under the covers. And so now we have Cassandra storing all these metrics. We're gonna add in some more file system and network metrics for you to start pulling out.

Container enhancements: we're gonna clean up the overlay file system implementation around this time. We'll have Docker 1.10. When you get 1.10, you get user namespaces. This is your ability to run things as root and then map that back to a non-root user in the kernel. It's in R&D right now. So we don't know if it'll hit 3.3, which is the second half of the year around the Summit timeframe, but it's something we're definitely moving towards.

The big ones on the Kubernetes side are, I would say, around scheduling. We added overcommitment to the platform. With overcommitment, you can have people oversubscribing CPU and memory cycles out on the box. What isn't really clean right now is that when you're in an overcommitted situation, we'll try to restart the pod on the same node, and it'd be nice if instead we started it on a different node. So that comes in in this timeframe. We also get some more auto scaling. Auto scaling is CPU and memory based right now. It'll be in tech preview, but it'll allow you to pick anything that we're tracking through that metrics interface. The PetSet is really another controller. There are a couple of controllers in the product: there's a replication controller, there's a job controller, there's a daemon set controller, and now there's gonna be this PetSet controller. This will stay in R&D probably for 2016, but it's eating up a lot of cycles. Where you would need this is if you have an application that, depending on how it scales, has to ask other instances questions. Maybe when I add another instance, that instance has to ask the other instances what their identities are.
Maybe it has to read from a shared file system. There are little nuances in how different applications work, and we want to be able to contain that and hold that in the controller.

Service linking is config data; it's now called ConfigMap. The problem with the implementation right now is when you have a lot of variety in the runtimes. Let's say you have EAP and Mongo, maybe you have EAP and MySQL, maybe you have Node.js and MySQL. You end up with a lot of combinations of templates. And it'd be nice if you just had Mongo, MySQL, PHP, and allowed people to just grab things ad hoc. And to do that with an immutable infrastructure is quite difficult. So what we've done here with config data is take all these environment variables and config files that you would have had to inject into the running immutable image, and we're gonna hold them in the Kubernetes services layer and the secrets layer. And we'll inject that into the deployment as we're deploying. And Kubernetes will maintain that and hold that configuration data for us.

Platform: the job controller. This is short-lived jobs. Kubernetes doesn't ever want to let anything die. If you launch an application on Kubernetes, it'll restart it. It will always want to restart it; that's what it's doing in life. And sometimes you don't want that. Sometimes you want to just launch something and you expect it to die, because it's a short batch job, right? It brings back the data and it's done. So you can launch that with the job controller instead of the replication controller. And that's a nice feature coming in. Whenever you want to use a cert, we ask you to use a secret. And to use a secret, you need to use a service account. This is very confusing to people, and they typically forget to create these service accounts. We need a better path. We need things to happen by default as you select what you select from the catalog. So we're gonna start automating that a little bit better.
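The config-data mechanism described a moment ago can be sketched in manifest form. The names here (mysql-conn, php-frontend) are made up for illustration; the point is that the environment variables come from the platform at deploy time rather than being baked into the immutable image, so one PHP image can be linked to any database ad hoc.

```yaml
# Hypothetical sketch: configuration held by the platform (ConfigMap)
# and injected into the container at deploy time.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-conn
data:
  MYSQL_HOST: mysql.myproject.svc
  MYSQL_DATABASE: orders
---
apiVersion: v1
kind: Pod
metadata:
  name: php-frontend
spec:
  containers:
  - name: php
    image: php:5.6
    env:
    - name: MYSQL_HOST
      valueFrom:
        configMapKeyRef:       # injected by Kubernetes, not baked into the image
          name: mysql-conn
          key: MYSQL_HOST
    - name: MYSQL_DATABASE
      valueFrom:
        configMapKeyRef:
          name: mysql-conn
          key: MYSQL_DATABASE
```

Credentials would follow the same pattern through the secrets layer; only the non-sensitive connection data belongs in the ConfigMap.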
People have wanted another logical grouping on top of projects. So maybe I have a department called finance, and under finance I have a bunch of development projects. So that's another logical grouping layer that we'll have on top.

The Atomic registry: we have to discover it and bring it in and marry our authentication to it once it's in a standalone situation. Installation will get rolling upgrades; we'll do that a little bit better than we are today. We released a containerized deployment a couple of weeks ago, so now the whole product is offered as containers. Our routing is a container, our agents are containers, everything is in a container. You can still use RPMs if you wanna go down that path. But now we have to change a little bit how we do our HA installer out of the box. We have to clean up that logging and that metrics Ansible playbook, bring that back into the central install use case, if you will.

Scale: right now we're at about 500 nodes with around 50 pods on each node. So we'll go to 1,000, probably with 100 to 150 pods on each node. And then we'll start putting out a lot of reference implementations that you see there. A lot with CloudForms. If you're unfamiliar with CloudForms, there are a lot of providers: you'll use a provider for AWS, a provider for OpenStack, a provider for OpenShift. The power of this is when you start getting these attributes from these providers and you start connecting them, right? You start seeing that this storage is actually connected to this application. When you get an error in the storage, you can see it being surfaced up through that application layer.

So what does it all mean, right? I mean, that was a lot of nuts and bolts, and it's easy to forget why we originally wanted this as users of IT. For me, I realized what Platform as a Service was about probably three years ago. I was looking at different implementations for a competitor and I hit OpenShift.com. And it was mind-boggling.
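For reference, the advanced install being described here is driven by an Ansible inventory file. A minimal sketch of what an openshift-ansible HA inventory of that era looked like follows; the host names are invented and the variable set is abbreviated, so treat it as an illustration rather than a working install.

```ini
; Hypothetical hosts file for the openshift-ansible advanced installer.
; Host names are invented; the exact variables depend on the release.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

[masters]
master1.example.com
master2.example.com
master3.example.com

[etcd]
master[1:3].example.com

[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary'}"
node2.example.com openshift_node_labels="{'region': 'primary'}"
```

Pulling the logging and metrics playbooks into this same central inventory-driven flow is the cleanup being described in the talk.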
Very quickly, I could get an application up. And as I was working for a different company at the time, it was like, holy cow, they figured it out, right? Those zealots over in open source land: the only thing that was supposedly holding them back was the fact that it was harder to use open source technologies, right? It wasn't a clean user experience; you had to put more effort into it. But here you were, out on the platform, just grabbing Node.js, putting your code in it, bringing it up. And they had this application gallery that had just the most fantastic things, that like 18-year-olds in China were doing, and like seven-year-olds in Chicago. You got to see people just experiencing technology. And I walked out of my current job and I walked into Red Hat, and I had to be a part of that.

And why I had to be a part of it was really: what happens when you give a man or a woman the right tools at the right time, right? What happens? Art happens, right? You see something that you've never, ever seen before. And this is the new artist: developers. And this is what Platform as a Service is, right? This is us giving a developer the right tool at the right time, and we're able to see something we've never seen before.

My name is Mike Barrett, it's been a pleasure. I have about three minutes. Does anybody have any questions? Yes.

[Audience question, partially inaudible: the attendee develops against the public OpenShift Online because, as a developer without their own servers, they like the product, and asks about developing against OpenShift version 3 online.]

Yeah. So I mean you have two options today, right? One option is V2. Another option is OpenShift Dedicated. And OpenShift Dedicated means you get a full OpenShift version 3 environment, four nodes, some high-class storage.
You're the only person on that, so it's just your tenancy, if you will. And around March, the beginning of April, we'll start a beta program for the online with version 3. Now what that means in beta is every 30 days we'll just blow away everybody's environment. So we'll allow a couple thousand people on the platform, every 30 days we'll blow it away, we'll let a different couple thousand people on the platform, and we'll do that until we flush any imperfections out of our implementation, and then we'll go live. The things that are connected to it, when you look at a company like Red Hat, include a financial system, right? These are credit card purchases. It goes through a different financial system and then our purchase ordering system. So we have to rewrite how that's been implemented, and it takes us a while to bring that over.

Any other questions? Well, I hope you all grab on to these drops. I gave you the milestones of April and June; about three weeks before then you'll pick up Origin drops if you're in the open source community. So we'll keep in touch out there. That's it, thanks.