So, I figured I probably have the easier job here today, because I'm going to talk about very nice things: features that are coming to OpenShift. Dan helped me a lot by covering one of them already, which means I have less to say about it. And some of the features I'll only touch on briefly, because my colleague Paul Morie has a dedicated session on them later. The objective here is to introduce you to a few new things we are developing, or thinking about developing, in OpenShift over the next one, two, or three releases. We try to think ahead at most about a year, because we know that things change a lot.

My name is Diogenes Rettori — that's my Twitter handle — and I'm a product manager at Red Hat, responsible for OpenShift. My main areas of responsibility in the product are essentially the things that run on the platform, let's say the application-services-related capabilities of OpenShift, which I'll also talk a little bit about.

We all know that Red Hat decided to acquire CoreOS. Before the deal is actually finished and the transaction is completed, we cannot make any comments about it beyond what's already stated in our blog. Maybe in a few weeks we'll be able to state our plans and our objectives for the technology, but we need the transaction to be completed first. So anything you hear from anyone is probably not true until the transaction is finished — but again, it's a definitive agreement to acquire. What we can say is that we are very, very happy with this decision, and to have awesome engineers who are going to continue to work together on technology they really love.

Now let's get into the interesting things: what we're doing in OpenShift to make it even more awesome. I know many of you here are OpenShift users, OpenShift developers, OpenShift contributors. And since OpenShift has Kubernetes underneath, I think it's valid to also talk a little bit about what's in Kubernetes 1.9, which is coming to OpenShift soon. Kubernetes 1.9 was called a stabilization release, but there are lots of capabilities in it as well. From the community's perspective the goals were: get lots of bug fixes in, get some capabilities to a stable stage, migrate features from alpha to beta or from beta to GA. That was the main target for the 1.9 release.

The Workloads API is probably the big one to move from beta to GA, and I'm especially happy to see StatefulSets there. How many of you know what a StatefulSet is in Kubernetes? About 25% of the room — I think it's worth explaining a little. When Kubernetes was first created, a little over three years ago, the objective was to address mostly cloud-native, mostly ephemeral workloads: things that can go away. You have a container; if that container goes bad, you kill it and wait for a new one to come up. You assume that the container is a cloud-native application that doesn't care much about its hostname, doesn't care much about its identity.
The objective of StatefulSets is to assign an identity to a running container that is maintained across the deployment lifecycle of that container. If for some reason that container dies, when it comes back up, it comes back with the exact same identity. A use case that loves that: databases. Databases love their hostname, databases love their IP, databases love the storage they're attached to. So that is a very important feature coming to GA in Kube 1.9, and to OpenShift as well. Workloads that have a very warm relationship with an identity, with where they are running, can be run more successfully with StatefulSets. Another thing about StatefulSets is that they define an order for when things come up. If before a database you need an agent to come up, or before an application you need a monitor to come up first, StatefulSets let you assign an order of execution for things across the pods in the set. I'll show a small sketch of this after this section.

DaemonSets you probably know: a DaemonSet runs a pod on every node. If you need something running on every node — mostly we see log scrapers that have to run on every node — that's a DaemonSet. And the others are very popular; we already know them.

Windows support is in beta for Kubernetes, with lots of contribution from Microsoft; Red Hat is also involved. This is good because it allows you to have hybrid clusters: clusters where some nodes run Windows containers and some nodes run Linux containers. Remember, a Windows container can only run on Windows, and a Linux container can only run on Linux. So when we say Kubernetes support for Windows, that means a Windows-packaged container running on a Windows operating system.

And for diversity of workloads, the node team and the SIG invested in allowing pods and containers more access to node capabilities — for example CPU pinning, device plugins, and hardware acceleration. If you know a project called KubeVirt, which is targeted at running virtual machines inside Kubernetes: these are all capabilities we expect from virtualization solutions, to be able to pin to a specific CPU or use the device plugins on the host. So that's an important capability as well. And there's lots more.

Any questions about this? Good. These features are going to come in OpenShift 3.9 — we follow that release naming, so OpenShift 3.9 means Kubernetes 1.9.

Now, from a community organizational perspective, the Kubernetes community is growing, and in order for it to grow in a healthy way, some processes have to be established — for example, how you submit a proposal for a new capability. That has now been established as a community norm. So if you have a new idea that you want implemented in Kubernetes, there is somewhat of a process you have to follow, and it will continue to evolve.
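Here's the StatefulSet sketch I promised — a minimal example, assuming a hypothetical PostgreSQL image and a headless Service named `db` created alongside it. The pods come up in order as db-0, db-1, db-2, each keeping its name and its own volume claim across restarts:

```yaml
apiVersion: apps/v1            # Workloads API, GA as of Kubernetes 1.9
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service that gives each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgresql
        image: registry.example.com/postgresql:9.6   # hypothetical image
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/pgsql/data
  volumeClaimTemplates:        # each replica gets its own PVC: data-db-0, data-db-1, ...
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Pods are created sequentially (db-0 before db-1), and a pod that dies comes back with the same name and reattaches to the same volume — exactly the warm relationship with identity that databases want.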
Now, if there is one slide that you'd normally take a picture of, it would be this one. These are the things that we at Red Hat are investing in across the container ecosystem. This is not only OpenShift: this is where Red Hat is putting effort, from a marketing and an engineering perspective, to make OpenShift more successful, and that means investments beyond OpenShift itself. I'm not going to go through all of them, just a few that are especially dear to my heart. One is cloud-native runtimes for OpenShift: Red Hat is providing runtimes for your applications that understand Kubernetes environments. If you want to configure a Java application using ConfigMaps, for example, we have libraries that understand that environment. There's also the Service Catalog and Broker — Paul Morie, I'm not sure if he's here yet, is going to have a session dedicated to the Service Broker and Catalog, so I won't address that. Again, Windows containers. Also a data platform based on Spark. And another very, very nice technology that I have a little more about, called Istio. How many of you are following Istio, or service mesh in general? OK, pretty good. It's good that you're following it, because we have one of the Istio community members here — who happens to be a Red Hat employee — answering questions on Istio on the panel very soon. So thanks for coming.

The big item for us — it was already mentioned earlier — is the one here called cluster operations. We at Red Hat manage a lot of clusters ourselves, let's say close to 100 Kubernetes and OpenShift clusters, for many different purposes. We have our OpenShift Dedicated business, where customers can acquire a managed cluster from us, and we have OpenShift Online, which is also many, many clusters. It's in our interest to make that operation even more automatable. So a technology we are going to invest in over the next year is called the cluster operator. It follows the Kubernetes model: the objective is that you define a state for how you want your cluster to be, and in the same way that you define in Kubernetes the desired state of your application and Kubernetes maintains that state, the cluster operator does the same for the cluster. You describe your cluster, and the cluster operator keeps the cluster in that state. It will also help you with automated upgrades, automated downgrades, and automated addition or removal of nodes. It's going to be a very big project for us — again, to allow our customers to have more automated operation of clusters, and to allow Red Hat itself to become more capable of automating multiple clusters. Of course it will be open source, and of course it will be available to all of you. It will continue to use the Ansible playbooks we already have and interact with them; if you already know the playbooks, they will evolve and likely be broken apart so that the cluster operator can interface with them and do state-based cluster management, the same way we do state-based, declarative application management inside Kubernetes. I'll show a rough sketch of what that could look like after this.

Let me see which of the other items I think are interesting, especially to me — I have a little more detail about this. One of the things we do, of course, is focus on stability.
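To make the declarative idea concrete: the cluster operator's API was still being designed at the time, so this is purely an illustrative sketch — every field name here is hypothetical — of what "describe your cluster and let a controller keep it that way" could look like:

```yaml
# Hypothetical sketch only: the real cluster-operator resource types and
# field names were not finalized at the time of this talk.
apiVersion: clusteroperator.openshift.io/v1alpha1
kind: Cluster
metadata:
  name: prod-east
spec:
  openshiftVersion: "3.9"      # desired version; the operator drives upgrades toward it
  masterCount: 3
  nodeGroups:
  - name: compute
    size: 20                   # add or remove nodes by editing this number
    instanceType: m4.xlarge
  - name: infra
    size: 3
    instanceType: m4.large
```

Just as a Deployment controller adds or removes pods to match `replicas`, the cluster operator would add or remove nodes (driving the existing Ansible playbooks) to match the declared cluster state.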
Just between Kube 1.8 and 1.9 — between the Kube release and our release — we had to fix more than, I think, 180 bugs in Kubernetes that we considered critical. Our work in the community is to chop wood and carry water: fixing bugs, making Kubernetes stable, making Kubernetes consumable in the enterprise. That is a lot of the work we do. And for 3.7, which has launched already, we also moved features to a more stable stage. We learn a lot from our online clusters; we discovered that some of the API calls, the call aggregation, the data coming back from the APIs, tended to have very large payloads. I think it's fair to say we are probably running some of the most diversely dense OpenShift clusters out there, because on OpenShift Online you get all sorts of workloads, from different runtimes, different types of applications — including Bitcoin miners, which we try to shut down, and we do shut down many of them every single day. That's what happens when you put free compute capacity on the internet. And for the OpenShift community, that is the best thing that could happen, because we learn so much from that experience of eating our own dog food, of having OpenShift running there, that as part of the OpenShift development process, things don't go into the product if they haven't been baked in Online first. Whenever you see a release of OpenShift Container Platform, it means those features have been tested and run in OpenShift Online: we know how to operate them, we know how much scale they can handle, and we've fixed lots of security bugs, because we don't want people getting into an OpenShift Online node, stealing our AWS credentials, and going crazy with them. I'd say it's a safeguard, the fact that we run it first — we're willing to shoot ourselves in the foot before you do. And OpenShift Pro, the paid offering of OpenShift Online, is, let's say, the same OpenShift you can get with OpenShift Container Platform, and it's the same thing that's on GitHub as well: when we release OpenShift Container Platform, there's also a GitHub release for it, so we're not afraid of open source.

So again, we made changes to allow pulling content from the API server in a more organized way, which allowed us to scale. Literally hundreds of thousands of containers — that's a lot of metadata the cluster generates, and we need to access that metadata to make smart decisions, for example about where to run those containers. So that is one of them. Also related to the diversity and density of the clusters — and this ties into the next topic. We all know that Prometheus is popular, so we're bringing Prometheus to OpenShift. Before, our focus was on a technology called Hawkular, but I think we've been pretty good at joining successful communities, so we decided to join this very successful community called Prometheus as well, and Prometheus is going to become the supported monitoring technology for OpenShift.
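As a taste of how Prometheus typically hooks into Kubernetes: a common community convention — not a Kubernetes API, and the exact setup OpenShift ships may differ — is to annotate pods so that a Prometheus scrape configuration using Kubernetes service discovery knows what to scrape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"   # convention honored by common Prometheus scrape configs
    prometheus.io/port: "8080"     # port where the app exposes metrics
    prometheus.io/path: "/metrics" # metrics endpoint path
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest   # hypothetical image
    ports:
    - containerPort: 8080
```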
We're already shipping it with the product, but at a tech preview stage, and it's a two-step program: first we want to allow the cluster itself to be monitored using Prometheus, so that a cluster operator or cluster administrator can see the state of a cluster with Prometheus; the next step will be for individual applications to use Prometheus to monitor themselves. Customers have already been doing this, users have already been doing this, but we want to provide a supportable and sustainable path for our users to continue doing it. And with the intent to acquire CoreOS, we'd also gain very bright engineers who already know a lot about Prometheus, so that's another very good advantage.

Auto-scaling has been in OpenShift and Kubernetes for quite some time — I think it first landed in 1.1 or 1.2 — and there have been lots of changes since. With the current version, and this is one of the SIGs that we lead, SIG Autoscaling, led by a colleague of mine back in Boston, we have a custom metrics API to do custom-metrics-based auto-scaling, which is what makes sense. CPU works for maybe 80 to 90% of use cases, but sometimes your application needs to auto-scale based on a business-related metric. For example, you have an SLA, you want HTTP transaction response time, or you want other types of metrics to trigger an auto-scale of your pods. This is available in Kubernetes already and is coming to OpenShift 3.9, through the HPA with custom metrics. There's also, for example, a plugin written for Prometheus where you configure the autoscaler to use a Prometheus query to decide whether something should scale up or down. That's a very powerful scaling tool — I'll show a small sketch of an HPA right after this section. Let me see how much time I have — I have no clock. I have a clock here; it's weird.

Flex volumes — this is again about allowing us to run other types of workloads on OpenShift.

Networking: continued work on the network stack. IPv6 is interesting — some industries, especially the telco industry, require IPv6 heavily; it's been a major ask from telcos, almost a showstopper for that industry, so that's why our investment is there. And continued work on network policy. How many of you here know what network policy is? Good — maybe 5%, so I think it's worth a little explanation. Network policy allows you to have fine-grained control over the network communication that happens inside your cluster. If you have two projects — two namespaces, in Kubernetes terms — and you want to say, "I want a pod in one project to talk to a pod in another project, and only that, with no other network connection between those pods," you can do that via network policies. For example, if you have an application and a database, and you don't want any other application to reach that database except the pod that fronts it, you can express that with network policies — there's an example after this section. There are other ways to do it as well, but this creates protection, granularity, and control at the network level. This was a great contribution from Tigera — they were the ones who first came up with it, if I'm not mistaken — so it's powerful to see the community helping everybody become more successful.
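Here's the HPA sketch I mentioned — a minimal example using the `autoscaling/v2beta1` API, assuming a custom metrics adapter (such as the Prometheus adapter plugin mentioned above) is serving a hypothetical per-pod metric named `http_requests_per_second`:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                     # per-pod custom metric, served by a metrics adapter
    pods:
      metricName: http_requests_per_second   # hypothetical metric name
      targetAverageValue: "500"    # scale out when pods average more than 500 req/s
```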
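And the network policy example I promised: a minimal sketch that locks down a database namespace so only traffic from the `frontend` namespace can reach the database pods. The labels here are assumptions — the namespaces and pods would need these labels applied:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: database            # the namespace (project) being protected
spec:
  podSelector:
    matchLabels:
      app: postgresql            # applies to the database pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend         # only namespaces carrying this label may connect
    ports:
    - protocol: TCP
      port: 5432                 # and only on the database port
```

Once a pod is selected by any policy, all ingress not explicitly allowed is denied — which gives you exactly the "this pod and nothing else" behavior described above.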
Storage. This has been long awaited by some of our customers. A request I got in the early days was: "I have my database, and I want it to run on a node that has an SSD attached, and I want that database to keep running on that node forever — but I want to run it in a container." So you effectively want storage-based pod scheduling: "I want my application to land on nodes that have this specific type of storage, and the storage is locally attached, because the application requires very low latency and fast storage." This is one of the capabilities we are working on; it's still alpha today, moving toward beta and GA, but again, it's the ability to do local-storage-based scheduling of applications — there's a sketch of it below. As you can see, Kubernetes has solved maybe 80 to 90% of use cases, and now we're starting to get into more complex scheduling and execution use cases, to really run every single application out there. Also volume resizing and snapshotting — those are self-explanatory.

Let me move on — this next one is covered later: the service catalog. Paul Morie, a Red Hat engineer, is going to talk a little bit about this; he is the lead for the service catalog work in Kubernetes, and I know he has a very nice demo to show you. But how many of you have actually seen the service catalog or the service broker? Good. Red Hat's objective is for your application catalog — things that can run both inside and outside the platform — to be consumed from the OpenShift catalog. We've helped develop a broker API that allows you to publish applications, again running either inside or outside OpenShift, and trigger their provisioning from inside. If you saw the announcement we made when we launched 3.7 at the beginning of December: AWS announced the AWS Service Broker for OpenShift, which is a way for you to consume AWS services from an OpenShift cluster, and it doesn't matter where that OpenShift cluster is running. You have OpenShift running on-premise, and you want access to an RDS database, an S3 bucket, SNS, or SQS: you can consume that from your local OpenShift cluster. Of course the service itself always runs on the cloud, but the negotiation and creation of the service is done by the service broker, interfacing in that case with CloudFormation templates on AWS — and you don't necessarily have to know any of that. You just go to your OpenShift cluster and say, "I want SNS or SQS." You can pre-configure AWS credentials or supply your credentials at that moment, and then you have a representation of a queue, a topic, or a database inside your local OpenShift cluster that your application can bind to, so that the SNS or SQS credentials and connection information are shared with your application — I'll show what that looks like below. This is powerful, and where I see it going is that eventually this becomes, let's say, the application and service catalog for companies, for things that run both inside and outside the platform.

Now, the investments we are making in this — can I have a little bit of water, please? The one on the left, yes. Thanks, Diane. — we all know that organizations have policies they need to enforce, and so far the services published in the catalog are available to anyone, right?
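Going back to the local-storage scheduling point for a moment: the local volume API was alpha at the time and its shape changed on the way to beta, but roughly the idea is a PersistentVolume pinned to one node, so any pod claiming it gets scheduled onto that node:

```yaml
# Rough sketch of a local PersistentVolume (the API was alpha at the time of
# this talk; the field layout shown here follows the later beta shape).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-ssd-0
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd     # pods claim this class to land on the SSD node
  local:
    path: /mnt/disks/ssd0         # the locally attached SSD on that node
  nodeAffinity:                   # pins the volume (and thus its pods) to one node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node-3"]      # hypothetical node name
```

A database pod with a claim against the `local-ssd` class then always lands on node-3, next to its disk — the "run on the node with my SSD, forever" request from the beginning of this section.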
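And to make the broker flow concrete — a minimal sketch using the service catalog's `servicecatalog.k8s.io/v1beta1` API. The class and plan names (`sqs`, `standard`) are hypothetical stand-ins for whatever a broker such as the AWS Service Broker actually advertises:

```yaml
# Provision the service through the broker...
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: orders-queue
  namespace: my-app
spec:
  clusterServiceClassExternalName: sqs        # hypothetical class name from the broker
  clusterServicePlanExternalName: standard    # hypothetical plan name
---
# ...then bind to it, which lands the credentials in a local Secret.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: orders-queue-binding
  namespace: my-app
spec:
  instanceRef:
    name: orders-queue
  secretName: orders-queue-credentials        # your app mounts or envFrom's this Secret
```

The queue itself lives on AWS, but the Secret with the connection information lives in your local cluster — which is what lets the application bind to it without knowing anything about CloudFormation.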
But I know — and I've done this myself — that you don't necessarily want a production database exposed in a development environment. So the work we're doing upstream now is to create governance around the services that are exposed and available in the catalog. You would say: this user, in this namespace, can see services related to production; this user, in another namespace, can see services related to development or QE. That's the work being done now. We expect to have it done by 3.10, at the end of June if I'm not mistaken — all the usual date disclaimers apply, but we've been pretty stable in our releases. And that's the governance side: if we assume this catalog is going to become the enterprise catalog, we're automatically saying there will be hundreds or thousands of services published in it, used by different groups within the company, so we want governance around that.

On the automation side, we want the same easy experience we had even back in OpenShift 2, where you could just take your application, say "connect to this database," and presto — you didn't need to do anything else; all the credentials and connection information were shared. That will be, let's say, the first step toward automating this. Let me evolve this use case a little with you. You have a Java application that needs to connect to an Oracle database. What does your Java application need? A JDBC driver for Oracle, right? You can include that as part of the build process of your image. But we're also going to invest in binding-based build and deployment triggers, so that we can notify your build process that a binding requires something you might not have yet. If you're going to bind to a service that requires a specific library, we want the build process to be notified. That means that if you're doing your build in OpenShift, the build process can see — perhaps via an image source or a mounted volume — the dependencies that the binding needs. That's the level of automation we want to reach: the platform knows a lot, and you shouldn't need to tell the platform things it already knows.

Install and upgrade: I talked about this already with the cluster operator, which is going to be one of our big investments this year. To do that, it's not only a matter of automating, but also of creating artifacts that allow for easy deployment of nodes. So we're going to be creating golden images for OpenShift nodes, for example, so that you don't need to install RHEL, then install OpenShift, then do something else. You have this very nice image that already has everything, and it connects to a master and pulls its configuration. We ran a few tests already and were able to stand up a 100-node cluster in about seven to eight minutes, where before you would need, let's say, three to four hours. And remember, we need this ourselves for our own Dedicated and Online businesses, so everything we do there is again going to be available to everyone at the exact same time, and it's going to be developed, of course, in the open.
So again, the objective is to facilitate how you stand up OpenShift clusters, how you destroy them, and how you add and remove nodes, with golden images of OpenShift nodes. We also continue to work on CloudForms, the tool we use to manage OpenShift itself — the upstream open source project is named ManageIQ. We're working on letting you have reports that show consumption per specific image: if you need, for example, to charge a group in your organization based on the usage of a specific image that contains a licensed product, so that whoever uses that image has their cost center pay for it, that's coming to CloudForms.

So, guess what the next OpenShift version will be. We launched 3.7; the next one is? 3.9 — you were paying attention, pretty good. So what happens to 3.8? From a product perspective, we're combining the 3.8 and 3.9 releases into one. I'd say "skipping" sounds like a bad thing, right? So we're combining them. The good thing about this is that sometimes people complain that Red Hat is behind, that OpenShift trails the Kube releases. The day we launch OpenShift 3.9, Kube 1.9 will be the current release. So — you were saying we're behind? Not anymore; find something else to say.

And that's it for that part. Among our goals: continue investing a lot in bug fixes and improvements. Now let's talk about the things that I particularly appreciate — I'm jumping slides here real quick — which are the services that run on the platform. This is actually the thing I'm especially passionate about. Red Hat is investing in a serverless technology for OpenShift. Some of the early decisions in this technology pointed to OpenWhisk, but we understand that the market for serverless is still very much in flux; still, we are investing in serverless technology for OpenShift and for Kubernetes. Or we can call it functions-as-a-service — I prefer that to "serverless." There will always be servers, right? They're just not yours.

We also understand that there are many ways you can define an application in Kubernetes today. You can use Helm charts, you can use Kompose if you're coming from Docker, you can use OpenShift templates. A colleague did some research: there are 18 ways to define an application on OpenShift or Kubernetes. And although each of these ways has a reason to exist, we want to try — as with most standards efforts — to come up with a new way that takes the best of all these ways, and the result is 19. No — the objective is to come up, within the community, with a standard way of defining what an application looks like. This is not Red Hat enforcing its own will, but really working with the community. So far there have been a lot of good discussions around the next version of Helm, Helm 3, so I'd say we're trending toward Helm 3 at the moment, even though it only exists on paper, not yet in technology, but that's what we've been thinking about so far. (Some email popping up here — Dan, you need to work on that, okay?)

Okay, and then service mesh. You'll have the opportunity to ask Christian Posta about service mesh, and Michael has prepared very nice questions about service mesh as well.
The objective of a service mesh is to transfer to the platform capabilities that were once available only in the language runtime. For example, if you want to add circuit breaking, fault injection, A/B routing, or any specialized routing, before this you would have to use client-side libraries — for circuit breakers, for instance. The objective of Istio is to allow that to happen at the platform level, so that the platform delivers those capabilities. The way it does that — to get a little technical — is with sidecar containers containing a proxy: the Envoy proxy, created by Lyft, which we also contribute to. All the traffic goes from proxy to proxy, which means the proxy knows where things are going and where they're coming from, and there's a control plane on top where you're able to say: pod A can talk to pod B, and if someone else wants to talk, that's not allowed. Or, for circuit breaking: I tried contacting that application three times and could not, so I open the circuit rather than waiting for the application, a denial of response, or a timeout to come back.

This is another thing we're going to be investing in — actually, we are already investing in it — and the intention is to be able to show this running in production at Red Hat Summit, which leaves about thirteen and a half weeks of development. Pretty tough; we'll get there. It will be available on OpenShift, as something you install on top of OpenShift — it's already available on Kubernetes — but the majority of our work so far has been making sure the capabilities and components in Istio do not require you to escalate privileges in containers. We try to have a security-first mentality. It often involves hard work, but that's what we want to do. If you were to try Istio today, some of its capabilities require you to assign elevated privileges to containers. We don't think that's a good thing, and we can't do it on OpenShift Online — remember, we run it ourselves, so we couldn't just let containers have root access on the host or the node. So these are the things we're going to fix first, and then we'll continue evolving the rest. If any of you is interested in being part of an early adopter program for Istio on OpenShift, please come talk to me directly. Be aware that you really have to be able and willing to use it if you want to be part of the early adopter program. If you just want to see what we're doing, it's going to be available on GitHub anyway.
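For a flavor of what that control-plane configuration looked like at the time — the Istio config API was still alpha then (these `RouteRule` objects were later replaced by `VirtualService`), so treat this as a period sketch with hypothetical service names — here's weighted A/B routing with a dash of fault injection:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-canary
spec:
  destination:
    name: reviews                # hypothetical service receiving the traffic
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 90                   # 90% of traffic stays on the stable version
  - labels:
      version: v2
    weight: 10                   # 10% canaries onto v2
  httpFault:
    delay:
      percent: 10
      fixedDelay: 5s             # inject a 5s delay into 10% of requests
```

None of this requires touching application code: the Envoy sidecars apply the rule on the wire, which is the whole point of moving these capabilities out of client-side libraries and into the platform.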
And that's it — that was pretty fast, I think. Thank you very much. Do I have time for questions?

[Host] Let's do some questions while I set up the lightning talks, folks. If the lightning talk folks want to come line up over here, that would be great. We'll make Liz Rice go first.

[Audience question about Windows support] Come again? So, Windows support is going to be provided by Microsoft itself; Microsoft will have to say that Kubernetes is supported on Windows. We can't claim it. What we can do is integrate with kubelets running on Windows nodes. We don't support Windows — Microsoft supports Windows.

[Audience] Any work on cluster federation?

Yes — the cluster federation project took, let's say, a different spin, a different strategy. They are working today on a smaller, more tightly scoped capability called the Cluster Registry, which is to first have a registry of clusters, and then work on allowing resources to be distributed, made available, and deployed across multiple clusters. Let's say the federation team thought the original approach was too much of an undertaking, so they took a step back: let's have a more focused approach, fix this problem first, and then see where we go.
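For reference, a Cluster Registry entry is deliberately small — roughly like this under the project's `v1alpha1` API, with a hypothetical cluster name and endpoint:

```yaml
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: us-east-prod             # hypothetical cluster name
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - clientCIDR: "0.0.0.0/0"
      serverAddress: "https://us-east-prod.example.com:6443"   # hypothetical endpoint
```

It records where a cluster is, nothing more; distributing workloads across those registered clusters is the later step the federation team deferred.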