So what I'd like to do now is the Q&A. If I could get the PMs and the folks who spoke today from Red Hat to come up, and any of the engineers who are willing to testify and talk. We're going to let Joe... yeah, we'll turn on all the mics. I'm going to take two of them, hand one to a colleague, and we'll go up and down and answer. I know this is between you and the beers, but please, this is your opportunity to ask them any questions that might have come up today. And I can feel the stage moving with the weight of the knowledge that's here. So I'm going to put one down here, because I know you guys talk a lot, and you guys are good. And I see that you have one and I have one. So who is our first guinea pig? All right, actually, before we start, I want to give a big round of applause to all of our customer and partner presenters today. Amazing job. And also a huge round of applause to Diane Mueller and the team that put this event together. So my name is Joe Fernandes, at Joefern1 on Twitter. If you've been following me, you know I've been live-tweeting the event. I joked with somebody that I feel like Donald Trump at an impeachment hearing. Too soon, too soon. I joined Red Hat eight years ago. I was the product manager for OpenShift 1.0 and 2.0, and, no offense to the BroadCon guys, it wasn't that bad. But with OpenShift 3, we made a huge bet on Kubernetes. Obviously, you've seen that we've continued that into OpenShift 4, enabling all sorts of new workloads and new capabilities. With OpenShift 4, we made two new bets. We made a bet on an entirely new way to manage the platform, using operators and machine controllers and RHEL CoreOS and all the great stuff you heard about this morning. And then we made another bet: a huge bet on bringing a whole new set of services to empower developers and DevOps folks, things like Istio and Knative and Tekton and all the operator-backed services from our partners and so forth. So we're going to continue these bets.
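As an aside, since operators come up throughout this Q&A: the pattern underneath an operator is just a reconcile loop that converges observed state toward a declared spec. A minimal, purely illustrative sketch of that loop; nothing here is real operator or API-server code:

```python
# Minimal sketch of the operator/controller pattern: reconcile observed
# state toward the desired state declared in an object's spec.
# Everything here is illustrative; real operators watch the API server.

def reconcile(desired_replicas: int, actual_replicas: int) -> list[str]:
    """Return the actions needed to converge actual state on desired state."""
    if actual_replicas < desired_replicas:
        return ["create-pod"] * (desired_replicas - actual_replicas)
    if actual_replicas > desired_replicas:
        return ["delete-pod"] * (actual_replicas - desired_replicas)
    return []  # already converged; nothing to do

# One pass of the control loop over a toy cluster state.
cluster = {"spec": {"replicas": 3}, "status": {"replicas": 1}}
actions = reconcile(cluster["spec"]["replicas"], cluster["status"]["replicas"])
print(actions)  # ['create-pod', 'create-pod']
```

A real operator runs this loop continuously against live cluster state, which is what makes the "operators manage the platform" approach described above self-healing.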
Hopefully, you learned a bit about that today. If you want to learn more, we'll be doing some sessions over at the conference, and obviously, come to OpenShift.com. But at this point, we just want to open it up for questions. We have many of our product managers and some of the engineers and others on the stage. So if you have questions, just raise your hand, grab a mic, and we will get it to the right person. OK. For Jeremy Ader, I was... there you go. Hi. I enjoyed your talk. Big fan of operators. I was paying particular attention to the part where you were telling me about the API for creating clusters. You showed us the CLI, actually, not the API, but I assume it looks like a traditional RPC-oriented API. You can use curl instead of the command-line tool. Yeah. OK. So we can use curl to interact with Red Hat. It seems to me you have snatched defeat from the jaws of victory. Why does the client not simply use kubectl or whatever to create a Kubernetes API object to define what he wants? Some people have thought along those lines, including a kubectl plugin, but there's nothing productized at this point. So it may be possible, but it's just not there yet. Thank you. For using OpenShift Container Platform in a public cloud, is there anything coming down the wire that would allow me to run, install, and manage the cluster myself, yet pay for it by the hour? As opposed to having, I think in my licensing talks with Red Hat, to pay for my peak usage for the entire year. So you want to install and manage OpenShift clusters yourself, but pay for it by the hour on any of the public clouds? Right. That'd be easier to sell. So like a consumption-based model, yeah. So we have done work around an hourly or consumption-based pricing model that could enable that. We don't have that today. A lot of what we needed to do was enable some of the metering that would allow you to actually do that more effectively, with the metering operator that you saw, which I think came out in 4.2.
That provides some of the underpinnings of that. But I think it's an interesting model. Also, with OpenShift 4, a lot of what we did with bootstrapping clusters and really tying into the different cloud provider infrastructures and interfaces was to make that whole process of bootstrapping and self-managing, or making the platform more self-managing, which is key. But we should probably talk later a little bit more about exactly what you're looking for. So nothing right away, but certainly something that we're thinking about. The only thing I would add is, as was mentioned, we need to get the technology in place. And starting in December, there's a new SaaS service that will be up at cloud.redhat.com called cost management, and you'll be able to get more visibility into the cost of your cluster. Once we have that in place, we can turn on the back end, which is our procurement process. And that's a whole other ball of yarn. A question around the service mesh and the Istio product set. When Steve was talking about extending Istio into the VM space, is that part of the supported product roadmap, as opposed to upstream Istio? Or is the Istio service mesh delivered through OpenShift constrained to Kubernetes? So I guess I was looking for Brian. He's not here. So the question was whether we're planning to support a service mesh and Istio outside of a container-based environment. Right now, we don't have plans for that, right? So OpenShift Service Mesh, which is generally available now, by the way; go out and you can use it with OpenShift 4.2. Right now, it's included as a service with OpenShift, so we don't actually sell it separately. We also don't support it today outside of a Kubernetes-based environment through OpenShift. There are some interesting use cases, obviously, for service mesh with virtual-machine-based environments and other non-Kubernetes environments.
It's not in our product plans to offer that as supported today, but it's something that we do contribute to through the work being done upstream that Steven and Brian talked about earlier today. All right. Oh, I'll walk over here. There, go ahead. Go for there. There's a lot of interest in KubeVirt. What's your position on that? We've got our KubeVirt guy. Our position on KubeVirt is: in OpenShift 4, on the documentation links, you'll see down at the bottom container-native virtualization, which is our productized variant of KubeVirt. KubeVirt, for those who don't know, allows me to deploy, manage, and run virtual machines on an OpenShift cluster, particularly a bare metal OpenShift cluster. It is still in technology preview right now, but certainly on 4.2, you can go out and try it today. And we'd encourage people to do that. And on Twitter, I'm at access Gordon. Would love to hear feedback. Yeah, KubeVirt is one of the projects that I'm most excited about. Again, a lot of people who run OpenShift and Kubernetes today run it in a virtualized environment. As you heard this morning and in some talks this afternoon, OpenShift runs great on bare metal. I think it's the best way to run Kubernetes. But when you run on bare metal, what do you do with the workloads that still run in VMs? Well, you bring those VMs to Kubernetes instead of bringing Kubernetes to VMs. And so that's the idea behind KubeVirt: running a mix of container and VM-based workloads, all managed with a Kubernetes control plane on a shared platform. It's very cool. It's in developer preview right now. It's called OpenShift Container-Native Virtualization. So hopefully everybody here who's interested can go and try that out. Right here. So with the announcement of OKD 4, in terms of feature set, I guess when you guys go to release a GA, what's the target when you compare it to enterprise, or OCP, in terms of features? So with OpenShift, everything in OpenShift 4 has been open sourced the whole time.
Our current thought is that pretty much everything that's in OCP, in the core platform, would be part of OKD. There are a few components that don't run on anything except RHEL CoreOS right now, some of the work around the bare metal stuff. Those are all roadmap items, to go make sure that those work well with the Metal³ upstream, but it'll take us some time. There are a number of operators above that, where the community versions should work on top. We haven't quite gotten to that point yet. I think the focus for the next two, three months is going to be stabilizing, making sure we have a good repeatable dev cycle, making sure that folks in the community who want to contribute are able to do so. And then I suspect, as we get into early next year, there'll be some bigger discussions in the community. We have a roadmap that was set out in August or so, where we're trying to break this down into chunks and make sure that we have a good repeatable dev workflow. But there's no belief, I think, that we would hold anything back or change it. It would just be: are those components well integrated with OKD or not? And that's up to those upstream communities. Yeah, I think some people think of it as, well, isn't OKD the upstream for OpenShift? I mean, the upstream for OpenShift is ultimately Kubernetes, much like the upstream for RHEL is Linux itself. And so we do all of our work in Kubernetes first and then build stuff around that to manage the clusters. But OKD is important for many of our community members. It's sort of the pre-commercial distribution of OpenShift. We had some work to do, mainly on the Linux side; we always blame the Linux guys. But we had to get a non-commercial version of RHEL CoreOS, which is the Fedora CoreOS that OKD is based on, and it's out there today. And as it was in 3.x, it will iterate and also be the place where we try out some new things before they come into a beta or dev preview in the commercial offering.
Yeah, and I'll note that, especially on the Fedora CoreOS side, Fedora CoreOS is driven by the Fedora community. It's much more accelerated than RHEL CoreOS is. RHEL CoreOS is actually co-designed and co-developed to be on the exact OCP life cycles. We're just starting to go through the discussion of what that'll look like in the Fedora community and with OKD. But I would expect it to move more quickly in some areas. And that may mean that that changes how we release OKD, based on things like cgroups v2, which is now in Fedora 31, or Fedora 30. I don't know which Fedora we're on; I can't tell. And so that's sort of the progression. OKD will probably go through periods where we take on things in the upstream communities that aren't fully baked yet, because they're such large changes. And we haven't yet hit that first one, but it'll probably be the cgroups work. Hi. Again, yeah, just a quick one. When can we see support for CSI snapshots and the drivers? Say it again. The CSI snapshot? CSI snapshot. Mike Barrett? So we're hoping to bring CSI snapshots in the 4.4 release, which would be the March-April time frame. All right. Yeah, thanks. So with CSI, Kubernetes is going through this project of taking the in-tree storage providers and breaking them out into a Container Storage Interface implementation. So a lot of the work is taking the existing in-tree drivers, like iSCSI, Fibre Channel, Elastic Block Store, Google Compute Engine, all the storage that we support in Kubernetes, and re-instrumenting them on the CSI interface. And a lot of the ISV vendors here have also ported to the CSI interface as well. And we will have the dev preview available with 4.3. So if you want to just play around with it and see the APIs, you can do that. Yeah, so that's OpenShift Container Storage 4.3. Sorry, 4.3, right? OpenShift, OCP 4.3, does have the dev preview of the snapshot and clone and restore APIs. And the nightlies of 4.3 are already out there.
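As an aside for readers who want to see what the snapshot and restore APIs being discussed here look like: a CSI snapshot is requested declaratively, and a restore is just a new PVC whose dataSource points at the snapshot. A minimal sketch in Python dict form; the names, the size, and the API version are illustrative, and the exact schema has shifted between the alpha and beta snapshot APIs:

```python
# Sketch of the CSI snapshot workflow: snapshot an existing PVC, then
# restore by creating a new PVC sourced from that snapshot.
# Names and sizes are illustrative; this follows the beta-era schema.

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",  # version varies by release
    "kind": "VolumeSnapshot",
    "metadata": {"name": "data-snap"},
    "spec": {"source": {"persistentVolumeClaimName": "data-pvc"}},
}

restore = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data-restored"},
    "spec": {
        # dataSource is what turns an ordinary PVC into a restore request.
        "dataSource": {
            "name": "data-snap",
            "kind": "VolumeSnapshot",
            "apiGroup": "snapshot.storage.k8s.io",
        },
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# The restore must reference the snapshot it was taken from.
print(restore["spec"]["dataSource"]["name"])  # data-snap
```

In practice you would apply these manifests as YAML with kubectl or oc; the point is that both operations are plain Kubernetes objects handled by the CSI driver, which is what the re-instrumentation described above enables.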
I actually think the basic enablement is in. So you can try a 4.3 nightly, and you should be able to access that. Yeah, this is actually something I wanted to mention to everybody. So one of the cool new things we introduced with OpenShift 4.x is the availability of something called OpenShift nightlies. OpenShift nightlies, once we start publishing them, are essentially early views of the next release. So right now, OpenShift 4.2 is generally available to everybody and fully supported by Red Hat, but I think we've already begun publishing the OpenShift 4.3 nightlies. So if you ever want to look ahead to the next release, some of the things we're working on, or if there are some features that you were hoping to test out, try out those nightlies. And then OpenShift 4.3 general availability will come, looking like, early January. So this is actually a really important point for us. As part of the health monitoring program, when folks opt in to that service, you're actually sending back the versions, and I've seen that quite a few people try out nightlies ahead of the GA version. We can actually do more to make that communication clearer, if you'd like; that sort of feedback is really important to us. So right now, the 4.3 nightlies are in a reasonably feature-complete state. And so if there's something that you're excited about in OpenShift 4.3, it should be available now in the nightlies. And if you try them out in your environments and you hit issues, you can report them as bugs; or if they cause stability issues with the new configurations that might be there (say you turn on CSI drivers and it causes your test cluster to crash), that's feedback that will come back through the health monitoring service. And that helps us prepare for the release, because there's a lot of variety in customer and test configurations, and we don't always have the exact info about how you'd like to use that.
And so participating in the nightlies, opting in to the health data monitoring, and sending us that usage data is actually incredibly valuable for us to support you better. And it's all exciting. Hi, just a quick question. I think there was a great discussion earlier this morning around digital transformation and how it's not just a technology problem. After all, all these platforms that we are talking about from an infrastructure standpoint lead to business delivery. If there is no business delivery, there is no meaning to these platforms. So is there any strategy around application development? I think we are accelerating the infrastructure cycle, but application development takes most of the time when it comes to business delivery. What are your thoughts, from a digital transformation standpoint, on accelerating that cycle? I guess some of the work that we're doing with Knative, Service Mesh, and Tekton is all really addressing some of the problems that you're talking about. That's really bringing some of this lifecycle for application development to a state where it's really easy to do on top of Kubernetes, which traditionally has been challenging for an average developer to get started with. This is something that, again, is already embedded in the platform. So things like the developer console that we ship now with 4.2, and that we continue to enhance in 4.3, add to that, so that you can start building applications using the console. You can deploy the application straight from the console without that much knowledge about the platform. Things like CodeReady Workspaces, so that the IDE is integrated with the API on the Kubernetes side, so you can deploy from there, and you can run the IDE in the browser already. Some of those things already, I'd say, address some of the challenges that you're having. I mean, is there a way to bring a business person into the conversation? That's what my question is about.
That's really the key, because, ultimately, the business is going to pay for this. Can you ask the question just one more time? You want to know how digital transformation hooks into the business lines? I think the more important point is that, from a business standpoint, enterprise application development takes most of the cycle, if you will. Infrastructure is, yes, a foundational block, but I can accelerate 10% or 20% with this, while business delivery is really dependent on the enterprise application releases. So what is it that we are talking about? I'm not just talking about the developer focus; I'm talking more from a business-focus standpoint. What is that? Is there any strategy to bring that into the conversation from an OpenShift standpoint? Yeah, I mean, I think those are conversations that we seek to have with all our customers. I think when you come to conferences like this, OpenShift Commons gatherings or Summit, and you hear customer talks, sure, they'll talk about OpenShift and some features and capabilities that they're excited about, but more than ever, what they talk about is the challenge of getting this working in their environment and showing business value to their customers. I think the ExxonMobil team did an excellent job showing how they're providing real business value by accelerating, or enabling, the collaboration between their data scientists, and accelerating their ability to do machine learning and data science on the platform. So it's not ultimately about the platform, or even the developers who are building on it; it's about the value that those things combined provide to the business. And can we do a better job of bringing that into the conversation? We're sure trying. And that's why we're bringing folks like Jabe and the panel this morning to Red Hat: to drive those non-product conversations, because ultimately, based on just what customers are talking about, the biggest challenge is the people, the process, the cultural challenges.
I have some comments, actually. I would say, to start with, I'd question some of the assertions that you made about how it's really about development. Because if you look at what the cost is of managing most of your portfolio, the cost of operations dwarfs the cost of development in all cases. Because it's not free to run things; you just run the clock, and it's eventually going to cost more. That's the nature of a service. So unless you're shipping things cut on CDs, if you have to actually keep servers running, which is basically what we're building for you, then that cost will eventually dwarf it. The other thing I'd say, and this goes back to something that Clayton said on stage earlier, is that, in some ways, we're essentially part of your SRE, with the emphasis on the E. So you're going to have to bring people to fill in the other parts of that. But we're building this platform together with our customers so that they can provide this reliable underpinning. And I'd say nothing transforms the development possibilities for your organization more than improving your operations. I think, Andrew, just to clarify my question: I did not stress the cost specifically, because I completely agree with you, operations cost is much more than the development cost. It's the speed to market that I was talking about, from a business standpoint. All right. So, great question. Just one last thought. I agree, but the way to improve speed to market is to make doing the right thing the easy thing, so that you have these guardrails that allow your developers to make the right decisions on the business, the domain that they're working on, instead of trying to configure all the random things for their platform. So, and I'm going to add that DevOps is 10 years old this year, right? And we use this in bad ways and good ways, but the point is, there are some things we've learned, some things about DevOps metrics and flow.
And so I think part of what the panel you saw this morning, and this team, is here to figure out is: what do the next 10 years of DevOps look like? We've got all the patterns now, the books, the presentations. So what have we learned? What did we do right? What did we do wrong? And I think that leads us into, you know, ideation is a big part. We're really good in DevOps at commit-to-production; we nailed that over 10 years, we've got that down. But there are other things. And I'm not saying we're going to do this definitely, but that's part of what Jabe, myself, Kevin, and Andrew are about: trying to figure out what happened in DevOps in the last 10 years, and what it looks like in the next 10 years. Wonderful program today, ladies and gentlemen. Thank you very, very much. Scott Fulton with ZDNet. I sat in on a webinar a couple of months back that had to do with service mesh. It was not a Red Hat webinar, but there were several implementers in there. And one of the things they all agreed upon, which I thought was interesting, is that they were implementing service mesh to the point where they all said, and they agreed on this, that they foresaw a time when, for the purposes of service discovery, they would no longer have to use DNS; they could completely rely on their service mesh to deliver service discovery. I'm wondering if you folks think that that is a rational goal for service mesh, or whether you think there may be some danger in that. So I think anybody who tells you that you're never going to need anything that you currently use now, and that you're going to be able to get rid of it, is lying to you. That said, one of the things that I would say is, within a cluster, I can actually envision a lot of that sort of stuff.
I mean, there have been a couple of talks, even at KubeCon, about adding some of the service mesh capabilities that exist in Istio today at a lower level in Kubernetes, because Kubernetes was designed as: we're going to build the basic primitives and then let everybody else solve the problems, but we want to work with the communities and find patterns that are generally applicable. I think the argument I would make is that the DNS system, or the domain name system ("the DNS" is always so awkward to say), is so broadly applicable to so many environments that I question anyone who believes it will go away. Certainly, service mesh provides a ton of advantages when you think of it as an application interfacing layer. And if there's one trend I think we've seen, it's moving more and more of these things... like we talked about guardrails: service mesh is a great guardrail, because it allows you to use patterns and primitives that take common problems and make them the responsibility of a mature operations team, versus being things that you expect every developer to understand the details of. I would say I could certainly see service mesh growing and being stitched into many pieces, but I think we'll continue to have lots of points of integration, and that choice of how much or how little you use will, I think, continue. Quick question about Quay. Can you describe the difference between Quay and Harbor? Testing, one, two, three. The question was, can I describe the differences between Quay and Harbor? The number one difference right now is that Quay is a live service that serves over a billion images, has over a million repositories, serves hundreds of thousands of requests a minute, and we know it works.
Quay.io is one of the largest registries on the planet (depending on the data, the largest or second largest; it kind of fluctuates with GCR, if I recall), which means that when we make code changes, we test them at scale first on Quay.io, and then subsequently we release them in Red Hat Quay. Now, since the community version of Quay will have code being merged in constantly, if you're running the community version and you're not running a particular release, you'll also be testing it along with us. But if you're running Red Hat Quay, you have the benefit of that experience of running a registry at a scale that pretty much no one else, with the exception of the major cloud providers, Google, Amazon, et cetera, has. And we know for a fact, as someone who is constantly on call for Quay.io, I can tell you, we know it works; otherwise I wouldn't sleep at all. And that's a huge benefit. On top of that, Quay is also the only registry product on the market, and I mentioned this earlier, that has a guarantee of backwards compatibility. So we, to this day, still support Docker API version one. So if you took a Docker client, Docker 0.4, which came out in, what, 2013, and you were to try to push an image against a version of Quay with the right feature flag enabled right now, it would work. And then you could pull that image with the most modern version of Podman, and it would just work seamlessly. And that's part of our commitment to the enterprise space that no other registry product has really committed to; whether or not they see the difficulty in doing so, I don't know. Beyond that, we've demonstrated a consistent ability to innovate. So Quay was the first, of course, private Docker registry available. We were the first one with security scanning.
Clair is built by our team, so our integration is extremely efficient, as will be the new integration with the new version of Clair that's coming along. I don't wanna speak ill of others, but some integrations have been less than well done, in my experience. And moving forward, this level of innovation will continue on Quay, because we're continuing to be kind of a pioneer when it comes to the container registry space. I know that's a very high-level description. There are a lot of little subtle things, too; feature-list-wise, we're kind of ahead of the curve in quite a few areas. Things like support for squashed images, and automated builds, which we had first. We have our upcoming roadmap, which I spoke a little bit about. There's a lot of stuff there and a lot of really cool stuff coming down the pipeline, like the Container Security Operator, which is coming into OpenShift with Quay 3.2 next month. So it's a big list, across a lot of different areas, yeah. It's also important to note, you can use whatever registry you want with OpenShift. So if you wanna use Harbor, if you wanna use Docker Trusted Registry... some people pick Artifactory or Nexus because they're managing other types of artifacts; we focus on containers. But if you want the best, most scalable, most feature-rich registry on the planet for containers, that's Quay. That's Red Hat Quay, so check it out. The boat is not gonna explode, so... I promise the boat won't explode. Thank you. One of the things that I've been playing around with is CNV, or what's the KubeVirt, I guess. Two questions around that. So what's the ETA around CNV, which I think is the commercial version of KubeVirt? And the other one is about Windows support. Okay. About Windows containers? Yeah. Windows containers. All right, so the question was, when will Knative and Windows... It's KubeVirt. Oh, KubeVirt, okay. Commercial availability of KubeVirt. So KubeVirt currently is technology preview.
We're not currently forecasting a GA for that. We are trying to build that roadmap based on customer feedback. So what we're really looking for is to establish what the right level of feature functionality is that we need to reach to support the customer workloads that people wanna bring onto the platform. In the recent release, we did add live migration, which will work with the upcoming release of OpenShift Container Storage. So we do have some of those traditional enterprise virtualization features in the platform. There are others that were touched on earlier, like snapshots and clones, that we'd like to bring in as well. But really, again, I can only encourage people to try the tech preview. We'd love to get feedback. We'd love to have a conversation about what features you would like to see for us to make that graduation to general availability. Mike, wanna talk about Windows containers? Was the next one about Windows containers? Yeah. Oh, Windows containers. I love Windows containers. So Windows containers are on a trajectory, probably around 4.4, like CSI, to go GA for us. We're hoping this week, the end of this week or beginning of next week, we'll revamp our dev preview program, where Windows containers will run on the 4.x platform. Right now, we have them running on the 3.x platform. And that'll simplify people's usage and allow more people to try it out more rapidly. Yeah, and I'd say this is one of the most common questions we get as product managers: when is X, Y, or Z gonna become generally available? I mean, I think everybody knows this, but that's a big deal for us, right? When we say something's generally available, we're saying you can run it in your mission-critical production systems. We will support it not just now, but for the next several years; we'll patch it, maintain it. In the case of KubeVirt, there's some maturity that we still need to see. VM workloads are hard, right?
And we're looking for you guys to tell us at which point you feel like it's gonna provide you value in production environments, even if it's not doing everything that a vSphere would do, or whatever. Same thing with Windows containers. Frankly, the maturity of the Windows OS container itself is still something that's, I think, evolving. And then certainly we still have work to do in the Windows Kubernetes SIG to make ourselves comfortable that it's something where we're gonna say, yes, this is ready for full-on production use. We do expect them both to be generally available in the upcoming calendar year. We'll have a better read on that as we get more feedback from folks who are trying out our early access previews, so stay tuned for more on that. So, did we get all the questions, Dan? All right, or people just want beer. All right, well, thank you, everybody, for coming and spending the day with us. All right.