We're going to make this sort of ask-me-anything. The only caveat I'll give is: be a little bit mindful that there may be a whole bunch of questions. If you ask a super, super detailed one about 3.9.1.2 and we don't have time to get through it, we'll get to it at the beers. But ask anything you want.

So real quick, starting on the left. Marc, go ahead, do a quick introduction — what area do you focus on? My name is Marc Curry. I'm a product manager for container infrastructure, and that includes things like networking, masters and nodes, cloud provider integration, and so on. Hi, my name is Mike Barrett. I'm also a product manager. I look after the core Kubernetes technologies and pretty much the go-to-market release. Hi, Steve Speicher. I look after the user interface, developer experience, continuous delivery, Online, and Dedicated. And I'm Joe Fernandes. I run the product management team for OpenShift. And here's the newcomer, focused on DevSecOps and security, along with a couple of the other PMs. I'm Jimmy Zelinski. I'm PMing Quay and everything operators at CoreOS slash Red Hat.

Yeah, and this is just a subset of our product management team — not everybody could be here today. But first, I'd like to thank everybody who stuck around for the day. Hopefully you all appreciated the event. I'd like to thank all of our customer presenters. We had 24 sessions today. Hopefully you liked the format. I know you can hear from Red Hat a lot — you'll hear from Red Hat all week. We figured if you could hear directly from customers about what they're doing and what they're experiencing, it would be a good outcome. This is your chance to ask questions. Actually, one last thing: I want to thank Diane Mueller and Alexa Overbay, who put this event together. So, big round of applause. I think this is our sixth one of these now? Sixth one. Sixth event. A few more coming up too.
And it's just amazing to see the attendance grow. Diane and Alexa, and everybody else that works with them, do a great job — so really appreciate the effort.

So yeah, please raise your hand if you have questions. Who would like to go first? Things that you like. Things that you hate. Front row. Front row, yes. Things that you don't like. Things that you'd like to see us work on. New features you'd like to see. Ah, we'll use this. Ask us anything. Thank you.

Okay. When can end users expect to see the first result of the merger between the CoreOS GUI and the existing OpenShift console?

Great. So the question was about the results of the merger — when are we going to start to see that. We did this acquisition just three months ago now, and we've been working together ever since — the Red Hat and CoreOS PM and engineering teams. For OpenShift customers, you know we're doing releases every three months. So the next release is 3.10, which is in June — yeah, June, July, whatever. 3.10 in June, then we have 3.11 in September and 3.12 in December. Already in June you'll be able to try some things that won't be in the product: we're going to run some previews and some betas of things like operators, and even here at Summit you'll be able to see some demos of how the integrated console is coming together. I would say 3.11 in September and 3.12 in December is when you'll see the bulk of the integrated features and you'll see those products coming together. Our goal is, by the end of this year, to have CoreOS Tectonic and OpenShift completely converged on all the critical features. That's what we're working toward. Hopefully we can get more stuff out to you sooner in the fall, and hopefully there are no delays on anything, but I think 3.11 and 3.12 is when you'll see that. So, second half of this year.

Go ahead. Yeah, with 3.9 we have the introduction of support for the CRI-O runtime.
Can you touch on how you intend to cover things like best practices for building containers without using Docker tools — things like using Buildah or whatever else — and then how is that going to be reflected in the overall building of images within OpenShift, from a developer standpoint?

So I'm going to let Steve talk about the build tools here in just a second. Just for the folks who don't know: OpenShift ships with RHEL, and that includes the RHEL Docker runtime, which is Red Hat's packaged version of the Docker — now Moby — upstream. About a year ago we started working on an alternate OCI runtime called CRI-O, for a number of reasons: something that was daemonless, lighter weight, secure, specifically focused on Kubernetes, and that would be stable across upstream releases. But CRI-O and Docker are both OCI-compliant runtimes, so you can take any Docker container and run it on either. In fact, you wouldn't even know the difference if you're running it in Kubernetes in OpenShift, because it's underneath the covers. So in addition to the runtime, we have been working on a set of build tools, things like Buildah and so forth. I'll let Steve talk about that.

Yeah, so Buildah itself takes a similar approach to the Docker daemon, but just doesn't require the Docker daemon, and we're doing work to bring together the Dockerfile-based build so we can plug in Buildah. We already had a tech preview we had done with this imagebuilder tool, which has similar qualities in that it doesn't depend on Docker itself. And we'll continue to invest. I know that Google put out a tool like this — Kaniko — that takes a slightly different approach. That was one of the things we talked with them about last week at KubeCon EU: what it looks like going forward in, if you will, a Docker-daemonless or Docker-toolless world, and also how to leverage more of the Kubernetes primitives to do the build. So we're continuing to evolve that.
I would only add — you mentioned best practices. In the next two releases, Joe mentioned 3.10 and 3.11, you'll see some performance information on what we're getting out of CRI-O. You'll also see the crictl control command, how to use that, and how it compares to Podman when you're using these new runtimes that are being offered in Kubernetes. So all of that's probably coming out between — what is it — May, June and September.

Yeah, so we now have strong standards for the container runtime and container format, with the Open Container Initiative specs having reached 1.0, and a ton of innovation in new build tools and new methodologies for building standard OCI images — Buildah and Podman. These are some projects that we've been investing in for the future, to give you more options there.

We've got one way here in the back. Hello. I know I'm real excited, for example, about the operator interface for Vault and things like that, but maybe could you talk a little bit more, at a high level, about security moving forward for the native OpenShift secrets, and maybe using more secure algorithms for that?

Do you want to talk about secrets management? The question was about secrets management in OpenShift — secrets in Kubernetes, secrets in OpenShift. Sure, so one of the most exciting projects in Kubernetes right now is the KMS API. You'll be able to attach a service that will come native in Kubernetes — it's in alpha, so you've got at least two more releases to wait — to the back end of whatever you happen to have. At the same time, while that's baking, we are creating integrations with a lot of the vendors that you've indicated are the most popular, from CyberArk to HashiCorp, as the case may be, on how that interfaces with how you create what would have been a secret. The only downfall to that is that you aren't using the secret API object in that case, right?
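For context on what that KMS integration looks like mechanically: in upstream Kubernetes, encryption of secrets at rest is driven by a provider config file handed to the API server, and a `kms` provider delegates key management to an external service. The sketch below is illustrative only — the exact kind, apiVersion, and field names have varied across Kubernetes releases, and the `external-kms` name and socket path are made up.

```yaml
# Illustrative API-server encryption provider config (shape varies by
# Kubernetes release; the provider name and endpoint are hypothetical).
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      # Envelope-encrypt secret values with keys held by an external
      # key-management service before they are written to etcd.
      - kms:
          name: external-kms
          endpoint: unix:///var/run/kms-provider.sock
      # Fallback provider so previously stored plaintext secrets stay readable.
      - identity: {}
```

The point of the design is that applications keep using the ordinary secret API object; only the storage layer changes.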
You're still working with a secret bit of information, but you're not storing it in the secret API. The KMS service that's coming in the next two releases of Kubernetes will allow you to save that in the secret API object.

Yeah, so by default we're storing secrets in etcd in Kubernetes, right? So etcd is your secrets vault, but we know many customers would prefer to use a CyberArk or a HashiCorp Vault. So we want to make that work seamlessly across the board — whether you're using etcd or a third-party vault solution, the experience would be the same. Yeah. Go ahead, Jimmy.

And a couple of weeks ago, what CoreOS did was actually open source our Vault operator. Previously that was shipping as a proprietary operator with Tectonic, and now it's open source and can be run on any Kubernetes cluster. So you can go out and install that today and start using an automatically managed Vault instance that runs HA. It's not the actual secret API object, as they were saying, but you can think of it as like an Amazon key service running on your cluster, Kubernetes-native.

And as you can hear, hopefully, there's been a lot of investment in improving secrets management in a number of different ways. So just in case you didn't know, in etcd you can encrypt secrets as of 3.6.1, and then there are the vault integrations Mike was talking about as they're looking at KMS. In the near term there are integrations available using other methods, like the FlexVolume API. So, lots of investment.

Yeah, just one more thing on Jimmy's point. You heard about the Operator Framework this morning. You'll be hearing about operators that we're going to be building for all of our platform components. Everything that runs on Kubernetes and OpenShift will have an operator so that we can automate the operations. That's everything from etcd — Tectonic already had an operator — to Prometheus. We're going to do one for Elasticsearch and Kibana.
Basically all of our self-hosted components. And then we're also exposing operators for end-user services — things that you would provision through the service catalog, but that you want somebody else to operate for you. I used the example this morning: a developer wants to consume a database, but he doesn't want to be the DBA. So how do you get those operations built in? Jimmy was referring to the Vault operator, which is an operator that they built for the open source Vault project, and that's something you'll be able to try out on OpenShift. Obviously, if you want commercially supported Vault itself — sorry, HashiCorp is a partner, CyberArk is also a partner, so we don't take sides. So please check those things out and give us your feedback.

Hey. You're next. I'm looking for an official image for Artemis, actually AMQ 7, and never got one. So I was looking for this image, and I heard that there will be a service on the cluster, like logs and metrics, and we'll have a message broker as a service in the cluster. Do you know something about it — is it coming in the next version?

So I didn't catch what image you were asking about — our technology? Yeah, we are trying to use Artemis, use ActiveMQ 7. Artemis, is that what you said? Yeah, Artemis. I'm not familiar with Artemis and such. There may be images in the community. Yeah, because the last official image is the 6.3 of ActiveMQ, and so — okay. I will look for the new one, but I never got it, and I heard that it will come as a service on the cluster platform. Yeah, let's follow up on that afterwards, I'll dig into it more. Yeah, we can dig into that. A lot of times what we see is people bringing either community images or their own custom images to OpenShift, and that's great. We then have a lot of ISV partners that will provide supported versions of those things.
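Going back to the Vault operator mentioned a moment ago, the consumption model is the usual operator pattern: deploy the operator, then declare the service you want as a custom resource. A sketch, assuming roughly the CRD shape the open-sourced CoreOS vault-operator used at the time (the resource name and versions here may differ from current releases):

```yaml
# Hypothetical sketch: ask the vault-operator for an HA Vault instance by
# creating a custom resource; the operator handles deployment and recovery.
apiVersion: vault.security.coreos.com/v1alpha1
kind: VaultService
metadata:
  name: example-vault
spec:
  nodes: 2            # two Vault nodes for high availability
  version: "0.9.1-0"  # Vault image version for the operator to run
```

The developer never writes the StatefulSets or TLS wiring; the operator encodes that operational knowledge.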
We have a lot of our own stuff in our portfolio, but that's just one that I'm not familiar with. We can follow up and see if either there's a plan for a Red Hat image, or for one of our partners to have that as a supported offering.

Here in the middle. Hey, how's it going? Any plans for playbook support for disconnected environments — helping gather all the prereqs, Docker images, all that, making upgrades and patching easier?

Yeah, so there have been issues reported with our disconnected installation. There's just baked into it some expectation that some things would be able to be pulled. We've hunted most of them down. The thing that's missing is the post-install activities — where Jenkins has to pull, where npm would have to pull, where gem servers would have to pull, kind of your artifacts. That is still not mastered, but in 3.10 we definitely solved all the other disconnected issues. Yeah, it's something we're always working on. We have a number of customers in public sector and financial services that have to run disconnected installs, offline environments. As you can imagine, when you're dealing with images and Maven repos and all that stuff, there are a lot of places where the software wants to reach out, but we do constantly evolve those capabilities, and to Mike's point, with 3.10 we've made another set of enhancements. And we keep looking for feedback from folks, if you're still running into stuff that we can continue to improve. So those playbooks should be in the installer, and then those enhancements will come out with each release.

Yeah, hi. A question: with the introduction, or future introduction, of Tectonic into the platform, what's the future of CloudForms? So the question was around the future of CloudForms. CloudForms is Red Hat's hybrid management solution.
We also have an OpenShift provider in CloudForms to pull stats from OpenShift, but those stats would come from our monitoring agents and such within OpenShift, right? So a couple of things. One is, we've gotten feedback from customers that wanted admin capabilities baked right into OpenShift. They just wanted to be able to log into OpenShift and, if they're an administrator, to see the health of their cluster and the health of their services. With the Tectonic console baked in, that's exactly what you're going to get. So now what you see when you log into OpenShift will be tied to your role, and if your role is cluster administrator, you're going to be able to see information on the health of the cluster.

All those same metrics will still get fed out, whether that's to our management solutions like CloudForms and Insights, or to third-party solutions — you can hook into those Prometheus monitors and get data out of the system. And then likewise, we're dramatically enhancing and extending our Prometheus monitoring capability. The Tectonic team had a great Prometheus team that had done a lot of work around Prometheus monitoring, and that's helping us accelerate the amount of monitoring data we're able to provide. On the flip side, they didn't have a logging stack; we had our log management stack. So bringing metrics and logging together for the cluster administrator within OpenShift is a key focus — day-two management is a big theme, along with automated operations. But that doesn't supplant Red Hat's management portfolio. We still have Satellite and CloudForms and Insights and Ansible, very much part of the portfolio, and they'll go beyond just OpenShift administration into RHEL administration, the infrastructure beneath OpenShift, and beyond. So we're just going to provide much richer data within OpenShift and in what feeds out of it.
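To illustrate the Prometheus direction: with the Prometheus Operator that came over from the Tectonic team, scrape targets are declared through custom resources rather than hand-edited scrape configs. A sketch using the operator's CRDs (the names and labels here are invented for illustration):

```yaml
# Hypothetical ServiceMonitor: tells a Prometheus instance managed by
# the Prometheus Operator which services to scrape, selected by label.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    team: my-team        # matched by a Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: my-app        # scrape services carrying this label
  endpoints:
    - port: metrics      # named service port exposing /metrics
```

That same label-selector mechanism is what lets a team run its own Prometheus instance, kept separate from the cluster monitoring stack.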
I just want to clarify: all that observability functionality in OpenShift is for the cluster itself. It is not for applications running on top of the cluster. It's focused on the cluster health itself, the cluster logs themselves. You should not point all of your applications at that stack, or else you'll tank the cluster stack. You don't want to do that. And then through the operators, we'll be able to start providing more observability into the services through the Operator Framework. You'll be able to provision your own stacks, similar to this one, specifically for your applications, in an automated fashion — but we want to isolate that stack from your actual applications so that your applications don't crash the cluster.

Okay. Another question? Diane, we've got two on the mics, if you want to cut it after that. Yeah. So up front. Sorry. Yep, yep.

Various people have talked about having multiple clusters and deploying applications across those multiple clusters. The work on KubeFed seems to have disappeared into the background a bit. A lot of people are using Jenkins. Where do you see this going?

Sure. So the SIG is back on track as of the past couple of weeks. If you aren't following the Federation SIG, or the Multicluster SIG, there is now a Federation version two, where the ideas and concepts of that large federated control plane have been broken up into CRDs and smaller objects, where you can do workload and policy on how you want to target your clusters. You can also take a different CRD and choose to install it, or not install it, for DNS that would span multiple clusters. So now you have a choice of which components you want to use. The proposals are out, and there is also a proof-of-concept implementation of one of the proposals, called Fanord. So definitely take a look at that. It allows you to do a lot of what you could have done in the previous iteration of Federation. This will probably come to bear towards the end of this year.
In the middle of this year, you'll get the cluster registry, which is a great step forward. You can have one central API to ask: where are all my clusters? What are their names? What secrets are in them? Things of that nature. We also brought in some technology from CoreOS. CoreOS was syncing up at the project level mainly — stuff like the namespace, stuff like some of the config maps, not the replica sets or anything like that. But if you wanted a consistent look and feel across all your projects, we have that technology coming out towards the end of this year too. So those will all come together towards the end of this year.

Yeah, I think it's safe to say that the initial Kubernetes Federation project, Kubernetes Federation 1.0, was a bit too ambitious, and the implementation too monolithic, in terms of all the problems it was trying to solve and how it was trying to solve them. But the problem didn't go away, right? All of our customers, including I think most of the customers here, are running multiple Kubernetes clusters. Many of you are running apps that span active-active across multiple clusters. So the problems that Federation was meant to solve — multi-cluster management, federated ingress, federated deployments and so forth — are still there. And this is what we always caution people about: just because you see something announced in the upstream, or you watched a Kelsey Hightower video or something, you need to be really careful about the state of a feature. If it's alpha in Kubernetes or beta in Kubernetes, that actually means something. It means the APIs and the feature itself aren't stable. So that's why we try to be clear about what's GA in OpenShift, what's Tech Preview, and what's still in an experimental stage upstream.

One last thing — I think that's in the Office of Technology track at Summit.
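The cluster registry described above is essentially just an API for recording clusters as objects. A sketch, assuming roughly the v1alpha1 shape the upstream cluster-registry project proposed (the cluster name and address are invented):

```yaml
# Hypothetical entry in a cluster registry: one Cluster object per
# cluster, recording how to reach its API server. Deployment tooling
# can then query one central API for all known clusters.
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: us-east-prod
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
      - serverAddress: "https://us-east-prod.example.com:6443"
```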
If you look at your agenda, I think there is actually a session where they're going to be talking about what's coming out of what is now the multi-cluster SIG. Ivan Font — if you look up his name in the agenda, Ivan Font has a session, including the features coming this year. One of these modules that got broken out is the multi-cluster registry, which is a registry to store meta-information on all the different clusters, and that will aid your deployment tools. And then there's progress on ingress, on deployments, and so on. And I don't know if you want to talk any more about the CoreOS side. Yeah, Rob Szumski just had a talk at KubeCon EU, like a couple of days ago, talking about this in more depth too. If you want to watch the video of that, I think it's already up on YouTube. Yeah, Rob is another CoreOS PM that just joined our team. He's going to be doing a keynote demo tomorrow, but he did a session last week at KubeCon, so if you check out the KubeCon site you'll find the video. And if not, just reach out to us — we'll send you the video.

All right, so we've got one last one here in the middle. I don't want to put any pressure on you, but there's about 500 people here that would like a beer. So the excellence of your question is weighed against that. No pressure.

Right now we have the option to either deploy nodes on a RHEL-based install or an Atomic-based install. When Tectonic is fully integrated at the end of the year, will we have a third option, or are these two going to evolve?

So the question was around host options for OpenShift. OpenShift is always going to be supported on Red Hat Enterprise Linux, and I'd say the majority of our customers have been running OpenShift on RHEL because they want a traditional RPM-managed distro — they already have all the tools and processes in place. Obviously, CoreOS launched Container Linux; that was the first product that started the company. And then, competitively, Red Hat launched RHEL Atomic Host.
We're now bringing those two teams together. And don't tell anybody, but there's going to be an announcement out tomorrow, and the new joint offering is going to be called Red Hat CoreOS. You can think of it as the merger of Atomic Host with CoreOS Container Linux, and that's going to give you a fully immutable, container-optimized, and fully automated host. Because one of the things we learned from the CoreOS team is that it wasn't just about the distribution — it's about the automation you can put around that distribution when you have a fully immutable host. So again, we'll be talking about that tomorrow in my session, CoreOS and Red Hat. And then on Wednesday there's a specific session just about Container Linux and Atomic Host — I think it's called The Road Ahead. So if you want more information, come to one or both of those sessions. We think they're both great options, but we do see a lot of momentum towards full immutability down to the host, and a lot of Linux administrators who would traditionally have done RPM- or Debian-managed kinds of distributions are really looking for that immutability and the benefits it provides.

A huge thank-you to all of the speakers today. If you were a speaker, could you just stand up for a minute? This is speed dating. I know you weren't all counting, but when these five folks came up, we went over 51 people speaking on this stage today. So I really want to thank all of you for sticking with us and listening to all these stories. But now you have 51 faces to find in the hallways of Red Hat Summit — to connect with, to figure out how you're going to collaborate with them, to get lessons learned from. So please take advantage of the beer that's about to be poured, find someone, and have a good conversation. Thank you. Thanks, everybody.