So the way this works is you've been saving up your questions all day, haven't you? All right. There are two microphones. If you want to ask a question, get up and talk into one of them. Otherwise, I'm going to make every single one of these folks introduce themselves and tell you what they're working on. So please don't be shy; come and ask any question. This is an AMA, so this is your opportunity. Otherwise, Joe's going to talk.

All right. While our team is getting up here: my name is Joe Fernandes. I run the core platform team for OpenShift, and I'm now also in charge of OpenShift and Red Hat Virtualization. We have a large group here: OpenShift product managers, as well as members of our OpenShift engineering team, lead architects, and managers. They're here to answer your questions, not only today but throughout the week. I know many of you have meetings scheduled in the Customer Briefing Center; we have a few hundred OpenShift customer meetings set up, and a lot of interesting things going on, some of which you've obviously heard about today. Before we do this, I wanted to thank all of our customer presenters today. It was amazing to hear all those stories, so a big round of applause. I also want to thank our partners, Microsoft, VMware, and all the other partners who presented and sponsored the event. And with that, if you have a question, either stand at a mic or raise your hand and Diane will find you, and we'll kick it off. All right. Don't be shy.

Yes. Thank you for today. My question is based on the earlier presentation about the channels and the earlier roadmap: was OpenShift 4.0 actually skipped, with 4.1 being the stable release?

So earlier, Mike talked about us being courageous enough to change things. We know that nobody ever installs a .0 release, and so we had the courage to go straight to 4.1. Part of that was keeping the internal timeline moving: we actually had an internal 4.0, and 4.1 is a set of features that was always planned for 4.1. So we did a 4.0, and then we decided to let it soak longer. Yeah. So the version number for the first GA will be 4.1. You should expect it to be available in the channel about two weeks from now. The beta that many of you participated in was 4.0. After 4.1, we're going to get back on our cadence of releasing every three to four months, so 4.2 should land around the end of August or September, we're trying to get 4.3 out around the end of the year, and you'll see that continue into the new year.

Okay. I have a question regarding your plans for CI/CD features, specifically what will happen with Jenkins. I'm not sure what your experience has been with this specifically, but we are not quite happy with it, and we would like some feedback from you on where to invest and what the direction is.

Cool, I'll let William talk to that. So regarding Jenkins: for the foreseeable future we'll still work with Jenkins, but we are also working with Tekton as one of the CI/CD technologies we want to deploy as part of the OpenShift Pipelines story. Not sure if that answers your question, or if you had something more specific regarding Jenkins. Yeah. And so Tekton is a new upstream project around building cloud-native, Kubernetes-native CI/CD capabilities right into the platform.
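For readers who haven't seen Tekton, here is a minimal sketch of what "Kubernetes-native CI/CD" means in practice: pipelines and tasks are themselves Kubernetes resources. The API version, names, and image below are illustrative, and Tekton's API has evolved across releases.

```yaml
# A Tekton Task and a Pipeline that references it, both expressed as
# Kubernetes custom resources. Names and images are illustrative.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-image
spec:
  steps:
    - name: build
      image: quay.io/buildah/stable   # any builder image works here
      script: |
        buildah bud -t example-app .
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  tasks:
    - name: build
      taskRef:
        name: build-image
```

Because the pipeline is just another resource in the cluster, it can be versioned, templated, and managed by operators like everything else, which is the "built right into the platform" point being made here.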
So that's a project that we're really excited about and really investing in. We've been shipping Jenkins since the start, and a lot of customers are using it. The other thing I remind customers is that there's a ton of choice in this area, right? There are so many different CI/CD tools, and you're not limited to the tools that we ship. Most customers are bringing their own tooling, whether that's Jenkins or Bamboo or TeamCity or GitLab, all sorts of different tools. So what we're really trying to do is make sure that we integrate nicely regardless of what you choose for CI and CD services. But if you want to know the direction we're going, check out the Tekton project; I think you'll see a lot of where we're headed through that. Cool.

Yeah. I couldn't attend the morning session, so I'm not sure if this question was answered there. Is there going to be a migration path from 3.11 to 4.1 or 4.2, or is it a net-new install where we have to reinstall all the apps?

Okay. Go ahead, Maria. So hi, I'm Maria Beretta. We're working on a tool to migrate applications from 3.11 to 4. It's an app migration tool leveraging the upstream project called Velero, which was previously Heptio Ark. So that will be supported by Red Hat, right? Okay. Yeah.

And I know some folks lived through the V2-to-V3 transition. With V2 to V3, we changed everything, right? We completely rebuilt the platform and moved everything to Kubernetes and containers, so there was no easy way to automate that migration. But V3 to V4: it's a Kubernetes platform, it's the same containers. Everything you're doing on V3 should work in V4, because the innovation is all around how we operate the platform and the services on that platform. As Maria mentioned, there's an OpenShift migration tool, which Maria is the PM for, so feel free to ping her later. It's going to allow you to automate the migration of 3.x apps onto 4.x. It's not in place; it's not an upgrade, it's a migration. And we're already working with a number of customers who have large deployments to figure out what else we need to do there.

One more plug for that: we're going to be demoing it tomorrow as part of the OpenShift what's-new session. That's right. Actually, there's an OpenShift roadmap session here, and we're repeating it twice; I know last year some people couldn't get in, so it should be on the agenda two times, and we'll be demoing the migration tools in that session. All the good sessions are full. And all of those sessions are recorded and will be available shortly after the conference.

Okay, another question. The OpenShift SaaS offering in Azure: is there any timeline on that? OpenShift Dedicated in Azure? You want to talk about that? The managed service. Yeah. So Azure Red Hat OpenShift is actually going to be announced tomorrow. Tomorrow? Yeah, I didn't know if I could actually say it, but yes: it'll be announced tomorrow, and it'll be available tomorrow. Okay.

Yeah, so I'll actually jump back to that last question just a little. Hi, I'm Eric Paris, one of the architects. We discussed the tool that's going to help migrate applications from three to four, right? But why did we do it that way? Why?
I'm sure a lot of people are wondering why we did not provide an upgrade path. It's because of what Joe mentioned: the operational characteristics of everything are completely different. The way that you configure the cluster and run the cluster is completely different. We had more concern about an upgrade path from three to four that could break your cluster and leave you in a state that was difficult to recover from than we had about customers standing up a second cluster, getting it running, and learning the new operational ways of interacting with OpenShift 4. Then you can move your applications bit by bit from one to the other.

And that application migration tool they mentioned is generally useful. It's been a common request for quite a long time to make sure there's a path for applications to move more seamlessly between clusters. So rather than double down on something with that higher risk profile, we tried to make sure we could ensure a successful 4.1 launch, while also investing that time in tools that help you move between clusters, which has more benefit in the long run. Between 4.1 clusters, or between older 3.x clusters and 4.x clusters. Exactly. Actually, that's a good point that Clay... Starting at 3.7. And that's actually a key point: the migration tool isn't just going to be helpful for 3-to-4 migrations. It'll also be helpful for 4-to-4 migrations for customers who can't upgrade every single release; say you have to move across multiple releases. Kubernetes forces you to go sequentially, and that's not the norm for a lot of customers. So I think it'll help us even beyond the 3-to-4 migrations.
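For context, the migration tool described above builds on Velero, whose core model is a backup of selected namespaces that gets restored into another cluster. A minimal sketch of the raw Velero resources follows; the OpenShift tool layers its own resources and UI on top, and the namespace name here is illustrative.

```yaml
# Back up one application namespace in the source cluster...
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup
  namespace: velero
spec:
  includedNamespaces:
    - my-app            # illustrative application namespace
---
# ...then restore it in the target cluster from shared object storage.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: my-app-restore
  namespace: velero
spec:
  backupName: my-app-backup
```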
More questions over here? Hi. There are different opinions on the internet concerning OpenShift deployment. What is the best practice: OpenShift on bare metal, or on virtual machines?

Well, I can start and anybody else can weigh in. The thing with OpenShift is that we want it to serve as a common abstraction regardless of your choice of infrastructure. It should run as well on bare metal as it does on VMware, as it does on OpenStack, on Amazon, on Azure, on Google. Ultimately, the only way we succeed is if we give you a rock-solid, consistent abstraction layer for your applications and let you run it across different infrastructures based on what matters to you, whether that's cost or location or whatever the decision is. That's been our focus. With 4.x, we're trying to make it easy to operate the platform across different environments. One of the things you lose when you go to bare metal is a lot of the automation of a virtualized environment: automated compute provisioning, automated storage and networking, and so forth. Check out the keynote demo tomorrow; we're going to show what we've been doing to bring a cloud-like experience to bare metal, so that operating a bare metal cluster has the same or similar characteristics to operating in a virtualized environment. But at the end of the day, the choice is yours as to where you want to run it. Many customers in this room and in our customer base run it in multiple environments, and obviously that's great for us, because it means we're living up to the mission of being a hybrid cloud solution.

I would add: virtualization has a lot of advantages, and running on bare metal has a lot of advantages. We think Red Hat, more than anyone else, is really well positioned to run on all the world's hardware with Linux, Red Hat Enterprise Linux, and RHEL CoreOS. RHEL CoreOS takes all the strengths of the RHEL hardware certification, and it means we can do in-place cluster upgrades and some of this automated management. So a lot of the investments we're making in 4.x play equally well on virtualization or bare metal, but in a bare metal environment we think it'll be an experience that's better than hypervisors.

I would only add that it's very rare these days for a customer to have only one infrastructure; they have multiple clouds and investments. The highest growth we're seeing is in bare metal. It may be a smaller population, but its growth rate is faster than some of the other infrastructures out there. Joe mentioned the keynote tomorrow. Red Hat is in an unusual situation in that we're very strong in infrastructure: we have our OpenStack investments, and we have engineers, kernel engineers, who know how to talk to service processors and bring up networks. So in that keynote demo you'll see a culmination of a lot of different skill sets at Red Hat coming together, and what we think the future will look like on bare metal.

The second question is in the same direction: worker nodes with different compute. Is that best practice? Worker nodes with different compute? For example, memory or CPU. Derek showed something cool this morning that people may not realize, called machine sets. When you bring up that initial cluster, we want to make that super fast, so it brings up a highly available set of nodes, three masters and three compute nodes, all configured the same; but then you can bring up additional machine sets.
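A rough sketch of what a machine set looks like as a resource, assuming the machine.openshift.io API that OpenShift 4 introduced. The provider section varies per infrastructure and is abbreviated here, and all names are illustrative.

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-gpu-a             # illustrative pool name
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-gpu-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: mycluster-gpu-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/gpu: ""   # label the resulting nodes for targeting
      providerSpec:
        value:
          # provider-specific fields (instance type, zone, image, ...)
          # e.g. on AWS, an accelerated instance type for this pool
          instanceType: p3.2xlarge
```

Each pool is just its own machine set, so a heterogeneous fleet is several machine sets with different provider settings and node labels.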
I just want to make sure I understood your question fully: are you saying you don't have a homogeneous fleet, that you have some computers with different CPU counts and memory capabilities than others? Yeah. So that shouldn't be an issue. My background upstream is largely in resource management, so I've spent a lot of time in Kubernetes making that possible, and I'd say you should be successful running heterogeneous pools of compute. In 4.x, we're doing more to make that easier. Joe talked about the work we're doing, if you're in particular cloud environments, to make it easier to provision and deprovision compute that varies in its characteristics; that's one option. But we also have the ability to use different configurations for, say, accelerated instances versus normal worker instances, and you can tweak how you configure those hosts differently. Another capability we have is the Node Tuning Operator: depending on how you label your nodes, you can have an operator that applies TuneD profiles to those hosts automatically. So from my perspective, we're doing a lot to make it possible to run heterogeneous node pools without issue, and we'll continue to invest in making the system smarter about configuring that.

The only thing you might want to do is run a machine pool. Where I was going with machine sets is that sometimes you want specific hardware for specific services and applications. Now you can create different pools of compute that are optimized differently: these are my machines that run storage, these are my machines that run heavy AI processing, these are GPU-enabled, and so forth, tying that even directly to operators. Different pools of compute, and then you can target workloads to those pools, as in the sketch below. And if you had, say, a GPU-accelerated instance type in a cloud and you wanted to dynamically autoscale just that set, the 4.x capabilities we have should make that super easy to do. So yeah, you should be fine. Thank you very much.
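To make "target workloads to those pools" concrete, here is a minimal sketch: a pod pinned to the GPU-labeled pool from the earlier machine-set example. The label, image, and GPU resource name follow the NVIDIA device-plugin convention and are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job
spec:
  nodeSelector:
    node-role.kubernetes.io/gpu: ""       # schedules only onto the GPU pool
  containers:
    - name: train
      image: quay.io/example/train:latest  # illustrative image
      resources:
        limits:
          nvidia.com/gpu: 1                # exposed by the GPU device plugin/operator
```

Pairing the pool's nodes with a taint, and the workload with a matching toleration, additionally keeps general workloads off the specialized hardware.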
Hello. My question is more about cloud capabilities. The feature sets seem to overlap, or to be converging, a lot: things like Knative versus AWS Lambda, or AWS's Kubernetes services versus your own platform. Going forward, where do you see the platform going in order to differentiate yourselves relative to just using the native services?

Yeah, so look, we've been competing with native services since we launched OpenShift 3; OpenShift 3 and GKE launched at pretty much the same time, right? And now, four years later, there are over 80 CNCF-conformant Kubernetes distributions. We're going to do two things, in my mind, and others can chime in. One is we're going to keep focusing on making this the best platform for Kubernetes and for hybrid cloud. You see a particular focus in 4.x on where we see a lot of the remaining challenges in Kubernetes: how to operate the platform seamlessly, particularly across a hybrid environment. Even as some of the cloud providers like Amazon and Google realize that we live in a hybrid world and come on-premise, it may be some time before you see Amazon supporting you on Azure, or Google supporting you on Amazon, right? For us, hybrid cloud isn't just one cloud provider plus an on-premise appliance. It's being able to work across all the major clouds, across different on-premise footprints, whether that's bare metal, OpenStack, VMware, what have you, and being able to operate at the edge. So building the best hybrid cloud, hybrid Kubernetes platform is one area where we think we already have a huge advantage, and we're going to continue to differentiate there. The other area is all the stuff we're building on top of the platform: Knative, as you mentioned, the work we're doing on Istio, the work we're doing on the developer experience with our middleware team and frankly with our partners, to build a really rich set of services and capabilities for end users that drives consumption of the platform. I don't know if anybody else wants to add anything?

So, in the long run, we're not going to be using any one thing; we're going to be using all the things. No technology advance in the history of technology has removed the previous technology; we just layer more stuff on top. Part of what we're trying to do is make sure there's a healthy open source ecosystem that can support people wherever they are, whatever cloud provider, whatever hardware they have. There will always be trade-offs in picking well-designed services from a particular provider; we did that for a really long time with Microsoft, with IBM, and with other companies before that. There will always be a best-of-breed technology that you may or may not make trade-offs to use directly. In general, we just see that there will always be more than that one best-of-breed technology, and what we're focused on is making the delta between that technology and open source, open platforms as small as possible: investing in standard operational environments and standard application environments, and building out open source communities that complement or supplement what proprietary vendors don't provide. Wherever that innovation comes from, it's going to come from open source, and that's obviously aligned with what Red Hat is good at. And our investments run all the way from the kernel and the infrastructure up to middleware and application services, which I think gives us good breadth of expertise and capability to continue to differentiate. And we love it; we love having so many different Kubernetes investments from different vendors. Go to Indeed and do a job search: there's a lot of protection in this technology stack that you're all involved in, in this room. And think of the opposite; think if it were just us. So we encourage the cooperation and the competition. It's a great place to be. Thank you.

Hi there. I just wanted to ask if you expect to have the developer preview updated to support non-AWS installs, specifically physical servers or anything like that. Yes. Actually, if you go out to try.openshift.com right now, as of Friday it enables you to go on-premise, so you can try out bare metal and vSphere at this point. Great, thank you. We started with AWS; beta 4 has bare metal and vSphere. We have also been working with Microsoft on the Azure provider, which will be available soon, and the GCP (Google Cloud Platform) and OpenStack providers are both far along; those are also targeted. Again, what we're focused on in OpenShift 4 is extending our automation all the way down into the infrastructure. But all that infrastructure is different, so we have to do specific work on each one to automate the provisioning of compute resources, networking, storage, and so forth.
But in return, what you get is less work, more automation, and more goodies you can take advantage of, not only when you install but also when you upgrade, plus having the cluster scale capacity up and down. If I had a nickel for every time a customer asked me if they could auto-scale their nodes, I'd be a rich man. You can do that now, and that's part of what we're doing, so check it out and give us your feedback. Oh, definitely; we're looking forward to it. Thank you. You're welcome.

Yes, I have a question regarding operators. Assume we have the perfect level-5 operator that has all the knowledge about operating a component codified, and is prepared to handle everything except the one thing it's not prepared to handle. Operators take a lot of responsibility away from the human operating that component, but the last one percent of responsibility is still there, because someone needs to be able to react to anything that might happen. Do you have any ideas, or are you maybe already working on things, that make that process of analyzing what's happening and what the problem is easier? I guess it involves people and support specialists. How can we close that last gap?

Great question. In general, you want to build an operator so that it's defensive in nature and fails safely as much as possible; that's the foundation. Then we want to build in other ways of bubbling status up, from the SDK and our scaffolding up to users and admins, all the way into that metering component, so you get operational metrics showing that, for some reason, this database being managed over here has hit some weird situation that needs to be escalated to the operator author, or at least to the cluster admin. So it's a multi-pronged approach, and we're going to be moving a lot of innovation in the SDK forward really quickly on that type of thing.

I'll follow up at the platform level. Derek alluded to this a little this morning: opt-in with customers, in the same way we've done Insights for Red Hat systems before. At the end of the day, everybody on this stage, everybody at Red Hat, is there to ensure your software keeps running. So we want to do a better job of delivering software to you, enabling our ISVs and partners to deliver software, and helping train them. But there will always be some percentage of cases where, like all software, it's not perfect. We want to make sure there's a great channel between the platform and Red Hat support: to make case resolution faster, to help us collect data from the fleet when customers opt in to share it, and to give you access to early drops of the software so we get advance notice. We've tossed around some ideas over the last year about making it easy to get new kernel versions that people can test on bare metal; that's part of the story around doing bare metal better. It's about working with customers and users to make the software better, and that might mean sharing some data with Red Hat, sharing faults with Red Hat, in a way that we can turn around so the support team is better armed to answer questions about your environment. We'll talk more about this over the coming year, and it will always be opt-in, of course, because earning customer trust is pretty critical to us.
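As a concrete illustration of "bubbling up status": a well-behaved operator typically surfaces the situations it cannot handle as conditions on the resource it manages, rather than failing silently. This is a hypothetical example; the CRD and condition names are invented for illustration.

```yaml
# Status an operator might report on its custom resource when it hits
# the "one thing it's not prepared to handle" and stops, safely.
apiVersion: example.com/v1      # hypothetical CRD
kind: Database
metadata:
  name: orders-db
status:
  conditions:
    - type: Available
      status: "False"
      reason: QuorumLost
      message: "2 of 3 members unreachable; halting writes, manual intervention required"
    - type: Degraded
      status: "True"
      reason: QuorumLost
```

Conditions like these are what monitoring, metering, and support tooling can alert on, which is the multi-pronged escalation path described above.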
I would add one other thing, which we didn't get to touch on much. From an experience standpoint, we talk about operators as autopilot, but sometimes you still want to turn autopilot off and take control. There are areas where we recognized that we didn't yet have the operational experience to operate as optimally as we'd like, and we steered away from automating those. So if there are areas that give you concern when you're writing your own operators, listen to that; and if you do experiment, make sure you can turn the operator off or override it in case something bad actually happens. The worst thing would be constantly fighting with your own operator. I'd give that advice based on our own experience in the 4.x development cycle. As Rob said: always think about failure modes, always ensure you can turn it off, and try to keep them simple, honestly. Also, at level 6 the operators become sentient, so be careful. Careful. Yeah, sounds interesting; thanks a lot, guys.

Hi, I have a question about auto-scaling. As of now, we're able to auto-scale our cloud resources; and I'm not talking about pod auto-scaling, I'm talking about node auto-scaling, where we would write our own launch configuration and then specify the auto-scaling there. Do you have any plans to extend that to VMware, or maybe on-prem, in the future?

Yeah. As Joe said, the API we presented today, if you look under the covers, is very focused on describing the characteristics of the compute you want to bring up, and not much else. The set of platforms we can support with it will grow over time, and once that API is available for a platform, it should be able to take advantage of auto-scaling. We have a resource we didn't show called the machine autoscaler resource, where you say: I want to scale this pool of compute within this bounding range. Once we have support for each platform (and you should probably talk to a PM about priority and ordering), it should work the same everywhere, and work well.
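A minimal sketch of the machine autoscaler resource described above, assuming the autoscaling.openshift.io API from OpenShift 4 and the illustrative machine-set name used earlier. It bounds one pool of compute and delegates the actual provisioning to the machine API.

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: gpu-pool-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1           # lower bound for the pool
  maxReplicas: 6           # upper bound for the pool
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-gpu-a  # the pool (machine set) being scaled
```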
No, I think you also mentioned on-prem, right? Some of the stuff in the keynote they're talking about: they aren't going to show auto-scaling, but it is something that has been considered. How can we actually power off machines during low usage? You've got real hardware, and if we can power it back on, it's the same as just getting new instances in the cloud. We are trying to figure out how to apply the same auto-scaler to bare metal, to on-prem. So that story, as Derek said, isn't just about bringing clouds on board; we're trying to bring it into the data center as well. It can also be at the VM layer, right? To be clear, we're not talking about just public cloud; we're also talking about on-prem, so if you have VMware or something like that. What the cloud does is obviously make it more convenient; there are more APIs we can call into to scale up compute and configure everything else. But those APIs exist in the virtualization platforms too, and we're building automation around even bare metal, because we want that compute to be dynamically scalable regardless of the infrastructure you choose to run on.

By default, Kubernetes recommends going with an odd number of master nodes. Are we breaking that rule from the OpenShift point of view? I think in the short term you're going to find we're pretty opinionated about what the control plane looks like, so we're probably going to support three masters. Part of that is that we want to bring control planes deeper under the control of the platform (Derek didn't get to this in his talk, but there's the etcd operator work), and we don't want to expose too much flexibility now, because that would prevent us from doing that later. To answer the immediate question: the reason we recommend an odd number is that etcd needs an odd number of members to operate optimally. You can operate successfully with an even number, but you lose out that way, so in the interest of conserving resources we just scale the control plane to exactly that size. In the future, the plan is to eventually allow the control plane to auto-scale as well; that work is underway. I don't think we have anything to announce now, but keep your eyes out. The longer-term trend is that we don't want to force you to make operational choices that are meaningless to your success. We want you to focus on the choices that make a difference: workload scaling, hardware certification, network configuration, choices that actually matter. As much as possible, we would like the control plane to be transparent, seamless, automatic; and if it breaks, it's our problem, not yours.
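For reference, the standard quorum arithmetic behind the odd-number recommendation (this is general etcd/Raft math, not anything OpenShift-specific): a write must reach a majority of members, so an even-sized cluster pays for an extra member without tolerating any additional failures.

```latex
\mathrm{quorum}(n) = \left\lfloor \tfrac{n}{2} \right\rfloor + 1,
\qquad
\mathrm{failures\ tolerated}(n) = n - \mathrm{quorum}(n)
```

So n = 3 gives quorum 2 and tolerates 1 failure, while n = 4 gives quorum 3 and still tolerates only 1; hence three masters.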
I have a question as it relates to going from converged infrastructure to hyperconverged infrastructure. I was talking to my Nutanix guys; at one point in time, the Red Hat OCP team was talking to Nutanix, and some of us are trying to decide, as we make our on-prem hardware selections going forward, where that is headed. It seems to have stopped when the IBM purchase started. Or was there another reason, before that, as it relates to Nutanix?

No. First of all, we have OpenShift customers running on Nutanix, and we have OpenShift customers running on VMware, OpenStack, and other platforms. The thing is, there are platforms where RHEL is certified, where we have a relationship with the partner such that we can jointly troubleshoot all the way down to the OS and its integration with the infrastructure; and there are platforms where we don't have that relationship or that level of integration. Nutanix is one of those. So it is sort of a demarcation line between what you'd come to Red Hat for, and what we'd ask you to either reproduce elsewhere or help us engage the Nutanix side on. That has not changed at all since the beginning of OpenShift. It really has nothing to do with the IBM acquisition; it just has to do with each provider's integration with Red Hat and our joint engineering efforts, both from a technology and a partner perspective. Thank you very much. You're welcome.

My question is on observability capabilities. As I understand it, OpenShift, Kubernetes, and Istio each have different levels of observability, and operator metering, as I understand it, provides yet another level. Is there a plan to consolidate and provide a unified view?

Within the console, there is absolutely work already underway to give you a snapshot view of everything happening in the cluster at a given time. That being said, with certain things like Istio specifically, you may end up with observability data that could give an individual with malicious intent more understanding of your application than you want just anybody on the cluster to have. So there is good reason why certain components are segmented in such a way that it's not just one big-picture view where you see everything at once, and why there is a lot more nuance to how those views are presented. I think it really comes down to whether there are security concerns about exposing that data to a more general audience of cluster users; that defines when you see big-picture things and when you get a really deep drill-down.

Do you have plans to implement, say, role-based authentication or something like that for the drill-down capabilities? There are a number of items on the way to allow the monitoring on the platform to integrate more closely with the UI, and there is some role-based access, so that certain metrics are visible in the console with that segregation. I don't think we're going to get quite as sophisticated on that aspect until Istio is a normal part of every cluster. We want to focus first on making sure security boundaries are respected, that the core is stable, that applications work well, and that we have the first and second levels of integration with Kubernetes concepts like ingresses, routes, and services; and then I think you'll see more investment in that.
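As a sketch of what that role-based access can look like in practice: Kubernetes RBAC can grant an individual user read access to cluster metrics. The binding below assumes OpenShift's built-in cluster-monitoring-view cluster role (the exact role name may vary by release, and the user is illustrative).

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-viewer-alice
subjects:
  - kind: User
    name: alice                      # illustrative user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-monitoring-view      # built-in read-only monitoring role
  apiGroup: rbac.authorization.k8s.io
```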
One last question? Thank you. Is Istio part of 4.0? I missed the morning session, so I don't know. Yes. Sorry for the confusion: 4.1 is the initial GA release, and on 4.1, which is coming out in two weeks, Istio will go GA. Thank you.

In the morning demonstration of OpenShift 4, we saw a feature where you could remotely see your clusters and which versions they were running. I have two questions about that feature. First, will that portal enable you to remotely trigger an upgrade of a cluster? And second, will this feature be available to partners that resell OpenShift to customers?

Yeah, what you're referring to is going to be at cloud.redhat.com; it's our OpenShift Cluster Manager. The whole idea is to let you remotely register running clusters, launch the installer to provision new clusters, and then see which version each of your clusters is running, so you can decide when to upgrade and trigger that upgrade. It's a hosted service, and it leverages telemetry in the cluster to send information back. It's optional, obviously: if you're in a disconnected environment or offline mode, you wouldn't use it, but in a connected environment it allows us to keep tabs on things. We're also looking at how we can take that capability and, for those offline customers, package it as something we deliver for you to run in your own data center. It's all architected as containers and actually runs on OpenShift, so that's something to look ahead to; initially, though, it'll be available as a hosted service. And yeah, we don't have anything to announce today, but we're very open to talking with partners about how this could work if you're a partner delivering OpenShift as a service to your customers. Again, no concrete plans to discuss today, but that's something we want to work through with partners: is it best to use the same service we use, perhaps with some branding, or to deploy your own instance of it once we have it as a deployable thing? Those decisions haven't been made yet; those are things we're exploring, and we would love your feedback. Thank you.

Just really quick, to add to that: with OpenShift Dedicated, the clusters that Red Hat is managing, you will be able to remotely trigger upgrades. For OCP clusters that are self-installed and self-managed, it's a feature we're looking at right now, but at the moment you won't be able to remotely trigger upgrades for those yet.

The scenario is a CCSP provider that sells clusters to customers: it would let you keep tabs on what's happening where, in the case where the customer decides to manage the cluster themselves, so you can say, you know what, you are on a version with critical security issues, you should upgrade as soon as possible, and so on. It would allow the CCSP provider to better serve the customer. That's the scenario. It's a great idea, and again, the implementation is something we need to discuss with partners like yourself: whether it ties into the infrastructure we have in place for cloud.redhat.com, or means deploying multiple instances of it. Today we have to do more digging and have more conversations to figure out what will work best for us and for our partners. Thank you. You're welcome.

All right. First, I want to address the gentleman who asked whether IBM's acquisition had anything to do with the plans on Nutanix. We would very much like Nutanix to be supported as well; I represent the power systems, and we have a Nutanix offering on POWER too, so I just want to dispel that notion.
Moving on to my question: we get feedback from our clients that they'd like to mix different architectures. They want to place appropriate workloads on the right architecture, with respect to Nutanix specifically and in general. Do you see a roadmap where you can mix architectures within an OpenShift cluster?

Yeah. Today, OpenShift predominantly runs on the x86 architecture. We do have support for OpenShift on IBM POWER, and we have an entire team, called the multi-arch team, focused on multi-architecture capabilities, not just for POWER but exploring things like ARM, even Z, and so forth. We think this is an area (again, we can't really say anything about the acquisition until it closes) where, with IBM as a partner today and down the road, we look to collaborate more on the architectures they care about. It is something we get asked about a lot as customers adopt more diverse compute infrastructure, and we're very open to figuring out how we can do more there. You're welcome.

So I've got a couple of questions. We're looking at moving fairly aggressively toward 4.x, mostly driven by Knative, because we've got a lot of developers using OpenShift who are very interested in that paradigm. At the moment, with our three clusters, we run a stretch architecture: we use AWS as well as bare metal on-prem. The control plane stays on one side of the link; we don't stretch that. I get that those would be two different machine sets in 4.1, but is that going to be a supported deployment, where we can deploy bare metal and EC2 instances as part of the same cluster, or do you intend those to be separate clusters at the moment?

So generally, our guidance is that latency between the control plane and the nodes is the key consideration, and any time you mix infrastructure, that can get very complicated; I think EC2 plus metal is going to be fairly complicated. Yeah, we keep the control plane on one side of the link; we don't stretch the control plane. And I think we have support for RHEL 7 worker nodes, not just RHEL CoreOS worker nodes, so there may be configurations that make sense. I'd just say there may be some assumptions you'll want to investigate before jumping in whole hog, but generally, if you limit it to the worker nodes, you're safe. It's when you start stretching the control plane that we really don't recommend it and can't really guarantee anything. No, no, no, that's foolish; we wouldn't do that.

Just out of curiosity, to add to what he hit on with RHEL 7: you could have a cluster in AWS, for instance, and then add a RHEL 7 node and try it out. I don't know that anyone has stretched it that far, especially with worker nodes, but it might be something you could try very quickly; it shouldn't be a lot of work. There are probably more detailed questions there, but I guess I'm assuming things like the auto-scaling operator are not going to work, because they won't understand the bare metal side. Right, they won't understand how to provision actual bare metal machines. They would in the AWS environment, because there's a machine API that knows how to do that; but not on the bare metal side. And just one more question, and I guess it's a complicated scenario, so that's why we're excited to ask: if you run your control plane in the bare metal environment and burst out to EC2, I assume you've turned off all the cloud integration points, right? So, okay. I think understanding some of those nuances would be good feedback to the team here,
but I don't want to give you an answer of "yes, it will work great." Okay, fair enough.

And I guess the second question, thinking about the three-to-four migration: I'm a little familiar with Velero, not deeply, and it's obviously great for stateless stuff, but we've got things like OpenShift Container Storage hosting PVCs that host the Docker registry. What's the story on the stateful pieces? Again, we're going to show a demo tomorrow, but we're looking at several approaches. The idea is to give our customers choice: there's a choice to copy and replicate, and there's a choice to swing the PVCs, to move the PVCs to point to the new cluster. The idea is that you know your architecture better than anyone, so you'll see the options Red Hat gives you and choose the best path. Okay, but there is some optionality around moving the storage volumes? Correct.

I also wanted to add a little context, since folks mentioned bringing your own RHEL nodes. When you're looking at the OpenShift installer, and if you've been in the beta, what you're looking at right now for AWS is a fully installer-provisioned mode, where the installer takes care of everything from configuring the infrastructure, to bootstrapping all the CoreOS nodes for the masters and the workers, to setting up Kubernetes and everything that comes on top. That's not always possible. So we also have a mode that is user-provisioned, or that combines user- and installer-provisioned infrastructure, so you can do things like set up the cloud infrastructure yourself, if you're in a locked-down environment or one where you can't delegate that control to the installer, or bring your own RHEL nodes, as was mentioned, if for whatever reason you want to continue managing the operating system outside of OpenShift or you want a traditional RPM-based set of components. The VMware and bare metal providers that Tracey mentioned, which came out in beta 4, are what's called user-provisioned, meaning you configure the underlying infrastructure yourself and then we automate the deployment of Kubernetes and the operators on top of it. But we are working with VMware, as you heard this morning, on a fully installer-provisioned mode where those choices are made for you, fully automated, from the configuration of vSAN and networking on up. And what you'll see in the keynote tomorrow is that we're working on a fully installer-provisioned mode for bare metal, which will be very cool: automating everything that has to do with bootstrapping and configuring a bare metal cluster.

One thing to add: Joe, you had it almost all perfect, except for AWS user-provisioned. Oh, that's right. We're actually going to have a mode (I know this has come up with probably a few people in the room) for customizing the AWS infrastructure: it'll take pre-existing infrastructure, and you'll be able to do an OpenShift deployment onto that. That's one of the things that will be part of 4.1.
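For orientation, both modes start from the same install-config.yaml that openshift-install consumes; in installer-provisioned mode the installer creates the infrastructure this file describes, while in user-provisioned flows you pre-create infrastructure to match it. A minimal AWS-flavored sketch follows; the values are illustrative, and the exact schema varies by release and platform.

```yaml
apiVersion: v1
baseDomain: example.com          # illustrative DNS domain
metadata:
  name: mycluster
controlPlane:
  name: master
  replicas: 3
compute:
  - name: worker
    replicas: 3
platform:
  aws:
    region: us-east-1            # provider section differs per infrastructure
pullSecret: '...'                # obtained from cloud.redhat.com
sshKey: '...'
```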
So I'll put the plug out there again; I was waiting for my moment: try.openshift.com. There are two different sections; please go out and look, it's brand new as of Friday. You have an install-on-premise flow, an install with the options that Catherine mentioned, and the usual flow you've seen on try.openshift.com for a few months now with installer-provisioned infrastructure. It's also a plug for our betas: your feedback is really valuable. For us, we look at Amazon and say, we can automate everything, so let's do that; and then we go to select customers in the beta who say, our Amazon environment is very locked down, and I don't have the authority to delegate control of DNS or VPCs or whatever to you. That's how the user-provisioned mode for AWS was born, and it's feedback like that that is just invaluable in making sure the product suits your needs. Thank you very much.

I have two questions. We're currently using 3.11 with Gluster CNS storage, and we heard there is no roadmap for Gluster in 4; I just want to confirm that, and ask what our strategy should be for getting off of it when we go to 4.

So OpenShift Container Storage in 3.x was built around the Gluster technology, and that has a life cycle that goes out into 2020 or so. If you're on that infrastructure, it is fully supported by Red Hat: patches, updates, the whole nine yards. As we moved to 4.x, just as we had a number of architectural choices to make on OpenShift, we also had architectural choices on storage. One thing we saw was a lot more demand for object storage, and we also saw a lot of momentum in the community around a project called Rook. Generally, in our portfolio, object storage is closer to Ceph in terms of where we see its strengths (Ceph also does block, and even file through CephFS), and the Rook project already had Red Hat folks contributing to it. So OpenShift Container Storage in 4.x is going to be built around the Rook and Ceph technology, and just like the three-to-four migration, we're working on storage migration. Cheyenne, who I'd like to stand up, is the product management lead for OpenShift Container Storage.

Yeah, good question, and it's going to be in the roadmap session tomorrow at 10:30 if you're joining. The technology stack for OCS 4.x is going to be based on Rook, Ceph, and actually NooBaa, which is our latest acquisition in the storage space, for multi- and hybrid-cloud capabilities. So it's not just Rook and Ceph, but also NooBaa. If you want to know more, come to the session tomorrow; it's in room 161, and I know it's booked solid, so try for it. All right. Again, all sessions will be recorded, and there's also the opportunity to meet with folks like Cheyenne here, or after the conference. If you want a demo because you couldn't get into a session, or you liked what you saw in a session and want a demo or discussion for your broader team, just reach out to your account teams and we can set that up. Everybody's happy to talk to you.

Okay, another question: licensing. It used to be socket-based, and I heard it's going to be core-based with 4.x OpenShift. Is that correct? So basically, we had supported both sockets and cores for the worker nodes from the beginning. We moved to focusing on core-based pricing because it gives us an option that works across all environments; you can't really count sockets on Amazon, for instance. That being said, if you're an existing customer on a socket-based program, there is a sort of grandfathering at renewal,
so you can continue that for at least one renewal term, and then talk to your sales reps about the rest. There are also options like Cloud Suite, some bundles that are socket-based. But yeah, we wanted to simplify the pricing structure, and also get out of discussions about the right ratio of cores to sockets; your procurement people love talking to us about that. So the focus now is on core/vCPU-based pricing. The other thing is that with the metering technology, we're finally going to be able to offer, hopefully this year, a consumption-based model, for when you have a base set of capabilities but want to burst capacity. We're looking at how to tie the metering in to offer a more consumption-based pricing option. Thank you. You're welcome.

Two questions. There was a comment about automating vSAN integration: is there a story around read-write-many, or is that still going to be something that's more manually provisioned? Eric? Sorry, there's not a new story on that. Okay. The other question: with all of these new features, new processes, and new technology building blocks, what about training? For those of us with access to the Red Hat training catalog, for example, when can we expect that to be updated?

Yeah, great news. The Red Hat training team has already been hard at work, starting with the first beta, building training curricula for OpenShift 4, both for administrators and for developers; although from a developer's perspective it's largely the same, other than that there will be training for things like Istio and Knative and the new stuff, since how developers work with OpenShift should stay consistent. So look forward to that. There may be some sessions here; the Global Learning Services team is at Summit, so you can go talk to them. If you have a GLS all-you-can-eat subscription, that's great, because you get access to the whole catalog. We also have other options: Brian mentioned learn.openshift.com, a great self-paced, hands-on resource for folks who are just getting started, with a lot of learning modules, and we keep adding new ones, many tied to new technologies like Knative and Istio. There are also a number of books and community resources, and the OpenShift Commons gatherings and the weekly briefings are a great way to learn about new technology. So for those of us with the learning subscriptions, you'd expect that to be updated relatively soon? Yeah. If those courses aren't available right when GA hits in a couple of weeks, it won't be long after, because they've been developed alongside us. I think the plan is that instructor-led training goes live right around GA, and the self-paced material comes shortly thereafter. Excellent, thank you.

I'll go back to your read-write-many question; I don't know where Cheyenne is and why he's not yelling at me. My answer was accurate if you're trying to use just the vSphere-provided functionality. But if you have OCS, read-write-many is something that can be automated and supported, and OCS can run on top of anywhere you want to run it. So there is a story; it's just that for the specific question I think you were asking, it's sort of manual. I'll table all of my questions around that. Thank you.
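For readers unfamiliar with the shorthand: "read-write-many" (RWX) is a Kubernetes persistent-volume access mode that lets many nodes mount the same volume read-write, which is why it needs a storage provider that supports shared access, such as a file-backed class from OCS. A minimal sketch, with an illustrative storage class name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany                # RWX: mountable read-write by many nodes
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-filesystem # illustrative; a shared-filesystem-capable class
```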
All right, well, thank you, everybody. We appreciate your participation, and we appreciate your questions. Now that you know what everybody looks like, feel free to corner us in the hallways throughout the week. Thanks again.