Hi. As Diane said, we're from GE Digital, and we're here to talk a little bit about our journey to OpenShift, what we've experienced along the way, and hopefully have some helpful discourse: maybe you can help us, we can help you, and together we can help them make the product better. So without further ado, I'll introduce Tim Oliver.

Hi, I'm Tim Oliver, software engineer at GE. I'll get more into what I'm doing later, but I'm a software developer at GE.

I'm just going to get mine out of the way now. I actually have the same title as Tim, but I'm a sysadmin by trade. I've been at GE for eight years. I was a Solaris admin for almost 20 years, and I came to GE as a Solaris virtualization engineer, so containers before containers. I did automation as a service for about five years at GE, and then I joined the container cloud service, and I do all the Terraform stuff that happens before we do our OpenShift install. So although I'm really excited about OpenShift 4, now I don't have a job anymore. We'll talk a little bit later about why we're excited about that anyway.

I think there are many of us all out of jobs, which is good. Thank you, Clayton. My name's Jay Ryan. I'm a staff infrastructure architect at GE. I'm not sure what that means, but you know how titles go. I've been working at GE for just two years, and I've been working with containers and Kubernetes since March. So I'm very new to this space and excited to be a part of it.

A little bit about our team. GE is a complex organization, and we sit on what we call the container cloud team. It's a very small team; half of the team is represented here today. We're dedicated to running Kubernetes as a service at GE. We sit in our core tech division, which we'll talk about a little bit later, but that's basically IT as a service for all of GE and the GE business units.
Tim's going to talk a little bit about what GE is and what we do besides appliances and light bulbs, which I'm sure you'd probably raise your hands for.

So one of the things that's really cool about GE and about working for GE: I've been at GE since '95, so yes, these grays tell a story. GE is a huge company, and we're all over the world. I mean, take that globe and stretch it out, and that's where we are. Pick a country, and we'll probably have a presence there. GE is the kind of company that builds the things that make cities work. We do the power plants, we do oil and gas, we do renewable power, things like that. It's all the kind of stuff that's behind the scenes, but it's huge. It's industrial stuff. And then there were also the light bulbs and the home appliances, which are now no longer a part of us, but that's another story. So we're all over the world, and we're in a lot of different businesses. We're in health care. We obviously do things with the Department of Defense. We're in aviation; we make engines. So some of our businesses are really highly regulated, and others are what you would think of as ordinary IT. So it's pretty cool.

Back in probably about 2010, we were dealing with a lot of things you've probably heard about as an enterprise. Every developer team thinks that their application is special and requires its own server. So you go and buy that hardware, it comes in, it sits on the dock, and then it's underutilized because their app is the only thing using it. But at the same time, they're complaining about compute costs: everything costs too much, you've got to lower the cost. I was part of a team that was part of our initial entry into the cloud, and we were looking at these things and trying to get the cost down.
So we were looking at all these apps and all these underutilized servers and thinking, how can we do this better? Well, along come Docker and containers, and it seemed like the perfect solution. But then we had some changes, if you've been watching any of the Wall Street stuff, in the way GE operates. We won't go into that, but there have been changes at the management levels, and some of our initial efforts got thwarted. We started down the Docker road with Docker Swarm and had some successes with that, and then we tried some of the other technologies and had some successes there too. We found ourselves actually developing the pieces around Kubernetes, because Kubernetes, particularly in the earlier days, was a bear to install. It was huge and unwieldy, and it really took something to install it and keep it running. But we were building services around that, and then we saw OpenShift. Everything that we were trying to build, OpenShift already had, so what we were doing didn't make sense anymore. If they already have it, why are we building this? We can just get it from them. And of course, Red Hat does all of the great stuff that you need as an enterprise, with the support and everything like that. So we went down the road of containers, we had a lot of false starts, but we had a lot of opportunities, and then OpenShift and Kubernetes happened and we got started with that.

So if we take a step back and talk a little bit about CoreTech, about IT as a service at GE and how that works, it will kind of make sense why we approached OpenShift the way that we did. CoreTech is supporting half a million people in over 170 countries and 7,000 enterprise applications.
And as Tim mentioned, migration to the cloud, app modernization. 300,000 employees all over the world, over 10,000 applications actually. And 1,000-plus applications migrated, though I think that's old data. So as Tim said, we're around the globe.

One of the things that makes CoreTech special is that we're really a business partner. We're partnering with our business units to help them migrate to the cloud and optimize their applications, and we really do that through a product-centric approach. We are a service provider for GE, so if we're not successful, GE is not successful. One of the things we focus on as a product organization is building products for the customer, not bringing IT services that they have to consume. That approach kind of flips things on its head, and if we are successful, GE can be successful too.

So one of the things that I'm doing is taking OpenShift and writing a layer around it that's specific to the self-service model that we want our users to use. I'm making calls to the OpenShift API to do things like create accounts and assign groups and roles to users. We're handling the metadata side of it outside the platform, and that's the piece that I'm writing around OpenShift.

So, OpenShift at GE. Like I said, CoreTech takes a product-centric approach, so we took a product-centric approach to OpenShift. As you heard already this morning, the reasons why OpenShift: it checked all of those boxes out of the gate for us. Kubernetes the hard way is a hard way, right? And what we found out is that OpenShift 3 was OpenShift the hard way. As Clayton and Derek stated, they're making things a whole lot easier for us going forward. They're checking those boxes, they're a step in front of us the whole way, and that just excites us.
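The self-service layer described above drives the OpenShift API; a minimal sketch of the kind of calls such a layer wraps might look like the following. This is illustrative only: the project, group, and user names are hypothetical, not GE's actual conventions, and the real layer calls the REST API directly rather than shelling out to `oc`.

```shell
# Sketch: the provisioning steps a self-service layer might perform.
# All names (team-a, alice, bob) are hypothetical examples.

# Create a project for the team
oc new-project team-a --display-name="Team A"

# Create a group and populate it with the team's users
oc adm groups new team-a-devs alice bob

# Grant the group edit rights in the project
oc adm policy add-role-to-group edit team-a-devs -n team-a

# A separate admin group gets project admin rights
oc adm policy add-role-to-group admin team-a-admins -n team-a
```

In practice the metadata (who owns which project, which business unit pays for it) lives outside the cluster, with the layer reconciling it into groups and role bindings like these.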
So when we have conversations with our security teams, we can tell them that we're bringing a secure platform out of the box.

A little bit about the background Tim mentioned: when we did a small survey, a small sampling of what our customers were doing with regards to containers and orchestration, we found some moderately surprising numbers. We found tens of thousands of containers running, thousands of Docker daemons, and around 100 or so orchestration engines. And that was just a small sampling, maybe 20 or 30% of the environment that we had access to dive into. So we saw our customers with a need, and again, with the product-centric approach, we're going to build a product to help them solve that need, because Kubernetes is hard to operate.

When we surveyed some of these customers, what was the main thing? It's like, hey, you guys are running Kubernetes, how's it working out for you? And just to back up a little bit: they started doing Kubernetes or Docker or whatever the container strategy was on their own, because we didn't have a corporate service offering for them. It started happening at a lower level. But when they heard that we were doing OpenShift, just about every one of them, without fail, said, great, because I don't want to run this. I don't want to run OpenShift. I want somebody else to do it; I just want to consume it. Which made our job a lot easier, because we actually had a built-in market at that point.

Yep. So that's when we implemented our Kubernetes as a service model. What that really is, is a fully automated and orchestrated lifecycle for OpenShift. Today that's built on AWS, with more clouds to come in the future. We're using persistent storage: EBS dynamic volumes, as well as experimenting with Gluster dynamic storage in AWS.
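Dynamic EBS volumes like the ones mentioned above are typically enabled with a StorageClass backed by the AWS EBS provisioner. The fragment below is a generic sketch of that pattern, not GE's actual configuration; the class name is hypothetical.

```shell
# Illustrative StorageClass for EBS dynamic provisioning (gp2 volumes).
# PVCs that reference storageClassName: ebs-gp2 get an EBS volume
# created on demand by the in-tree AWS provisioner.
cat <<EOF | oc apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
EOF
```

Gluster dynamic provisioning follows the same StorageClass pattern with a different provisioner.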
We don't have pretty pictures to show you of our cool architecture, because it's OpenShift's architecture. Again, one of the things that made this such a great choice is that they lay out a reference architecture; they tell you how this should work in production. So we followed their model, with highly available masters, etcd, and infra nodes. We're using four-node OCS clusters for Gluster. Some of the guidance in the past was three nodes, but four nodes actually gives you the ability to provision storage while you're patching the cluster. One of the things about this is that, really, orchestration is not sexy. It's just running containers. But that's exactly what you want. You don't want it out there visible, saying, hey, here I am running containers. You just want it back there doing the job. And that's what we've seen so far. It's been up, our users are using it, and the thing just goes.

Yep. So we started with the internal registry, which is fitting most of our needs, actually. We also have some registries that are part of work other teams were doing, and we're plugging those in as well. We've set a standard that says for a registry to be whitelisted in our environment, it's going to have to check these security boxes: scanning and the like. On the internal platform we're also running Clair, as I'm sure many of you are, to do scanning and vulnerability checks, and we're working on automating the lifecycle around detection and remediation. One of the guys back home who didn't come sent us a Clair report today; he'd been doing some scanning, and the very containers I'm using to run what I just told you about around the API have 50 vulnerabilities, so thank you very much.

So as I mentioned, we're a product-based organization, and we have three flavors, so to speak, of the product that we're offering at GE. The first one is shared clusters.
So Chris, the OpenShift CTO, came up here earlier and talked about either building mega clusters or building lots of little clusters. If we'd had that conversation with him about six months ago, we probably would have gone the lots-of-little-clusters way, and we were kind of going down that path. It turned out a lot of our customers have workloads that require their own clusters. As Tim mentioned, lots of government, lots of regulation. If there's a three-letter acronym, it exists at GE.

So we have three types. We have shared clusters, basically for test and dev workloads, where you can come and play, learn OpenShift, and bring your test workloads to get them working. Then we're offering dedicated nodes in that environment, for customers that need that next step and say, hey, I don't want customer Y from business unit X being able to affect my availability; they can run their own nodes on that same platform. And then dedicated clusters is ultimately the third route.

The way that we're really doing this is by automating the lifecycle of the clusters. From soup to nuts, we can stand up clusters, install them, and patch them, in the way that Clayton and Derek were talking about for the next version. So it's great to hear that. Thank you, Clayton and Derek, for automating us out of jobs. I actually say that very sincerely, because what we need to focus on, and what Clayton mentioned several times, is driving business value. The less time we spend getting the cluster running and automating the cluster, the more time we get to spend solving business problems.
The less time we spend with app development teams talking about infrastructure and servers, the better, when really all they're trying to get to is: I want to run my application, I want it to have all the capacity I need, I want it to scale, and I want it to be up all the time. If we can have that conversation, and we obfuscate the infrastructure, then we're winning.

Yeah. So Tim talked about some of the work that he's doing, but one of the things we're building is what we call project guardrails, which are basically constructs using resources and limits, taints and tolerations, to give our customers the ability to stand up the workloads that they want, but also give them constraints that they might want to put in place, like cost constraints or tenancy constraints. So we give them the ability to build their clusters and their projects in the way that makes business sense for them.

So there are a lot of opportunities we have to make the product better, to make our offering better, and I'm going to breeze through these a little bit. I know we're running out of time, and I appreciate everybody's time.

So, RBAC. One of the things at GE is that we have really complex teams, teams that span different organizations, dotted-line teams, and when our customers want to build RBAC models and RBAC controls inside their projects, it can get a little messy. One of the things we want to do is build out custom RBAC roles that can take away certain access and just enable our teams to work better, and those are things that I think might help the community as well.
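The "guardrails" described above map onto standard Kubernetes primitives. The fragment below is a hedged sketch of two of them, a quota to cap a project's spend and a taint to fence off a dedicated node; all names and limits are hypothetical, not GE's actual policy values.

```shell
# Sketch of guardrail primitives; team-a, node-1, and the limits are examples.

# Cost guardrail: cap what the project can request and consume
oc create quota team-a-quota -n team-a \
  --hard=requests.cpu=8,requests.memory=16Gi,limits.cpu=16,limits.memory=32Gi

# Tenancy guardrail: taint a dedicated node so only pods that
# carry the matching toleration are scheduled onto it
oc adm taint nodes node-1 dedicated=team-a:NoSchedule
```

A LimitRange in the project would round this out by giving containers sane default requests and limits when teams don't set their own.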
There are default RBAC roles that come with the cluster, and of course you can change RBAC to your heart's content, but that's something where maybe a set of practices is what we're looking to help with, I guess.

Identity as well. OpenID Connect is what we use in the cluster, and the user info endpoints in OpenID aren't necessarily supported yet in OpenShift, so there are lots of balls in the air to get all of our identity matched up and synced throughout GE and the cluster. That's one of the things we're working on too, and something Tim's working on as part of his front end, yeah.

Tenancy is a thing that's starting to get talked about in the community, and we're really interested in the multi-tenancy working group that the Kubernetes community has. We're building business constructs and tenant constructs outside of the platform today, using the project tenancy that exists today, but we think there's a bigger story around that, and we're following it as well. Each of our businesses is a pretty discrete business. The thing they have in common is that they are GE, but it pretty much stops there. So on shared hardware and shared platforms, we really need the isolation, because nobody wants to deal with anybody else; they want to be the only ones.

Ingress: again, we have application teams with complex needs. OpenShift has routes and Kubernetes has Ingress, and those things are coming together, so we're interested in how that story evolves and how we can support complex ingress policies in the cluster.

So the future for us is basically: just listen to what the guys talked about this morning, because that's the stuff that we're excited about. OpenShift 4, that is the thing. And we're expanding. I said we're in AWS today; we're expanding to the other clouds we're in, and we're in all of them.
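To make the custom-RBAC-roles idea mentioned earlier concrete, here is a generic sketch of a narrowed role: read-only access to pods and their logs, and nothing else. The role and group names are hypothetical, and this is one illustration of the pattern, not a role GE actually ships.

```shell
# Sketch of a narrowed custom role; log-reader and support-team are examples.
cat <<EOF | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: log-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
EOF

# Bind the role to a group cluster-wide
oc adm policy add-cluster-role-to-group log-reader support-team
```

Roles like this can also be bound per-project, which is how a dotted-line team would get scoped access to just the projects it supports.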
We do OpenStack as well, so in January, in Q1 next year, we'll be rolling out our on-premise OpenStack OpenShift deployments, and we're looking forward to more clouds in the future as well.

Ephemeral build environments: one of the things about builds is that they require privilege. We've talked to the OpenShift Dedicated team a little bit about how they're handling that, and it's very interesting: they stand up physical hardware and tear it down every time they do a build. I don't think we're going to get quite there, but an ephemeral build story is definitely in the cards. CoreOS and CRI-O: no further explanation needed there as to why we're going in that direction.

And the biggest thing for us is that our customers are innovating. GE is innovating; GE has always been innovating. The customers we have in the environment today are teaching us about Kubernetes and teaching us about OpenShift, asking about Operators, asking about Helm, wanting to know how they can get in at the ground floor and build their applications and their platforms on top of OpenShift, integrating and building them in a cloud-native way. That's one of the most exciting things for me.

Yes, we're getting the watch. And we're hiring. We really appreciate all your time today, and if you want to talk about multi-tenancy or any of the other futures or challenges we're interested in, come find us. Thank you. And we're hiring.