All right, so we're almost ready to rock. I think we're going to give people just another minute or two. While we're waiting, I'm going to tell a story, because it relates to this very thin room, and I don't think you guys have heard it before. In 2008, after my first startup failed, I took a job with a company called GoGrid, which was the second public cloud in the United States after Amazon Web Services. Two weeks after I started, I found out that public speaking was actually part of my job description, which I had never done before, and I had been volunteered to go speak at SNIA. I was sweating bullets the whole time. I did all the research, tried to understand how Steve Jobs presents; I had two weeks. I prepared, I built this whole deck, I showed up, walked into the room, and the room was about four times larger than this one. I thought, "Crap." I hadn't gotten any sleep the night before; I was so nervous. I got up on stage, and the way they had it set up, you came out like you're a rock star, and the lights were two or three times brighter than these. I couldn't see anything, and the entire room was empty except for about five people. It could fit maybe 400 people, and there were five people in there, and I was still nervous anyway. I went through the whole deck and gave my presentation, and the thing I focused on was content: I tried to make sure all the content was really interesting, all these data points from GoGrid. At the end, all five people asked me at least one question. So that became my new bar. I don't care how many people are in the audience; what I care about is whether I give you value and get high-level engagement. That's my story.
So the room's a little thin, but I'll be very happy, and so will our panelists, if you ask good questions and get engaged, because we're just here to have a conversation and try to understand things. Nobody's got the answers, but by having a dialogue, together we find things out and hopefully learn more about what we're trying to accomplish.

All right, so, your panel today. I'm moderating. I'm Randy Bias, and I hope you know who I am, though I find that as OpenStack has gotten bigger and bigger, I've become a smaller and smaller figure within it. I've been part of OpenStack since the very beginning. I founded a company called Cloudscaling, where I was the CEO, which was sold to EMC last year, and at EMC I'm one of a few people, including Josh, who tries to make EMC think differently. As you can see, I'm not the typical EMC executive; I have no collared shirt. This is how I tend to go to business meetings, even in Japan. So that's me, and I'll let each of the folks introduce themselves.

Thank you, Randy. I'm Amit Tank. I'm a senior principal cloud architect. I help DirecTV with their cloud initiatives and lead teams putting OpenStack into production for different use cases around the cloud.

Hello, I'm Lachlan Evenson. I'm the cloud platform team lead at Lithium Technologies. You may have seen the keynote yesterday where I demonstrated Croc Hunter. We're using containers in production on OpenStack at the moment, so as a user and operator I bring that experience from the enterprise space.

I'm Josh Bernstein. I joined EMC in May from Apple, where I was responsible for the data center infrastructure for Siri. As part of joining EMC, Randy and I are kind of a tag-team show, I guess, trying to steer the ship, if you will: change the direction of the ship.
There you go. For those who haven't seen me moderate a panel before, I go to fairly great lengths to have a diversity of opinion. I don't pick people who are my friends; I don't pick people from vendors and strategic partners, or even customers necessarily. What I want to see is some different viewpoints, because if everybody on the panel says, "Oh yeah, I agree with the guy next to me," then we don't really learn anything. Where we find things out is in the areas of disagreement.

So I tried to carefully select a panel. There's somebody who is trying to understand what his container journey looks like: that's Amit. He's not using containers yet; he's at the very beginning of understanding and trying to figure out containers, and he can talk about this more in a second. He's trying to figure out: do I put them on OpenStack, or do I not? Then I picked Lachy, who's using containers on top of OpenStack, in harmony, and is very happy with it. And then I picked Josh, who I work with, but who has a very interesting story around one of the world's largest container deployments, at Apple, where they used absolutely no virtualization. So we have containers at scale with no virtualization, at web scale; we have containers on top of OpenStack in production, where there's a lot of learning happening; and then we have somebody at the beginning of their container journey. That's why I got these three folks in here, so we could explore: do containers have a future in OpenStack? Do they work in harmony with it? Do they work against it? That's what we're going to talk about today.

All right, so, good. The place I want to start is to have each of these folks describe their container journey. Let's start with Josh and work this way, and each of them will tell us in more detail what that journey was like and what the lessons learned are so far.
Yeah, thank you. We started at Apple with a very, very large virtualized environment running VMware, and after about two years or so we decided that it was just too operationally complex to run something with virtualization at that scale. In the meantime, there was another group going down the OpenStack journey in parallel. They spent about 18 months on their OpenStack deployment, and we were at an inflection point and decided we had to do something different. When we took a look at what OpenStack had offered them, it was just the same mousetrap, but different, as what we had already built, and we were looking for something fundamentally different, and really fundamentally simpler. At a certain scale, the complexity of a virtualized environment makes operating very, very difficult, and that's where we landed on Mesos and containers. It drove a lot of complexity out of the environment, it gave us tremendous capability around scale, and we were lucky enough to have gone through that journey very quickly, I think.

And how big was that environment?

When I left, it was in the 85,000-server range. Those are physical servers, so it was big.

Great, thanks.
Lachy? So our journey started about six months ago. We'd been running OpenStack in production with VMs for about two years, but we really didn't feel like we'd solved the problem of the developer getting their app out into the cloud environment. As Josh was saying, it still felt far too complex. When we took a look at how our developers were deploying their apps, they would typically spend a month writing a microservice, so four sprints, and it would take three months to get that into a VM cloud environment stably. We felt that was a failing, and we felt we could do better than that as a team. Around that time we started looking at Docker. We went to DockerCon, and we'd been keeping an eye on the container ecosystem, and what we found is that containers gave developers a really nice handoff point, a contract point, where they could package up their app and hand it to the infrastructure. So we thought, let's give that a try, but we wanted it to be developer-led. In the past we'd actually tried to come up with this tooling and give it to development; this time we said, with very little overhead, can you run with containers, and we'll provide the infrastructure. So in one month we went from no containers to containers in production, basically on the demand of the developers. That three-month lead time to get something out is now down to about 15 minutes on the first run, and we've deployed 30 microservices on containers to production in two months. That journey has really made it a smoother transition.

You're doing that in a hybrid fashion, is that right?
Yeah. Another value prop we found for containers is that you can develop them locally, and they're immutable images. You could take that image, and you had a common runtime whether it was on AWS or on OpenStack. It kind of leveled the playing field: we could use two clouds as essentially two different AZs and have the same app in both, and that's what I demoed yesterday, running in two places.

Great. Amit?

Very interesting, Lachy and Josh, thank you. Our journey is a little interesting; we tend to take a contrarian view. We have a cloud infrastructure that's already in place, a legacy cloud, as well as some OpenStack-based clouds in flight towards production. So the question we asked was: why do we need containers? And that started our evaluation journey. Are there any microservices-based approaches we could take for new applications, or for taking existing applications and decomposing them, and then putting them on containers? If the answer to those questions would be yes, then let's go ahead and actually evaluate containers more closely. And we did venture into that area. Currently we don't have anything in production, but we do have several proof-of-concepts going on, where we are evaluating containers in OpenStack, containers on VMs, and containers on bare metal, looking at different ways you could possibly leverage the best performance, like hardware-native performance, for use cases like NFV and such. At the end of our journey, we'll hopefully have a clearer picture on whether we could put containers into production, and on what percentage of the workload footprint: 10 percent, 20 percent, 30 percent, and so forth.

Okay, great. If you want to ask a question, you can raise your hand, or you can come up to the mic any time as we go along. And since you were saying that you are contrarian, I'll just give it to you for a second. By the way, I'm not necessarily going to follow the script here; this just gives us a basic arc to talk through. It seems like when people are getting into containers, they're still trying to understand: is it the same as virtualization? Is it different? Is it application packaging for the developers? Is it a lightweight form of VMs? You're very early in your journey in doing that exploration. Do you have any kind of initial sense of whether it's one or the other, or both?

Sure, great question. I think we tend to see it as a combination of configuration management and packaging, put together and nicely boxed up in one container format. As for whether it's the same as virtualization, we don't tend to see it as the same, as another replacement virtualization platform, because virtualization really does give you a nice level of security isolation, resource isolation, and many of those better-tested, proven attributes that come with KVM, or say ESXi, or some of the other hypervisors, which today don't come along with containers. So we don't see it as another virtualization option; instead, we see it as something that adds value to, or augments, our virtualization strategy.

So Siri runs without any isolation between the containers, and it's a free-for-all. And, to be complete, I'm not speaking specifically about Siri, but containers fundamentally differ from virtualization in that containers rely on the same underlying kernel. If you have workloads that need different OSes for whatever reason, then you have to virtualize. If you can get all of your application instances to use the same kernel, then you've eliminated a layer of complexity in the system. I think there are still plenty of ways you can guarantee security and abstraction the same way that virtual machines give it to you now. A lesson we learned was: why do we need all these different kernels for all these different use cases? Well, because that's the way we've always done things, and that's the way it's been validated and tested. But really, the kernel doesn't matter. You can pick one kernel, all your applications can leverage it, and what you gain from that decision is the removal of this layer of complexity. That's the fundamental difference: do you need the same OS or not? And really, not the same OS but the same kernel. You can run the same kernel everywhere; why wouldn't you? It makes supporting hardware easier and eliminates complexity. It's just simpler.

Agreed. And, sorry, a quick follow-up thought: would you characterize your apps as mostly user-space apps, rather than apps which have both a user and a kernel presence, as being those kinds of target candidate apps?

Yeah, I think so.
I mean, there are certainly applications that need something kernel-space, like maybe a file system driver, or something like an NFV instance. So yeah, I think that's a good characterization.

Okay. You know, when you actually have a look at how our infrastructure is utilized, and this isn't a main driver for us, we're running so many kernels, and they're managing memory they don't own, via hypervisors. There's actually a tremendous amount of waste in your infrastructure from running all these kernels. And not only that: a kernel is something to patch, something to manage, something you need config management for. You can actually simplify, and not only from the performance standpoint.

The hypervisor is a piece of middleware, right? It's essentially feigning all these resources; it's doing hardware emulation. If you take that out of the way, you have direct access. It's a chrooted cgroup; that's all a container is. So you're relying on a single kernel.
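Josh's "a chrooted cgroup, that's all a container is" can be sketched concretely. The Python snippet below is purely illustrative and is not anyone's actual tooling: it only computes the cgroup-v2 writes that would bound a process group, assuming the unified hierarchy is mounted at /sys/fs/cgroup (applying them for real requires root).

```python
def cgroup_writes(group, mem_bytes, cpu_quota_us, period_us=100_000):
    """Return the (path, value) pairs that would bound a container's
    memory and CPU under cgroup v2.  The /sys/fs/cgroup mount point
    is an assumption about the host."""
    base = f"/sys/fs/cgroup/{group}"
    return [
        (f"{base}/memory.max", str(mem_bytes)),              # hard memory cap
        (f"{base}/cpu.max", f"{cpu_quota_us} {period_us}"),  # CPU quota per period
    ]

# A 512 MiB, half-a-core "container":
writes = cgroup_writes("web", 512 * 1024 * 1024, 50_000)
```

Pair those limits with a chroot (or, on modern kernels, mount and PID namespaces) and you have the essentials of the isolation being described, with everything sharing the one host kernel.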
That's not a specific driver for us today; it was actually getting apps out, and time to market. But it's something we're aware of as we look at the overall utilization of our cloud and how densely we can pack things long-term. Containers are actually a lot more compelling when you look at them that way.

Well, when I first started using containers, it was FreeBSD jails, and I was using them to create isolation and better security for a web-facing application. So I've always been a little bemused by the position of VMware, part of the EMC Federation, that type 1 hypervisors are somehow more secure. That wasn't necessarily my experience; it seems like they could both be equally secure. But it does seem like the tooling around container security and management is much less mature than VMware's.

Yeah, I think that's true. It's definitely less mature from a tooling perspective, and you could argue that a virtualized environment, at least with VMware, is more secure, because if you broke out of your virtual machine, your attack surface is ESX, not Linux. But I think the security discussion is: if you get a bunch of reasonable security people in a room, you run around in circles about what is secure enough, and there's always somebody in the room who is kind of unreasonable, who has some sort of philosophy or religion about it, and you're never going to convince them. I think that's the state of security.
You either get it, and you accept the arguably slightly additional risk for greater operational efficiency, or you cripple yourself with all this ridiculous security nonsense. I trust that the community will come to consensus that the way containers are deployed is secure, the same way VMs had to go through that same process. So I'm relying on the fact that maturity will come with time, and this will come to pass: yes, the world agrees that this is a secure way to do business.

Great point, and I really resonate with one of the points Josh just made: the advantage you gain by making it more efficient is definitely going to be worth it, even if you have to manage the security aspects a little more actively.

So let me just get this straight. I wish I had a picture for this. Over here I've got sort of an infrastructure-as-a-service layer that provisions me virtual machines, which have an operating system; then I have a configuration management system like Chef, Puppet, Ansible, or Salt; then I have my application that gets deployed on it; and then kind of over here I have a management framework. And then over here I've got containers, with a management framework, kind of all the way down, almost like a more vertically integrated silo. It sounds like I've not only removed some of the complexity around the type 1 hypervisor and managing multiple operating systems and kernels, but I've also removed a lot of the configuration management tools, and I've got something where it's very easy and very fast for me to spin up a container, put it out in production, and scale sideways. Is that right?

Yeah, absolutely.

So then why would I care about OpenStack? Why not just go all Mesos?

I think that's a good direction to go.
You said it, there; I said "absolutely," so: absolutely. One of our developers, to quote him, said that getting an app out with VMs is like a Rube Goldberg machine, where you've got an incredibly complex set of things that have to happen in order to automate something like getting an app out. I don't know if you've seen the Rube Goldberg images, but you go through that kind of journey to get a VM and an application deployed, which is incredibly complex, and if any piece of that breaks, the net effect is that the application doesn't get deployed. But for us, having OpenStack there meant we could leverage a platform that basically resourcified network, compute, and storage, and we could overlay container orchestration really quickly and get it out there in a month. If we didn't have something like OpenStack giving us access to all those different pillars that you need, network, compute, and storage, we wouldn't have been able to turn that around in a month, for sure.

Interesting take. So, Randy, you made an interesting point, and I want to speak to that. When you say that you've taken out the configuration and some of the other aspects of it, are you alluding to having offloaded those things to an external entity, which would then act as an orchestration layer? Or are you alluding to the fact that somehow you've turned your application into either a stateless or a stateful form where you no longer need any of that configuration at all?

Well, it seems to me, my experience so far has been, that if you go to a developer and say, "Hey, learn this Chef or Puppet thing," versus, "Package your stuff up in a container," it's sort of a no-brainer for them which is less effort. I mean, I love Chef and Puppet.
I built my very first startup around Puppet. But, you know, they're complex.

Yeah, so that's a great perspective, and I remember one very insightful note you made about reducing the surface area for the developers. But does that mean that by reducing that surface area for your developers, you are increasing the surface area for your ops guys, or your IT guys, so that now they have to worry about managing the containers themselves and the orchestration of those containers?

No, I don't think so. In fact, one of the lessons we went through is that when we got rid of virtual machines, the size of our Puppet repo decreased substantially. The first thing a developer does when they spin up a VM is ask you, "Oh, can you change the version of Java running in that thing?" And so now you have all this bloat in your Puppet environment. If you can simplify all that down, I think it puts less burden on the ops guys, and certainly less burden on the developers. The developers focus on the runtime requirements they want, and certainly when you add a PaaS on top of that, where they can be declarative about the versions, and the container itself, the image itself, is built automatically, the burden is tremendously lifted off both teams.

Interesting.
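Josh's point about developers declaring runtime requirements, with the image built automatically, can be sketched as a toy spec renderer. The spec fields and step format below are invented for illustration; a real PaaS would emit a Dockerfile or buildpack configuration.

```python
def render_build(spec):
    """Expand a developer's declarative runtime spec into ordered image
    build steps; pinned versions live in the spec, not in a Puppet repo."""
    steps = [f"FROM {spec['base']}"]
    for pkg in sorted(spec.get("runtime", {})):
        steps.append(f"INSTALL {pkg}={spec['runtime'][pkg]}")  # e.g. java=8
    steps.append(f"CMD {spec['command']}")
    return steps

steps = render_build({
    "base": "ubuntu:14.04",       # hypothetical base image
    "runtime": {"java": "8"},     # the "change my Java version" ask, declared once
    "command": "bin/start-service",
})
```

The point is the handoff: the developer edits one declarative file, and no ops-side config management has to track per-VM drift.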
Yeah, some people would say that you can't really eliminate the complexity of an IT system; I've heard this argument before; you can only push it around. And some people would say, and I'm more in this camp, that you can eliminate some of the complexity if you're willing to accept some trade-offs. One of the fundamental things Amazon taught us is that if you move some of the complexity into the application layer, in terms of apps managing themselves, being resilient, and scaling out, then suddenly all the stuff you have to do at the physical underlying infrastructure layer is way, way simpler. And I'd argue that's not an even trade-off: there's so much more complexity, and cost, saved at the infrastructure layer that it's a no-brainer to push it into the app layer. And once the tooling's all there, your average developer can use a PaaS or whatever; it's very simple.

I think, though, that we as an industry have put all of that... you look at running Oracle, right? We go to great lengths with our infrastructure to support HA Oracle. You have all these hardware things; you have this whole multipath Fibre Channel environment. But the database, or the application, should be aware of its own ability to replicate and do DR. That's really where that complexity belongs, because to do it correctly, it has to be in the application. We've just accepted for so long that we'll make it a hardware problem. I think that's the wrong place to solve that problem to begin with.

Yeah, and not only that, but once you do, I see everybody treat it like a hammer, and then they go around trying to reuse that hammer for everything, even when the apps don't require that level of redundancy and resiliency.

So I don't think I got my question answered, though. Does OpenStack matter?
Can we just get rid of it, kick it to the curb, and go with Mesos?

I don't think you want me to answer that question.

I want you to answer that question. In fact, Josh, please answer that question.

I mean, we're at an OpenStack event, right? So... look, I think the reality is that there are applications, there are environments, where for whatever reason people want virtualization. They want that abstraction; they want to run multiple OSes; they want to run different kernels, and so on and so forth. For those use cases, sure, I can't argue with that. But I think there's a tremendous amount of complexity still in OpenStack that can be eliminated by looking at other options, and I see some of you are ready to jump up on the stage and kill me, which is fine. But I was in a session earlier today where they talked about all of the holes and all the problems in the OpenStack APIs, and how, depending on what plugin you were leveraging, you would get different semantics when calling the API. How is that really open and easy and modular? It's just abstraction for the sake of abstraction. Maybe at a small scale it works just fine, but at data center scale, I think it's too complex. Try diagnosing "you can't ping this VM." You have all this complexity in the network, and then you want to layer a network abstraction on top of it. How do you operate at that scale efficiently? I don't know.

Well, to play devil's advocate a little bit: how much does DirecTV look like Siri? Siri's pretty much a single app, right? It's pretty easy.

I mean, it's several apps, right?
It's several microservices; people just call it one, and I get a little flak for that.

But, you know, talk about your experience running your OpenStack at scale.

Yes, so for us specifically, we still see a need for both, and that keeps OpenStack, and AWS, and VMs very much alive for us. They complement each other, and it's about using the right tool for the right job, and what business problem you're trying to solve. There is still definitely a need for virtualization in some cases. Some apps, as was just mentioned, are not container-friendly; you can't just pick them up and put them in a container. Such as, let's say, a database, some database technologies.

I mean, they were built on assumptions about the hardware.

Depending on the database, right? If it's Cassandra or Kafka, sure. But if it's MySQL, it wasn't built to go up and come down, and reattach, and go up and come down, and have multiple instances, and take scale-up and scale-down commands. It wasn't built like that. Maybe one day it'll be there. I guess the other thing to note is that it's a journey, right? As you go through these things... you know, AWS has Lambda. Are people really consuming Lambda? Is that something we're going towards? Are we evolving from VMs up to containers, up to application runtimes, to serverless compute? I don't know. But as we hand these things to developers, we're giving them little steps to really consume different levels of abstraction and complexity. So for us it's still a journey, and we still see a need for both, and that's why OpenStack is important in our environment.

Amit, you were saying that you were trying to evaluate which apps could go in containers and which can't. Are you mostly homogeneous, or have you got a heterogeneous set of operating systems? How do you figure out which apps make sense?

No, we definitely have a heterogeneous environment, and I'm leaning towards Lachy's position on this: there are certain things, like Fusion Middleware for instance, where there are just so many monolithic layers of software running that trying to even consider decomposing them, or putting them into a services perspective, may not make sense right away. Instead, if you're trying to build new apps or new services, those are probably better candidates for considering containers as a target.

I do, however, want to come back to your original question about whether or not OpenStack matters. Not just at DirecTV: while working with EMC and Cisco, I've worked with many Fortune 500 companies, building and architecting solutions for them, and every company will have to travel that trajectory, that journey, from a legacy, virtualized environment to an OpenStack kind of environment. They're not necessarily going to be able to leapfrog. There will always be outliers, but they may not be able to leapfrog directly to a containerized world, bypassing the API-friendly OpenStack ecosystem.

Questions, anybody?

Can I ask a question? No? Yes, go ahead. I think there's this misperception in the industry that some apps can't be containerized. I don't agree with that. I think there are certain classes of apps that are harder to containerize. Also, you don't necessarily have to run them in a container; they can run on bare metal if you're relying on particular kernel components. But I don't get it. MySQL runs in a container, no problem. Postgres runs in a container, no problem. Oracle runs in a container, no problem.
I think there's this mindset where people say, "Well, my app can't be containerized." I don't really buy it, when it comes down to it.

Yeah, you know, everything can be containerized, there's no doubt. But does it make business sense, and what's the risk, if it's your crown jewels, to go and pick up infrastructure that's been running, platforms you rely on, that you're making business decisions on? Picking them up is a big risk factor, right? Even our journey to VMs: we didn't say, let's just pick up data stores we've relied on for 15 years in business and put them in VMs on day one. We said, let's be pragmatic, let's take things slowly, move things in, and see how it goes. I know you can absolutely run MySQL, but to do that on day one is just a big pill to swallow for the business. So: business risk.

You know, I'm going to go further. I'm going to say it's actually easier to run some of these legacy apps in a container, and here's why. My use case at my first startup was orchestrating Puppet in 2006, when nobody had heard of it, and we would spin up an Oracle database, and we would run the command to create an empty database. Thirty minutes later, I had an empty database, unlike MySQL, where it takes half a second. If I had already created that empty database in the container, I could just replicate that out, and I'd have saved an extra 30 minutes of provisioning time on scale-out. I mean, that's unacceptable.

Yes... I don't know, it's tooling and knowledge, and the people that have run it, right? I don't think anybody's going to not want to go to it; it's just, you know, it's the journey. Even the front-end applications that we've moved... and I'm sure it was the same with Siri. It's like: how the heck do we debug this in prod? How do we get the tooling around it?
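Randy's Oracle example, pay the thirty-minute initialization once, snapshot the result, then replicate the snapshot instantly, reduces to a simple caching pattern. A minimal sketch, where the init function and the snapshot-as-dict stand in for a real image build and a copy-on-write clone:

```python
class SnapshotProvisioner:
    """Run the slow initialization once, then hand out cheap copies,
    the way a pre-built container image replaces re-provisioning."""
    def __init__(self, init_fn):
        self._init_fn = init_fn    # the slow "create empty database" step
        self._snapshot = None
    def provision(self):
        if self._snapshot is None:
            self._snapshot = self._init_fn()  # paid once, ever
        return dict(self._snapshot)           # instant clone thereafter

calls = []
def slow_init():
    calls.append(1)                # stands in for the 30-minute wait
    return {"schema": "empty", "version": "11g"}

p = SnapshotProvisioner(slow_init)
a, b = p.provision(), p.provision()  # second copy skips the slow path
```

The same idea is what a container registry gives you for free: the expensive build happens at image-build time, and every scale-out event is just a pull and start.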
It wasn't like all the tooling was there on day one and you went, you know, cut the red tape, Siri's done, we can all go home, never need to touch her again. She's an organism, living and breathing. When you actually get in the trenches, you're asking: what is a cgroup actually doing when I ask it for this much? How does that look? What does noisy neighbor look like in a container? What do all these things I'd actually solved with virtualization look like? That is the journey we're on now: actually building the tooling around containerizing. That's probably why it's a larger pill to swallow with something like the crown jewels, your data stores, your system of record. That's what your business actually depends on; you can't throw it away or lose it. So, risk-wise, moving something like that is just a pill that's too big to swallow right now for us.

Okay, so what about a containers-only OpenStack? I know I'm spouting sacrilege here, but imagine: no Nova, maybe just Keystone, God forbid maybe not even Neutron. Just spin up containers, Kubernetes and Mesos in harmony with Magnum, whatever. Really stripped down, so you can tell your developers: hey, just point and click, take your Docker image right off your laptop, shoot it into this thing, and you've got a hundred copies instantaneously. Does that make any sense?

I think we call that Mesos already.

Wouldn't Mesos be a little bit different, though? It's got a whole scheduling system and a level of sophistication that OpenStack, by comparison, does not. There's no orchestration there necessarily, right?

Yeah. Absolutely, I think there's room for it, and again, it comes back to tooling and the journey.
You've got all these APIs that you've built tooling around. You've got Cinder, you've got Neutron. Whether it makes sense to keep using them down the track, as you start down the journey, rebuilding everything from scratch is sometimes too hard. In some cases it does make sense, where you've got greenfield. Maybe then you just don't do it.
You know, we talk about this pets-versus-cattle thing, right? And in OpenStack, when you put a new head of cattle on the line, it's like you take it out and you clean it and you curry it and you feed it and maybe put a little hat on it. Everything's very precious.
You're dressing your cattle now? Is that what you're saying?
I mean, that's what it's like, right? I spin up the VM, and I attach the block storage to it, and I put it on this VPC, and the network has to be just so, with two network interfaces. It's pretty elaborate. But in container land, I'm guessing Siri doesn't have multiple network interfaces per container. It's just a lot more stripped down, because Google's not doing that with their container-based system either.
Absolutely. You know, I get a lot of questions about the production environment, like: what's the IP address of the container? I say, forget about it. You're attaching it behind a load balancer. Everything's ephemeral. You don't even care about the IP address of a container; it just doesn't matter. Everything's by name. Forget about IPs. But again, it's a journey, and you need to break the way people think, hand-hold, and help move them along.
I think an all-container OpenStack would be cool. I think there's a tremendous amount of complexity that gets removed from that kind of stack, and I'm all about making it simple. At a certain scale,
it's got to be simple. If you could remove all of this... I mean, you look at all the vendors on the show floor, where we have our own reference architecture for OpenStack: there are VLANs and all this complexity, and you've got to attach the storage here, and there are two network ports. It's just crazy. There's so much complexity in that system that anything you can do to drive it all out, you'll be better off longer term, even at a small scale.
I don't know, man. It seems like most IT engineers prefer complexity. That's the way they solve problems: adding another piece, another layer of abstraction, right?
Anyway, I would really like somebody to come up and ask these three smart guys questions. You have the person who built the infrastructure under Siri here, and you know there's a hankering in your heart to learn something about that. He might not tell you, but you could give it a go. I can call on people. No, I won't embarrass people; that ship has sailed. Anybody? Oh my god, we must be terribly boring. Okay, here we go. I don't mind talking, you know me.
So, thank you. You win a prize. I don't know what it is, but Nicole right here is going to help you out later on. Go ahead.
You've won a pony.
I see a lot of you folks are managing these clouds and driving this. Is the push towards containers coming strongly from the app side, the app developers? Is there a lot of momentum on that side, or are they too busy doing the day job and really don't know? Especially for Lithium or Siri: was the push "let's use our infrastructure at higher utilization, so let's make the effort, and then the app guys will follow"? I just want to understand which side of the game it's coming from.
That's a good question. At least in my experience, it wasn't the app guys that wanted to go down that route; it was really us as the operations team. I think in different organizations you'll get the pressure from either the apps or the ops team, depending on where the pain point is. In most organizations the IT infrastructure is lacking, so the developers try to take more and more of that responsibility on their shoulders, because when the app breaks in production, they get the phone call. So when you hear about application developers wanting containers and that sort of thing, it's because the underlying infrastructure has been so insufficient that they want to take the responsibility on themselves, and their day job is features, not ops. They want the ops piece to be as simple as possible. But there are other organizations where the ops guys want it. I mean, if we're all IT ops people here, we all want to be successful running infrastructure. So I think it comes from both places, depending on the politics and the dynamics of the organization.
Yeah, from our perspective, and I think probably across the board, developers couldn't care less whether it runs in a container or a VM. They just have a certain set of requirements and they need to get something out, right?
They need compute, storage, and network, and that's all they really care about. How it runs and what it runs on really isn't their concern. But the promise we sold with the cloud was that we were going to make self-service infrastructure easier, right? And when we actually queried our development team, we found we'd abstracted it so much that we'd made it more complex. It was almost easier to deploy to bare metal than it was using an orchestrated VM infrastructure. We looked at that and said, you know, that's not the promise we sold those guys. We said we were going to make it easier for you to deploy your apps, figure out what's going on, and stop worrying about the infrastructure. Just worry about what you're paid to do, which is write features and get them out into the environment. So we saw the gap there, and we tried to help them meet that need: stop worrying about the infrastructure and start worrying about writing features. That's something they've come back to us on; I've got an email as long as my arm just saying thank you, I can now just worry about writing features. So that's something we've achieved with containers, but I don't think they necessarily cared that it was containers. Docker just did a very good job of making it sexy to put things in containers, right? For developers, they've done a tremendous job; they've made the experience of containerizing things...
They absolutely invented containers, right?
No, I mean, they made containers sexy.
I was using Solaris zones in production 10 years ago, and you were using jails.
In addition to the very interesting perspectives already shared, I think the need arises from multiple stakeholders: maybe an architecture group, the operational group, executive management wanting operational simplicity, and agility on the development side. Translated into technology, containers are a very good option to consider. So I believe it's not necessarily the application owners driving this; it's multiple stakeholders.
Just one more thing: the confluence of DevOps and microservices and all these things has played into the sweet spot of containers. You've given developers access to the infrastructure, you've given them the pager, they have to get out of bed. So making it easier for them to get things out, fix things, and deploy things is something they want and need.
As a follow-on to that last question and this discussion, which I find really interesting: on the ops side, I think all the people in the room get it. On the dev side, you're rolling it out: hey guys, you're getting containers come Monday. What do developers need to do, understand, or learn differently, with say the Docker environment, to be able to encapsulate the dependencies and all that? Is it a big learning curve? Is it easy?
I don't think so, in my experience. It's a learning curve, but depending on how the infrastructure is set up to receive the container, the developers just declare their runtime dependencies; they just declare them differently than they did in the past. So, for example, say I'm writing a Java application.
I would email my ops guy or put in a ticket saying, hey, I need Java 1.7. They make this declaration about what they need and expect it to be in place. Whereas with containers, they can declare that requirement for themselves, and they can trust that the infrastructure will honor the declaration. So the learning curve is... instead of filing a ticket, the app is packaged up with all those dependencies.
Yeah, that's right: Java 1.7, plus the app, plus the libraries, plus whatever else you want with a Java app. Instead of declaring it through a ticket or an email or a document you hand to the ops guy, you declare it in your Dockerfile. So there's maybe a little bit of a learning curve, a syntax learning curve.
But the fear of every operator is that when you give developers that power, they'll put crazy stuff in there that increases your attack surface, or isn't the monitoring standard you picked, or whatever.
Yeah, that's true. But I think it's better than giving a developer a credit card. The most dangerous thing you have is all these developers running around with P-cards and credit cards, going out on Amazon and doing whatever. That's more dangerous. And you can allow them to declare from a set of curated packages, for example, to deal with those types of things. So it's not totally the Wild West. Go ahead.
We've actually found they want to be more compliant, because we can provide these pre-built containers with Java 1.7 in them, based on our security best practices, and all they need to say is: FROM lithium/java-1.7, add my JAR, and run my JAR. And they're like, this looks like a bash shell script, right?
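A declaration of that shape might look like the following Dockerfile sketch; the base image name and JAR path are hypothetical, standing in for an internally curated image:

```dockerfile
# Hypothetical base image curated by the ops team, with a vetted JDK
# and the organization's security best practices already baked in.
FROM lithium/java-1.7
# The developer only declares their artifact and how to run it.
COPY my-app.jar /opt/app/my-app.jar
CMD ["java", "-jar", "/opt/app/my-app.jar"]
```

Three lines replace the ticket: the dependency declaration lives next to the code and is honored identically in every environment the image runs in.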
And that's what they're expecting. They're not expecting a Chef recipe and a runbook and a Puppet manifest, which is a big, massive overhead just to deploy an app.
We have to wrap up soon, so let me go back to the original question.
Okay, yeah, I'll finish. You talk more than Randy, it sounds like.
We'll have a contest on that later. Normally I would try to summarize what we talked about, and I feel that's a little hard in this case. The reason is that, depending on whether you're a web-scale company, a startup, or a classic enterprise, your needs vary just enough that containers are either a really awesome fit or only do a little bit for you. But the thing I thought we agreed on is that containers make the developer's life a lot easier, and often the operator's life a lot easier too. So there are some clear wins. Maybe it's OpenStack sometimes, and Mesos sometimes, for now. But I've always believed that all businesses at some point have to move towards being more web-scale. Maybe I'm wrong about that, but if you watch me, you know I promulgate that notion. So in that case, maybe it's an ideal goal, a platonic ideal we'd want to achieve: more pure-play containers. What do you think? I'll give you your last 30 seconds.
That's a very tempting future, where you have OpenStack, maybe even bare-metal Ironic provisioning, and bare-metal containers coexisting in that ecosystem. Definitely, as that maturity arrives, even classic enterprises will have to consider it as a very compelling way of solving certain technology problems. So there's a very interesting future in the pipeline there.
Lachy? Thirty seconds.
For us, the value prop is clear: you can take that container image and run it the same way whether it's on bare metal, in a VM, or in AWS. The app just doesn't care.
I think my closing message is: whatever you do, keep it as simple as possible, especially at scale.
Love it. All right, thank you. Thank you very much to the panel. Thanks, guys. Thank you. Thank you, Randy.