Check, check. Hello, welcome everyone, and thank you for coming. Today we're going to have a casual panel about lessons learned adopting open source and scaling out large clouds. I've got a great panel here, folks who are friends and family; we've been working together and partnered together for a long time, so it's going to be fun: a lot of stories, a lot of lessons learned. But first I want to let the panel introduce themselves. We've got four folks: Dan, Mark, Sebastien, and Sheng. Dan, can you go ahead and introduce yourself, please?

You bet. Dan Sperling, over at Getty Images. For those of you who don't know Getty, Getty is, I like to say, the purveyor of pretty much every major editorial image, and a lot of the creative images, across the world. I run what we call our tech services group, which covers pretty much everything outside of development for Getty: all the classic IT infrastructure, help desk, NOC, et cetera, all of our application support, and, most relevant here, all of our platform and cloud efforts.

Thank you. My name is Mark Williams. I'm the CTO at Redapt. Prior to that, until about three years ago, I was responsible for all of Zynga's infrastructure, running all the FarmVille games that wasted everybody's time. I got to ride the wave of building out massive quantities of infrastructure in Z Cloud, and the operations team supporting that as well as AWS.

Cool. My name is Sebastien Stadil. I'm the founder of Scalr. We do policy and governance across multiple clouds, and I have a lot of experience working with enterprise customers that run large hybrid clouds across private and public cloud infrastructure.

Hi, I'm Sheng Liang.
I'm the CEO and co-founder of Rancher Labs. We make a Docker management platform that can run on OpenStack or on Amazon. But really, the reason I'm here is that I previously started a company called Cloud.com. I've personally been coming to OpenStack Summit since the very first one, in the summer of 2010. We made a piece of software called CloudStack, which predated OpenStack just a little bit, so we can tell the story of what we learned from that.

Yeah, and I was a big user of CloudStack. Z Cloud had two different cloud domains: one was 30,000 nodes, the other was 12,000 nodes, each with three management servers. So I'm sharing a little bit of frustration: this is now my third summit, and I'm still hearing how anguishing it is to scale OpenStack, so I'm looking forward to maturity on that vector. In my role as Redapt CTO, I try to be a trusted advisor to customers who are looking for cloud solutions as they build out their infrastructure, and as a previous operator, I can't recommend things that I'm nervous will have problems down the road. OpenStack has been a challenging one that keeps gaining a little bit of ground at a time. I think it's finally getting there, but I'm still worried about large scale with it. Again, that's why we're here: to talk about what's next.

Yep. Let's get a starting point, because we've got Dan and Mark, who have done a lot in that hyperscale area; Sebastien has probably seen the biggest clouds out there and helped make those deployments happen; and obviously Sheng has built the software behind a lot of those hyperscale deployments. But what is hyperscale? What's the definition? And the second part: does it matter? We're not all going to have hyperscale clouds. So, real quick, Mark, what is hyperscale to you?
I look at it two ways. One is the speed at which you are growing, where you're having to solve a different problem: how to grow at that speed. Then there's the point where you've reached a certain size. If I were to pick a number, 1,000 nodes, 1,000 functioning devices, feels like a tipping point where you can't do some of those traditional IT approaches anymore. Things like VLANs don't work at that size, and depending on heroic humans to restore service, as opposed to depending on automation and repeatability, completely breaks down at that scale. So that's a good inflection point for how you have to change how you do IT and operations.

I totally agree that growth is a very big component of it. Another big component is how homogeneous your architecture is. We work with big industrial conglomerates that would qualify as hyperscale just in size, but that's across tens of thousands of applications. The way you need to govern and manage that is very, very different from, say, a web-scale application.

Going off what you said, though, Sebastien: hyperscale and cloud are really great, ambiguous terms. The challenges of a single cloud running a single hyperscale application are very different from, to your point, tens of thousands of applications, or even multiple hundreds of applications, that are inconsistent, incongruent, and have different technology platforms underneath them. Do you want to go into a little more detail on how the hyperscale challenges differ in those two cases? Because I think they're very different.
Yeah. When you have hundreds or thousands of applications, there's going to be a wide spread in how those applications are kept alive and what their architecture is like. In a lot of cases, some folks are going to rely on the hypervisor to provide uptime, really old-style virtualization. So that might be a hyperscale cloud underneath the cloud APIs, but on top of it, it's not really managed the same way.

Yeah, and that's why I was asking the question; I think it's important to know. For me, when we talk about hyperscale, there is the hyperscale around the technology, the software and hardware underneath, which we've been talking about here. But there's also the hyperscale cultural shift that has to come with it. If you cannot pull people along, you are going to continue to have, quote unquote, hyperscale with a ton of systems and a ton of applications that are managed in, as you said, old ways, where "old" in this context means ten years; that's really old. The needs for consolidation, for some kind of consistency, for some config management are there, but the stuff we're hearing about a lot this week from an OpenStack perspective probably isn't, versus hyperscale in the sense of: we are building one platform, it is going to grow at tens of thousands of nodes per month, and it is going to be highly available and highly reliable with seven people supporting the whole thing. That's the hyperscale of a single entity, a single thing, and it requires the cultural shift of people who are able to think in that kind of context.
If I can add something: to me, hyperscale is actually becoming smaller than most people imagine. Before, people would say there are very few Yahoos, very few Googles, very few Facebooks, and it really doesn't matter to the rest of us what those people, or Zynga, are doing in that context. But especially with the maturity of a lot of the new development paradigms, container technology, PaaS, the development environment has changed so much. It's no longer the traditional middleware-database three-tier architecture anymore. With the new way of developing applications, it has actually become cheaper to develop applications following the hyperscale model than to go back and develop an old-style app. That was not the case just a few years ago. So I think hyperscale is going to have a disproportionate impact, even though in the end very few companies will actually achieve hyperscale. It's just like building any app: you want to follow good engineering practices and build an efficient app, even though initially you probably don't really need all that efficiency; you just want to do the right thing. Increasingly, what I'm seeing is that the hyperscale way of building apps, deploying infrastructure, and creating management systems is becoming the norm.

So, just for the audience: what are folks here for? Are you trying to scale applications, or trying to scale a cloud? Is anyone trying to scale OpenStack right now? A couple of people? Okay, that's interesting. So let's talk about some challenges in that.
I'll start with Sebastien, because you have an interesting perspective from helping folks scale both OpenStack and CloudStack, and there have been some differences and challenges. What are some of those things? Are they technology issues? Are they business issues? How do you scale these applications for folks?

I'll use an example. There's actually an upcoming talk in the next slot by NASA. They have a very large radiation simulation cluster where they're simulating the effects of radiation on some shielding they have, and it's an embarrassingly parallel process. Their scientists run that model in their local OpenStack a couple of times, and when they're happy with it, they do a massive simulation with two orders of magnitude more particles hitting the shield. When they do that, they need to burst to the public clouds for all that elasticity. One of the challenges there is that even though everything looks homogeneous to their developers and scientists, because they're using the Scalr API to talk to all of it, underneath the hood the clouds have very different performance characteristics, very different network performance characteristics. So to the NASA scientists it looks like one gigantic cloud, but when they're actually trying to optimize things, that abstraction breaks down a bit.

Yeah. One example of a hyperscale challenge: I remember working with Mark's team at Zynga a few years ago, a long time ago now. I think we actually started in 2011.
So it's really been five years since we did that initial scaling. One thing we immediately realized: back then, with CloudStack, we were very proud of the fact that we supported upgrade. OpenStack didn't even support upgrade yet. We said, okay, we support upgrade, and we never really had any problems supporting it, until you have tens of thousands of hosts, and then you start to realize your old upgrade mechanism is no longer meaningful. These days people talk about blue-green deployment and that kind of thing; we were really inventing those things on the fly back then. We had to figure out a way to isolate a small cluster of servers, upgrade them first, and make sure the upgrade really didn't introduce any regression; and if something went wrong, we had to roll back. At the time, remember, this was 2011, it felt very cutting edge. But you look at it now, and that's why I said hyperscale is just not what it used to be.
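The canary mechanism Sheng describes, isolating a small cluster, upgrading it first, and rolling back on any regression, can be sketched roughly like this. This is a minimal illustration with hypothetical `upgrade`, `rollback`, and `health_check` callables, not CloudStack's actual upgrade path:

```python
def canary_upgrade(hosts, upgrade, rollback, health_check, canary_fraction=0.05):
    """Upgrade a small slice of hosts first; only touch the rest of the
    fleet if every canary stays healthy, otherwise roll back and stop."""
    n = max(1, int(len(hosts) * canary_fraction))
    canaries, rest = hosts[:n], hosts[n:]
    touched = []
    for host in canaries:
        upgrade(host)
        touched.append(host)
        if not health_check(host):
            # Regression detected: undo only the hosts we actually upgraded.
            for h in reversed(touched):
                rollback(h)
            return False
    # Canaries look healthy: proceed with the remaining fleet.
    for host in rest:
        upgrade(host)
    return True
```

In practice the health check would watch real service metrics over a soak period rather than a single probe, but the shape of the mechanism is the same.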
You look at Kubernetes, and all of a sudden that's how you would upgrade any app in Kubernetes, by design, and it made the process very simple. Rancher adopted that, of course; we're all in on those kinds of practices. This style of upgrade is challenging to get right, and it really requires a mindset change. I remember conversations with some of the OpenStack architects in the very early days; we had this Nirvana vision that maybe OpenStack never really needs to be upgraded: you always upgrade little components, and the stuff is evergreen, always good, always alive. I still wish we could get there someday, and I think it's possible.

That's an interesting point. One user of Scalr, not a customer, upgrades from one release of OpenStack to another by just plugging in another cloud. They build an Icehouse cloud to replace the Havana one they have, start deploying new virtual machines there, and through attrition they get rid of the old one. They're still running today; I think there are still a few nodes running Folsom and a few running Grizzly and Havana. Over time their developers don't really see what it's running on, and they use attrition to get rid of the older stuff.
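The attrition pattern Sebastien describes, standing up a cloud on the new release next to the old one and steering all new placements to it until the old cells drain, might look like this toy scheduler. The class and method names here are hypothetical, not Scalr's implementation:

```python
class AttritionScheduler:
    """Place new VMs on the newest release; retire old cells as they empty."""

    def __init__(self):
        self.cells = []  # (release_name, set_of_vm_ids), oldest first

    def add_cell(self, release):
        """Plug in a cloud running a newer release alongside the old ones."""
        self.cells.append((release, set()))

    def place(self, vm_id):
        """All new workloads land on the newest cell."""
        release, vms = self.cells[-1]
        vms.add(vm_id)
        return release

    def terminate(self, vm_id):
        """Remove a VM; decommission any older cell that has fully drained."""
        for _, vms in self.cells:
            vms.discard(vm_id)
        newest = self.cells[-1]
        self.cells = [c for c in self.cells if c is newest or c[1]]
```

Nothing ever forces a tenant-visible cutover, which is exactly why old releases like the Folsom and Grizzly nodes mentioned above can linger until their last workload terminates.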
Thinking back to 2011 and 2012: while CloudStack did have upgrade in place, you do remember we had complete outages from some surprises. But those were all recoverable, and eventually they were all learning exercises. The key thing, tying this back to open source, is that because we were running our business on this platform, and it wasn't open source, it was a commercial product, it was absolutely essential to have Sheng's phone number, to call him in the middle of the night when we were doing maintenance, because we were losing revenue when things were down.

Think, too, about what that next wave is. You've heard about CoreOS, and CoreOS says, oh yeah, we're going to upgrade your operating system pretty much every two weeks. It's unclear to me, I haven't researched this in depth, how much of a pause button you have on that process; you can disable it, but this is where maturity in large-scale operations comes into play. You need visibility into all of the things that are changing in your environment: when your customers and tenants are releasing, when your operating systems are changing, when logging rates change. Something like Splunk, or the whole Logstash/Kibana stack, with visibility into all of that, is critical to having the operational maturity to ensure you're effectively lining up your air traffic control: only one airplane can land at a time, so you need to structure that. So I get a little nervous about leaving a long window open for CoreOS to decide to go ahead and pivot itself, and then looking for all the wobbles that occur during that time.

Are you talking about change-management type things, or something else?
No, change management for sure: the air traffic control. Give people windows of opportunity, but you have to have visibility, and you have to not let people do things at the same time.

So I'm sensing we're all in agreement that open source doesn't scale. I'm just kidding. No, I want to talk about making those decisions. There's going proprietary, there's going open source, and there's going open source with support. I don't know who wants to take this, but where are those lines? You have a strategy to go open source, you have these requirements; where is the line between do-it-yourself, support-yourself, versus having the break-the-glass-and-call button?

I'll jump into that. I look back at the four years I was at Zynga as incredibly fortunate, with the team that I had and the decisions that we happened to make. There were some early times where we had opted for no support on certain things, and when there were complete outages, I'd be the one talking to the CEO saying, okay, we're going to fix all these things. But we did choose a lot of early things that were open source, like MySQL and dnsmasq and CentOS. dnsmasq is kind of off to the side, but MySQL and CentOS were well established. And you look at what dependencies you're really building on those: the way Zynga was deploying them was as very simple, isolated components. Again, forcing yourself to be religiously consistent, building that homogeneity into the environment, is absolutely critical, because the more proliferation of differences you have, when a problem occurs there are too many variables to eliminate at that scale.
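Mark's "air traffic control" discipline from a moment ago, give people change windows but never let two changes land at once, comes down to refusing overlapping maintenance windows. A minimal sketch, with hypothetical names and not modeled on any particular tool:

```python
class ChangeCalendar:
    """Approve maintenance windows only when they overlap nothing already
    booked, so changes land one at a time, like planes on a single runway."""

    def __init__(self):
        self.windows = []  # approved (start, end) pairs, half-open intervals

    def request(self, start, end):
        for s, e in self.windows:
            if start < e and s < end:  # standard half-open overlap test
                return False  # denied: another change is already landing then
        self.windows.append((start, end))
        return True
```

Back-to-back windows are allowed because the intervals are half-open; the real work in practice is feeding this calendar from the visibility tooling (Splunk, Logstash/Kibana) mentioned above so you know a change is actually finished.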
But other decisions were made because things were going so fast. dnsmasq was a choice made to figure out how to provide DHCP and DNS relay in this brand new, never-been-done-before large private cloud environment, and right at 1,000 nodes, maybe 2,000, we hit a bug. One guy in the UK wrote dnsmasq, and we were like, how do we get support for this? I was actually just looking at the change log this morning: my guy had to find him and ask him for a patch, and we paid him 1,000 bucks as a thank you. But no, you can't run a business that way. So I get very nervous about people untethered from some kind of commercial support, from being able to call the experts. That's actually why I picked CloudStack: we looked at CloudStack and Eucalyptus, and feature parity was there, but the key differentiator was that when we had a problem in our POC, Sheng's team was able to jump on it and fix it in 24 hours, reliably, while Eucalyptus was still asking me what the problem was the next day. So engineering capability is a key influencer as well.

Dan, you've got a unique business. I can't even imagine how many new pieces of data you guys are collecting each day, all in random different sizes. What's your take? When do you go with a supported model, and when are you comfortable going open and managing it yourself?

You and I were talking about this a little before we got started; it's interesting. I came into IT back in the early days. In my first job I was actually programming prompts for Esprit machines, so hardcore mainframe. Then you see the shift to distributed, then the shift from Novell to NT, then the shift from some commercial applications over to more open source applications, and for me, every time I've seen that shift, it's been driven by people.
So as a leader I've had to step back and ask: am I really fooling myself by saying we are making strong decisions based on viability, supportability, cost differences, et cetera, or am I making that decision based on the people? Because of experience I've had, I've seen more and more that it's the people side. I was at JPMorgan and T-Mobile, and we were hardcore commercial; we stayed commercial; we absolutely preached that we will not risk our business on something we do not have a good backstop for, a good backstop being a good partner.

Going over to Getty, though: to your point, we ingest a few hundred thousand images a day. For example, we had people working in Africa on some of the disease challenges the world was having, and they were getting images back over to AP in about five minutes, so we could give real-time information on what was going on. If you've committed to your partners that you'll get images to them in five minutes, literally from somewhere in the jungle, you're also committing to store those images perpetually. It's obviously sad, but when Philip Seymour Hoffman passed away, we had a lot of pictures that had never ever been viewed that were needed within seconds; literally within about 20 seconds we had images that had not been touched in about 15 years. So how do you keep all that stuff available, and how do you have a backstop that says: if something breaks, how do I immediately get support?
I came into that group with that commercial mindset, because that's where I'd come from. But the really strong team members I had in the organization, the leaders from a technical thought perspective, were hardcore open source, and they were able to sway me: hold on a second, there's a different view. The community can support us, and when we need it, we can have that backstop through partners that give secondary support outside the community, and we can get as good, if not better, support from an open source solution as we do from commercial software. Now, I know that sounds like blasphemy to Oracle and Cisco and EMC and HP and IBM, but we've seen it hold true. We've had to build the house that way, though: we've had to build around solutions that aren't built by one person, and we've consciously chosen things that are at a point of, I'll say, maturity, so that we can actually build our house on them. People are joking; I've already heard the tagline "give me Liberty or give me death." It's going to be interesting to see how much more Liberty brings in ease of deployment, ease of scalability, ease of support, et cetera, plus some of the high availability it needs. I think we'll see adoption continue to ramp based on maturity, but also really based on the people driving it within companies.

If I can tag onto that with a question: when you say commercial software, do you mean a commercial license for open source software, or proprietary?
Yeah, that's a good call. I'll twist the question a little and say that the reason we typically choose commercial software, whether supported open source or truly commercial, is that we need the support. It's a support decision, not so much a reaction to that vendor's go-to-market strategy. But coming back to it, I want to reiterate something beyond the support side. I have three principal engineers, and they love, name your thing: they love Docker. The larger challenge we have is not so much deciding whether we should move forward with Docker; it's that those three principal engineers, who are absolutely on the bleeding edge and trying to move forward with Docker today, are completely upsetting the apple cart for all of our Puppet users. We went from "config-manage everything" to "no, Docker everything, we're not doing config management anymore." The life cycle for some of our principal engineers is three to six months of change: we're doing Docker; no, we're going Kubernetes; no, now we're changing it all and strategizing on something completely different, something off the shelf with Magnum and some new container solution. The challenge we're having is that they're moving so quickly that we cannot bring the rest of the organization along. The two percent are there, and we can't bring everyone else, so we're really having to slow down and pick the right solution, to your point, based on the supportability of that thing, but also based on the speed of change our culture can absorb.

I want to talk more about people and open source and scaling, but first, are there any questions from the audience so far?
Do you mind using the microphone? Here you go.

Sure. Talking about hyperscale, and especially about OpenStack: OpenStack is mostly a control plane which provides access, through plugins and extensions, to a back plane, the data plane, which is traditionally provided either by some open source reference implementation or by a third-party vendor. We saw yesterday in the keynote that there are 60 different storage plugin drivers, 30 different network plugin drivers, and multiple hypervisors, and when you factor that out, there are more than a thousand different permutations of OpenStack. So the question I'd like to put to you is: are OpenStack's limitations for scaling on the control plane, an application or systems architecture problem, or is it an integration problem, not being able to properly test scaling across all those different permutations?

That's such a great question, thank you. I'll take a stab at it. There are a lot of different answers. One is that OpenStack has kind of become like an SDK, and its intent is to have a far-reaching ability to do everything. I think that lack of focus leaves people vulnerable, because there are so few well-traveled paths that have demonstrated repeatability: everybody's doing large scale or HA in a different way, so how can you learn from each other? The other vulnerability is to the customer who believes that when OpenStack claims support for some XYZ component, it's full integrated support, everything I can do natively with that OEM or proprietary device, supported through the OpenStack API. And then you find out that you can do one thing,
and not very well. That doesn't directly answer your question, but when we talk to customers about what they're trying to solve with OpenStack or a cloud solution, the real question is: what is your workload? So many customers can't articulate that; they just want to be on the bandwagon and have the Amazon-like thing, and that's not really a good strategy, because it's not going to save you money. The successful clouds I've built, and have helped customers build, have a purpose: they solve a particular problem, and there is a religious focus on keeping things simple. That's why Cloudscaling was so fantastic for me: you guys forced customers to adopt something utilitarian, and it was utilitarian for a reason, because simplicity scales.

I want to add something, because I think that's a really important question, and it's not obvious to a lot of people what scalability really means. There's control plane scalability, as you said, about managing the number of nodes and the number of VMs, and then there's scalability around this whole interoperability mess, all the combinations you also have to worry about. I personally believe the second one is a lot more serious than the first. I remember a conversation with Mark Shuttleworth a long time ago, before OpenStack, when we were still talking about CloudStack. We were using MySQL to store some data, and he said, wouldn't you have a scalability problem? Shouldn't you consider using a NoSQL database? I said, why? He said, because it's more scalable, because Facebook uses it. And it makes sense, but then again, Facebook is managing
hundreds of millions of users, each doing things, tens of billions of messages, whereas in the best case we're talking about managing a million VMs. That's three orders of magnitude from where Facebook is going to be. Are you sure you want to deal with that kind of complexity for that? So I actually think the control plane scalability concern is a little blown out of proportion; it's really the interoperability scalability that, in many ways, is taking a toll on the project. Personally, the original description of OpenStack was so good that, despite the fact that back then we were developing a competing product, we honestly thought that in nine months, in 2010, we would have switched over to OpenStack and thrown away all of our code. It didn't happen, because the code wasn't there. But one of the things that really attracted me was the idea that there was going to be an independent, loosely coupled set of components that you could pick and choose as you wish, and that could be independently tested. I thought that would really solve some of the interoperability testing problem as well. I now see it in the Docker world: that's why Go as a programming language is taking off, because it's so simple. If you ever write a million lines of Go, it will be unmaintainable, but if you write a few thousand lines of Go, it's very clean. It almost forces you to decompose your problem into smaller chunks, each of them basically an independent problem, so that at the end of the day you can test each one of them independently. I wish OpenStack could be more like that,
and I think that would make it a lot easier to adopt and scale.

I think part of the problem, too, is Python. It's an interpreted language; you're not going to get scale-up performance out of that. That's one of those fundamental things. I think we have another question over here.

Yeah, hi. I joined the session late, so I'm sorry if this was already answered: can you speak to supported or certified hardware, and how important that is?

Yes. I'm going to put this in the context of the cloud maturity model. I don't know if any of you have heard of this, but the Open Data Center Alliance cloud maturity model describes what steps are logical to take as you grow from having zero cloud to having a fully optimized, well-running cloud. Early in that process you need to focus on just successful demos: proof of concept, basic capabilities. As this relates to the hardware compatibility list, so many customers want to start with old hardware, random things they have in their closet, or go directly to the cheapest ODM manufacturers for these projects. Deviating from the HCL of a vendor, or deviating from what you already know well, is risky. One of the most valuable things in the way we approached things at Zynga was: let's not change everything at once; let's build on things that are reliable, because we don't have time to distract ourselves with new problems; let's build on the backbone of what we have known and trusted. I made mistakes through that process. I went into the blade chassis world, this was pre-cloud, thinking it was going to help optimize our power in the data center, and it just blew up in our faces, because it added so much newness and complexity and software. We finally threw it out; it was the happiest day of my life. So again, my general recommendation is:
build on the backbone of what you know. Leverage the domain knowledge and vendor experience you have. Work on things outside of the rack: figure out how to integrate the cloud solution with how you do operations, how you do authentication, how you do logging, how you do monitoring, how you do change management. Keep things simple inside. Then, once you've learned how to do that, you can start optimizing; then you can go to ODM, then you can figure out how to manage all those differences. I would add, though, that if the Google outage the other day taught us anything, it's that there is some need to make sure we're safe from an HCL perspective. But at the same time it is tied to your problem. You were talking about being in production at scale; if you're trying something new, or you're trying to figure out where you're going to land, maybe you do want to play a little bit with different hardware. Maybe you want to carve out, I don't know what the right amount of time is, but maybe you want to carve out a little bit to think about optimizing in parallel. For us, to your point, we are trying to get consistent and standard and make it as plain and simple as we possibly can, but there are some places where we're also trying to optimize in parallel, without completely waiting until we get the surrounding systems, that donut around the hardware, clean. So here's a little story this reminds me of, and it provides a little background. I'm sure a lot of the people in the audience have heard about Nebula, the company that recently went under. One of the reasons Chris Kemp went out and wanted to design an appliance is that back when he was at NASA Ames, he was building a Eucalyptus cloud for the federal government, and one of the problems he had was that even though he had procured a bunch of hardware from
a vendor, it turned out that about 40 percent of the fleet had serious hypervisor issues, and it took them months to figure out what the issue was. Even though the boards were the same, the firmware was the same, and everything was the same, it turned out that the OEM of the board was sourcing a specific chip from two different vendors: 60 percent from one vendor and 40 percent from another. And that is an extremely hard problem to dig down into and find out exactly what the issue is. So that's the other case: even if you are sourcing all your hardware from one single vendor, there might still be some variability there. That experience is what led him to then create the company, et cetera. Can you guys speak a little bit about the process of transitioning from a more commercial model to a more open model, from a people, process, and applications perspective? Yeah, absolutely, I'll take a first crack. It's been interesting, I'll be honest. The more we have tried to make that transition, the more we recognize that there are fundamental things. It kind of goes to what Mark was saying: do you know why you're building your cloud? Stepping back a little bit even from that, and extrapolating from that idea: do you know why you're trying to make that people transition? As we've looked at it, both when I was at T-Mobile and at Getty, the thing that resonated the most was that we had too many people who were network engineers who literally said, I am a route/switch engineer, I am a load balancer engineer, I am a whatever, and I know nothing about the application; I just respond to whatever the request is and do the thing the request asked for. Same with our hardware guys, same with the team that was driving our storage platforms, same with the teams that were doing our database engineering, et cetera.
They were doing the thing they were asked to, but they were not applying their expertise to the application stack, or to the actual business problem that application stack solved; they were applying their expertise to their expertise. So the maturity of our business did not gain a lot of value out of that, even if the maturity of that one platform was rock solid, because that's all they were focused on. That led, though, to us solving problems, delivery or velocity problems for example, by saying we're going to optimize how quickly the firewall form, to use that example, gets from the customer to the person who fulfills it. We'd get it done in a day, and we've got an SLA, and we wrap it all in a lot of garbage, candidly, and nothing actually helped the app. So then we stepped back, looked at that problem, and asked: what is the real problem? The real problem is we don't understand why we're doing this technology thing we're doing. We don't understand why we're applying our expertise or our knowledge to whatever business problem we're solving. And so that transition and transformation for us started with: do our employees understand what our business is? Do they understand what our business is trying to accomplish? Do they understand why they're a DBA at Getty, why they're a network engineer at Getty? And then can they take that and transform, and say, now that I know what the problem is, I can figure out how to help solve it. So for example, if your problem is that your marketing team wants to try new ideas out and do A/B testing on a really rapid basis, but that sometimes requires the proverbial firewall rule or whatever, and you tell your marketing team that trying that out through technology is going to take about two months, they are going to weed out 90 percent of the things they want to try and get down to the 10 percent, because they know it's going to take so long that it's not worth trying all of them. If you as an engineer know that, you're like, hey, how can I be way faster at just trying something out? They don't need it to be perfect, they don't need it to be completely rock solid, they just want to try it out with five percent of our customer base. Okay, well then I can probably do these three things, whatever, and I can go talk to Steve and Mary and Nancy over here, and the four of us can figure out how to solve this problem across our domain expertise. Then all of a sudden you're solutioning for a business problem, not solutioning for how fast you can answer a ticket. Once you start solutioning for a business problem, that's when you really start asking what to use. I'm not knocking Microsoft at all, but I'm probably going to have to use something faster than installing Windows from a CD or DVD and then building up an environment. I'm probably going to have to look at config management. I'm probably going to have to look at something faster than actually building up a machine every time, and look at containerization. I'm probably going to have to look at something faster than even a graphical interface for provisioning servers, and actually start using an API solution. And that transformation forces you to change and look at the technologies that are able to enable that higher level of velocity. Really long answer, but hopefully
that gets you there. I'll try to follow that with maybe a shorter one. Objectives are a way to align organizations and businesses. What worked really well for us at Zynga was that operations shared an objective for availability of games, no matter what the cause was. It wasn't like, oh, our network was up 100 percent but the game's down; we counted everything, all in, because we're all in this together and user experience matters. Even if Facebook was down and it caused an outage for us, we measured that, because it matters. The second one was cost, the third was the speed of delivery for new environments, and the fourth was, good, fast, cheap, what was the other one? Anyway, you get it. Innovation, availability, cost. Anyway, we're over time, so lightning round, just one last question. You're building your cloud and you know it's going to scale: open source do-it-yourself, or some sort of supported open source? Shen? Oh, definitely supported open source, because I think our value-add should probably be focused on building things on top of the work other people have done. All right, Sebastian. So, a very short story. I absolutely hated it when somebody in my organization created some little application, or brought something into the company, and then started working on something else, because that becomes a liability to me. So I really, really like it when my team chooses either commercial software or supported open source, so that regardless of what project the person is working on, I know I can rely on some vendor for it. Good, fast, cheap: first buy something, then figure out how to optimize and make it cheaper later. Yeah, very much commercial and proprietary for the brownfield, and then some type of supported open source for the greenfield. Thank you guys so much, it's been a great panel. I appreciate everyone for coming, and we'll see you around. Thank you, Jeff, for moderating. Thank you.