Can you guys hear me? I think this is going into the overflow room too, so if anybody wants to sit, you can go out there. Hi, my name is Josh Kleinpeter. I work for AT&T on the OpenStack team there. And we're going to hold a fun reference architecture panel. And my question to you guys: you can introduce yourselves and tell me, what the hell does that mean? Let's start with Randy Bias from Cloudscaling. I'm Randy Bias, the CTO and co-founder of Cloudscaling, where I'm most notorious for being one of the first deployers of production OpenStack in 2011, both Swift and Nova. That was very early; we were part of the initial 2010 launch. So you asked what we think about reference architectures. This is a very tricky question from my perspective because a lot of times what's in a reference architecture are prescriptions that create lock-in. Like a reference architecture that has NetApp in it creates a lock-in around NetApp. It makes assumptions about the network. It makes assumptions about the storage platform that aren't necessarily helpful. So when we think about reference architectures for our product, we try to make what we call a modular reference architecture. So we tried to create a sort of meta architecture, a way to actually build new reference architectures. And so that's how we try to sort of think about it. It's something that's more flexible and you can kind of make it fit a bunch of different kinds of scenarios and situations. Hello, I'm Randy Perriman from Dell. What do we think about reference architectures? Essentially, reference architectures, to me, I agree with you, they lock us in, but at the same time they allow us to use them as a starting point and a conversation starter. And then, as we're talking with customers throughout the sales process, we can always bring that back. And then, as we're doing the installation, we use that as our goal, and eventually as the basis of our support. My name is Rick Rowling. I'm from Hewlett-Packard. I'm a lead architect of the CloudSystem Matrix solution, which is a private cloud solution that, to the question, is based on a couple of reference architectures. We like them, we love them, we ship them. We find that nearly all customers like to tweak them. In fact, they think their own deployment is a reference architecture. So we end up being fairly flexible based on customer need. We start from key positions and that expands based on customer feedback. With OpenStack, which we've introduced recently into our product line, we find customers are, let's say, much more aggressive in terms of the kinds of expansions of reference architectures we're creating, which shouldn't be a surprise to the folks here. So I'm Robert Starmer, principal architect for cloud in our solutions unit at Cisco. We've built reference architectures and reference systems for a number of years, specifically around network services. And now, as we do more compute, we're adding in OpenStack and other technologies on top of that. One of the things that we get out of references is not that it is the one and only answer, as I think everybody else has said here. It is really sort of a baseline to work from. But with that baseline, we can provide some validation and some data around what that reference can do, so that when a customer is looking to extend and expand and build their specific version of something, they know, hey, do we need to tweak up, down, or can we change it in different ways?
So we find them to be very useful from that sort of baseline perspective, something to start from. And so then do you tune your reference architecture towards ease of use versus resiliency or speed or something like that? At least for the base one and then go from there. So we tend to tune for sort of a midline. So we'll look at resilience. I mean, part of our network model, our historical network model, has always been to provide resiliency in the infrastructure that we build. So that's a core model that we've always had. So for example, we build redundant networks as part of our reference architectures. A lot of people, especially in the OpenStack community, are building against a different systems model in the end. They're looking at things where they might do a single top-of-rack switch instead of multiple, or single interfaces into the actual server components. We give you the option to do whatever you need to after the fact, but we wanna find that sort of middle ground. When we look at the compute models that we build against, we do the same sort of thing. We'll pick something that's a middle-grade CPU for our reference architecture, knowing that there are certain users that will need the absolute top-bin part. There are others that will say, look, I don't even need that. I'll go with the bottom-bin component because it's a more appropriate solution for my end-to-end result. So we sort of try to find the middle ground in how we build our references. It gives us a better baseline to build up or down from. Does the reference architecture mean that you can go to production with it? So from HP's perspective, the reference architectures that we sort of qualify, again, we're very flexible on the customer side, they're production architectures. So not only are they production from a test and support perspective, some of those architectures are production in terms of we pre-manufacture them. But as everybody's been saying, once it gets out to the customer side, then there's opportunity to modify that. There are some boundaries to the modification, but as every year goes by in our private cloud solutions, that boundary seems to expand. So one of the things I kind of left out is that even though we have kind of this modular reference architecture, we actually operate from a baseline, which is Amazon Web Services. I consider that to be a de facto architectural standard. And part of the reason is that Google Compute Engine, which will be the other major public cloud here shortly, is about 95 to 99% architecturally overlapping with Amazon Web Services. So I just think that rather than recreate stuff, it's better to look at whoever's dominant and sort of look at them and reference off of them. But the key there is that a lot of those systems are really designed for scale. So what we try to do is make our reference architecture designed for scale. And the problem is sometimes if you want to make it smaller, like if you want to fit it on four servers, I can't do that. Like I can't take my architecture and squeeze it down on four servers. So the answer for us is that our reference architecture can be put in production, but maybe you can't if you built a reference architecture that's for, like, a pilot deployment or a dev test or a QA deployment, because you're just going to make a different set of trade-offs for a production system than you would for a non-production system. So you all have a reference architecture. The community doesn't.
Is that something that we should do as a community, to provide out there and say, hey, you're a deployer, here's a reference architecture? And if so, how do you... it seems like we should remain agnostic of vendors as much as possible, to not be prescriptive, like Randy's example of NetApp. Not to pick on NetApp, and I don't think Randy is either. What do you do there? I don't know, Randy? Well, the community as a whole coming up with a reference architecture, like you said, it's going to be difficult because there are so many different vendors represented in the community. Allowing each vendor to come up with their own reference architecture that they have validated against the current OpenStack product is, to me, the right thing to do, because then it also allows the OpenStack community to concentrate on what they do best, which is designing the next level of the product. But does that then relate to interoperability, right? Because it seems to me it does: you've got a reference architecture, and if we want to do interop between clouds, you need to do that at the API level, but then if you can show somebody a reference architecture that allows for that interop, that could be important. I was going to say, yes or no on an OpenStack reference architecture, I think there needs to be more than one; not a reference architecture, but a small set. They need to be use case based. Like I've been saying, we're very private cloud focused. A lot of the folks here and a lot of the work in OpenStack is very operator, public cloud focused. So I think there's maybe a couple of choice points for a set of reference architectures. And in either case, you're right, interop, which for us means more heterogeneity across other vendors' gear, including the folks sitting here with me, is an important aspect of that. So a logical reference architecture with, at least from our perspective, the right components, not tied to specific Cisco hardware gear, specific HP or Dell, you know, compute gear, is something that we would want to participate in from a community perspective. So I'm going to push back on that a little bit because I think it gives the wrong idea in terms of the last piece you said there. It's not public versus private cloud. It's elastic cloud versus enterprise virtualization. So a private cloud that is built around enterprise virtualization doesn't look anything like Amazon, but you can build a private cloud that looks exactly like Amazon. So it's not that the community is focused on public cloud; it's that they're focused more on sort of a scale-out, more elastic cloud model, right? And then in terms of sort of like this interop problem, I mean, it is a big problem, but the reality is, if we do premature interoperability, we will stifle innovation, right? We will. So we have to be smart about this. And the way we've been talking about it in the OpenStack Board of Directors, I'm one of the directors, is that we've been talking about sort of coming up with a baseline reference stack, RefStack. You might have seen some of the presentations here on it. And the idea is that there are enough common-denominator capabilities that you can kind of have an SQL-92 of OpenStack. What is the absolute minimum functionality that has to be the same across all OpenStack implementations? Build towards interop on that as a beginning point, and then have flavors, kind of to your point, right? So here's like the enterprise virtualization flavor that you can kind of layer on top of it.
Here's the AWS flavor, so that people can do the different things they wanna do in terms of innovation and ecosystem, because it's still too early to try to clamp down, but try to put some guide rails around it so that it's not everybody running in different directions. I guess my only comment to that, though, would be that if we start talking about flavors and start talking about differentiation, somehow the APIs, I think, need to still stay consistent. So if we have consistent APIs, the differences tend to be at potentially the hardware level or the sorts of services that you can implement, but those services still have to be presentable through the API. So if I have a systems model that, for example, just thinking of the Quantum-level model, I talk about a service that is a network domain, right? Much like I talk about a compute domain within Nova or a Swift storage model. I don't have to implement Swift per se. And there are plenty of people that build OpenStack clouds right now that aren't using Swift as a part of OpenStack, but do we make that a part of the reference? Do you have to reference how you would implement Swift even though there is a large number of people that might not even use that particular technology, right? I think that we can get away with saying, here is the reference architecture. Here are all the pieces that are core to OpenStack. And so yes, you should implement them. If you want to, here's the reference that, for example, Cisco has defined for how you could do that. We have different references from everybody else as well. It doesn't mean that you then have to say, oh, but here's the more generic reference, because at the API level they should all interoperate. If I want to migrate an application that I deployed on a Cisco reference architecture OpenStack, it should still work on a Dell reference architecture OpenStack, right? Or even on the Cloudscaling system, within the limitations of, like you said, you can't necessarily deploy it on four nodes, four compute nodes, but the APIs are still consistent. Yeah, but see, this is the thing, right? There's no agreement in the OpenStack community about that. Assuming the board is sort of representative, a microcosm of the macrocosm, we had hour-long debates on Sunday about whether OpenStack is the APIs or the code. And there are very, very valid arguments for one position or the other. And it's really hard to say. Like in some ways, you're actually not allowed to call it OpenStack if you don't use Swift. I know people do that. I'm not saying they're wrong. I'm simply saying that we as a community have not come to an agreement on that piece. So it's really hard right now to come down definitively and say, look, as long as the APIs are the OpenStack APIs, whatever's behind it doesn't matter, because that's actually not true. It does matter. You can't use the OpenStack name right now without using the OpenStack code. I mean, that's part of the rules. Now, should it be that way? I don't know, but we have to sort through that together. To that point, we don't have code for everything. I mean, what's volume, right? LVM? Like probably not. Well, how do you deal with the differentiation between being able to talk to LPAR and VMware and Xen and KVM and anything else that might actually be implementing the compute backend for your system, right? That's not defined as a requirement, right? I mean, you could say, look, my reference architecture has to support all of them. That might be very difficult to achieve.
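To make the API-consistency point concrete, here is a minimal sketch, not from the panel, of what that portability looks like in practice: the same two calls, made against the standard Keystone v2.0 and Nova v2 HTTP APIs, behave the same regardless of which vendor's reference architecture sits underneath. The endpoints, credentials, and image/flavor IDs are placeholders.

```python
# Minimal sketch: the portability argument above, in code. The URLs,
# credentials, and image/flavor IDs are placeholders; only the API shapes
# (Keystone v2.0 tokens, Nova v2 servers) are the point.
import requests

def get_token_and_compute_url(auth_url, username, password, tenant):
    """Authenticate against Keystone v2.0 and pull the Nova endpoint
    out of the returned service catalog."""
    body = {"auth": {"passwordCredentials": {"username": username,
                                             "password": password},
                     "tenantName": tenant}}
    access = requests.post(auth_url + "/tokens", json=body).json()["access"]
    token = access["token"]["id"]
    compute = next(s for s in access["serviceCatalog"] if s["type"] == "compute")
    return token, compute["endpoints"][0]["publicURL"]

def boot_server(compute_url, token, name, image_ref, flavor_ref):
    """Boot an instance through the standard Nova v2 API; nothing here
    cares what hardware or vendor gear sits behind the endpoint."""
    body = {"server": {"name": name, "imageRef": image_ref,
                       "flavorRef": flavor_ref}}
    return requests.post(compute_url + "/servers", json=body,
                         headers={"X-Auth-Token": token}).json()

# Hypothetical usage: point the exact same code at two differently built clouds.
for auth_url in ("http://cloud-a.example.com:5000/v2.0",
                 "http://cloud-b.example.com:5000/v2.0"):
    token, nova_url = get_token_and_compute_url(auth_url, "demo", "secret", "demo")
    boot_server(nova_url, token, "portable-app", "IMAGE_UUID", "FLAVOR_ID")
```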
I think maybe what you were talking about earlier, Randy, was if we had a reference architecture that said, look, you have to use the code. Let's assume that that's maybe gonna be a requirement. And you have to support all the core parts of OpenStack, right? Again, a good requirement. But there are certainly going to be backend differences that you can support in different ways. And so there are then potentially a set of flavors, a more resilient infrastructure that may or may not be appropriate to everybody's reference. But still, if I take something that I would deploy on a Cisco, if, again, I'm gonna say I'm resilient and you're super-scale, right? You can still take what you would deploy on my infrastructure and map it to your infrastructure. The backend might provide different classes of service or what have you, but it's still an OpenStack deployment, and so my system should be able to translate between the two. I think that would be a great way for us to provide a consistency across reference architectures while still allowing us all to provide some differentiation. How should I? Sorry, not to bogart the mic, but yeah, I completely agree. And this is part of the thing about the RefStack stuff, is that we were talking about, look, everybody's got their own reference architecture. How do we resolve this? And one of the ways is to use Heat, or Heat templates at least, as a way to describe your architecture. And then those of us who've built our own provisioning and lifecycle management systems, we can actually feed them with that template, and then there could be a baseline plus the flavor that actually basically drives the layout of the physical infrastructure of the cloud. And if you can do that, then it makes it a little bit cleaner, and then over time, I might be able to make my product actually implement your flavor, even though it doesn't now, by simply using that. So then how does, to your Heat point, how does the deployment mechanism play into this? Is the reference architecture just the end state, and however it got there, it got there, whether I'm doing it manually, using Heat, using Chef, using whatever? Or, because I mean, right now there are lots of implementations of "this is how you deploy," and I see basically every company implementing its own, and there's no one standard there. Yeah, I think that this notion of deploy is a pretty great question, because when we talk about ref arches for our customers, it's not necessarily just the end state, it's a starting state, it's the process of getting to the end state. But as we talk about an OpenStack-wide set of ref arches, back to my point before, I think if you're going to prescribe or strongly encourage certain deployment technology or deployment solutions to help you get to that ref arch, then you're starting to, that's a pretty fine line that we start to walk. So that's a great point. So obviously, by the way I'm saying this, I think that's maybe something that ought to be one level back, or maybe that's a flavor extension to the point before; there are choice points there for deployment because, unlike Nova or Glance or some of the other services, deployment's not a ubiquitous, single service in OpenStack today. If the community would provide a reference architecture, what if we used Open Compute as kind of the hardware model and had it completely agnostic? What do you think about using that kind of architecture?
Well, as you know, that's only one part of the equation, right? It's just the hardware architecture; it doesn't handle the network architecture, which is problematic. So I've spent a lot of time talking to the Open Compute guys for several years now, and I think it would be great if we could work with them on this. And what they've done is a little bit akin to what Apple did with the carriers, where they kind of drove it, they actually told the carriers how they were gonna bring the iPhone to market, rather than the reverse as it usually is, right? And so now the OCP guys kind of tell the hardware vendors, this is the spec we want, and that I think is very powerful, but if you look at a lot of the configurations there, they're not really appropriate for infrastructure as a service, like they're just really not designed that way. So we'd need better engagement from this community to go into OCP and say, like, this is what we think it looks like; we need input from probably folks like yourselves here on the panel to basically give them something that we could use. And then I think it potentially would be great for compute and storage to actually have that kind of open standard. People might not know it, but Amazon Web Services, they actually go out and they bid, they get bids from five different vendors each time for their hardware, because they're using such bare-bones hardware that they're able to make it fairly interchangeable. It doesn't have any vendor secret sauce in it. One of the, now I'm gonna get in trouble. One of the problems with the enterprise vendors historically is that they've added special extensions to IPMI, or special extensions to their little secret sauce HBA, or they load their own firmware on the Seagate disk drives, like HP firmware on Seagate disk drives, and then, like, open source tools don't work with that. So what we do need is we need the hardware to be as open as the software so that we can drive it and run it and manage it. Hardware guys? Yeah, I was gonna say, starting something with Open Compute is not a bad idea. I said earlier, a logical reference architecture where hardware specifics are abstracted, I think, is a pretty good idea. Network design is an interesting topic that we don't have to go into now, but that's, I think, an important part: at least any ref arches that I wanna give to my customers, or a set like I said before, need to have network design fully described in them as well. As a vendor with secret sauce, as long as our customer is gonna pay us for that secret sauce, we're happy to include that on as many of our hardware products as we can, but I realize that that also complicates at times our ability to, say, plug our differentiation into a standard reference architecture. I guess I could also say that we also have secret sauce; we have, I think Randy was maybe hinting at, some of the network components that also have these secret bits and things that you can twiddle. I think if you were to say that there is a reference architecture and you wanted to push it down into the hardware layer, saying, look, here are the base capabilities that the hardware has to provide, and maybe Open Compute is the right way, again, if we can get them to do some things that give the entire community the right infrastructure to build on, we can say, look, this is a good reference platform, sure.
If we can implement that same reference platform with other services, like the compute that we sell or the network that we sell, or compute from Cisco, network from HP, or vice versa, or even Dell components, whatever it is, right? We take all those pieces together and say, hey, I have the IPMI tools I need. I might not be able to use the extensions as a part of my reference architecture, but at least I know that I can plug any compute component into the system and it will provide the base functionality, right? So I think that there is a way to say there's, even at that compute level, or a reference architecture at the network level, a set of base services that have to be there. It forwards packets; hey, I think that's a good reference baseline that we need to get to, and we don't necessarily have to try to define the actual IPMI BIOS version that is implemented in that hardware, right? So again, it sort of says, here's the API for the hardware layer, the API infrastructure that it has to provide as well, and so we can keep it at one level of abstraction from the actual function. In a sense, that's what we've done with OpenStack, right? It provides an abstraction for the actual implementation of the deployment of a virtual machine, and it's opened up the possibility of using bare metal, right? For VM deployment or for compute entity deployment, and vice versa, you can do virtual networks on top of systems rather than hardware networks on top of systems. So I think the pieces do actually work very well if you can define that abstraction layer for a reference architecture. Oh, yeah, I think if we could, I mean, is that a feasible thing to do? I mean, I have, like, here's a spec for IPMI, here's a spec for configuring RAIDs, here's a spec for configuring switches, and as long as people started implementing those things, then I can, because from my team's perspective, we're all about automating, and automating not just deploying OpenStack, but configuring all of our hardware. Right, well, I think if you wanna do bare metal, right, and you want that to be something that's possible, not just with a very specific vendor's hardware, then you have to say, here are the capabilities that the bare metal system has to provide, right? So you've already sort of started to define an API for what that looks like. It might define a specific IPMI version because that's the only one that supports the features we need to implement X, right? But beyond that, again, my view has always been, if you can define a reference architecture that provides some ability to be flexible, right? You get a lot further than if you say, this is the only way that you can implement this one particular function, right? Now, at the same time, that's maybe a little bit opposed to what we actually have, right? There's only one way to implement the Nova API, and that has to be that way. So, it would be nice, it's a little bit of a pipe dream. I don't know if we can get there. You know, we're constantly fighting hardware. It's like nonstop. The way I always think about Cloudscaling is, we're sort of like a next-generation systems business. We have all the problems of EMC and NetApp and Cisco and Juniper except, you know, we don't sell the hardware. So, we have all this open source software and it has to run on top of arbitrary hardware. So, we have to control and manage the hardware really tightly. And we find variations at the hardware level that are just, they're mind-boggling. I'll give you an example. We're doing a deployment in Russia.
The guys are in there and they find out that at arbitrary times, when they're booting up the boxes on the mini-boot, our little interim, you know, kind of tools system under Linux, and they're trying to configure the RAID, and, totally non-deterministically, sometimes the drives are numbered from slot 64 instead of slot zero. You have no idea what's going on, right? So, it turns out that LSI has logic in their RAID controllers, these new RAID controllers that we had not certified yet, that talked to the SAS expander and that detected our SAS expander as being a Dell SAS expander, which it was not, it was Quanta. And then what happened is, when it's Dell, when it detected it sometimes, it starts numbering the drives from 64, because Dell's decided, and I'm not trying to dig on Dell here, I'm just trying to point out there's variations because of the secret sauce, but Dell wants their drives numbered starting at 64. I mean, that's a problem because it's really hard to uncover, because each hardware vendor's done slightly different things sometimes to make their kind of integrated systems work well. So, when you take open source software and you start talking to it, you have problems. Like when we were talking to HP JBODs at Korea Telecom, we tried to use open source tools to talk SES to the enclosures and it wouldn't talk SES to us over the SAS bus, but we went to a generic JBOD and it worked. So, I mean, I think some of the things that the vendors have to do is actually kind of make their stuff a little less smart, you know, the Dell C-Series kind of stuff, and to add less secret sauce for the Open Compute-style reference hardware. So, wouldn't that go back to where each of us is developing our own reference architectures, and one of the reasons we do that is so we've proven and tested and solved the problems you just described prior to going out into the field, so you know exactly what you're getting, and that's the whole reason behind a reference architecture, correct? Well, from a hardware vendor perspective, that's the whole point. From a product perspective, like mine, or from a customer perspective, that is not the perspective, because they do not want to be locked into a single hardware vendor. They do not want that, that's loud and clear. And so, I think it's incumbent upon us to think about what systems companies look like in the future if they're no longer doing a fully integrated system and they're working with lots of open source, because in that case, it's important for you guys to test against the open source tools. It's important for you guys to work in concert with the Open Compute Project and with OpenStack to basically make certain that your hardware at the low levels, you know, when we're trying to drive it with provisioning systems, does what's expected and there's not large amounts of variation. The provisioning system, though, should that be part of the OpenStack standard? I think a lot of people have a lot of secret sauce there right now, so I don't think that that's gonna happen.
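As a hedged sketch of the kind of defensive normalization the slot-64 story above pushes a provisioning tool toward: rather than assuming drive slots start at zero, rebase on whatever the controller actually reports. The slot/device pairs below are invented for illustration, not output from any real controller.

```python
# Illustrative only: normalize drive-slot numbering so the rest of the
# RAID/partitioning automation stays vendor-agnostic, whether the
# controller enumerates from slot 0 or (as in the story above) slot 64.

def normalize_slots(enumerated_drives):
    """Map reported slot numbers onto a 0-based, contiguous ordering."""
    if not enumerated_drives:
        return {}
    base = min(slot for slot, _dev in enumerated_drives)
    return {slot - base: dev for slot, dev in sorted(enumerated_drives)}

# The same physical drives, two enumerations the tooling might encounter:
starts_at_zero = [(0, "/dev/sda"), (1, "/dev/sdb"), (2, "/dev/sdc")]
starts_at_64 = [(64, "/dev/sda"), (65, "/dev/sdb"), (66, "/dev/sdc")]

assert normalize_slots(starts_at_zero) == normalize_slots(starts_at_64)
```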
I think you're making a good point about how we aren't abandoning our secret sauce, but there's gotta be a balance: if we're gonna participate in ref arches, then we need to do it in such a way that the use cases you describe can be successful, while at the same time, for those customers who wanna choose vendor-specific enhancements, they can pursue those sort of as a second wave of deployment or configuration, after we've gotten through sort of the first wave, which is more standard or more general purpose. So I think I agree with that. It seems like this gets into who your expected end user for the architecture is, right? If it's an enterprise customer, a lot of enterprises do want, okay, I wanna just deal with HP, Cisco, Dell, whoever, right? Versus somebody that wants to build the next Amazon probably would wanna follow the Amazon model and just spew out to everybody. I think certainly in HP's product line and CloudSystem, we see most customers deploying new projects with a very HP-centric hardware set, a very HP-centric architecture. That doesn't mean they're locked into HP; they have other projects in the same group that are running other vendors' hardware, and then maybe over time, if they're successful with an initial play with one of our sort of HP-centric deployment architectures, they'll wanna expand that to include other vendors' gear, which we certainly support. So I think you're right, and I think that we, from a customer perspective, see a lot of need for us to be very specific in our HP products as a part of the ref arch, but we can't only be specific and never go beyond that, because customers want to take us beyond that as soon as they've been successful with what starts out as very HP-centric at the beginning. Yeah, I think from the perspective of an OpenStack-sanctioned reference architecture, I think it would make a lot of sense to say, look, here are the base capabilities that we have to have. Here are the functions, and effectively the API calls, that have to function properly, and maybe slightly different than what we were talking about in terms of what the OpenStack Foundation is saying is going to be OpenStack, however that discussion ends up. Maybe there are some differences. The issue that you ran into with that particular RAID controller, that's a very interesting one, and maybe it's very much a corner case, but still, trying to limit that. Okay, one can dream. I have many more stories. But if we can at least say, look, if you can implement these base functions and give us the right kind of feedback as to what you've done, right? I mean, in that case, for example, if the system came back and said, look, I gave you ID 64, this is the ID you're gonna start with, your code, the system code, could actually be a little smarter and say, okay, I know I might see this level of variation, but understanding that level is a task in and of itself. At the same time, reference architectures that we're going to develop and continue to develop will include value that we can drive through some of the interfaces and capabilities that we have to simplify the deployment process, right? And I think all of us will do that from a systems vendor perspective. It is the value that we can give to our customers; it simplifies their process of getting it up and running.
But as a reference architecture, I still think that the best way to think of these things is that they're a good baseline, a good sort of starting point for where you might want to take a system over time. How do we then deal with the capabilities that don't have implementations in OpenStack? I mean, we get rid of Nova Network, what do we do for networking? If we get rid of, well, we don't have a solution for volume, do we need to take a stance and say, use MooseFS or use OBS or whatever? Are we making those choices as part of the reference architecture? We're kind of making those choices as a part of a reference, right? And right now there are at least, here, four example references, right? There are plenty more out there. And in each one of those references, sometimes they're a little bit more prescriptive. I know some of our references are missing probably some of the actual minute details of how do you set up a particular hypervisor, and a particular volume system with that hypervisor, to make it as efficient as possible. Those are probably areas that we need to discover and cover. Part of that will then come out of doing a deployment: you learn a lot of things and you say, okay, in the next reference architecture, we're gonna make sure that we include this, because it is a thing that got us in the past, right? We wanna make those reference architectures as simple to deploy as possible and as simple to become flexible as possible, right? So it's a fine line, and references, I think, are always going to be not the perfect solution for most people, right? Just to add to it, I think they've gotta be iterative, right? I think the reference arch that we may produce at the end of Havana is great, but it's gotta be updated and it's gotta be kept moving; it's a reference architecture at a point in time, and just like you said, as we learn new things or as a new service becomes core and it's time to expand the ref arch, we need to be able to, if we're gonna do this in an OpenStack way, it's gotta be something that can be done and improved from release to release. Can I ask Randy a question? Sure. So why does Dell have the C-Series? The C-Series is designed specifically around cloud computing. It is designed with the idea that we are going to be able to put together individual systems that are in general aimed at higher density and better compute cycles. Actually, I wasn't ready to answer questions on specific platforms. Well, the story I heard was that your web-scale customers basically asked you to build a platform that had less secret sauce. They wanted more generic servers. They liked Dell as a vendor, but they wanted a simpler system that had less special stuff. I agree with you there to a point, but at the same time, we also have other customers looking at the R-Series and they love the extras that it buys. And the whole goal... I'm just trying to get to the point that with the C-Series, you created a whole platform line because you had very large customers who wanted no secret sauce, right? I can't answer that. I wasn't involved in those decisions. That's a story I haven't heard. Okay. So, let me think. We've answered the questions that I have on my... Let's see, so, go ahead, ask me a question. I'm like, look, I'm like, wait a minute, I've answered all these things.
So, as part of this, how far are we going towards not just a deployed architecture, but should we get into things like: to do this right, you need 10-gig networking, you need to do bonding, you need this type of storage, you should be using JBODs, that type of thing? Or are we just talking about, you need to have something that stores stuff? Are you talking about a reference platform that OpenStack is gonna say we have to have, or are you talking about that individually? Well, I'm saying if OpenStack said, this is what you should do, if we're talking about a production thing, yeah, any old switch would work, but if you really wanted to use it, we should tell people, hey, you need to use this kind of networking, it's just a baseline, how? I think we've kind of hit on that earlier; you have to be very specific on a set of capabilities, I think that was the phrase you used. You don't have to say, use JBODs, use iSCSI, use this specific kind of hardware behind your Swift implementation, but you ought to be specific on capabilities. What are the performance characteristics you're after? What is the scaling size you want? The number of simultaneous users, whatever those performance or real-world scenario criteria should be; I think it should be framed more in that kind of context. You should get help from some of the key customers who deploy OpenStack today. And there's a lot of work that goes behind, I don't know, maybe no vendor wants to give that up, but figuring out how do I build a good Swift cluster, there's a hell of a lot of work behind figuring that out, and... And the thing is that I think with each vendor's tweaks to their hardware, there might be different benefits to a specific model. I think, again, if we can structure things in the way of thinking about it from a baseline that you can build from, Randy's organization may be looking more for something that can provide a different class of scale than another customer might be looking for. So it might be that this concept of flavors is actually a good way of thinking about even reference architectures. I have an enterprise flavor that is designed to support maybe high-I/O storage, because it's really focused more on a data store class of infrastructure, or a database, classic database class of infrastructure. So that's going to force you into potentially higher-end storage systems, maybe even dedicated storage systems, versus the sort of embedded direct-attached storage that we're looking at today for a lot of these systems. Those sorts of differences might be the kinds of areas where you could have multiple reference architectures. Again, the sort of baseline models that I tend to look at seem to be sort of middle of the road, and we're looking at middle-of-the-road performance even for the disks that we choose. We have very high-end SSDs that may be appropriate for a specific deployment model, but it's not something I would necessarily define as my principal reference architecture, because it skews the general cost for most other customers that would want to look at this as well.
So I think there has to be a balance in the way we think of reference architectures, and from an OpenStack Foundation perspective, saying, hey, this is the reference model that you would want to use for your reference architecture, I think that might work if it was: here's a reference model for high-performance, storage-class capability, which is SSD-centric; versus here's a model that is more compute-centric, and really you're looking at how can I get the densest cores into my environment; versus I need something that's ultra-resilient on the network and can handle, maybe it's video distribution, so I need bandwidth more than I need anything else. And so even those three sort of semi-flavors might define three different reference architectures for the hardware that sits underneath, right? So you're talking about different use cases for the reference architecture. Like in designing our reference architecture, one of the things we look at is, okay, are we gonna do a storage-only one, or a good generic one that will handle both compute and storage, or compute only, you know? So you're gonna want to find your use cases, and at the end of the day, if you cannot solve a use case, why are you writing the reference architecture? Who defines this? I mean, in the community, is anybody gonna come and step up and say it, because, you know, everybody wants their reference architecture to be their own thing? Do we get people out of the community to say, we're gonna be PTLs and define something, or where does it come from? Where does it come from? The people who are gonna use OpenStack. We need to be sitting down and visiting with them. I guess the question is really, are the vendors who have value in their reference architecture going to be willing to step up and say, here's a reference architecture that you could be using? Yes, we come up with a reference architecture and we're going to give it to our customers. I'm talking about the community, though, right? Well. Right, that's a very different thing than somebody who pays you. You're right. So we have had success with our reference architectures at a very vendor-specific level. This OpenStack world, at least for my product line, is a fairly new, bold world. We're happy to get additional insight and help from a community perspective on these more use-case-specific reference architectures, reference models. We can still take that and add value in new, more HP-specific reference architectures as well. So it's kind of a synergistic thing from my perspective. So I think this one vendor, at least, would be happy to play this sort of layering kind of game with reference architectures using the community. I would say that our reference architectures are open. I mean, we post them and we say this is what it is. Here are some of the decisions that we've made. As we develop further and further reference architectures, we're going to provide more information on performance that we've been able to gather from these systems, and things of that nature. So in general, we've made those open to the community anyway. It hasn't been something that we've specifically come to the community to say, hey, here's what we think the reference architectures should be, and I think that's partially because we're getting requirements to define our reference architectures from the customers that we're working with. So we can say, hey, this might be a more video-centric reference architecture, and that might be useful for others to then say, okay, I can base again off that sort of midline model that you've built.
I can see why you chose the high-performance network interfaces that you chose, but I don't quite agree with your storage model, so let me change that out. And I think that's available today, I think, just about from any of us. Maybe it's not quite as open in some cases, or maybe it is. I actually haven't looked to see what HP posts for a reference architecture, if there's anything public, but that's, I think, not the limitation. I think being able to say that we know that there are reference architectures for these different systems, again, if the OpenStack community wants that, I think we can absolutely all support it. I don't know that I've specifically heard the community stand up and say, if you would just give us one reference architecture, we'd be happy, right? Because I don't think it would actually fit, right? Yeah, I don't think that's gonna happen. And more importantly, we already have a reference architecture, here comes my opinion, and it's Amazon Web Services. And I think that if we're looking at a world where we're gonna build lots of hybrid clouds, and there's gonna be lots of interop to the major public clouds, and the major public clouds are gonna be these very large dominant ones, then it's foolish to make stuff that doesn't look like them. I mean, you're gonna wind up on an island, in the same way that we have this problem now where every OpenStack deployment looks like something different than the one next to it. It would be better to hub off of those reference architectures and make them better or enhance them than it would be to sort of go your own way. That's just the way I look at the world. I still think that there are going to be flavors of those reference architectures, though. Yeah, but I mean, if you look at AWS today, there's already flavors, right? You can do HPC on it, right? You can do high-I/O storage. You can do, you know, standard. Fair enough. Fair enough. Fair enough. That's their secret sauce. You don't know; I mean, you kind of have to squint and hand-wave and say this is what they have underneath there. Well, I mean, but we're thinking, like, all right, you know, this is how you should be wiring up your data center or, you know, that kind of thing. Well, there's the consumption side the end user sees, but then there's the operator side. Right. Well, and I think from the operator side, this is where the reference architectures do get interesting. Some of the discussions that we've been having even this week are around how do we get a more consistent operations model around these systems? And the only way we're gonna get there is to have a more consistent set of baseline references that we know, hey, I know how I can operate, like you said, a basic Amazon EC2-class cluster, right? If we're gonna do an Amazon HPC-class system, there might be different things that we would want to understand from that operational model. And we can derive some of that baseline, again, for the community benefit, from a reference architecture for that class of system. So if we, as a community, say, look, let's just take the Amazon systems models that exist and use those and define some baseline reference architectures, because it helps us drive a better operational system, right? Getting better statistics collection, and maybe I need to, you know, poll more rapidly when I start looking at polling against my VMs to determine what's really running, because I might be able to, you know, support a better HPC-class system at some point if that's the direction I wanna start taking my cloud.
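As a rough illustration of that operational-baseline idea, here is a small sketch, not from the panel, of polling the standard Nova v2 API and tallying instance states; the endpoint, token, and polling interval are placeholders.

```python
# Rough sketch of the statistics-collection/polling idea above: list
# instances through the standard Nova v2 API and tally their states, the
# same way against any reference architecture. Endpoint, token, and the
# polling interval are placeholders.
import time
from collections import Counter

import requests

def poll_instance_states(compute_url, token, interval_s=30, cycles=3):
    for _ in range(cycles):
        servers = requests.get(compute_url + "/servers/detail",
                               headers={"X-Auth-Token": token}
                               ).json().get("servers", [])
        # e.g. {'ACTIVE': 40, 'BUILD': 2}
        print(dict(Counter(s.get("status", "UNKNOWN") for s in servers)))
        time.sleep(interval_s)

# Hypothetical usage against any cloud exposing the Nova v2 API:
# poll_instance_states("http://cloud.example.com:8774/v2/TENANT_ID", "TOKEN")
```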
Right, I think there might be some real value there, so maybe that's something we do need to really look into a little bit more. Yeah, absolutely. Excuse me. So, any smart people out there have any great questions that they wanna ask these brilliant guys? Is what's missing from this conversation perhaps a reference architecture for interoperability, and a corresponding hardware compatibility list that goes with that? Well, again, I think if we're defining reference architectures as a set of API-class models, right? It comes back to the use case, right? What use cases are we looking at? And we can define those reference architectures. And then anybody can say, look, I wanna support a, you know, video streaming reference architecture. Here's some of the things that we know you're going to need: a high-performance network, probably better than gig-speed networks, and you're going to need potentially high-performance disk as well in order to get lots of streams out of a system. But if that's not your use case, then that reference architecture may not be as useful to you, and the hardware models that may be mapped to that use case aren't necessarily gonna be as specific. I don't know if you wanna get to the point of an HCL, because I might come out with a new disk that is much better for that reference architecture than we had before, and I don't wanna say, well, let's not think about that, right? One use case is interoperability and elasticity, to be able to, you know, move those VMs or deploy to anything. That's one use case. I mean, where the hardware vendors are coming in here and saying, here's a use case for, you know, like video streaming, that's a use case that you could really optimize for on your hardware, but interoperability is a use case in and of itself. Yeah, but I mean, I think even a reference architecture doesn't limit you. I mean, if we were to say, look, here's a reference architecture for video, and you suddenly find that you need, you know, another 100 gigabits per second out of multiple sites for that video streaming service because you've been so successful, it doesn't mean that you can't deploy that same class of service elsewhere. The performance may not be as good as what the reference architecture defines, right? And so then it's really more a question of, can I define my performance characteristics for a particular set of workloads, so that if I do migrate from one environment to another, I know what impact that might have, right? And that, I think, is a much harder thing to specify. Starting from a baseline is always good, and maybe that's really, I think, the point. Is it fair to say that, excuse me, is it fair to say that the reference architecture problem is better suited to vendors, whereas the OpenStack Foundation is better suited to setting a set of standards and practices for operations? I like that idea, and just a bit ago, we were talking about some sort of standard description from an OpenStack perspective that could be sort of derived into more vendor-specific, even hardware-compatibility-list-specific, descriptions from us as vendors, but starting from some unifying point, which the community could define. So I like what you're saying in that regard. So what we've seen in the past is that there's different levels of interop, right? So like with databases, we didn't get past SQL-92, right? And so sometimes we don't know where this will end up, right?
But what we do know is that foundations like this before, like the Linux Foundation, what they focused on is Linux adoption. And then what's happened is that as people have consumed and used Linux, winners have risen to the top because that's what customers wanted. That's what people wanted. And so I think that's what we're looking at now. It's the early days. There has to be enough flexibility in the system that a variety of different approaches can go out to the market, and then people are gonna choose what they think works best for them. And then as the OpenStack Foundation fosters adoption and we get even more people using the system, there's gonna be patterns that emerge, and then, kind of after the fact, we'll standardize those and we'll have interop between those kinds of standards. I think that's probably how it'll... So the majority of deployments are not public. They're private. And I haven't really heard a lot of discussion in this about private clouds. And I think the majority of you guys are public cloud vendors, correct? No, no. The majority of my deployments are private. Right, yeah, maybe none of us are, then, in fact, so. So it seems that everyone is doing references or implementations based on, like, the idea of following what large public clouds are doing. But is that really what we should be looking at for private implementations? You don't really get into people's things. Yeah, so my position is that cloud computing is the kind of computing that Google and Amazon and Facebook and Twitter are doing. Virtual machines on demand? That's virtual machines on demand; that doesn't have jack diddly to do with cloud computing. It's absolutely, completely irrelevant. Otherwise bare metal would matter, and Google wouldn't be a cloud, because they don't use any virtualization. VMs on demand is a total sideshow and people need to forget about that. If you want a private cloud, in my opinion, you need a cloud that is an elastic cloud that looks and feels and smells like Amazon and Google. Anything else is just kind of like mental masturbation. So we aren't quite as extreme in that viewpoint from an HP perspective. I would hope so. Our private cloud customers want virtual machines on demand. They do actually, to your point, look a lot at, at least, what Amazon provided as sort of the spec, I wouldn't call it a reference arch, but a reference spec for what cloud ought to be. Now they're looking to us more and more saying we want OpenStack, which is good because that's where we're playing. So I think it's kind of a combination we see from our private cloud customers. So, a follow-up to that, so that I can really clarify my question more. It's not specifically just following the cloud model by these public vendors. It's more, do you not feel that OpenStack should integrate more with an organization's existing setup, like LDAP integration that works with our authentication and authorization? I mean, how are we looking at reference implementations from a private perspective? I think you're making a good point, right? The fact that right now the default model for Keystone is a separate entity of authentication is not something that's going to work as you try to implement this in your organization. And even as we think about cloud migration, hey, I wanna be able to use any cloud provider to support the sorts of services that we're talking about, this idea of interop.
Well, I'm effectively gonna have to have some form of OpenID-class shared infrastructure for authentication at the back end of the system. Doesn't mean that Keystone has to go away, but we have to find a way to make it easier to integrate. And you can integrate Keystone with LDAP. So I know that you can actually implement these things. Do our reference designs cover how to implement that? I think the point I was trying to make earlier about operationalizing these things, by having a decent set of systems-focused reference architectures, maybe sort of starts to think about that. How do you do that integration? How do you tie into maybe your enterprise-class storage that already exists, that you wanna leverage for the new applications that you're deploying within sort of the compute side of this new cloud that you're building? And how do you migrate to that? That's an integration problem, right? I mean, people use AD and LDAP to authenticate in order to get access to AWS and Google Compute Engine today. I mean, that's a sideshow. It doesn't have anything to do with the architecture. So are we building this for operators? I mean, it seems like we want more people in the community. We want people to be building OpenStack clouds. However they're doing it. And to me, providing a good reference architecture for these guys, you know, a great deploy guide that they can get out there with and be successful, can be really important for us to get great uptake in the community. I think the point of cloud infrastructure, at least for, you know, in the private cloud space, is to provide a more flexible infrastructure for the end users within that enterprise. I think the same thing applies to the cloud in general, right? The reason that, at least as I understand it, the reason that Amazon took off was that it was very easy for any developer to get access to the things that they were looking for, compute, storage, network, right? By enabling that, they were able to accelerate their development process. They were able to get things done when they couldn't before. So in a sense, yes, I think OpenStack is an operator tool, and our ability to provide a reference architecture that makes it easy to get that operator tool running, and then, again, the ability to define how you can operationalize and manage that tool, I think are key things that are defined in references; this is a base way of doing it. It doesn't necessarily, you know, Randy, to your point, it doesn't mean that there isn't an integration strategy that still has to happen. You have to somehow fit into your enterprise authentication mechanism. Like I said, maybe you have to fit into your enterprise storage management technology, because you can't just immediately go away from that. But the longer-term goal is to simplify the IT infrastructure that people want to deploy against. Just one more point on the integration topic. So we see the integration aspects as critical parts of HP-specific reference architectures. Whether they become important parts of more general OpenStack ref arches is, I guess, a separate question, but customers want us to describe specifically how key things like Active Directory are integrated into our CloudSystem solutions. So we would see that as an important part of the ref arches that we deliver, whether it's a layered effect on top of what comes from the community or not. I don't know. What form does this take? Is it a set of architecture diagrams? I'm gonna pull up Rational Rose or something.
Is it, sorry, old IBM thing. Are you going to have deployment guides with step-by-step, this is how you set this thing up? What is the form? Predefined, you want to buy the system, you can buy the system, those sorts of environments as well. I think it's going to take multiple forms, but at the minimum, a document that says this is how we recommend you deploy a reference. That's kind of the baseline for what I think we all work from. I was gonna say the form it ought to take is some project, or some project in the making, and a set of folks from the community sitting down and starting to work something out. We probably need to, just like with any project or any enhancement, any blueprint, go get people to work on it, and the answer might be very different in a few months from what we think it is out of the gate, but until we work on it, we're just gonna keep talking about it. So I'd like to see everybody copy us. We have this modular reference architecture. I mentioned it earlier, it's called Cloud Blocks. And if you look at Amazon and Google and Facebook, they all deploy in kind of a more-than-one-server approach. Sometimes people call it a pod or a cluster. And so we call it a block-based approach. It doesn't matter what you call it. All that matters is that you need this combination, because we are sort of building system solutions in slightly different ways, but you need a way to define what the unit is. Is it a rack? Is it a container? Is it half a rack, a quarter rack? Is it a triplet Facebook rack? And then you need to say, what is the network architecture in there? How are the things stacked in the rack? How are they cabled and labeled? Where does the provisioning system go? And so on. And if you can do all of that, if you can kind of marry sort of like the software template layout with the hardware template layout, then you could very cleanly define the reference architecture all the way down, where you get sort of more of a Lego building-block approach. And that's what we've been trying to do. It's part of our secret sauce, but I don't actually want to own that. I tried to take it to the Open Compute Project guys and they didn't quite get it. Maybe we as a community can use Heat and some of these other tools to sort of help define it. One thing I'm stuck on right now is I don't know how to describe a rack layout. We just have a JSON block for that, but it's tricky, right? I mean, some people like to rack from the bottom up. Some people like to rack from the top down. And some data centers have 19-inch racks and some have 23, and it just, you know, it's pretty gnarly, right? Even though we're trying to virtualize stuff, the rubber meets the road when it comes to the data center, because all data centers are different. But there is a way to crack that nut. It just is going to require some effort. I think actually maybe that's just what has to happen. We have to sort of post a set of blueprints, effectively, and say, look, here's my version of a generic reference architecture, right? And then we can all have the community process take hold and see if there is actually a community interest in defining a more generic, simplified reference architecture that we can all build from. And like any proposed project blueprint, if not a lot happens, then we'll know what the answer is, right? All right, well, thanks. Any closing comments anybody has? Thanks.
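As a hedged illustration of the JSON rack-layout block Randy describes above, here is one possible shape such a description could take; every field name below is invented for illustration and is not Cloudscaling's actual format.

```python
# Hypothetical rack-layout description, invented for illustration; it is
# not Cloudscaling's real format, just one way to capture the variations
# mentioned above (rack width, numbering direction, where the
# provisioning node sits, cabling/labeling conventions).
import json

rack_layout = {
    "rack": {
        "width_inches": 19,        # some data centers use 23-inch racks
        "units": 42,
        "numbering": "bottom_up",  # or "top_down"
    },
    "slots": [
        {"unit": 42, "role": "top_of_rack_switch", "count": 2},
        {"unit": 40, "role": "provisioning_node", "count": 1},
        {"unit": 21, "role": "storage_block", "count": 12},
        {"unit": 1,  "role": "compute_block", "count": 20},
    ],
    "cabling": {"scheme": "redundant_pairs", "labeling": "per_port"},
}

print(json.dumps(rack_layout, indent=2))
```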