We're about ready to start. My name is John MacArthur, Peer Incite moderator here at Wikibon, and today's topic is quieting noisy neighbors in cloud services. It's December 18th, and we're going to discuss how both public and private infrastructure-as-a-service, application-as-a-service, and hosting providers can deliver not only availability but performance, and meet performance service levels in a massively multi-tenant environment.

In addition to listening to the call, you can watch it on SiliconANGLE TV. If you are watching on SiliconANGLE TV, please mute the audio so you don't get a feedback loop. If you're not speaking, you can mute your line with star-six, and unmute it the same way. Matt, don't mute yours, because you'll be speaking in a second.

We are joined today by Matt Wallace, director of cloud infrastructure at ViaWest, and we may also be joined by Jason Carolan, CTO at ViaWest. So with that, why don't we kick it off, Matt? Many people probably already know about ViaWest, but we'd love to get your quick overview.

Sure. ViaWest is one of the largest privately held data center providers in the United States. We have big positions in five western states: Colorado, Utah, Oregon, Nevada (Las Vegas), and Texas. We just received the first Uptime Institute Tier IV design certification for our Lone Mountain facility, which is just opening in the Las Vegas area. And, more pertinent to today's discussion, we are one of the early adopters of SolidFire. But we offer essentially cloud, colocation, and managed services as our core business.

What are some of the challenges that you face in a multi-tenant environment?

Well, it's interesting.
When you look at what's going on, particularly in cloud: historically speaking, an enormous amount of technology and development has gone into making physical hardware multi-tenant. VMware has been around for a long time, and there are other cloud technologies that are open source, like Xen.

Are you a VMware and Xen user, or both?

Yes, we have both. Our cloud offerings are primarily VMware-focused now, but we also run some services on Xen using the AppLogic platform. We see VMware as offering a lot more in terms of user interface and tool set, and the ecosystem is a lot wider, a lot more robust, for our customers to take advantage of. Plus, a while back we actually talked to our customers about what platform they were interested in using, and we overwhelmingly heard back that they were using VMware. They were interested in cloud technologies built on VMware because it was compatible with the investments they'd already made and the technological know-how they already had.

So are they bridging internal private cloud to a public cloud, or to your infrastructure and your hosting?

It varies wildly in terms of what customers expect. In some cases we have customers who are used to using VMware; they come to one of our facilities, and we give them the opportunity to have things in colocation, with high-speed cross-connects. One of ViaWest's strengths is that, as a company, we're very used to building complicated, bespoke network infrastructure for our customers. A lot of people take advantage of that by having some colocation space where they might put some of their steady-state workload.
Maybe they like having databases on physical servers, and then they can also take advantage of one of our cloud offerings. We have a cloud offering that's based on vCenter and the vSphere layer for people who need that: some of them want to take advantage of the capabilities of SRM, which isn't compatible with the vCloud layer yet, and some want to use something like a VDI solution that requires vCenter access. Or we have the cloud layer, where we can offer vCloud Director and give people access to the full API, the user interface, and the self-service that come along with that. The nice thing is that we can bridge a VLAN between their physical environment and either of our cloud offerings really easily.

Hi Matt, this is Dave Vellante. I wonder if you could respond to the following. There's been a lot of talk about "cloudwashing," used as a pejorative, and I wonder if you could address for our audience: what makes you true cloud?

Well, it's interesting. NIST actually has a definition of cloud; you can go and look up what the cloud actually is: self-service, scalability, on-demand, pay-as-you-go. There are some defined characteristics. This is one of the reasons we're pretty happy with the VMware-based cloud offering: if you deploy vCloud Director, VMware understands those NIST requirements, and you get a lot of that out of the box. vCloud Director has a really fabulous, REST-based API for the entire thing. I don't think we've mentioned this, but I worked at VMware myself for a couple of years, so I watched different teams scaffold up code to utilize that API, and I know it's very robust. It offers that self-service capability. Why are we a true cloud?
We're certainly allowing on-demand self-service. We offer both an entirely pay-as-you-go model, where you can consume huge amounts of cloud resources with no commitment, and what we call the allocated model, where we reserve a certain chunk of utilization for you. But every customer we onboard has the ability to burst to between two and a half and three times their committed rate, so that agility is built into our offering. Part of what we do is build our entire architecture around over-provisioning hardware to a certain extent, to make sure our customers have that room to burst, so they get that real cloud experience. Given that it's all self-service, you have a portal, and you have full, unfettered access to all those APIs, I don't know what term you could use to describe it other than a cloud.

That's great. I'm certainly familiar with the cloudwashing term. So you wouldn't put yourselves in that camp?

No. I think our connected cloud offering is for sure a real cloud; it satisfies every one of those criteria. What's interesting is that when you look at the space for ancillary products, there are a whole lot of things you can offer that may not necessarily be fully integrated and offer that cloud experience. Here's a great example: we offer Avamar backups that can work against our cloud VMs, so you can have all of your virtual machines backed up at a file level. But in the current version it's not something you can just go turn on. Does that mean it's not a cloud service, because we have to turn it on for you?
You could argue that either way. Clearly the service is working against a cloud service, but the backup service itself, you might say, is not cloudy, because it's not entirely self-service.

OK. My follow-up question: Amazon is obviously being very aggressive with its entrance into the enterprise with a horizontal, commodity-like service, so we're seeing service providers really needing a very clear value proposition for enterprise customers, focused on a set of services, and maybe even a set of customers, around which they provide solutions. Can you talk about your differentiation in the face of this Amazon trend and the commoditization of infrastructure-as-a-service?

Sure. I can see where Amazon's price competitiveness and scale could be interesting to some people, if what you're interested in is taking advantage of commodity compute, because that's what you need. But there are so many things I've heard as feedback that are shortcomings in Amazon. We've had a lot of people who've run into an inability to scale their architecture because they hit problems with Amazon's networking. To the best of my knowledge, most of their infrastructure outside of the HPC instances is connected at one gigabit, whereas all of our stuff is based on UCS and all of our networking is multiple 10-gigabit links in EtherChannels, so the scalability of our network is much higher. With Amazon you have no idea what sort of platform they're on; we've chosen to build our cloud on what we consider best-of-breed enterprise hardware, so we have the performance and reliability of UCS behind our cloud.

We've had instances where people have asked about Amazon and tried to do a direct comparison. One of the first things the conversation turns towards is the fact that if you're interested in getting any sort of support from them, you'd better start adding on the up-charges for support. Our customers are used to an experience where they can pick up a phone and call us. We have some incredibly talented, really bright engineers who spend a lot of their day doing development work for us, and yet these are the cloud engineers you can talk to if you have an issue on our cloud. Whereas if you ask people what sort of customer service experience they received at Amazon when they ran into trouble, what you probably end up hearing is a tale about going to the Amazon forums and asking the community for help. So if you want somebody who can actually stand behind what they're providing and help you out, I think there's clear value that we deliver for people who want to use the infrastructure but want us to be responsible for it, rather than having to figure out all the ins and outs themselves.

Plus, of course, we have a whole suite of managed services. If you went to Amazon, you'd have to look to partners to provide things like managed operating systems and managed databases, whereas we've had huge adoption of those in our other environments, like colocation and dedicated servers. The fact that we can offer the same thing on the cloud is really attractive to people who don't want to manage that, because it's not their core business.

You talked a little bit about over-provisioning.
You also talked about database applications, and customers still wanting to run them on dedicated servers. Are any of the technology changes occurring in your environment enabling you to deliver a different kind of offering, given the burstiness and the latency requirements that might come from those applications?

Sure. Latency hasn't been a big issue for us. As I think I mentioned before, the fact that we're using multiple 10-gig links, that we're using high-performance Cisco and Arista gear for all of our switching, and that we're not stretching things over wide distances, means our latency tends to be incredibly low. Even the cross-connecting we do for customers is done efficiently and results in a minimum number of physical hops. So if you're connecting to a service in the same data center you're in (and we offer cloud services in multiple data centers), you can reach it really quickly.

The question about databases is a great one, and it's a great segue into where SolidFire fits in and why this got to be so exciting when I first saw it. Coming from VMware, I'm used to memory and CPU being really well managed at the hypervisor layer, and obviously things have evolved over this decade.
We've seen hardware support for virtualization really improve, with Intel and AMD coming on board to virtualize memory management and CPU context switching. Looking at storage, though, shared storage has historically been a risky proposition, because there haven't been good tools for controlling how much performance, how much I/O, any individual tenant gets out of a shared array. And if you're in the cloud, everybody wants volumes that can move from virtual machine to virtual machine; you don't want to be tied to local hardware, because that's really the antithesis of doing things the cloud way. You want shared storage for its advantages, but you don't want the disadvantage, the noisy neighbors, that potentially comes with it.

When I talked earlier about database services: some people deal with this by having consistent performance for the things that need it and scaling horizontally for the things that scale more easily horizontally. The issue with databases, traditionally, is that they don't scale well horizontally; you end up scaling them vertically, pouring more RAM, more CPU, and of course more raw I/O performance into them. So where we're going with the SolidFire offering is the ability to have network-accessible volumes that people can attach to database servers, including virtualized database servers, that give them guaranteed performance for those applications. You can actually run a database server and be assured you're going to get the consistent, high performance you need for that application.

And I don't need two environments; I can have just one. How do you measure performance? What are the metrics?
Well, the big ones would be latency, for an individual write, and then throughput, which is basically measured in terms of IOPS.

Right. And what kind of latency requirements are you seeing from your customers?

That's a really interesting question. If you're looking at network-accessible volumes, and I don't think we want to deep-dive too far into queuing theory here, but I think if we're getting under 0.2 milliseconds network-wise, we're doing a pretty reasonable job. That can result in somewhere between a 400 and 700 microsecond think time on an individual write. That's not what you'd get from a single disk, but now we're talking about a SAN array where a write is actually being committed to more than one drive, or at least more than one write-back cache, before it gets acknowledged. So that's really amazing performance. And if you do the math, divide that out: even in the worst-case scenario at the upper bound, the 600 to 700 microsecond range, even on a single thread, even waiting for a sync after every single write, writing entirely serially, you can get about 1,500 IOPS. And of course your typical database is not going to be limited that way, because different tables are being accessed at different times, so you'll actually be able to issue multiple writes to different parts of the disk.

So you can handle more multi-tenancy, you can handle more burstiness in the write activity.
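The back-of-the-envelope arithmetic above (one thread, fully serial, waiting for a sync after every write, so throughput is just the reciprocal of per-write latency) can be checked in a few lines. This is a sketch of the math as stated, not a measurement of any real array:

```python
# Worst-case serial throughput: one thread issues one synchronous write
# at a time, so achievable IOPS = 1 / per-write latency.

def serial_iops(write_latency_s: float) -> float:
    """IOPS achievable by a single, fully serial writer."""
    return 1.0 / write_latency_s

# The latency range quoted in the discussion: 400-700 microseconds per write.
worst_case = serial_iops(700e-6)  # upper-bound latency -> lower-bound IOPS
best_case = serial_iops(400e-6)

print(f"{worst_case:.0f} to {best_case:.0f} IOPS")  # ~1429 to 2500
```

The 700-microsecond upper bound yields roughly 1,400 IOPS, matching the "about 1,500" figure; any concurrency above one outstanding write only improves on this floor.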
Oh, absolutely. If your application can actually do multiple threads, what the hardware platform supports is up to about 15,000 IOPS on a single volume.

Why don't we pause for a second in our questions and see if there are any questions in the community? Again, if you've muted your line, press star-six to unmute it, and just jump in.

This is David Floyer. I've got a question on latency. One dimension is the average latency, which is important. The other is the maximum latency: if a significant number of operations go over, say, eight milliseconds, or over fifty milliseconds, you start to get some pretty bad effects. In fact, a recently published benchmark for database-type applications drew exactly that as one of its main conclusions: it's important to avoid those high spikes. Any comments on what you're getting with your setup?

Yeah, that's a great point. This ultimately is one of those funny things that can separate a synthetic benchmark from real-world performance.
A lot of drives or arrays will get really great performance on very short-term tests, because they have caches, essentially DRAM, that can absorb the load, and as long as you're piling data into DIMMs you're going to get incredible performance and minimal write latency. But once you put on a continuous load that exceeds what the drives behind the scenes can actually sustain, your performance completely crashes. And as you start to bump up against that limit there's a throttling effect: things don't get written as quickly, and applications see spiky performance as they wait for a disk write, or at least for space to open up in the cache.

This is the nice thing about an array that's entirely backed by SSD. Although the SolidFire appliances have a gig of DRAM write cache on them, they don't run the same risk of filling it up, because the entire array behind it is built on SSD. And because of the way they distribute blocks across all of the drives in the entire array, there's an amazing horizontal scalability: they can take advantage of an ever-increasing number of nodes to increase write performance. We just don't see that sort of performance spike. It's interesting.
We've only recently gotten into testing all the fine-grained stuff. I see a maximum variation so far in testing of about three to four milliseconds for the slowest write, and that tends to be on long runs. Around the 99.5th to 99.9th percentile, the latency goes as high as maybe four milliseconds, whereas, like I said, what we typically see is the 400 to 700 microsecond range at the 50th, 75th, and 90th percentile marks. It would be wonderful if we could be consistently in that 400 to 700 microsecond range all the way out to the 99.99th percentile. Barring a drive failure or something like that, where we'd expect to see a couple of milliseconds of creep while the drive drops out, I'd like to see fully consistent performance. But certainly we've seen nothing anywhere near the fifty-millisecond range so far.

Excellent. Thank you.

Any other questions in the community? Behind the SSD, what else are you using for storage infrastructure today?
Well, we have a bunch of traditional offerings, ranging from EMC VNXs, which we've actually run for customers as dedicated arrays. I didn't mention this earlier, but that was another answer to the question of what happens if you need consistent performance: the option before was getting an array all to yourself. All of our base cloud storage is based on NetApp, and we also have a SAN offering from 3PAR at several locations. So we have a wide variety of storage solutions we've utilized for different workloads, but obviously that's a bit different from being able to offer the guaranteed-IOPS, guaranteed-performance storage we can offer now.

So when you need to deliver guaranteed IOPS, you'll go with a SolidFire infrastructure. How are you making the decisions about where to deploy which technology?

Obviously, it's ultimately up to the customer to figure out what makes sense for them in terms of the offerings. But if someone says to us, "I need absolutely guaranteed performance," there are really only two choices we can offer. One is something entirely dedicated to them, and obviously then we're building out entirely private infrastructure as opposed to cloud infrastructure. We can turn to local disks to offer that, although of course there are downsides, like having data stranded on nodes that could theoretically be down. The other choice is a dedicated array. SolidFire gives us the flexibility, power, and agility of traditional networked storage, but with the guaranteed performance to go with it.

I'm an old procurement guy; I always care about quality and service and things like that, but I also care about price.
So if they go with the shared infrastructure based on the SolidFire solution, versus dedicated disks from one of the others, are they going to see a substantial cost saving?

It's certainly possible that they'd see a substantial cost saving. This isn't a price negotiation, but my point is really that everybody's requirements are different. To name an example that came up recently: I had a discussion with one of our sales engineers about the particular need of a customer who needed only two terabytes of storage, but needed 7,000 IOPS out of it for their application. When you look at our guidance for the number of terabytes of NAS or SAN you'd need for that level of performance, you suddenly realize they would probably have to over-provision storage by some huge multiple, maybe 10x or so, to get the IOPS they need. When you start comparing that to what you can get from SolidFire (and that's obviously a dramatic example), it becomes really obvious where SolidFire's advantage lies.

How sophisticated have you gotten at this point in bringing some automation to your own decision-making process, on which infrastructure to use for which use case, if you've got that much variability in customer use cases?

Right, that's a good question.
Honestly, at this point the decision about what sort of platform to use is something we have to push to the front of the process. If we have somebody who's on a storage platform that is auto-tiering, they get to take advantage of that, but I wouldn't say we have many tools to automatically direct someone to the right platform. Of course, once we start talking about cloud: we're in the process right now of upgrading to vCloud Director 5.1, which brings with it the ability to define storage tiers. Going forward, as we roll that out, customers on that cloud will be able to pick and choose the storage tier that suits them, because it supports choosing between something like NAS and SSD-backed storage as you deploy. They choose what type makes sense, and we just fill as they go.

This is David again. I've got a question on the total number of IOPS and the total amount of capacity on the SolidFire array. Which are you running out of first? And how do you balance those two in a practical sense when selling them? Do you separate those two components out?
Right. I think it's important to say that "running out" is maybe a bad term for us, because we're pretty early right now in the deployment of this; we're not anywhere near saturating either of those with production workloads. So it's difficult to say, in this early phase, which will see bigger adoption. I will say that since people are used to sizing their workloads for the performance-to-disk ratios they traditionally expect from non-SSD disk, my perception is that there will be a larger demand for the number of terabytes or gigabytes available on disk than for the I/O. In other words, I think for the typical consumer of this product there are essentially more IOPS available than they'll need.

Although it's odd, in a sense, because to me the huge advantage of SolidFire's platform isn't really just that it's fast; there are a lot of SSD vendors who have offered fast products, the Whiptails and Pure Storages of the world, and there's nothing wrong with that if it fits your use case. To me the killer application is definitely the guaranteed I/O, the quality-of-service features. That's really what makes it fit, what really makes it a solution enabler, in the multi-tenant environment.

But to get back to the capacity question: my feeling so far, even though we don't have enough data to draw a solid end-stage conclusion, is that in terms of capacity versus performance, there's definitely more available performance. On the other hand, as we onboard customers who get a feel for "wow, I really can have a 500-gigabyte volume that gets 12,000 IOPS," we may see the way they provision their storage actually change as they see the power the solution offers.

Matt, I wonder if we could go through the customer decision process to actually move apps, tier-one apps, into the cloud and onto flash-based disk. Maybe they don't even have high visibility on that; maybe they don't care where it goes. But can you talk about that whole customer decision process? How do they get there? Where are they coming from? What ultimately gets them to make that decision? What's the business case?

Well, I'll tell you. First of all, I should preface this by saying that since I don't spend much time out in the field doing sales engineering work, you're getting a second-hand answer. But my impression so far is that the customers who are performance-sensitive come either with an IOPS requirement in hand, or with an existing array from which one can be extrapolated. They might say, "We're using this type of array, with this sort of cache, with this many drives of this type. This thing has 16 gigs of write cache, and behind it, it's got twelve 15K SAS drives, and it seems like we're maxing it out. What sort of performance will we need if we move to this platform?"
At that point we can engage one of our storage engineers to come up with estimates based on their existing environment. Other customers are a lot more savvy: they've actually benchmarked their environment, and they have a good feel for what they're actually consuming. In the example I brought up earlier, somebody knew they needed two terabytes of space, and they knew they needed 7,000 IOPS because they had measured it. They felt that having a little bit of upside, the ability to burst, would be great, but they expected their performance requirements to remain relatively steady-state. In that case it's them coming to us and saying, "This is what I need; what sort of solution can you offer to fit it?"

OK. As somebody with a technical background, what's your opinion on the future of traditional block-based storage? Maybe you could summarize what you see happening there. There have got to be trade-offs when you're moving to a new architecture; you may be missing some components of the data stack, maybe in terms of maturity. So as a technical person, what goes through your mind to stink-test the new platforms, if you will?

It's funny you should mention that. We went through a months-long process where we abused the SolidFire appliances, and what you're mentioning is probably the scariest thing about adopting any new platform. In the vein of the aphorism "no one ever got fired for buying Microsoft," people feel really comfortable with the entrenched, traditional storage vendors. They have enormously long and good reliability records, they have mature services and support, and so people feel comfortable with them for a reason. I've never had a reason to complain about the service from EMC or NetApp on arrays like that. So when it comes to a company that's the upstart, the fresh company, like SolidFire, we felt the need, as we never would with an EMC or a NetApp, to really put it through its paces. We spent a lot of time pulling out drives, pulling out cables, disconnecting switches, deleting and restoring volumes rapidly, provisioning hundreds of customers in a window of minutes, just to see where the rough edges were, if there were any. That's a lot of work that I don't think we would have had to do on a mature platform. But sometimes you run into a scenario where a technology is exciting enough, and solves a big enough problem, that it's worth going through that extra effort.

Thank you. And in terms of where storage is going?
It's interesting, because one of the things I'm fond of talking about is where cloud has been. There are so many neat advantages you get from using cloud services: the ability to scale on demand, the business agility, the ability to avoid capex, reduce your spend to an opex number, and get away from running your own hardware. When we talk about customers who in the past have had to opt for a dedicated array solution, we're essentially talking about somebody who had to go the capex route. Although we can offer that as a service and do the capitalization for them, even if we spread out the purchase and operation of, say, a dedicated VNX over many months, they're still in a scenario where they've picked a specific piece of hardware. Let's say they have a buyout or a merger, or they get featured on Oprah and they're a huge hit and grow rapidly; they're looking at a scenario where they can't scale their platform up on demand. They can't grow overnight and then get rid of that excess capacity if it's only a temporary need. Whereas when you look at what SolidFire enables, it's essentially the same business agility that we've almost come to take for granted on the compute and memory side with cloud: the same ability to scale up on demand, really rapidly, on a shared storage platform. Shared storage as a platform has a lot of those great cloud attributes, the agility, the scalability, the lack of capex, but with the noisy-neighbor problem as the downside. It was a real trade-off.
So I think with guaranteed QoS, we're now talking about having those same advantages of the shared platform, but without the downside.

I remember back to the early days of storage as a service, and early migrations to what's now called cloud offerings, and there was an awful lot of dedicated infrastructure. You could scale up, but you couldn't necessarily scale down, because the cloud providers, or the storage-as-a-service providers, had to make big investments, and they needed to capitalize them; they needed something to write off against. So what has fundamentally changed to make it possible to scale both up and down?

Well, obviously it helps that aggregate demand is fairly steady: if you always have a certain amount of customer interest, and a certain fill rate that you're expecting, and that's playing out, then one particular customer saying, I need more, and later, now I don't need as much, isn't a problem. It's the same as cloud compute: when you spin up virtual machines on an on-demand basis, you're essentially paying to use some capacity that isn't always going to be utilized in the same way. I think that applies to the storage array.
You're using some capacity if you burst up; it's not always being utilized. But if you're talking about a customer that spins up and then spins back down, there's not really a huge concern, because other customers, in the ordinary course of business, are going to come along, be interested in that solution, and be more than happy to pick up the capacity.

The other pushback on cloud offerings has been that for relatively small and bursty workloads it made sense, but once you get to a certain scale, the overhead associated with the cloud provider starts to swamp what I can do internally. Are you seeing that, or is that changing as well?

That's an interesting question. Since our cloud offerings have only really been in the field for around a year and a half, I don't know that we've seen the entire life cycle of customers. But keep in mind that unlike a lot of cloud providers, ViaWest offers the whole gamut of services. We have colocation environments, and we can run managed, dedicated servers for people for whom that makes sense, which has been ViaWest's core business since the pre-cloud era. So unlike a pure cloud player, we can have a conversation with our customers about the options available to them and the economics, because we can service the whole gamut of needs they might have. If somebody decided for some reason that they wanted to go back to capitalizing the infrastructure themselves, and they wanted dedicated hardware in a space, we're still there; we can
still be their data center provider. And with this being the first-ever Tier IV design-certified data center, we're on the cutting edge of data center technology as well, so I think we have a lot to offer that way. We're not in the same place as some cloud providers, where if a customer decides the cloud isn't really right for their workload, they lose the customer; we can serve a wider variety of needs.

Matt, I wonder if I could ask you about the converged infrastructure trend. Virtually every major server and storage vendor is playing in that sandbox these days. What's your perception of it? Are you trying to take advantage of it, and if not, why not? If so, how does the SolidFire partnership play into it? Because essentially it feels like a bespoke piece of infrastructure. I wonder if you could comment on that.

Yeah, well, on the converged infrastructure trend, I think I mentioned this when we did the round of introductions: we've standardized on UCS as the platform for our
You know we've standardized on UCS as a platform for our Cloud-related offering and one of the reasons why it's so attractive is the fact that you know if you looked at The more traditional environment where we had to actually deploy a full stack of hardware for someone in order to you know Deal with their compute needs There's a lot of there's a lot of expense you know in Maintaining a whole network of say Brocades to give people fan access or the switches to do cross connects to a backup network and so on and so forth and all the fiber runs and all of the Poor charges, you know sorts of things that you end up having to build into your products can actually be relatively expensive To say nothing of the fact that you know, it's a little tricky to deal with you scaling everybody's hardware individually and the fact that you know That leads to a certain sprawl and an operational complexity in terms of Having more platforms to maintain no more firmware updates to test and so on and so forth But to me the primary the primary benefit of that converged infrastructure is really about being able to have this Seamless blend between the compute and the network that allows you to avoid a lot of those extra cross connects and So on and so forth and kind of use that shared infrastructure Scale that shared infrastructure. I think that trend probably continues as you know, we see a bigger Adoption of virtual appliances. So obviously, you know vCloud comes with vShield appliances to do a firewalling and load balancing We also have a 10 that we're used to running in physical environments. 
We're rolling out a software load balancer for that. And of course there are companies that we don't necessarily use, but that are also big in this space, like Vyatta, which just got acquired, big in the virtual firewall space. I think with the move to those software appliances, you have even more ability to take advantage of converged infrastructure.

So it sounds like you're two-thirds converged, the systems and the network, but you've left flexibility in the storage piece. You've essentially built your own converged infrastructure, to your own standards, by slotting in the storage piece. Is that correct, or am I off base?

The nice part, again going back to converged infrastructure, is that we can get away from worrying about individual cross-connects onto storage networks. We had a SAN in place prior to our deployment of UCS, but say you come to us today and want a certain amount of compute. One of the nice things about converged infrastructure is that once you're on something like UCS, we don't need to charge you port charges to connect to Brocade switches to get access to the SAN, because it's all effectively pre-connected. So no, I wouldn't say you have to pick a specific pre-canned component. The point is that you can tie all of your storage infrastructure into this one compute platform, in the UCS case through the fabric interconnects and fabric extenders, and people can just connect and get access to all of your services at that one layer. That really reduces the overall cost of the solution.

My other question: the saying is that people don't buy from startups because they want to, they buy
from startups because they have to, so startups have to be considerably better than the status quo. I wonder if you could comment on that. And second, there are other vendors with all-flash arrays waiting in the wings; certainly EMC made an acquisition, and there are others. Could you comment on that and then circle back to why SolidFire?

Sure. Well, I mentioned this before: I've talked to a lot of people offering pure SSD arrays, among them Whiptail and Pure Storage. I'm not going to say they don't have a place in the world, but what everyone else has lacked, and to me the really exciting thing about SolidFire, is the QoS piece; the others are missing it. We put in place an SSD array, as opposed to a traditional array, specifically to solve the noisy-neighbor problem, and the issue without quality of service is that you're essentially trading a smaller noisy-neighbor problem for a noisy neighbor that's just harder to reach. You're increasing the capacity, but if you have a noisy enough neighbor, they can still consume all of those resources without that quality-of-service layer. With SolidFire, the differentiator for us is that we can have many small tenants, and they can be somebody who wants, say, 500 or 700 IOPS, which even in terms of traditional drives
That's quite a lot of performance. Maybe they pair that with 500 gigs of storage, or even a hundred-gig volume. They can get that guarantee and not worry that the huge players on the array, the ones consuming many terabytes and potentially tens or hundreds of thousands of IOPS, are going to crowd them out when their workload bursts. Our philosophy with the SolidFire appliance is to never over-provision the guaranteed amounts of performance. If people want to use whatever burst capacity is available, we'll make it available to them, but we always build our offering so that the minimums are kept. So even if everybody is using a hundred percent of their minimum, that will still be within the array's performance to deliver.

It would seem like the quality-of-service requirements in SSD would also apply to hybrid arrays, or to regular spinning-disk arrays that you might deploy. Is there an application of that philosophy to the other array types?

I have yet to see any vendor that's able to offer that sort of quality-of-service guarantee on any other type of array.

But if they could, would you want it?

Oh, yeah, absolutely. In fact, the very first time I met SolidFire, I said, can you apply this software to traditional spinning disk, please?

To which they said what?

I think they said, we're kind of busy with what we're working on right now.

This is David. Just to make a comment on that: it's pretty much impossible to do because of the thin pipe you have to each of the disks. It would be extremely difficult to architect that in a disk environment. That was actually the comment I was going to make, too.
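Matt's provisioning rule, never over-committing the guaranteed minimums, can be sketched as a simple admission check. This is an illustrative model with made-up numbers and function names, not ViaWest's or SolidFire's actual tooling:

```python
def can_admit(existing_minimums, new_minimum, array_capacity_iops):
    """Admit a new tenant only if the sum of all guaranteed
    minimum IOPS stays within what the array can deliver."""
    return sum(existing_minimums) + new_minimum <= array_capacity_iops

# Hypothetical array rated for 50,000 IOPS, with three tenants
# already holding guaranteed minimums.
tenants = [10000, 7000, 5000]
print(can_admit(tenants, 20000, 50000))  # fits: 42,000 <= 50,000
print(can_admit(tenants, 30000, 50000))  # rejected: 52,000 > 50,000
```

Under this rule, even if every tenant drives a hundred percent of its minimum at once, the total demand never exceeds what the array can deliver, which is exactly the property Matt describes.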
It's just the fact that with traditional disk, spinning disk, you don't even necessarily know what you're going to get in terms of performance, because any given write or read is going to be located on different sectors. You can't predict the amount of time it's going to take; the seek time is going to vary from read to read. So in the grand scheme of things, I just don't see how you can get the sort of consistent performance you get out of SSD.

Can I ask a question about the SLAs you can offer, and the type of customer you can get, within your service? It seems to me that if you have that level of I/O capability, it takes away a huge disincentive to virtualize databases in general. Is that what you're finding? Can you offer that? Can you guarantee performance in a virtualized environment? Does that allow you to specialize, and what kind of SLAs can you offer with this capability?
So our SLAs are going to be specifically oriented around the number of IOPS we can provide, and that's obviously assuming the client is using it in such a way as to consume all of it. Again, this is one of those things where we talked a little bit earlier about queuing theory: depending on the application, there's just no way, with round-trip times on a network, no matter how fast it is, to get past the limit on how fast you can send something to the array and get a response back. If somebody has a single-threaded application where, every time they send a write to the array, they have to wait for it to come back before they can send another write, there's no point in selling them a quality-of-service level of 10,000 IOPS. They'll never be able to take advantage of it, because it would require a 100-microsecond full round trip, which just isn't realistic on any sort of network-attached storage.

But databases are, I almost want to call them, the killer application for this. Database technology is basically ubiquitous in terms of what real-world applications expect, and it's one of those things people have been really loath to put onto any sort of cloud service. Trying to use traditional databases in cloud environments tends to not end well, and of course Amazon blazed the trail in terms of that use case developing a sort of bad reputation. It's given rise to a whole alternative array of technologies, the NoSQL, distributed, big-data class of applications, which I think are all designed to get around the issue: to provide that database layer but be entirely horizontally scalable, so you're not at risk of any one node causing a problem.
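The queuing point Matt makes, that a serial send-and-wait workload is capped at one operation per network round trip, is simple arithmetic. A small illustration, with a hypothetical helper name and illustrative timings:

```python
def max_iops_serial(round_trip_seconds):
    """A single-threaded workload that waits for each I/O to
    complete before issuing the next is capped at 1 / RTT."""
    return 1.0 / round_trip_seconds

# Hitting 10,000 IOPS serially would need a 100-microsecond full
# round trip, unrealistic for network-attached storage, as Matt notes.
print(round(max_iops_serial(100e-6)))  # 10000
# A more typical 1 ms round trip caps a serial workload at 1,000 IOPS.
print(round(max_iops_serial(1e-3)))    # 1000
```

This is why a high guaranteed-IOPS tier only pays off for applications that keep many I/Os in flight concurrently.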
With SolidFire, I definitely see the environment of VMware-backed compute and memory on the UCS platform, with the high-speed networking we deploy and the SolidFire disk backing it, as reaching basic parity with a dedicated physical platform. There's no real bottleneck, because every component of this infrastructure is essentially hardware-rationed, in the sense that you can't run into scenarios where the performance isn't available, unless your provider over-provisions, which we just don't do.

I want to give one last chance to our listeners to see if there are any other questions from the community before we wrap. Any other questions? Okay. David, any last questions from you before we wrap?

The last question I had was on SLAs. Does this level of control allow you to give SLAs that are more flexible, in terms of allowing bursts, for example, or letting customers go over their limits at certain times, things like that, which you just can't do in a traditional environment? It seems to me you've got a lot of flexibility there.
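The answer that follows describes per-volume minimums, maximums, and burst. The minimum/maximum part can be modeled as a simple clamp. This is a simplified sketch with hypothetical numbers and names, not SolidFire's actual algorithm; the burst-credit behavior Matt goes on to describe is more involved:

```python
def delivered_iops(requested, min_iops, max_iops, headroom_above_min):
    """Toy model of per-volume QoS: the guaranteed minimum is always
    deliverable, spare array headroom lets a tenant run above it,
    and the configured maximum acts as a hard throttle."""
    available = min_iops + headroom_above_min
    return min(requested, available, max_iops)

# Matt's example: 1,000 minimum / 2,000 maximum IOPS.
print(delivered_iops(5000, 1000, 2000, headroom_above_min=10000))  # 2000
print(delivered_iops(1500, 1000, 2000, headroom_above_min=100))    # 1100
print(delivered_iops(800, 1000, 2000, headroom_above_min=0))       # 800
```

The key property is that the throttle at the maximum holds even when the array has spare performance, which is what makes the per-tenant guarantees enforceable.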
It seems to me you've got a lot of flexibility there Yeah, I mean I think with when you look at SLAs around this traditional environment You know everybody makes best effort, but your your SLAs around a traditional storage platform really come down to is it the platform available Whereas obviously with solid fire the granularity is you know is I mean It's hard to describe because there's really nothing like it There has not been an offering that allows you to deliver you know an SLA around quality of service this way I think I was going to comment earlier that You know when you're when you're talking about how you control basically more traditional environments There's never been a quality of service that actually put you a number that let you put a number on it You could get to the point where you know with some of the quality of service offerings are in this You can do a certain shares offering where everybody who is you know attacking it has got a certain share of performance But it was really no guarantee that any given tenant was going to get any specific performance number So you know with solid fire we're actually able to say you will get this number of Iops It'll be available to you on your volume, but we can guarantee that it will always be available to you So we can actually put a number and say You have a terabyte It comes with 10,000 IOPS You will always have those 10,000 IOPS and that's just the SLA that we're offering around it You know obviously the tenant has to be able to take advantage of that again There's that whole you know round trips and so on and so forth But you're actually being able to say specifically we can guarantee you this number of Iops me that's the sort of That's the sort of killer app that's the SLA that people need to be able to feel comfortable putting a database of cloud Now there's no this you know we're backing this same you know Guarantees that you know we would put on a platform like compute or something or like those 
lines It's been artificially more regulated in terms of is the maximum and burst You know we actually yes, we can offer you know increases those solid fire You know let you actually tune volume so then there's a minimum And then there's a maximum which you just can't go above it'll use quality You know so if you were to set someone to a thousand minimum Iops the two thousand maximum Iops Even if this array has available performance it'll throttle them around the two thousand mark But it also has a burst capability as well We're essentially people using underneath their Sort of a sign quality of service level in terms of Iops They sort of earn this first credit if they can spend you a 60 seconds doing really fast Right to this if that's available on the disk before the QoS kicks in well Matt John MacArthur here. I really appreciate you sharing your your experiences and trying to quiet noisy neighbors on shared infrastructure converged infrastructure Again our guest today was Matt Wallace. He's director of cloud architecture at via West Wikibon pure insight of December 18th is Is that a wrap we'll post our Comments documents here in the next in the next 24 to 48 hours, please feel free to to jump on edit enhance improve the documents our next pure insight is January 22nd Well, we'll be talking about achieving 10 exabyte scale So I hope you can join Dave Vellante for that for that pure insight. Thanks very much John MacArthur Dave Vellante Thank You David Flair on the phone for for your questions as well and with that it's a wrap. Thanks