Hello, everyone. This is John MacArthur. I'm here in the Wikibon offices, today is Tuesday, January 22nd, and I'm here with the founder of Wikibon, Dave Vellante.

Good afternoon, John. How are you doing? Hi, everybody.

Good. Good to see you. We are live streaming today on SiliconANGLE TV. A quick reminder, or instruction, to everyone: please press star six to mute your line now. We'll wait a moment for everyone to do that. Then, if you have questions later during the call, you can press star six to unmute so that you can ask questions of our guest or any of the analysts.

The title of today's Peer Incite is Commercial Applications and Hyperscale Storage. We've covered the topic of hyperscale storage a few times here on Wikibon; we've had guests from Shutterfly, and we've had the CTO from Cleversafe. Today we're joined by Cleversafe vice president Russ Kennedy. Russ, can you hear us? Thanks for joining us today. As we kick this off, I'd like you to talk about this: we've covered government applications where hyperscale storage matters, and we've talked with Shutterfly about their use of hyperscale storage for newer applications. I want you to give us an update on what you see going on, first with the kinds of accounts that are dealing with these issues, and then a little bit about where you see it happening in the future.

First of all, thanks for the opportunity to join you today. I think this is a very interesting topic. In terms of the trends in the market and the things driving the need for hyperscale, I think a lot of it is around consumers and consumer devices. Today a lot of people have smart devices on their person; they can take pictures, shoot video, and capture all kinds of information, and services like Facebook and Shutterfly are around to help people share those images and that audio and video content with their friends and other folks. Those organizations understand the hyperscale problem well, because they're dealing with that amount of data and that use case: the general population developing its own content and wanting to distribute and share that content with others. As you mentioned in your intro, there are also a number of organizations, particularly in government, that are capturing that kind of data as well: video surveillance, satellite imagery, scientific information, weather information. That is driving hyperscale storage in those organizations. But I think the really interesting question is how the rest of the world is going to deal with this as it comes online. Again, some of the drivers are going to be individuals in an enterprise, in a standard working environment, wanting to share information, not necessarily in email or text form, but in video form or audio form. A lot of that is going to drive
the proliferation of storage, just as it's doing in the consumer space today. The enterprise world is getting to the point where companies want to offer services to their employees to collaborate and share information, particularly in video or image format, and I think that is going to drive hyperscale storage.

Thanks, Russ. Just a reminder to anyone who's not speaking: please press star six now to mute your line if you haven't already muted it.

So within commercial applications, within commercial businesses, when you talk about video content, is that maybe one of the best examples, or are other kinds of content also driving the massive capacity growth?

Well, I think anything employees are going to collaborate on, whether it's a work project, things they're building, a new product, or a new service, is going to drive a lot of interaction, a lot of content creation, design creation, and so on, and that is obviously going to generate a lot of digital information. These organizations are now looking for ways to get people around the globe to collaborate and work together on the same piece of data, the same piece of information. As you grow and scale, and as you look at the size and complexity of these environments, you have to consider that data is going to be created in a lot of different formats, and it is going to be distributed, shared, updated, and used by a number of different entities within the organization. I think that's what's driving a lot of the private cloud initiatives today: organizations want not only to lower costs but to add more services for their internal employees, to say, you can now work with people around the world on a particular project and collaborate and share information in a cloud that we manage and maintain for you. That certainly gives advantages around privacy and security versus a public cloud offering like Amazon's. I think more and more large enterprises, and even emerging enterprises, are going to start to take advantage of the technology and offer more services internally to their employees for online collaboration.

Dave just got back from a conference on the west coast where there was a lot of discussion of hyperscale storage requirements.

Actually, I wasn't there; I was watching the live stream. David Floyer was there, and I think David Floyer is on the call. John Furrier was there. What struck me as quite interesting were the differences between hyperscale and traditional enterprises, so maybe it's worth talking about that a little bit. Let me share some of the things I learned from watching the stream. For those of you who don't know, the Open Compute Project Summit, the OCP Summit, is sponsored by Facebook.
It's essentially Facebook's reference architecture for how they build out infrastructure, and Facebook spends a billion dollars a year on capital equipment for its infrastructure. That's a nice budget. If you look at the hyperscale market, these are some of the characteristics I came up with. They tend to be very early adopters. They deploy what I would call purpose-built servers, or even custom servers built by ODMs, that are specific to hyperscale, and they take similar approaches to storage. They're obviously scale-out. There's a lot of read-heavy work going on, and they do a lot of cheap and deep: cheap SATA, less expensive SAS, less expensive SSD. Contrast that with the enterprise, which is the fat middle to late majority. They let the OEMs do the qualifying for them; it's IBM and EMC and NetApp and Dell doing the qualifications. They use commercial off-the-shelf components from the likes of those companies, or HP, or maybe sometimes even ODMs. It's a much write-heavier environment, and they use the higher-end blend, whether it's spinning disk or SSD. And the big difference that I see, and I wonder if people would comment on this, is that the hyperscale guys are software-led. They're really layering software function, software as services, on top of that commodity infrastructure, whereas today in the enterprise there's a lot of function embedded into arrays, for example. Those were my takeaways from the OCP Summit. I wonder if anybody else has any thoughts or comments. Russ?

Yeah, I think you laid it out very well in terms of what companies like Facebook are looking to do. They obviously have huge challenges from an infrastructure perspective and from a growth perspective, and they're looking for solutions that can deliver value and capability at a cost point that makes sense for their business, for their need to scale out and scale up, to grow, and to manage the data and the infrastructure they have to manage to deliver services to their customers. They're certainly well aware of those challenges, and they're definitely looking at things like Open Compute and other commodity-oriented technologies to help drive some of that, and there are other companies at similar scales that need to grow at similar rates, so I think that's very appropriate. As you said, however, most enterprises are a little bit behind that in terms of what they're looking at or buying these days; they may be going for a more traditional vendor approach rather than something that's open, or promoted as an open platform. That being said, I think they're eventually going to run into the same challenges. As more and more content is generated in an enterprise, generated to be shared or used in a collaborative manner across the enterprise, they're going to run into the same types of challenges that the big Web 2.0 and social media companies have already started to
tackle, and they're going to be looking for similar types of solutions: solutions that are easily scaled out, that allow them to add software and services and capabilities on top of a core hardware platform they can build out and deliver on very rapidly. I think you're absolutely right; it's going to be a little bit of time before the traditional enterprises are at that point.

Okay, so your premise is that these scale-out guys are a harbinger of what's coming in the future. You said it's a little bit of time. How much time do you think? Are we five to ten years away from that, or is it sooner?

The other thing that I think is going to drive this a lot is the whole analytics angle: the ability, or the need, for enterprises to start analyzing the information they're capturing, or the information that's out there and available, to drive competitive advantage, to drive new product introductions, to understand market trends, to look at new business opportunities they could launch, all the things that are going to help enterprises become more competitive in their space. They're going to look for ways to capture information and then analyze it, and the platform they'll be looking for is something flexible and scalable, something in which you can not only capture and store data but also analyze it, in real time and over time.

Can we talk a little bit about the pain, about the breaking points? An organization might be going along just fine, managing its growth rates, and then at some point it's like being all prepared for the nine-foot tidal wave when the twelve-foot wave comes and floods half of Manhattan. What are the breaking points as they relate to storing large repositories of unstructured objects?

That's an interesting question, and I think the answer is different for different organizations. Typically, when you look at where customers are today, most of these large organizations are looking at petabytes, tens of petabytes, perhaps hundreds of petabytes in the case of the big companies we've talked about. The breaking point is probably around a petabyte, or a few petabytes, in terms of what you can realistically
store, manage, protect, and make available in an effective manner. You can get there through replication; replication protects information. But at some point, if you're making, say, three replicas or three copies of your data, and spending three times the amount of storage to house the data, protect the data, power and cool the storage, and then manage it, there's a financial breaking point in addition to a technical one. In our engagements with customers, it's generally around the one-to-two-petabyte range that this starts to get a little bit difficult, and once you get to the ten-petabyte range it's getting very, very difficult and very costly. That's why I think customers are looking for alternatives to the traditional storage approaches, alternatives that can deliver cost-effective, reliable, secure storage options for their data.

So there's the aspect of replication, which is how a lot of people deal with availability, and then there's the issue of backup. Can backup and replication be completely eliminated in these kinds of environments?

Well, I think at some point you have to look at whether it makes economic sense to replicate data three or four times in order to ensure that it's available. At tens of petabytes, that's a lot of storage you're buying, powering, cooling, and managing if it's online and spinning. If you put it on tape, you've certainly addressed some of the power issues, but your data is not as readily available, so you have to make that trade-off. I think there are levels, and it's probably in the ten-petabyte range that you can ask: does it make sense to replicate to protect this data, or does it make sense to back it up? And even then, what is my backup philosophy? Do I have to back everything up, or only the things that change? How often do I need to do backups? What are my objectives for backup: to recover to a certain point in time, or just to ensure that the data is protected and available? What you're starting to see is organizations at that level, certainly with unstructured data, looking at different ways to store and manage that data, keeping it online and available, more of an archiving type of use case or methodology that uses different approaches to protect the data and different approaches to make it available.
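To put rough numbers on the trade-off Russ describes, here is a minimal Python sketch comparing the raw capacity required by triple replication against an erasure-coded layout. The 3x replication figure comes from the discussion above; the 10-of-16 dispersal width is a hypothetical configuration chosen only for illustration, not a parameter anyone states on the call.

```python
def raw_capacity_pb(usable_pb: float, expansion_factor: float) -> float:
    """Raw storage you must buy, power, and cool for a given usable capacity."""
    return usable_pb * expansion_factor

usable = 10.0  # petabytes of actual data, per the example in the discussion

# Triple replication: every byte is stored three times.
replication = raw_capacity_pb(usable, 3.0)

# Erasure coding: data is cut into k slices plus (n - k) coded slices;
# any k of the n slices reconstruct the data, so the expansion factor
# is n / k, here 16 / 10 = 1.6x.
n, k = 16, 10
dispersal = raw_capacity_pb(usable, n / k)

print(f"3x replication:   {replication:.1f} PB raw")    # 30.0 PB raw
print(f"{k}-of-{n} dispersal: {dispersal:.1f} PB raw")  # 16.0 PB raw
```

At ten usable petabytes, that is 30 PB of raw disk to buy, power, and cool versus 16 PB, which is the financial breaking point the discussion is getting at.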
David Floyer is on the phone. I think you had a question; do you want to go ahead?

No, I had a comment, which is that the whole business model of hyperscale is very different. It is designing in the minimum amount of equipment up front and minimizing the amount of maintenance required, both by the vendors and by the installation. If something fails, for example at Facebook, they just turn it off. If enough servers or components fail, they turn off the rack, and after three years they replace it anyway, because that's lower cost than trying to extend its life. That model of minimizing what you put in up front is something that large-scale enterprises, and in the shorter term cloud providers, will have to embrace if they're going to compete with the likes of Amazon or Google or Microsoft for these types of services. So my question is: to what extent are you providing your products in that open, multi-vendor type of environment that's been created?

You broke up there at the end, so I'm not sure I understood your question, but I have a couple of comments relative to the topic we're discussing. One, your point about commodity-oriented hardware at hyperscale is certainly something people look at now: not investing as much in the hardware, and instead investing in software capabilities that can deliver the levels of reliability and availability of the data without the significant investment in hardware. To your point, if something fails at a certain rate or a certain level, you just replace it with similar commodity components, and the data is then readily available, or replicated to that new componentry. That is certainly a viable approach at hyperscale, and I think most companies at that scale are definitely looking at it; that's a difference between traditional approaches and a hyperscale approach. But the key is that you have to have a way of treating the data, a way of managing the data, that ensures it is reliable and available, and that if something fails, the system can detect the failure and recover from it. When we talk about that with customers, we talk about the benefits of our dispersal mechanisms and erasure coding as a viable method at hyperscale to ensure the data is protected and available should there be a hardware failure or a component failure, because that's really what you're trying to guard against. At hyperscale that model certainly works, and as organizations, enterprises in particular, start to get to that level and that scale, they're going to want to look to those new models: commodity-oriented hardware models with a software layer, a set of software, that ensures the data is reliable, available, and protected. Then they can spend less, invest less, in the hardware, or replace the hardware very rapidly if it fails, and the data is still protected and still available.
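As a concrete illustration of the dispersal idea Russ describes, cutting data into slices, adding redundant slices, and rebuilding from a subset, here is a deliberately tiny Python sketch. It uses a single XOR parity slice, so it survives only one lost slice; a production system such as Cleversafe's uses Reed-Solomon-style codes that tolerate multiple simultaneous failures, but the principle of rebuilding from survivors rather than keeping full copies is the same.

```python
def disperse(data: bytes, k: int) -> list:
    """Split data into k equal slices plus one XOR parity slice."""
    data = data + b"\x00" * (-len(data) % k)   # pad to a multiple of k
    size = len(data) // k
    slices = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = slices[0]
    for s in slices[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return slices + [parity]

def rebuild(slices: list) -> list:
    """Recover a single lost slice (marked None) by XOR-ing the survivors."""
    missing = slices.index(None)
    size = len(next(s for s in slices if s is not None))
    acc = bytes(size)                          # all-zero accumulator
    for i, s in enumerate(slices):
        if i != missing:
            acc = bytes(a ^ b for a, b in zip(acc, s))
    slices[missing] = acc                      # XOR of survivors equals the lost slice
    return slices

slices = disperse(b"somebody's photos at Shutterfly", k=4)
slices[2] = None                               # simulate a failed storage node
restored = rebuild(slices)
print(b"".join(restored[:4]).rstrip(b"\x00"))  # original bytes, intact
```

Losing any one node here costs nothing but a recomputation, which is why a failed component can simply be swapped for a commodity replacement rather than protected with a whole extra copy.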
One of the things that we're seeing, Russ, is that there are these growing repositories of data, but there are also growing repositories of metadata, and when you combine the data and the metadata you end up with different use cases with different performance requirements. One of the things I'm interested in, with the approach you use at Cleversafe, is whether the data dispersal approach, the erasure coding approach, can serve up the data at the rates you need for all the use cases.

That's a very good point. As you're dealing with billions of objects, or trillions of objects, metadata becomes very critical: being able to identify pieces of information, categorize that information, search it, find it, that kind of thing is very, very important. In our storage system we offer the ability to store both object data and metadata using the same dispersal techniques, so you get the benefits of reliability and availability for the data and the metadata in a uniform fashion. What that does is allow essentially limitless scalability: you can scale the architecture, adding more and more storage nodes and more locations and bringing more things online very seamlessly and easily, and take advantage of that near-infinite scale both from the object data perspective and from the metadata perspective.

Now, we deal with customers that take different approaches. I'll give you an example: Shutterfly. Shutterfly does maintain, separately, a metadata database, with a lot of metadata associated with those photos, in a separate storage tier, and all the object data, the photo data and all the objects associated with photos, is maintained in these large repositories: multiple tens of petabytes of information captured in one large, single repository. That's their approach, they've been comfortable with it, and it gives them the flexibility and reliability they're looking for for their customers. There are multiple ways to go about keeping track of the metadata and the data objects, but the key, obviously, is ensuring that the metadata is available and usable, such that you can use the actual objects: you can find them, retrieve them, recover them, update them as necessary, and analyze them. A big piece of what's going on in our world today is the whole analytics piece, and being able to access that information quickly and easily through the use of metadata is very, very important.

Right. Can we spend a little more time on the performance aspects? We can use the Shutterfly example, or a different commercial application as an example, but the performance requirements on the metadata, I would suspect, would be substantially different in terms of the latencies you'd expect. Is that fair?

Yeah, absolutely. As I mentioned, in their case it's a different tier of storage, a different database architecture that they maintain, and the access patterns and latency and all those kinds of things are architected in such a way that they can deliver the service in the time frames they need for their customers. And I think that's the way people take advantage of most object storage systems: they use them to store objects, and they may or may not use the object storage system to store the metadata. If they need rapid access the way Shutterfly does, the ability to find or search or locate an object in a pool of, say, 20 or 25 billion objects, they use a different architecture, a different structure, to make that happen.
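Here is a minimal sketch of the two-tier pattern Russ attributes to Shutterfly: a small, latency-optimized metadata index that answers searches, pointing into a large, capacity-optimized object repository. The structures and names are hypothetical stand-ins; in practice the index would be a database tier, not an in-memory dict, and the repository would be the dispersed object store.

```python
object_store: dict = {}      # stand-in for the large, capacity-optimized repository
metadata_index: dict = {}    # stand-in for the fast metadata tier: tag -> object ids

def put(object_id: str, data: bytes, tags: list) -> None:
    object_store[object_id] = data                  # bulk data goes to the big tier
    for tag in tags:                                # searchable attributes go to the index
        metadata_index.setdefault(tag, set()).add(object_id)

def search(tag: str) -> list:
    """Answer the search from the metadata tier alone, then fetch only the hits."""
    return [object_store[oid] for oid in metadata_index.get(tag, set())]

put("photo-001", b"<jpeg bytes>", tags=["vacation", "2013"])
print(len(search("vacation")))                      # 1
```

The point of the split is that a search over 20 billion objects never has to touch the slow, cheap tier; it touches only the index, and object retrievals are limited to actual hits.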
I don't know if you have visibility into this, but Shutterfly just recently made an acquisition. I'm trying to remember the name of the company; it was in the content curation space. I think it's called ThisLife. Yes, ThisLife, that's right. ThisLife is in the content curation space, so it goes and reads through probably all the data sitting on your system, assuming that's the way they're going to integrate the technologies, or on Facebook or other repositories, and then actually does a fair bit of the metadata creation. Does that change the workload on your side?

Potentially. Again, I think the key value proposition we're delivering is scalability, limitless scalability from an object storage perspective, and being able to capture and store, reliably and securely, any number of objects, whatever size and whatever capacity they may be. As they start to integrate these other systems, and bring objects and elements and data objects from other systems together to put together a lifespan, a duration of someone's life, it could potentially drive the need for them to analyze and keep track of where things exist and where things reside, and potentially move those things into a Cleversafe, a larger object storage system like Cleversafe. But I think it's still early in their thought process in terms of how they want to actually utilize the asset and bring more and more of these capabilities to their customers.

Right. I want to pause for a second and see if there are any questions here in the community; we've got quite a number of people on the call.

Hi, this is Bob. I had a question. Earlier you were talking about the performance aspects of, say, metadata, and I know that certain stores, like OceanStore, actually advocate having cached copies of the whole object if you're looking to increase performance. Have you found or tried any of those, and what success, or lack of success, have you had trying to do that?

Bob, I'm going to have to ask you to repeat that, or have John repeat it, because you broke up during your question.

Bob, I think the question you asked was whether you're seeing people putting entire objects into the cache. Is that right?

Well, let me restate it. Can you hear me now?

Yes. Yeah, that's better.

Okay. All right. Sorry.
When you're using erasure coding, the attendant problem that comes with it is that latency is increased; I've seen that in two implementations, back when I was at EMC. So the question is: how do you increase performance? In some of the papers, such as those on OceanStore, what they recommend is having local cached copies of objects as a way to decrease latency. I was wondering whether you have tried that in a real-world implementation and seen whether or not it was worth it.

So yes, that is a method, a way to go after the latency problem, and yes, there is additional overhead associated with erasure coding when it comes to putting all the pieces back together and delivering the final object; there are calculations that have to happen in order to reassemble the information, which does increase the overhead. Cleversafe has not delivered a solution along those lines, although we've done some prototyping, and I think there are ways to take advantage of a caching mechanism sitting in front of a large object store that can deliver the high-performance, high-throughput kind of results you're describing. I'm not aware of any other erasure coding companies that have introduced products or techniques to do that. It certainly is something we are working with in the labs, but it's not in our generally available product.
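For what a caching layer in front of a dispersed object store might look like, the approach Bob asks about and Russ describes as prototype-stage work, here is a minimal sketch. Everything in it is illustrative: `fetch_and_decode` stands in for whatever retrieves slices and reassembles an object, and a cache hit skips that erasure-decode cost entirely.

```python
from collections import OrderedDict

class ObjectCache:
    """Least-recently-used cache of fully reassembled objects."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, object_id: str, fetch_and_decode) -> bytes:
        if object_id in self._items:
            self._items.move_to_end(object_id)   # hit: no slice fetch, no decode
            return self._items[object_id]
        data = fetch_and_decode(object_id)       # miss: pay the full reassembly cost
        self._items[object_id] = data
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)      # evict the least recently used object
        return data

cache = ObjectCache(capacity=1000)
blob = cache.get("photo-001", lambda oid: b"<reassembled object>")
```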
This is Scott Lowe. I had a question about, basically, down market. Everything you've talked about is really an enterprise, or very large content-focused company or organization, type of scenario, which kind of leaves out the potential future SMB and midmarket space. Is there anything that's applicable to that space with regard to what's happening with hyperscale storage?

Did you hear that, Russ? The question really was this: we've talked about scaling up, but there are two dimensions. I know Scott Lowe has talked about scaling up and scaling down. Is there a dimension to your technology that says somebody who only has a petabyte, or only a couple of petabytes of storage, should really go this way? Does that make sense?

I think I understand the question, and really, here is where I think things are going with respect to this technology and these approaches. Certainly, people that are at a petabyte or tens of petabytes today understand the pain and the challenge that traditional approaches bring to bear. As for others that are below that, they may well grow to that level; we deal with companies all the time whose full data capacity is in the tens or hundreds of terabytes, but that are growing so rapidly they'll be at a petabyte very quickly, and at two petabytes very quickly beyond that. They are certainly looking to future-proof their storage infrastructure and looking at ways to get to that capacity easily and grow with it very easily. The other approach that I think is starting to become popular is using storage services, storage as a service, or service providers that provide storage as a service, for organizations that are, say, below that petabyte level of capacity. Those services can certainly offer a lot of capabilities and a lot of reliability and availability of data, but you are now entrusting your information to a third party, to a service provider, who has to ensure that it's available, and those kinds of things, so there are trade-offs with that model as well. But I think a lot of organizations that are below that capacity and want to use the services that are out there can take advantage of all the different offerings, up to a certain level, at which point they may want to look at bringing it in-house.

Russ, do you have service providers who are leveraging your technology to deliver a cloud service? Or are you not in the service provider business?

We do offer our technology to other service providers, and we have some customers in the service provider space that are housing the infrastructure and providing services to their clients and customers. We have both commercial customers in that space and federal integrators that provide that service to federal agencies as well. I think that bodes well for the question that was being asked: there are different approaches, versus just acquiring hardware, acquiring a system, to gaining the benefit of dispersal and erasure coding from a reliability and a cost perspective, and there are certainly service providers out there looking at providing these kinds of approaches and capabilities for their customers. We're not going to be in that market. We're a technology company: we build systems, we build software capabilities. But there are certainly organizations out there that want to provide services for their clients and customers and are looking at technologies like ours.

So, what do people think, this is Dave Vellante, what do people think happens to the enterprise hardware business as we know it? You've got Intel's OEMs on the server side: HP, Dell, Cisco, certainly to a lesser extent IBM, which has an x86 business, and I guess Oracle as well, as kind of a wild card. Then on the storage side, add to those names EMC and NetApp, and I guess Hitachi, but certainly IBM and HP. We talked earlier about hyperscale being a harbinger for the future of the data center, and about hyperscale being essentially commodity components on which you run software services. So what happens to Intel's OEMs and Seagate's OEMs? Essentially, the server vendors have been marking up Intel microprocessors for years, and the storage vendors that sell into the enterprise have been marking up Seagate disk drives for years. I guess we're predicting that that dynamic is going to change in some way, shape, or form. What do people think about that?
I don't know, Russ, if you have an opinion. Do the big server vendors have to figure out how to play in this market or get marginalized? How do they play?

They need to start thinking now about how to deliver those kinds of solutions to their customers. I don't think that tier-one applications and tier-one storage solutions are necessarily going to run very well on a dispersed system, nor are they necessarily going to be put on a commodity-oriented storage device or server device, so their businesses in that respect will continue to be maintained. I would expect, however, that if you look at the growth, and you look at where the new data is coming from, and at all the different objects and images and types of data that people are capturing and want to use and collaborate with, they definitely have to evolve their offerings and be able to bring to the marketplace storage solutions they can deliver at the price points, the scalability points, and the reliability and availability we're talking about, so that they can compete and won't be marginalized.

But today their intellectual property is highly, intrinsically linked to the hardware, oftentimes in custom ASICs, or certainly in software that they either bundle in or sell with their hardware. Do we see examples of these traditional companies that are in good positions to take advantage of this trend? What do people think?

Some of them realize that there are trends in the market, and movements afoot in the market, that are going to drive a lot more usage in this area, and so they are partnering with other companies, with the likes of disk drive providers and server providers and software providers such as us, to deliver solutions that meet the requirements we're talking about. And I agree with you: a lot of the investment they've made up to this point is in customized hardware and very customized software solutions, and as we talked about earlier, those won't necessarily translate into this type of use case and this type of scale. So they're going to have to look at options and other ways to bring technologies to the marketplace that will address those needs, and certainly they're looking at either partnerships or, in some cases, acquisition, to acquire a capability or a technology that will allow them to bring those kinds of solutions.

So, John, we've heard some companies talking about abstracting or extracting the software and actually selling it separate from the hardware. Do you see the companies in a position to do that? You've been working with these companies for years; you were a customer of theirs. Are they in a position to do that, in your view?

You know, I think one of the things that's really difficult is the transition, and some of it is just the transition in the business model. If you're going from a business model where you're selling very high-revenue, high-dollar-amount hardware, with a different margin model but immediate revenue recognition, to a software business model or a services business model, it's quite gut-wrenching in terms of how the public market reacts to it.
So I think the recent move by Dell to go private, as they go through what's probably going to be a pretty gut-wrenching transition, is maybe a harbinger of what some of the large enterprise companies need to do if they're going to successfully make that switch.

Well, it's a complicated switch. Let me just say this, and then I'll let somebody jump in. Let's say I'm selling a box today for a million dollars, or half a million, or three hundred, two hundred thousand, and all of a sudden I'm going to sell this software package for a couple hundred thousand, three, four, five hundred thousand, a million. I could see a serious customer backlash there: wait a minute, what am I paying for? That's a different mindset. Somebody else had a comment?

This is David Floyer. I would like to comment that the opportunity for vendors is not all lower costs. The model is that everything is going to be paid for up front with the hardware, and the business case for providing that hardware is that you eliminate all the maintenance: the vendor doesn't provide that maintenance, the customer doesn't have to provide it, and you have the hardware there for a shorter period of time and then replace it. So there are some upsides for vendors who embrace this. There's continual replacement over a longer cycle, and the saving in operational costs will be huge, which may actually justify slightly higher capital costs going in. In the case of Dell, of course, there's a specific reason why they will have a very significant reduction in revenue, which is the reduction in the number of PCs being sold by them.

Well, sure, and that's the PC-to-tablet shift going on. But how about Cisco? That's a classic case where you've got a company that has done great with proprietary hardware and custom ASICs and the like, and now this trend called SDN comes in. They're talking a good game, but what happens to Cisco? I wish Stu were on to answer this question, but are there any other Cisco pundits out there? I'd love to hear your thoughts.

Cisco sells great boxes, and they're integrated, but they're much more expensive than they would be if the other model were taken. However, if they embrace it, and sell boxes which require no maintenance and which at the end of the day will be replaced in three or four years' time, there's no reason why they couldn't be very successful in that model. It's going to take a great deal of courage to switch to it, as John was saying, but I think they can be successful and sell the software.

David, we did a quick switch earlier, so this is Stu. There are two pieces here. Of course, SDN is looking to abstract the control plane, and then there's the discussion of the commoditization of the hardware itself, because Cisco and everybody else is working on SDN; they're all going to have OpenFlow controllers, and they're going to take that piece out of it. But if you look at what was discussed at the Open Compute summit, it's the hyperscale guys, and what differentiates them is what you said, David: I want the simplest components possible. I want bare bones.
I want bare bones I'm not going to buy them from somebody that puts together the solution that has you know 500 features I'm going to buy something that is you know very simple. It's based on a chipset Things like the odm's if you looked at what facebook announced with ocp One of the suppliers they had it was intel and it was quanta computer Well quanta is the same company that's putting together just bare bone switches built with like broadcom chipset So a order of magnitude cheaper on what they can do and if we can take the software and extract that out I can still use a cisco switch or I can buy a switch that's much much cheaper than that And so that is the potential to truly disrupt You know cisco and some of the other legacy guys who were all trying to pivot to come on to Really taking their software bundling that and making that the source of value Russ is that consistent with what you're seeing from your customers and in your sector? I was just going to say absolutely. I mean that you know the thing that That we hear all the time is is customers want you know at this scale They certainly want to and they want a platform that that you know delivers a certain capability from a hardware perspective But the real the real benefit or value that they're looking for from companies like ours is Is the guaranteed levels of availability and reliability that our software can provide and you know They're going to very commodity oriented, you know componentry Is a similar example that you know in the storage industry is using desktop class hard drives versus You know cloud or enterprise class hard drives, which are a lot cheaper And if you can deliver the levels of reliability and availability Using software sitting on top of that that commodity oriented hardware that desktop class oriented hardware That that's the same or equivalent to an enterprise that you know twice as much Certainly, you're going to want to go with the lower cost Especially in that scale. So absolutely the parallels are are are similar from uh, you know from what you were describing from a networking perspective and a storage perspective So is that how you go to market russ is a software Solution or you offer both? We're very we're very flexible from a customer perspective. We we offer commodity oriented hardware platforms as appliances with our software on them We also offer our software That we can integrate with whatever Hardware platform that the customer wants to use and we've done that on a number of occasions We worked with other oems. We worked with other And a pager and and and minor suppliers as long as it meets a certain specification in terms of you know CPU and and interface network interface and drive capacity We can generally make it work with our software and and so we we offer a very flexible approach So it's really software the way you want it And and the customer gets to pick sort of right, right? Yeah, so because we're certainly seeing We've had a number of guests on recently who are delivering, you know converged infrastructure that's got embedded software It's got embedded custom asics or FPGAs or To accelerate certain parts of the workload I'm thinking of some of the backup Backup replication de-dupe Appliance suppliers right that are out there. There's a couple of them locally here. So Oh, I I mean I again as we we talked earlier. 
Again, as we talked about earlier, I think there are certain applications and certainly certain use cases that are going to demand, or drive, a specific or very proprietary-oriented solution. But for this hyperscale data we're talking about, the video, the audio, the large unstructured objects that are going to be written once and read quite a number of times, like the use case that was discussed earlier, or something that's going to be more or less an active archive, data that's going to be saved and available essentially forever, like somebody's photos at Shutterfly, commodity hardware is a very good solution for that kind of approach. And with software like ours layered on top of it, you can deliver levels of reliability that you used to have to pay quite a premium for; you no longer have to do that.

I am intrigued, and maybe this is a follow-up for a later conversation, but I am intrigued by this notion of having created a large repository that was a store-once, read-occasionally kind of repository, and then you come along with one of these metadata creation, content curation solutions, which all of a sudden goes through and reads the entire repository at once. I just wonder how they deal with that. Do they have to migrate that repository for the curation process, or does it become a background task?

There's a real-life example. It's not the ThisLife example, but earlier, actually last year, Shutterfly acquired the assets from Kodak Gallery, the former Ofoto business.

Yeah, and they did the very same thing, right? They migrated all of those assets from one repository to the Shutterfly repository, which is on your system, right?

Yeah, exactly. And as they were doing that migration, they obviously tagged all those components and all those images with the metadata that was necessary for them to be able to offer those services to the Kodak customers, and that was a very straightforward, seamless process that their application was able to handle. I think they migrated about 12, maybe 15, petabytes of data in about a month's time frame. Then, once it was live and online, they actually moved it, a part at a time, from one location to another, so once it came online it was always available. They were able to migrate the entire platform, the entire repository, from one location to another just by keeping enough of it up, available, and online; if someone wanted to recall or use or take advantage of an image that was out there, it was still available. Those are kind of interesting use cases, if you think about it: with technologies like erasure coding or dispersal, you can actually pick up and move an entire storage system without ever taking it offline. It's always available, and that's huge, because you never lose access to the data.

Right, and I think that's a huge benefit when you start to think about these repositories, because you can't back them up anymore, it's really difficult to replicate them, and it's nearly impossible to migrate them if you're on any kind of traditional architecture, right?
Right. It usually starts locally and then expands geographically: customers start with a local repository in one location, and then, as they want to start to take advantage of high availability and site availability, they'll move parts of the system to a second location and parts of the system to a third location, and the data stays available the whole time; the system never takes an outage.

So help me again with the sweet spot for you guys with respect to response time. We've had end users on who are looking for a 100-millisecond kind of response time, and just this last week I spent time with some folks who are looking for microsecond response time, because the difference of a couple of microseconds affects whether or not they get the trade. So in your world, what's the kind of response time, in terms of retrieving an object, that you're looking to serve?

For us, the most important thing is time to first byte, because we're generally dealing with a large object that's going to be streamed over some period of time, so it's about getting to that first byte and starting the retrieval process. Obviously, as was discussed earlier, there is a bit of overhead associated with taking the pieces of an object, putting them back together, going through that calculation, and delivering it back out to the application or the user. Processors today do that very, very fast. It used to be that this was an onerous process and CPUs were not fast enough to do it very well; that's not the case today. The latest processors from companies like Intel and AMD and others do that transaction, that calculation, very, very fast. So for us, the key to delivering a response time that is adequate for the application is the speed of the network: how much bandwidth is available from the access point to the data storage repositories, the data stores, which we call slice stores; they're the storage nodes that hold the pieces of the data. It's the speed of that network, the latency associated with it, and the packet loss associated with it. How clean is the network, so that there isn't a whole lot of retrying? If it's a very clean network, it's reasonable to assume, in a very valid use case, that you can get, like you said, 100- or 200-millisecond response times even geographically dispersed, as long as you have enough bandwidth and your packet loss is not that great.

Okay, that helps a lot. You know, we had some folks on talking about the ad serving, or ad bidding, market recently, and this kind of technology is not a good fit there.

Right. But again, if time to first byte is your measurement, then from there it's streaming an object, or recovering an entire object that may be, say, a few megabytes in size, or even a few gigabytes or a few terabytes. That process is generally pretty straightforward once you get to the first byte.
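A back-of-the-envelope model of the response-time picture Russ lays out: time to first byte is dominated by the network round trip plus the erasure-decode step, and the remainder is just streaming the object at the available bandwidth. The numbers below are illustrative assumptions, not figures measured by anyone on the call.

```python
def time_to_first_byte_ms(network_rtt_ms: float, decode_ms: float) -> float:
    """First byte arrives after the round trip to the slice stores plus reassembly."""
    return network_rtt_ms + decode_ms

def total_retrieval_s(size_gb: float, bandwidth_gbps: float, ttfb_ms: float) -> float:
    """After the first byte, the cost is simply streaming the object."""
    stream_s = size_gb * 8 / bandwidth_gbps     # gigabytes -> gigabits, then Gb/s
    return ttfb_ms / 1000 + stream_s

ttfb = time_to_first_byte_ms(network_rtt_ms=80, decode_ms=20)
print(f"time to first byte: {ttfb:.0f} ms")                                 # ~100 ms
print(f"2 GB object on a 1 Gb/s link: {total_retrieval_s(2, 1, ttfb):.1f} s")
```

On a clean network, the 100-to-200-millisecond figure Russ cites is the first term; object size and bandwidth, not the erasure coding, dominate everything after that.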
Yeah, and the similarities in the two use cases, very different applications and very different technology, but the similarity is that there's a whole bunch of stuff that you don't influence that impacts the end-user experience, right?

Right, but your part you can do very cost-effectively, and very quickly, given the direction and the trend of the technology.

Yeah, okay, good. I think we have time maybe for one last round of questions. Anyone else? Okay. Dave, any last questions from you?

No, I'll just give my comments and observations. I really do think this discussion hits on the sea change that's coming. Hyperscale is weaving into the things we've been talking about around software-led infrastructure, and obviously the data explosion, and they're all coming together. You could see it coming for years, almost a decade, watching what Google and Amazon and Facebook have been doing, but now we're starting to actually see this bleed into the enterprise and commercial products, and I think it's going to have a ripple effect. It's fascinating times, and I appreciate everybody coming to the call today.

Yeah. And one of my takeaways from this is that it's absolutely necessary for the technology suppliers to be able to deliver technology the way the customer wants to consume it, whether that's as a cloud service, or as in a transaction I was involved in recently where the buyer basically jobs out everything, from the disk drives to the processors to the servers, and does their own system integration. So I think we're going to see a lot of different models, but the erasure coding and data dispersal techniques fit a real need for a growing requirement within enterprises of all sizes. The question is when it's going to hit a broad market, but it's definitely coming.

So thank you very much for joining us here today on the January 22nd Peer Incite, Russ Kennedy from Cleversafe. Thanks for joining us today.

Thanks.

Also, we had questions from Scott Lowe and David Floyer and Bob. Whoever you are, Bob, thank you. Oh, yes, Bob Primer. Thank you, Bob. Appreciate it.

Thank you very much.

And we will be writing research notes over the next couple of days and getting them up on wikibon.org. Please feel free to jump in: contribute, edit, improve, enhance. And with that, I'm going to sign off. Thanks very much for being with us today. Thank you.