Well, hello there. Marcus Knopkins here, founding editor of SiliconANGLE.com. I'm joined today by Jason Anderson of Datalink, and we're talking storage. We're talking NetApp a little bit. NetApp's big announcement, of course, came out today, and we wanted to get some thoughts from folks in the field who have a sense of what these technologies are best used for and how customers are reacting to them. So thanks for joining us today, Jason.

No problem.

So tell me and the viewers a little bit about who you are and what you guys do over at Datalink.

So I'm Jason Anderson, principal architect for Datalink. Datalink is a data center infrastructure solutions provider based in Minneapolis, with a national sales organization around the country, primarily focused on data center infrastructure for medium to large-sized businesses: storage, network, and compute. My role as principal architect is to focus on our largest and most strategic customers, especially in the upper Midwest, on solving their problems in data center infrastructure.

And what kind of customers are you focused on, SMBs or enterprise? How big? What's the range of size? Where's your sweet spot?

We primarily focus on organizations in the Fortune 1000. We do some work with smaller companies, but the primary focus is the Fortune 1000, and our largest customers get well into Fortune 100 size.

Wow, okay. And the type of work you do with them, is it focused on building data centers for them, or helping them architect solutions in-house, or do they farm it out to you guys? How does it work?

No, we're an on-premises company; we aren't in the outsourcing model. We do managed services, but our primary focus is helping customers build out the technology in their data centers, storage, network, and compute, to get toward, if you want to be buzzword compliant, software-defined data center, private cloud, that type of capability.
So yeah, buzzword compliance. That takes me back, that word. Anyway, speaking of buzzword compliance, let's talk about software-led. That's the big new thing everyone in the media seems to be latching onto, with Pivotal and VMware and all these recent developments in the virtualization space. So talk a bit about software-led and customer sentiment around it. Is this something they're reacting against or clamoring for?

I've seen a wide range of sentiment, from customers who feel it's little more than hype with no real meat there, to customers who have fully bought in and are realigning everything, from their organizations to their spend to all of their planning, around software-defined data center. So it's a pretty wide range of sentiment across the customers I've talked to about it.

If you were looking at your customers specifically, what percentage of them are trying to move toward a software-led approach to data center design, and what percentage is maybe resistant to it?

I would say they're all taking steps toward what is generally described as software-led, or software-defined data center. It's really a matter of whether they see it as something truly revolutionary, or whether it's really just putting a fancy label on the general technology trend of increased integration at the software layer, increased automation, and the ability to more vertically integrate the capabilities of your infrastructure, as opposed to having silos of capability with siloed teams to manage them.

So what are the key differentiators you're seeing with Clustered Data ONTAP, the cDOT thing? Is that something you see as a marked departure from other options in the industry, or is it a nice, logical, evolutionary step?
Well, I think one of the things it brings to the table is the ability to take a scale-out architecture and match it to the general-purpose IT use case, as opposed to it being siloed in specific verticals. Scale-out architectures aren't new; they've been around for a while, but in many cases they've been restricted to specific areas like high-performance computing or geo-seismic, where there was a big concentration of compute to go with the storage. One of the major things about Clustered ONTAP is its ability to be, like the 7-Mode ONTAP that preceded it, very general purpose, so that quite a lot of companies can solve the general IT case with it.

So that's what I'm hearing a lot in talking to different folks that I have some experience with: general purpose. But who are the likely targets that are going to clamor for this first? Who are the ones best suited by adopting the new technology?

I think the customers who have grown tired of having islands of storage, and grown tired of having siloed capabilities in large numbers of arrays that they're struggling to manage with large teams, are going to be some of the best candidates for rolling out Clustered ONTAP. Because of its ability to collapse the storage infrastructure into a smaller number of large clusters, you can reduce the number of management instances and reduce the complexity of an environment pretty significantly, without giving up any of the functionality that you expect from NetApp.

So here's an interesting question that was passed to me by one of the analysts: how do you expect SLAs to change once a customer implements cDOT?

I think one of the important focuses NetApp has put into Clustered ONTAP is their push around non-disruptive operations.
So while the current general-availability release that just came out very recently, 8.2, doesn't get the system 100% of the way there, the trajectory is very clear: getting to a point of fully non-disruptive operation, not just for the current generation of array that a customer would deploy, but across multiple generations. And that's one of the things that really changes the game. There have been a lot of vendors and a lot of products that have been able to maintain exceptional uptime once an array is deployed, but the real trick is when you go to lifecycle that array into the next generation. And Clustered ONTAP is pretty unique in its ability to maintain that non-disruptive operation while also performing lifecycle tasks.

Wow, okay. Well, it's been interesting to hear from you. I think I've exhausted every question I could plumb out of you, unless there's some other information you're dying to share with me.

I can't think of any other aspect of this. It's a very interesting news cycle, though. I'd be interested to see more reactions as they come out.

Indeed. Well, good to chat with you. Appreciate you taking the time.

Thank you.

All right, we're out. Thanks, Jason.

Thank you. Have a good weekend.