Extracting the signal from the noise, it's theCUBE, covering VMworld 2015. Brought to you by VMware and its ecosystem sponsors. And now your host, Dave Vellante.

Welcome back to San Francisco, everybody. This is theCUBE, SiliconANGLE Wikibon's continuous coverage of VMworld 2015. This is day two. Jonathan Stinson is here; he's an IT practitioner with Hendrick Medical in Abilene, Texas. Welcome to theCUBE.

Yeah, man, I'm excited to be here.

So this is a great event. We were talking off camera about VMworld. As an IT practitioner, I mean, everyone uses VMware.

Oh, yeah.

This is your wheelhouse, your peers. You've really transformed your IT organization, as have many others. So talk a little bit about what VMworld is like for you.

Oh, it's great to come out here and get more hands-on experience with some of the tools that we don't use yet; hopefully I'll get to use more of them. It's great to just be able to talk to professionals, to talk to vendors. There are so many products out there that I've wondered about using, and now I have this chance to actually talk to some of these people and, you know, see a disaster recovery solution. What does that look like? How does that work from this company's point of view? So yeah, it's great. I just kind of let my imagination run to things we might be able to do, ways we might be able to improve.

So talk a little bit about Hendrick Medical. What's the organization all about? What's your role?

Yeah, we're a hospital, a private hospital. We're actually the biggest thing in about 200 miles, so we do a lot of specialized stuff. We're the official disaster management location for our area, for about 10 counties. I think we're about 600 beds, about 3,000 employees. So it's a pretty good-sized organization. And there's just always more and more data. There are always more applications to stand up.
Extra layers of security, especially in light of the breaches at some of the insurers and other medical companies out there. So there's just always a lot in motion, always a lot changing.

So, you know, five years ago when we first started doing theCUBE here, the dialogue was: what percent of your environment is virtualized? What are you trying to get to? What does your journey look like? I would imagine you're through that discussion and largely virtualized now, and you're turning your attention to other challenges. What are some of the challenges and drivers that are, you know, driving your activities?

A lot of it is just cost effectiveness, that perennial issue in IT: trying to make sure that we provide high-quality services, and that we provide them in a way that doesn't create a management nightmare for us, doesn't create a lot of calls to the help desk. We're about 80% virtualized at this point, so we still have some ways to go, still some legacy systems we're trying to bring on board, typically as part of the update cycle for those applications. It's just constantly improving, trying to find what's worth the investment and making sure we implement it well.

So the drivers are electronic medical records, meaningful use, and obviously HIPAA and compliance.

Oh yeah. You know, never ending. HIPAA comes up in every conversation.

That's why you don't sleep.

Yeah.

So, okay, so you've got all that going on. Just describe a little bit more about your environment. I mean, paint a picture for us of your infrastructure and the apps it's supporting.

So the medical record system is the biggest thing for us. We run several pretty heavy SQL boxes. Those are some of the systems that are not, and probably will not be, virtualized for a long time, just because they would consume an entire host. We have about 60 to 80 terabytes of production data.
When you throw backups into that, we're pushing into 250, 300 terabytes of data, and in the next few weeks we're hoping to actually triple our installed disk capacity to better handle that, to make sure we have room for growth. Many projects that are handed off to us come from another department that has chosen a product, chosen an application, and then we have to figure out how to make it work. So having that capacity out front, where we know we have not only the raw terabytes but the IOPS to support it, has been a big part of our to-do list.

Okay, and how many VMs roughly are you running?

300 or so.

Okay, and what does your storage infrastructure look like? You said 60 to 80 terabytes; what?

Right now we have a combination of a NetApp system, one of the FAS 3000s, and an IBM XIV. So we kind of split the workloads between those depending on performance and capacity requirements. But like I said, I'm hoping that when I get back from VMworld I'll have new hardware waiting for me, so it'll be great.

So how are you driving efficiency? What are some of the things you're doing?

A better understanding of how we utilize the physical resources. We just upgraded to 10-gig Ethernet across all our virtual hosts; we're looking more closely at our processor usage versus RAM and balancing those things inside the host, and understanding more about individual applications and what they need as far as high availability. You know, some things you can deploy as an availability cluster, and that's great. Some things still need the old failover cluster, and for others it's just: when they fail, get them back online as quickly as you can. So finding ways to automate a lot of that survivability is something we spend a lot of time talking about, a lot of time implementing.

So CataLogic invited you here on theCUBE today. We know a little bit about their company; we had their CEO, Ed Walsh, on the other day. How are you using catalogs generally, and CataLogic specifically?
Can you talk about that?

Yeah, so CataLogic provides a couple of really good services for us. It's the foundation of our backup system, and any important data has to be backed up. They have really great tools for not only making sure it's backed up initially to secondary storage, but also making sure it's replicated the appropriate number of times to a secondary site. So they make that whole workflow very easy. We get much simpler insight into what's going on, when it's failed, what's broken down. Before CataLogic, I got an email coming in every day that was about 300 kilobytes. It gave me the rundown of everything that had happened in the backups in the last 24 hours, and it was a mess. Most of that data I didn't need, but on the off chance something went really wrong and I needed to go back and look at the detailed stuff, it was there. With CataLogic, I get a simple email that says: this failed, this is out of RPO, this is what's going on. And there are always some systems down, systems being upgraded. There's always something where, okay, yeah, that wouldn't have backed up, that's fine; this one I actually need to look at. CataLogic makes that very easy for us. They also give us good insight into our file management. So on the NetApp, you do block-level storage, but you can also do file-level storage. And with their catalog, we have a report ready to go on all the file shares on the NetApp. I can go out and find all those iTunes libraries with just one quick report. And I can similarly delete them, depending on whose home drive it is, depending on how far up they are.

So you let them know first?

It depends.

Sometimes?

Depends who it is. A lot of times I'll check the last access date, and if the CEO hasn't accessed that music library in a couple of years, I assume he's not using it anymore and I let it go. But yeah, generally I do a little bit of research.

Even the CEO? Okay, be nice to your IT guy, that's my motto.
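The last-access check described above can be sketched in a few lines: walk a file share and flag directories whose newest access time is older than a cutoff. The share path and the two-year cutoff here are assumptions for illustration; in practice the catalog report is what surfaces this information.

```python
import os
import time

def stale_dirs(root, max_age_days=730):
    """Yield directories under root whose most recent file access
    is older than max_age_days (default: about two years)."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        # Newest access time among the files in this directory
        atimes = [os.path.getatime(os.path.join(dirpath, f))
                  for f in filenames]
        if atimes and max(atimes) < cutoff:
            yield dirpath

# Hypothetical usage against a home-drive share:
# for d in stale_dirs(r"\\filer\home"):
#     print(d)  # candidates for cleanup, after a little research
```

Note this flags whole directories, not individual files, matching the "whole music library" case in the conversation.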
Well, we still have the backups, right? So if it came to a pinch, I could get it back.

Well, backup is one thing, recovery is another, isn't it?

Yeah.

So you're using the catalog to gain visibility into your data. People talk about copy data management. Let me describe it, and you tell me if this is sort of your environment. You make a copy to back it up, and then you make a copy because you want to give your test/dev team a copy. You make another copy because you want to populate a data warehouse. You make another copy for whatever other reason. You've got all these copies and you get this copy creep occurring, and then you don't ever un-copy.

Right.

You never delete. So, does that describe your environment?

Oh yeah.

And so maybe talk about the before and after you started using the CataLogic system.

So before CataLogic, anytime we needed to duplicate data, it was just the standard: use RoboCopy, or just use Windows File Explorer to move it over where you need it. With CataLogic, if someone needs a set of files, I don't even really need to know which specific files; I can take that whole drive. A good example would be a developer who said, you know, I did an all-nighter and made several changes, and they didn't work out in the end. I need to roll back, but because it was the middle of the night, I wasn't thinking about source control, so I don't have access to that. As the backup guy, I can get in there and say, okay, let me give you Tuesday night and let me give you Monday night, whichever one of those it was, because you can't remember. And with those drives, I don't have to worry about the specific files; I just need to worry about the folders. It's just as easy for me to map, you know, a half-terabyte volume back to his server for the last three days. Your D drive is where your development's going on; okay, E is yesterday, F is the day before that, G is the day before that. And he has access to all of it.
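The day-by-day drive convention just described (D for live development, then E, F, G for successive nightly copies) can be sketched as a simple mapping. The function and starting letter are illustrative assumptions; actually presenting the snapshots is done through the backup tooling.

```python
from datetime import date, timedelta
from string import ascii_uppercase

def snapshot_letters(today, days=3, first_letter="E"):
    """Map consecutive drive letters to the last few nightly
    snapshot dates, newest first."""
    start = ascii_uppercase.index(first_letter)
    return {ascii_uppercase[start + i]: today - timedelta(days=i + 1)
            for i in range(days)}

mapping = snapshot_letters(date(2015, 9, 1))
# {'E': date(2015, 8, 31), 'F': date(2015, 8, 30), 'G': date(2015, 8, 29)}
```

The developer can then browse each day's copy side by side and pick out the version he needs.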
He can go look through and figure out exactly what he needs, and I don't need to be involved in that whole process of discovering, figuring out what it is. You know, oh, that wasn't it, let me get you some more previous versions, things like that.

So here are a couple of use cases. Backup was your primary one, and now I'm hearing there's a test/dev affinity. Talk more about the developer relationship with, you know, the IT admin generally, and specifically in your situation. How does it work? I mean, you provide sets of services that they need access to. What do they like about your service delivery and what don't they like? What's your relationship with them? And then I want to get into how you're utilizing CataLogic to improve that.

So we have a good relationship. Actually, I would say in our department we really do all get along pretty well and have a pretty good team approach to a lot of things. So typically what'll happen, and some of it's more extreme than others... Like, we had a situation recently where one of our developers got very sick and was unable to be in the office for several weeks. So other people were getting in, trying to figure out what to do. There was no issue making that data available. The developers all knew they'd come to me, I would have it, and it would be presented up wherever they needed it. We could overwrite the original, or we could put it up side by side as another volume. With CataLogic it's very easy to just have access to that data where we need it. And the best thing about it: before CataLogic, we would have to pull it off a tape and dump it into a folder somewhere. So the amount of time to get access to it was highly dependent on, well, aside from tape access time, just the volume of data. And now I'm in a situation where I kind of have to imply that it's more arduous than it is.
So they don't come to me for every little thing, every version of a file that anybody wants. Because really, it's just a few clicks and I can get it in there.

But you've got all this stuff to do.

Yeah, I've got other things to do too, so I don't want to be just doing recovery all the time. I'll do you a favor, I'll get you that. Yeah, no problem. Being able to clone the entire production environment into another VM is very powerful also. And then you get rid of that copy when you need to.

So is there a relationship between what you're doing with CataLogic and the currency of the data the developer has access to? It sounds like the developer is now able to utilize more current data, is that right?

Yeah, and whatever backup frequency you want, that's the delay; that's all there is. Because getting them access to that data takes just a minute.

So essentially live data. They can work on near-live data for their test and dev, so that when they go into production there aren't as many blind spots, is that right?

Right. And of course you can do a backup ad hoc as needed. For anything that resides on that NetApp, or, I know they just announced support for several IBM systems, you can trigger the backup and it will use the array snapshot to get the backup, and then you're done. It takes just a few minutes to catalog that, and you can mount it up wherever you want. If the data's not on one of those supported arrays, you can still do the traditional approach: do a Microsoft Windows Volume Shadow Copy Service snapshot, pull it over, and then present it up. And of course you have that initial transfer time, but after that your backups are running off a production-grade array and you can just get it done.

So you were talking a little bit earlier about essentially self-service. What's the workflow like there, and what tools do you have available to facilitate that?
So there are a lot of options for self-service, for letting the developers get in there and do their own thing as far as orchestrating when to pull copies, when to destroy copies, things like that. We haven't had to do that, because so far it's been so much easier for me to handle it, since it takes so few clicks now. There's no point in training them on it. If there's anything that needs to be scheduled, if they want to refresh their developer environment every week, Sunday mornings, wipe out the previous one, set up the latest one, that can all be scheduled. Typically it's more of a one-time thing: we need to roll back to this version of the code, or we just need this one copy of the data to test something, so it's more ad hoc and I just take care of it for them.

And the point of control for the snapshots is the NetApp arrays, that's not CataLogic, so you're utilizing your existing infrastructure. Is that correct?

Right, and when we were looking to replace our previous system, that was one thing I loved about CataLogic: they don't try to reinvent the wheel. My array is really good at snapshots. My array is really good at replication. I don't need CataLogic to come in and do another layer of the exact same functionality. So they didn't reinvent the wheel; they just took the wheels that are already there, put a big ol' engine on them, and made them work. Made them usable.

Okay, so the benefit of that is it's simpler, it's more cost effective, and you don't have to rip and replace your existing snapshotting system.

Yeah, if you already trust your storage vendor to do snapshots and replication, just keep going with it.

What would you like CataLogic to do that would make your life easier?

If they could figure out how to make it all work without me ever doing anything, that would be great. At this point, it really has just worked pretty well.
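The weekly Sunday-morning refresh mentioned earlier can be sketched as a next-run-time calculation. In practice this would live in the backup tool's scheduler or a cron entry; the 6 a.m. slot here is an assumption for illustration.

```python
from datetime import datetime, timedelta

def next_sunday_run(now, hour=6):
    """Return the next Sunday at the given hour, strictly after now."""
    days_ahead = (6 - now.weekday()) % 7  # Monday=0 ... Sunday=6
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:           # already past this Sunday's slot
        candidate += timedelta(days=7)
    return candidate

# From a Tuesday during the show, the next refresh would be:
print(next_sunday_run(datetime(2015, 9, 1, 12, 0)))  # 2015-09-06 06:00:00
```

At each run the job would wipe out the previous dev copy and mount the latest snapshot, exactly the wipe-and-replace cycle described above.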
The interface, from when we first deployed it till now, they've made great improvements on that. They've made it much easier to use, giving you more of a dashboard on your first login, so that makes it a lot easier. There are obviously a couple of things where it'd be nice to be able to prioritize data in the dashboard, but there's nothing holding me back at this point. For our needs, CataLogic meets them.

Last question: at VMworld, what are some of the things that you've seen, that you're interested in, that you want to go back and apply? We talked offline about cloud; cloud's kind of a bad word.

Yeah, for a hospital, regulatory compliance makes cloud kind of a thorny issue. But security: NSX provides a lot of security, a lot of new knobs and buttons that we can work with that I think might be interesting.

Yeah, we heard some stuff in this morning's keynote, more distributed encryption, sounds interesting.

Right, yeah, and of course that would require buy-in not just from the system admin team but from networking and from security.

That's not your call and your call alone.

No, it's not. That requires a lot of coordination.

Are those guys here, your colleagues?

No, they're not.

Okay, so that's maybe next year.

Yeah, I'm the only one here. Maybe next year I'll get them to come. But right now they don't see VMware as their thing; that's somebody else's thing. NSX would have to bring them into that kind of VMware world. So we'll see, maybe we'll get to do that.

Anything else that you saw that you're excited about here?

One of the biggest things, and this actually I guess has been out since March or April when vSphere 6 was released, is Virtual Volumes. I'm both the systems admin and the storage admin.

Okay, so VVols to make your life easier.
Yeah, bringing those together: being able to set policies on the storage array about caching or tiering, whatever tools my array has, compression, dedupe, and being able to just assign those at the VMware level, knowing they're getting the right properties. Like, SQL compresses very well, but you don't ever want to try to dedupe it. It won't work. Being able to declare that up front in VMware, and making sure that no matter where I move that volume to balance things, it's going to observe that policy. That would make my life easier by quite a bit, I think.

Awesome. All right, Jonathan, well, listen, thanks very much for coming on theCUBE. It was really a pleasure having you, and it's great to meet you. Good luck going forward.

Thank you.

All right, keep it right there, everybody. We will be back right after this. This is theCUBE. We're live from VMworld 2015 at Moscone North. We'll be right back.