It's theCUBE! Covering the Virtual Vertica Big Data Conference 2020, brought to you by Vertica. Hello, everybody, welcome back to the Virtual Vertica Big Data Conference. My name is Dave Vellante, and you're watching theCUBE, the leader in digital coverage. This is the Virtual BDC. As I said, theCUBE has covered every Big Data Conference since its inception, and we're pleased to be a part of this, even though it's challenging times. I'm here with Dan Woicke, Senior Director of CernerWorks Engineering. Dan, good to see you. How are things where you are in the middle of the country? Good morning. Challenging times, as usual. We're trying to adapt to having the kids at home, out of school, trying to figure out how they're supposed to get on their laptops and do virtual learning. We all have to adapt to it and figure out how to get by. Well, it sure would have been my pleasure to meet you face to face in Boston at the Encore Casino. Hopefully next year we'll be able to make that happen. But let's talk about Cerner and CernerWorks Engineering. What is that all about? So CernerWorks Engineering, we used to be part of what's called IP, or intellectual property, which is basically the organization in Cerner that does all of our software development. But we made a decision about five years ago to organize my team with CernerWorks, which is the hosting side of Cerner. About 80% of our clients choose to have their domains hosted within one of our two Kansas City data centers: one in Lee's Summit, in south Kansas City, and a brand new one on our main campus in downtown North Kansas City. So we have about 27,000 environments that we manage in the Kansas City data centers. What my team does is develop software to make it easier for us to monitor, manage, and keep those clients healthy within our data centers. Got it. I mean, I think of Cerner as a real sort of advanced health tech company.
It's the combination of healthcare and technology, the collision of those two. But maybe describe a little bit more about Cerner's business. So we have, like I said, 27,000 facilities across the world, growing each day, thank goodness. And our goal is to ensure that we reduce errors and digitize the entire medical record for all of our clients. We do that by having a consulting practice, we do that by having engineering, and then we do that with my team, which manages those particular clients. And that's kind of how we got introduced to the Vertica side as well; when we brought them in about seven years ago, we were actually able to take a tremendous leap forward in how we manage our clients. And I'd be more than happy to talk deeper about how we do that. Yeah, and as we get into it, I want to understand, I mean, healthcare is all about outcomes, it's about, you know, patient outcomes, and you sort of work back from there. You know, IT for years has obviously been a contributor, but sort of removed and somewhat indirect from those outcomes. But in this day and age, especially in an organization like yours, it really starts with the outcomes. I wonder if you could sort of ratify that and talk about what that means to Cerner. So when you talk about medical outcomes... Yeah, outcomes of your business. Yeah, so there's two different sides to Cerner, right? There's the medical side, the clinical side, which is obviously our main practice, and then there's the side that I manage, which is more of the operational side. Both of them are very important, but they go hand in hand. On the operational side, the goal is to ensure that our clinicians are on the system and they don't know they're on the system, right? Things are progressing fine. Doctors don't want to be on the system, trust me.
My job is to ensure that they're having the most seamless experience possible while they're on the EMR, and have it just be one of their side jobs as opposed to taking their attention away from the patients. Does that make sense? Yeah, it does. I mean, EMR and meaningful use around the Affordable Care Act really dramatically changed things. You know, people had to demonstrate in order to get paid, and so that became sort of an unfunded mandate for folks. And you really had to sort of respond to that, didn't you? We did. We did that about three to four years ago, and we had to help our clients get through what's called meaningful use. There were different stages of meaningful use. What we did is we have a website called the Lights On Network, which is free to all of our clients. Once you get onto the Lights On Network, you can actually see how you're measured and whether or not you're completing the different necessary tasks in order to get those payments for meaningful use. It also allows you to see what the performance is on your domain: how clinicians are doing on the system, how many hours they're spending on the system, how many orders they're executing. All of that is completely free and visible to all our clients on the Lights On Network. And that's actually backed by some of the Vertica software that we've invested in. Yeah, so before we get into that, I mean, it sounds like your mission really is just great user experiences for the people that are on the network, you know, full stop. So one of the things that we invented about 10 years ago is called RTMS timers, the Response Time Measurement System. It started off as a way of us proving that clients are actually using the system, and now it's turned into more of the user outcomes. What we do is we collect 2.5 billion timers per day across all of our clients across the world, and every single one of those records goes to the Vertica platform.
And then we've also developed a system on top of that which allows us, in real time, to see whether or not they're deviating from their normal. So we do baselines for every hour of the week, and if they're deviating from those baselines, we can immediately call our service center and have them engage the client before they call in. So Dan, I wonder if you could paint a picture. By the way, that's awesome. I wonder if you could paint a picture of your analytics environment. What does it look like? Maybe give us a sense of the scale. Okay, so I've been describing how we operate our remote-hosted clients in the two Kansas City data centers, but the software that we write also supports our client-hosted clients as well. So not only do we take care of what's going on in the Kansas City data centers, but we also write software to ensure that all of our clients are treated the same, and we provide the same level of care and performance management across all those clients. What we do is we have 90,000 agents split across all these clients across the world, and every single hour we're committing a billion rows of operational data to Vertica. So I talked a little bit about the RTMS timers, but we do things just like everyone else does for CPU, memory, and Java heap. We can tell you how many concurrent users are on the system. I can tell you if an application goes down unexpectedly, a crash. I can tell you the response time from the network, as most of us use Citrix at Cerner. And so what we do is we measure the amount of time it takes from the client side, the PCs sitting in the hospitals, round trip to the Citrix servers that are sitting in the Kansas City data center; that's called the RTT, the round trip transaction time.
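The hourly baselining Dan describes can be sketched roughly like this. This is a minimal illustration, not Cerner's actual implementation: the record layout, the three-sigma threshold, and the function names are all assumptions.

```python
from statistics import mean, stdev

def build_baselines(history):
    """Group historical response times by (day_of_week, hour) and
    compute a mean/stdev baseline for each hour-of-week bucket."""
    buckets = {}
    for day, hour, response_ms in history:
        buckets.setdefault((day, hour), []).append(response_ms)
    # Need at least two samples per bucket to compute a stdev.
    return {k: (mean(v), stdev(v)) for k, v in buckets.items() if len(v) > 1}

def deviates(baselines, day, hour, response_ms, sigmas=3.0):
    """Flag a measurement that falls outside the baseline for its hour
    of the week, so the service center can engage proactively."""
    base = baselines.get((day, hour))
    if base is None:
        return False  # no baseline yet for this hour of the week
    mu, sd = base
    return sd > 0 and abs(response_ms - mu) > sigmas * sd
```

In practice this kind of comparison would likely run as queries over the timer rows in Vertica rather than in application code; the sketch only shows the shape of the idea.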
And what we've done over the last couple of years is we've switched from just summarizing CPU and memory and all that high-level stuff to going down to the user level. So what are you doing, Dr. Smith, today? How many hours are you using the EMR? Have you experienced any slowness? Have you experienced any hourglassing within your application? Have you experienced, unfortunately, maybe a crash? Have you experienced any slowness compared to your normal use case? That's the step we've taken over the last few years: going from summarization of high-level CPU and memory over to outcome metrics, which capture what is really happening with a particular user. So really granular views of how the system is being used, and deep analytics on that. I wonder... go ahead, please. We weren't able to do that by summarizing things in traditional databases. You have to actually have the individual rows; you can't summarize information. You have to have individual metrics that point to exactly what's going on with a particular connection. So okay, so the MPP architecture, the columnar store, and the scalability of Vertica, that was key. That was my next question. I want you to kind of take us back to the days of traditional RDBMS and then you brought in Vertica. Maybe you could give us a sense as to why, and what that did for you, kind of the before and after. So I've been kind of painting a picture going forward here about how, traditionally, eight years ago, all we could do is summarize information. If CPU was going to jump up 8%, I could alarm the data center and say, hey listen, CPU looks like it's spiking. Maybe an application is hanging more than it has been in the past. Things are a little slower, but I wouldn't be able to tell you who's affected. And that's where the whole thing changed when we brought Vertica in six years ago: we're able to take those 90,000 agents and commit a billion rows per hour of operational data.
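The argument for keeping individual rows instead of roll-ups can be illustrated with a toy example. The data, threshold, and function name below are hypothetical; the point is only that a healthy domain-wide average can completely mask a handful of struggling users.

```python
from statistics import mean

# Hypothetical per-interaction timer rows: (user, response_ms).
# One clinician is having a terrible day; hundreds are fine.
rows = [("dr_smith", 4000)] * 5 + [("dr_jones", 200)] * 495

# Summarized view: the domain looks healthy on average (~238 ms),
# so a roll-up alone would never surface Dr. Smith's experience.
domain_avg = mean(ms for _, ms in rows)

def slow_users(rows, threshold_ms=1000):
    """Row-level view: group timers by user and flag anyone whose
    average response time exceeds the threshold."""
    per_user = {}
    for user, ms in rows:
        per_user.setdefault(user, []).append(ms)
    return sorted(u for u, v in per_user.items() if mean(v) > threshold_ms)
```

With the individual rows retained, the outlier user is trivially recoverable; with only the pre-computed average stored, that information is gone.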
And I can tell you exactly what's going on with each of our clinicians. Because it's important for an entire domain to be healthy, but what about the 10 doctors that are experiencing frustration right now? If you summarize that information and roll it up, you'll never know what those 10 doctors are experiencing. And then guess what happens? They call the data center and complain. The squeaky wheels; we don't want that. We want to be able to show exactly who's experiencing bad performance right now and be able to reach out to them before they call the help desk. Sure, so you can be proactive there. So you've gone from "Houston, we have a problem, we really can't tell you what it is, go figure it out," to "we see that there's an issue with these docs or these users, go figure that out," and focus narrowly on where the problem is as opposed to playing whack-a-mole. Exactly. And the other big thing that we've been able to do is correlation. So we operate two gigantic data centers, and there are things that are shared: switches, network, shared storage. So if there's an issue with one of those pieces of equipment, it could affect multiple clients. Now that we have every row in Vertica, we have a new program in place called performance abnormality flags. What we're able to do is provide a website, in real time, that goes through the entire stack, from Citrix to network to database to backend tier, all the way to the end-user desktop. And so if something is related, because we have a network switch going bad in the data center, or something's backing up slow, you can actually see which clients are on that switch. What we did five years ago, before this, is we would deploy five different teams, right? Because five clients would call in and they would all have the same problem. So here you are with disparate teams trying to investigate why the same problem is happening.
And now that we have all of the data within Vertica, we're able to show that in a real-time fashion through a very transparent dashboard. That's the operational metrics throughout the stack, right? Yeah, because it's very complex, right? I just labeled like five different things in the stack, from your end-user device all the way through the backend tier database and all the way back. All of that has to work properly, right? Including the network. How big is this? What are we talking about? However you measure it: terabytes, clusters, what can you share there? So you mean like the amount of data that we persist within our clusters? Yeah, give us the fun facts. Petabytes, yeah, for sure. In Vertica right now, we have two petabytes of data, and I purge it out every year; I keep one year's worth of data within two different clusters. So we have the two different clusters that I've been describing. What we've done is we set Vertica up in both data centers to be highly redundant. One of those is configured to do real-time analysis and correlation research, and the other one is to provide service to what I described earlier as our Lights On Network. It's a very dedicated, hardened cluster in one of our data centers that allows the Lights On Network to provide transparency directly to our clients. So we want that one to be pristine and fast, and nobody touches it, as opposed to the other one, where people are doing real-time ad hoc queries, which sometimes aren't the best thing in the world; no matter what kind of database you have or how fast it is, people do bad things in databases, and we just don't want that to affect what we show our clients in a transparent fashion. Yeah, I mean, for our audience, look, Vertica has always been aimed at these big, hairy analytic problems. It's not for a little tiny data mart in a department; it's really for the big scale problems.
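The shared-infrastructure correlation behind the performance abnormality flags can be sketched as follows. The topology, component names, and the `suspect_components` function are all illustrative assumptions; Cerner's actual flagging system is not public.

```python
from collections import Counter

# Hypothetical topology: which shared components each client depends on.
topology = {
    "client_a": {"switch": "sw-17", "storage": "san-2"},
    "client_b": {"switch": "sw-17", "storage": "san-5"},
    "client_c": {"switch": "sw-03", "storage": "san-2"},
}

def suspect_components(flagged_clients, topology, min_clients=2):
    """When multiple clients raise abnormality flags at once, count how
    many of them share each piece of infrastructure. A component shared
    by several flagged clients is a likely common cause, so one team
    can investigate it instead of five teams chasing the same problem."""
    counts = Counter()
    for client in flagged_clients:
        for kind, name in topology.get(client, {}).items():
            counts[(kind, name)] += 1
    return [comp for comp, n in counts.items() if n >= min_clients]
```

For example, if `client_a` and `client_b` flag together, the shared switch `sw-17` surfaces as the suspect; if `client_a` and `client_c` flag together, the shared storage does.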
I wonder if I could ask you, so you guys are obviously healthcare, with HIPAA and privacy; are you doing anything in the cloud, or is it all on-prem today? So the operational space that I manage is all on-premises, and that is changing. As I was describing earlier, we have an initiative to go to AWS and provide levels of service to countries like Sweden, which does not want any data to leave the country's borders, whether it be operational data or PHI. And so we have to be able to adopt Vertica Eon Mode in order to provide the same services within Sweden. Obviously, Cerner's not going to go out and build a data center in every single country that requires this, so we're going to leverage our partnership with AWS to make it happen. Okay, so I was going to ask you, so you're not running Eon Mode today; it's something that you're obviously interested in. AWS will allow you to keep the data locally in that region. In talking to a lot of practitioners, they're intrigued by this notion of being able to scale storage independently from compute. They've said to us that it's a much more efficient way: if I'm out of storage, I don't have to buy compute along with it in chunks, and vice versa. So maybe you could share with us what you're thinking. I know it's early days, but what's the logic behind the business case there? I think you're 100% correct in your assessment of separating compute from storage, and we do exactly what you're saying. We buy a server, and it has so much compute on it and so much storage. And obviously it's not scaled properly, right? Either storage runs out first or compute runs out first, but you still pay the price for the entire server. So that's exactly why we're doing the POC right now for Eon Mode. And I sit on Vertica's CAB, the customer advisory board, and they've been doing a really good job of taking our requirements and listening to us as to what we need.
And that was probably number one or two on everybody's list: separating storage from compute. And that's exactly what we're trying to do right now. Yeah, you know, it's interesting. I've talked to some other customers that are on the customer advisory board, and Vertica is one of these companies that's pretty transparent about what goes on there. And I think for the early adopters of Eon Mode, there were some challenges with getting data into the new system. I know Vertica's been working on that very hard, but you guys push Vertica pretty hard, and from what I can tell, they listen. Your thoughts? They do listen. They do a great job. And even though the Big Data Conference is canceled, they're committed to having us attend the CAB meeting virtually on Monday, so I'm looking forward to that. But they do listen to our requirements, and they've been very, very responsive. Oh, nice. So I wonder if you could give us some final thoughts as to where you want to take this thing. Like, if you look down the road, you know, a year or two, what does success look like, Dan? That's a good question. Success means that we're a little bit more nimble as far as the different regions across the world that we can provide our services to. I want to do more correlation. I want to gather more information about what users are actually experiencing. I want our phone to never ring in our data center. I know that's a grand thought, but I want to be able to look forward to measuring the data internally and reaching out to our clients when they have issues, and then doing the proper correlation so that I can understand how things are intertwined and whether multiple clients are having an issue. That's the goal going forward. Well, and, you know, in these trying times, during this crisis, it's critical that your operations are running smoothly. You know, the last thing that organizations need right now, especially in healthcare, is disruption.
So thank you for all the hard work that you and your teams are doing. You know, I wish you and your family all the best. Stay safe, stay healthy, and thanks so much for coming on theCUBE. I really appreciate it. Thanks for the opportunity. You're very welcome. All right, and thank you everybody for watching. Keep it right there; we'll be back with our next guest. This is Dave Vellante for theCUBE, covering the Virtual Vertica Big Data Conference. We'll be right back.