From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. Welcome to theCUBE Studios for another CUBE Conversation, where we go in depth with thought leaders driving business outcomes with technology. I'm Peter Burris, and today we're going to be talking about some of the challenges that enterprises face as they try to do a better job of gaining visibility into their user-oriented digital experience. Now what do we mean by that? We mean, basically, that as a company moves to a digital business, a digital engagement model, it increasingly mediates its conversations with customers through digital means. And if you don't have visibility into how that's going, you're going to end up with unhappy customers. Now to have that conversation, we've got Alex Henthorn-Iwane, VP of Product Marketing at ThousandEyes. Alex, welcome to theCUBE. Thanks for having me, Peter. Well, let's start by saying a little bit about ThousandEyes. What is ThousandEyes? So ThousandEyes is a company that delivers visibility into digital experiences that run over the internet, as well as over your own networks, of course. And the whole point is that when you move to the cloud and to these cloud-based ecosystems that everybody's using to deliver digital, you lose a lot of control. You no longer own the software, the infrastructure, or the networks that your users are connecting over. When you lose that control, you really, really need the visibility so that you can optimize, and so that you can fix issues when they happen, because they do happen. So that's really what we deliver. We deliver that visibility. So that's a big promise. But let's focus in on the more proximate thing. You've just delivered a report that looks specifically at digital experience. What is the problem that the report's addressing? Right, so the problem that we're trying to address with this report, a digital experience performance benchmark report that we've released, is one of taking away the subjectivity around performance management, because when you're dealing with the internet, it's kind of a black box. What constitutes the minimum bar of good? Where should you be at, minimally? And then how are you doing competitively? How do you compare with the top folks in your peer group, in your industry? That's what we wanted to address with this particular report. So if I think about the challenges companies are facing, there are so many moving parts. You think you're getting a single service, but there are so many moving parts even within that service, even within that application, that you want to be able to compartmentalize it, break it down, and start to isolate some of the issues, whether that's a technology issue or a service supply issue. So what are some of the key considerations that are contributing to, say, better or worse digital experience? Well, the way we looked at this, just to give a little context and to answer your question, is we thought, let's look at the top 20 consumer websites in each of retail, media and entertainment, and travel and hospitality. So 60 in all. And then let's measure them as users will experience them. And to the point of providers, pretty much all of them rely on a content delivery network, a CDN provider. There's a variety of them in the market, obviously. 
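A quick aside on that CDN point: one rough way to see which CDN a given site fronts with is to follow its DNS CNAME chain and look for well-known CDN domains. The sketch below is purely illustrative and not part of the report's methodology; it assumes the dnspython package is installed, the suffix list is far from exhaustive, and the hostname is a placeholder.

```python
# Illustrative sketch only: follow a hostname's CNAME chain and match it
# against a few well-known CDN domains. Assumes `pip install dnspython`.
import dns.resolver

# A handful of recognizable CDN hostname suffixes (not exhaustive).
CDN_HINTS = {
    "akamaiedge.net": "Akamai",
    "edgekey.net": "Akamai",
    "cloudfront.net": "Amazon CloudFront",
    "fastly.net": "Fastly",
    "cdn.cloudflare.net": "Cloudflare",
}

def cdn_for(hostname: str) -> str:
    name = hostname
    # Follow CNAME records, bounded so a misconfigured loop can't run forever.
    for _ in range(10):
        try:
            answer = dns.resolver.resolve(name, "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            break
        name = str(answer[0].target).rstrip(".")
        for suffix, provider in CDN_HINTS.items():
            if name.endswith(suffix):
                return f"{provider} (via {name})"
    return f"no known CDN suffix found (chain ended at {name})"

print(cdn_for("www.example.com"))  # placeholder hostname
```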
And so what we're doing is saying, let's measure from about 36 cities with a browser. This is automated, obviously, using our monitoring agents. You're simulating users. Simulating users. Let's go and measure what that experience is, tease out some of the performance factors, and look at those things. Because those are the kinds of things that web operations teams and digital operations teams are really concerned with. That stuff is the foundation you have to build from a performance point of view, so that all the other, more subjective things you build into your website experience have a time budget to work within. So those performance factors are the foundational elements that have higher-level impacts on a variety of other application characteristics. What are some of the key performance factors that you identified as being important? So we looked at a few different metrics that you can measure. One of the most foundational ones, and people forget about it a lot, is DNS. That's the process where, when you type in your URL, something has to convert that into a numeric address that the internet can actually get to. So that's the DNS lookup. That's one piece. A second piece is the network speed, or latency, from where a user sits to the server that's caching your content, that CDN provider's server. We looked at that on a round-trip basis. The reason we looked at that, by the way, is that when you first interact with a website, before you get the first byte of content delivered, you have to take 11 one-way trips across the wire, meaning across the internet. So that network time is important. So: DNS, network. Then we looked beyond that to the time it takes to deliver that first byte, which is also called HTTP response time. And then, to get somewhat of an experiential lens, we took one more measurement, which is page load time. Page load time is essentially the time it takes to load the content into the browser and start rendering it. So that's something a little bit more experiential. But that's the stack of performance metrics. So: find who you're conversing with, set up the conversation, and then get that first bit of information back. The page load is going to be significantly subjective, because different pages are different sizes. But that's what you set up these benchmarks to do: find the site, set up the session, and then get that first bit back. So what kinds of numbers are you suggesting that people start to benchmark themselves against? So one of the things we found when we talked with customers is this sense of, we don't know what the minimum bar of good is. So what we decided to do is define what we're calling an internet performance bar. We said, let's look at the averages, the median sort of numbers. We took 300-plus million unique measurements in this study, and we looked at things like what correlates with delivering well at the top end on page load and HTTP response time. Based on that, we decided that for three of these metrics we could define what we thought is a minimum bar of good in the US market. Because if you go outside the US to other geographies, things can differ a lot, and they do. And basically what we said is: for DNS, you should be at 25 milliseconds response time. Let me write these down. 25 milliseconds for DNS. Right: 25 milliseconds for DNS, 15 milliseconds round-trip latency for the network, from your user to the CDN edge server, and then 350 milliseconds to deliver that first byte, which is the HTTP response time. So 25, 15, and 350 milliseconds. 
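To make those numbers concrete, here is a minimal sketch of how you might spot-check a single site against that 25/15/350 millisecond bar from one vantage point. It is not the report's methodology, which measured from dozens of cities with browser-based agents; it uses only the Python standard library, approximates the round trip to the edge with a TCP connect, treats total time to first byte as the HTTP response time, and uses a placeholder hostname.

```python
# Rough, single-vantage-point spot check against the "internet performance bar"
# discussed above. Standard library only; numbers will vary run to run.
import socket
import time
import urllib.request

DNS_BAR_MS = 25    # DNS lookup
RTT_BAR_MS = 15    # network round trip to the edge (approximated by TCP connect)
TTFB_BAR_MS = 350  # HTTP response time / time to first byte

def measure(host: str, url: str):
    # DNS lookup time: resolve the hostname to an IP address.
    t0 = time.perf_counter()
    ip = socket.gethostbyname(host)
    dns_ms = (time.perf_counter() - t0) * 1000

    # Network round trip: a TCP connect to port 443 costs roughly one round trip.
    t0 = time.perf_counter()
    with socket.create_connection((ip, 443), timeout=5):
        rtt_ms = (time.perf_counter() - t0) * 1000

    # HTTP response time: time until the first byte of the response arrives
    # (this repeats DNS, connect, and TLS, so it is only a rough proxy).
    t0 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)
        ttfb_ms = (time.perf_counter() - t0) * 1000

    return dns_ms, rtt_ms, ttfb_ms

if __name__ == "__main__":
    host = "www.example.com"  # placeholder site to benchmark
    dns_ms, rtt_ms, ttfb_ms = measure(host, f"https://{host}/")
    for name, value, bar in [("DNS lookup", dns_ms, DNS_BAR_MS),
                             ("Round trip", rtt_ms, RTT_BAR_MS),
                             ("HTTP response", ttfb_ms, TTFB_BAR_MS)]:
        verdict = "meets" if value <= bar else "misses"
        print(f"{name}: {value:6.1f} ms ({verdict} the {bar} ms bar)")
```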
And what we found is that if you're delivering at that level in the US market, you're in pretty good shape, generally speaking, when you compare to those top 60 websites. And that correlates pretty well with being able to deliver a good page load time. So if the page load time were equal, if the amount of data being loaded into the page were equal, those would be the things that would ultimately determine how fast the digital experience of the user was. Right, exactly. They build the foundation, a strong foundation. Now, on top of that, we also of course looked at the 20 websites for each of these vertical industries. We charted out scatter plots and things like that, so you could see, all right, are there any patterns there that are helpful for me as a retailer, as an e-commerce provider, for example? For example, we found that there were two major clusters of performance in retail in terms of HTTP response time. One cluster was 300 milliseconds or better, and one cluster was around 400 to 500 milliseconds. And when we share this with retailers, of course they say, I know which one I want to be in. I want to be in the faster one, right? But just knowing that is helpful, because let's say 350 milliseconds is the minimum bar of good, the internet performance bar. If you can also say, hey, I'm looking at the cohort that I'm competing against and a bunch of them are doing better than that, maybe even significantly better than that, then between those two things I can make good investment decisions. But to go back to that notion of budget, what that basically means is that if I am at a disadvantage on these foundational metrics, my budget for the complexity and experience value that I can put into my digital assets is reduced. Right, because digital experience runs on a clock, right? Human visual recognition happens in something like 13 or 14 milliseconds, but human response time to visual stimulus is roughly 250 milliseconds. So that's the clock the customer experience of all your content and digital assets runs against, and what you're doing is protecting a strong time budget to deliver all the rest of it. Right, so now let's talk a little bit about what people would do with this information. Because the report articulates very nicely the nature of the problem, provides the benchmarks that people can actually use, and provides insight into how they can benchmark themselves against generalities, but also against specific industries and, as you said, cohorts. But one of the other things you do is use this same visibility tooling to capture data that looks at other things as well: cloud, DNS lookups, et cetera. And I've noticed that you can actually start mapping out topologies of how traffic is moving in this end-to-end world. How does a customer use that information to get themselves inside the good cohort group, or improve their performance relative to a competitor? So there are a few things you can do with that kind of data, based on the benchmarks, your own performance, and the kind of topology insights we provide, which show exactly how all this stuff connects over the internet. 
One is you can say, look, if I'm not meeting the bar in a market that I care about, I'm going to go to my provider and hold them accountable to do better, because I know that this is achievable. So that's one thing. Another is that I may make some network architecture decisions. Maybe there's a broadband provider that runs a little slower; maybe I'll get a private connection to them so my user experience runs well. But the third thing is operational: it's the internet, things go wrong sometimes, providers make mistakes. And the problem is that in the cloud there are so many factors that it's really hard, without good visibility, without that topology view, to know even who to call. So with the data that we provide, you can operationally go and hold the right provider accountable to fix a problem or optimize performance when things go wrong. And they will. But because of your visibility, it's not just finding the right provider, it's actually then being able to say, and this is what I would like you to do. Right, exactly. Because you can share very rich information with them so they can take action. I mean, if you put yourself in the shoes of the provider, they probably get blamed a lot of the time for things that really aren't their fault. So what do they do? They create a defensive mechanism of 12 layers of support, and there's a lot of plausible deniability there, right? Because they don't want to be chasing things that aren't actually their problem. If you give them the kind of visibility that we provide, with all this rich data and deep views into the internet, well, one, you're going to lower the defenses, because they've got something to work with. Two, they're actually going to have enough information to do something about it. And that's how you actually manage the internet, in a sense. You manage it through your providers, but you have to have good information for them. So I think it was Sy Syms, the guy who sold clothing to men for many years, who said that an educated, informed customer is the best customer. And so what you're really trying to do with this report is establish what the problem is, establish these benchmarks that people should be pointing to, give your customers the opportunity to compare themselves where comparisons are meaningful, but then, very importantly, use that knowledge of their operations to become more informed purchasers of services, or architects of their own assets. Have I got that right? Yeah, exactly. Because if you're able to partner effectively with all those providers, and in an e-commerce setting I think you're probably dealing with tens, dozens of different third-party service providers, just with all the API content you're pulling in, plus all the ISPs, CDNs, and everything else. It's a lot. It's a big ecosystem. If you can partner effectively, you're just going to do a better job. You're going to get better service, you're going to get better outcomes, and you're ultimately going to deliver a better experience, which is great for the top and the bottom line. Absolutely, and so it's really a way of tying together the business outcomes of the improved service, the ability to increase the value of the service by altering the way you use your time budget, and then using that as a basis for establishing very high-quality operations, both inside your own shop and in the connections that you have with these third-party providers. Absolutely, yeah. 
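To make the question of who to call concrete, here is a minimal sketch of the underlying idea, not the ThousandEyes product: walk the path hop by hop and flag where latency accumulates, which is the kind of evidence that points you toward the right provider. It assumes a Unix-like system with the standard traceroute command on the PATH; the 50 millisecond jump threshold is arbitrary, single-probe traceroute numbers are noisy, and the hostname is a placeholder.

```python
# Rough "who do I call" helper: run the system traceroute and flag the hop
# where round-trip latency jumps. Output format varies by platform, so the
# parsing here is deliberately loose.
import re
import subprocess

def hop_latencies(host: str):
    # -n: numeric output (no reverse DNS), -q 1: one probe per hop.
    out = subprocess.run(
        ["traceroute", "-n", "-q", "1", host],
        capture_output=True, text=True, timeout=120,
    ).stdout
    hops = []
    for line in out.splitlines():
        # A typical hop line looks like: " 3  203.0.113.1  12.345 ms"
        m = re.match(r"\s*(\d+)\s+(\S+)\s+([\d.]+) ms", line)
        if m:
            hops.append((int(m.group(1)), m.group(2), float(m.group(3))))
    return hops

if __name__ == "__main__":
    prev = 0.0
    for hop, addr, ms in hop_latencies("www.example.com"):  # placeholder host
        jump = ms - prev
        flag = "  <-- latency jumps here" if jump > 50 else ""
        print(f"hop {hop:2d}  {addr:15s}  {ms:7.1f} ms{flag}")
        prev = ms
```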
So the whole point of our research generally, and we have multiple of these reports, is to provide that context, so that when you use the kind of data we provide for you, you have the bigger picture in mind. You know what's normal and what's not, and you know that you're making a reasonable request when you hold a provider accountable to a certain level of performance, for example. All right, so what's next? So we have a few different reports that we've done in the past. We did a public cloud network performance report. We did a global DNS report. By the way, I looked at that one and it was really interesting. There are some fascinating findings in all of these. We've got some other research planned for this year. We're looking at things like performance in China, for example, but we'll also be refreshing the reports we did last year; we're going to go on at least an annual refresh for them. So we're trying to build out a body of research over time that complements, obviously, all the product and solution things that we do. And we know from talking to customers that they really appreciate being able to look at things through a little bit bigger lens. But it all starts with this premise: look, this is a really important problem. Let's segment it into different classes of domains, so you have the benchmarks and you know what constitutes good versus bad. And then use the data from visibility tools and whatnot to improve your set of operational capabilities. Absolutely. Excellent. Alex Henthorn-Iwane, Vice President of Product Marketing at ThousandEyes, thanks very much for being on theCUBE. Thanks very much for having me. And once again, I'm Peter Burris. Until next time.