Hello everyone, welcome to theCUBE's presentation of the AWS Startup Showcase. The theme of this showcase is MarTech: the emerging cloud-scale customer experiences. This is season two, episode three of the ongoing series covering the hot startups talking about analytics, data, all things MarTech. I'm your host, John Furrier. We're joined by Christian Wiklund, founder and CEO of unitQ, here to talk about harnessing the power of user feedback to empower marketing. Thanks for joining us today.

Thank you so much, John. Happy to be here.

With these new shifts in the market, when you've got cloud scale, open source software is completely changing the software business; we know that. There's no longer a software category per se; it's cloud, integration, data. That's the new normal, that's the new category. So as companies build their products and want to do a good job, it used to be that you'd send out surveys and try to find product-market fit, and if you were smart, you got it right the third, fourth, tenth time. If you were lucky, like some companies, you got it right the first time. But the holy grail is to get it right the first time. And now there are new data acquisition opportunities, which you guys are in the middle of, that can tap customers or prospects or end users to get data before things are shipped or built, or to iterate on products. This is the customer feedback loop, the voice-of-the-customer journey. It's a gold mine, and it's your secret weapon. Take us through what this is about now. I mean, it's not just surveys. What's different?

So yeah, if we go back to why we are building unitQ: we want to build a quality company. How do we enable other companies to build higher-quality experiences by tapping into already existing data assets? And the one we're particularly excited about is user feedback. My co-founder Nik and I are now doing our second company together.
We've spent 14 years together, so we're like an old married couple. We accept each other and we don't fight anymore, which is great. We did a consumer company called Skout, which was sold five years ago. Skout was early in the whole mobile-first wave; I guess we actually were a mobile-first company. And when we launched, we immediately had the entire world as our marketplace, like any modern company. We launched a product with support for many languages, on multiple platforms: Android, iOS, web, big screens, small screens. And that brings complexity when it comes to staying on top of the quality of the experience, because how do I test everything pre-production? How do I make sure our Polish Android users are having a good day? And we found at Skout that, personally, I could discover million-dollar bugs just by drinking coffee and reading feedback. And we thought, there has to be a better way to harness the end-user feedback that users are leaving in so many different places. So what unitQ does is aggregate all the different sources of user feedback: app store reviews, Reddit posts, tweets, comments on your Facebook ads. It can be Better Business Bureau reports (we don't like to get too many of those, of course), but really anything in the public domain that mentions or refers to your product, we want to ingest into this machine. And then all the private sources: you probably have a support system deployed, a Zendesk or an Intercom; you might have a chatbot like an Ada; and so forth. Your end users are going to leave a lot of feedback there as well. So we take all of these channels, plug them into the machine, and then we're able to take this qualitative data. And I actually think that when an end user leaves a piece of feedback, it's an act of love.
They took time out of their day to tell you, hey, this is not working for me, or hey, this is working for me. But how do we package this very messy, multi-channel, multi-language, all-over-the-place data? How can we distill it into something quantifiable? Because I want to be able to monitor these different signals. I want to turn user feedback into time series, because with time series I can treat this the same way Datadog treats machine logs: I want to see anomalies, and I want to know when something breaks. So what we do is break down your data into what we call quality monitors, which are basically machine learning models that aggregate the same type of feedback data into very fine-grained, discrete buckets. We deploy up to a thousand of these quality monitors per product, so we can get down to the root cause, say, "password reset link is not working." And it's at that root-cause granularity that we see companies take action on the data. Historically, the workflow between marketing, support, engineering, and product has been a bit broken. They've been siloed from a data perspective and from a workflow perspective: support will get a bunch of tickets around some issue in production, and they're trained to copy and paste some examples, throw it over the wall, file a Jira ticket, and then they don't know what happens. What we see with the platform we've built is that these teams are able to rally around a single source of truth. They can say: yes, the password reset link seems to have broken. This is not user error. It's not a "fix later" or "can't reproduce." We're looking at the data, and yes, something broke; we need to fix it.

I mean, the data silo is a huge issue. Different channels, omni-channel now; there are more and more channels that people are talking in.
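The quality-monitor idea described here (bucket raw feedback into fine-grained monitors, treat the daily counts as time series, alert on anomalies) can be sketched roughly as follows. The keyword matching, monitor names, and z-score threshold are all illustrative stand-ins of my own; unitQ's actual monitors are machine learning models, not keyword lists:

```python
from collections import Counter, defaultdict
from statistics import mean, pstdev

# Hypothetical keyword stand-ins for the ML "quality monitors" described
# above; a real monitor would be a learned classifier.
MONITORS = {
    "password_reset_broken": ["password reset", "reset link"],
    "double_billed": ["double billed", "charged twice"],
}

def bucket_feedback(items):
    """Map raw (day, text) feedback pairs into per-monitor daily counts,
    i.e. turn messy multi-channel feedback into time series."""
    series = defaultdict(Counter)  # monitor -> {day: count}
    for day, text in items:
        lowered = text.lower()
        for monitor, phrases in MONITORS.items():
            if any(p in lowered for p in phrases):
                series[monitor][day] += 1
    return series

def is_anomalous(counts, today, z_threshold=3.0):
    """Flag a day whose count sits far above the historical mean.
    (Days with zero matches are absent from the series; a fuller
    version would zero-fill the calendar first.)"""
    history = [c for d, c in counts.items() if d != today]
    if len(history) < 2:
        return False
    mu, sigma = mean(history), pstdev(history)
    return counts.get(today, 0) > mu + z_threshold * max(sigma, 1e-9)
```

With a steady baseline of one "password reset" complaint a day, a sudden day of ten trips the monitor, which is the "Datadog for user feedback" behavior described above.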
So that's huge. I want to get to that. But also, you said it's a labor of love to leave a comment or feedback. I remember from my early days breaking into the business at IBM and Hewlett-Packard, where I worked: people who complain are your most loyal customers, if you service them. So there are complaints, there's leaving feedback, and then there's also reading between the lines, like app errors, or what's going on under the covers that people may not be complaining about but where they're leaving gesture data or some sort of digital trail. So this is the confluence of a multitude of data sources, and then you've got the siloed locations. It's a complicated problem.

It's very complicated. I came to the Bay Area in 2005; my dream was to be a quant analyst on Wall Street, and I ended up in QA at VMware. So I started at VMware in Palo Alto, and I didn't have a driver's license; I had to bike around, which was super exciting. And we were shipping boxed software, literally a box with a DVD that had been burned. And if that DVD had bugs on it, guess what? It would be very costly to ship out a fix. I love the VMware example because the test cycles were long and brutal. It was like a six-month deal to get through all the different cases, and there couldn't be any bugs. But then the industry moved into the cloud: CI/CD, ship at will. And if you look at a modern company, you'll have at least 20-plus integrations in your product: analytics SDKs, ad SDKs, authentication SDKs, and so forth. These integrations morph and break, and you have connectivity issues. Is your product working as well on Caltrain, riding up and down, as it does on Wi-Fi? You have language-specific bugs. Android is also quite a fragmented market; the binary may not perform as well on one device versus another. So how do we make sure we test everything before we ship?
The answer is: we can't. There's no company today that can test everything before they ship, particularly in consumer. And the epiphany we had at our last company, Skout, was: hey, wait a minute, the end users are testing every configuration. They're sitting on the latest device and the oldest device. They're on Japanese, on Swedish. They're in different code paths, because our product executed differently depending on whether you were a paid user or a freemium user, or on certain demographic data. There are so many permutations you would have to test. And PagerDuty actually came out with a study recently, from surveying a bunch of customers, saying 51% of all end-user-impacting issues are discovered first by the end user. And again, the cool part is they will tell you what's not working. So how do we tap into that? What I like to say is: your end users are your ultimate test group, and unitQ is the layer that converts them into your extended test team, so that the signals they're producing actually make it through to the different teams in the organization.

You know, I think that's the script you guys are flipping. If I could just interject: when I hear you talking, I hear, okay, you're letting the customers be an input into the product development process, and there are many different pipelines in that development, whether you're iterating, doing geography-based releases, all kinds of different pipelines to get to market. But in the old days it was just customer satisfaction: a complaint call center, or "I'm complaining, how do I get support?" Nothing made it into product improvement except slow-moving, waterfall-based processes, and then maybe six months later a small tweak would ship. Here, you're taking direct input from collective intelligence. Okay.
Direct input. And timing is very important here, right? How do you know if the product is working as it should in all these different flavors and configurations right now? How do you know if it's working well, and whether you're improving or not improving over time? What can the industry look at when it comes to quality? I can look at star ratings: what's the star rating in the app store? Well, a star rating is an average over time. You may have a lot of issues in production today, and you're going to get dinged on star ratings over the next few months before it brings down the score. NPS is another one, but we're not going to run NPS surveys every day. We'll run it once a quarter, maybe once a month if we're really, really aggressive, and that's also a snapshot in time. And we need to have a finger on the pulse of product quality today. I need to know if this release is good or not good. I need to know if anything broke. And with that real-time aspect, as issues bubble up the stack and out into production, we see up to a 50% reduction in time to fix these end-user-impacting issues. And I think we also need to appreciate that when someone takes time out of their day to write an app review, email support, or write that Reddit post, it's pretty serious. It's not going to be, "Oh, I don't like the shade of blue on this button." It's going to be something like: I got double-billed, or hey, someone took over my account, or I can't reset my password anymore, or the CAPTCHA, I'm solving it, but I can't get through to the next step. And we see a lot of these trajectory-impacting bugs and quality issues in flows in the product that you're not testing every day. So if you work at Snapchat, your employees are probably going to use Snapchat every day. Are they going to sign up every day? No.
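A quick synthetic illustration of the lag problem described above: a lifetime star average barely registers one very bad release day that a daily issue-rate signal catches immediately. The numbers here are made up purely for illustration, not drawn from any real app:

```python
def lifetime_star_average(daily_ratings):
    """Cumulative all-time star average, roughly as an app store
    would display it: one running average over every rating ever left."""
    total = count = 0
    out = []
    for day in daily_ratings:
        total += sum(day)
        count += len(day)
        out.append(total / count)
    return out

def daily_issue_rate(daily_flags):
    """Share of each day's feedback flagged as a quality issue
    (1 = issue, 0 = not an issue), recomputed fresh every day."""
    return [sum(day) / len(day) for day in daily_flags]
```

After 30 days of hundred-a-day five-star ratings, a day of one hundred one-star ratings only drags the lifetime average from 5.0 to about 4.87, while the daily issue rate jumps from 0 to 1.0 the same day, which is the "finger on the pulse today" point being made.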
Are they going to do password reset every day? No. And these things are very hard to instrument lower in the stack.

You know, and again, this is back to these big problems: it's smoke before fire, and you're essentially seeing it early with your process. Can you give an example of how this new focus, this new mindset of user feedback data, can help customers improve their experience? Because folks watching will be like, okay, I love this value, sell me on this idea. I'm sold. Okay, I want to tap into my prospects, my customers, my end users to help me improve my product. Because again, we can measure everything now.

With data, we can measure everything. We can even measure quality these days. So when we started this company, I went out and talked to a bunch of friends, entrepreneurs, VCs, and board members, and I asked them a very simple question: in your board meetings or all-hands, how do you talk about the quality of the product? Do you have a metric? And everyone said no. Okay, so are you a data-driven company? "Yes, we're very data-driven." But you're not really sure about quality? How do you compare against the competition? Are you doing as well as them, worse, better? Are you improving over time, and how do you measure it? And they'd say, well, it's kind of a blind spot for the company. And then you ask, do you think quality of the experience is important? And they say yes. Why? Well, top-of-funnel growth: a higher-quality product is going to spread faster organically, we're going to get better star ratings, the storefront is going to look better. And of course, more importantly, they said, the different conversion cycles within the product itself: if you have bugs and friction, or an interface that's hard to use, then the inputs, the signups, are not going to convert as well.
So you're going to get dinged on retention, engagement, conversion to paid, and so forth. And that's what we've seen with the companies we work with: poor quality acts as a filter function for the entire business if you're a product-led company. In a product-led company, the product is really the centerpiece, and if it performs really, really well, it allows you to hire more engineers and spend more on marketing; everything is fed by this product atom in the middle, and quality can make that thing perform worse or better. And we developed a metric called the unitQ Score. If you go to our website, unitq.com, we have indexed the 5,000 largest apps in the world, and we update the score on a daily basis, because the score isn't something you do once a month or once a quarter; it changes continuously. So you get a score between zero and 100. A score of 100 means our AI doesn't find any quality issues reported in that data set; a score of 90 means 10% of the feedback reports a quality issue. Now you can do a lot of fun stuff, like benchmarking against the competition: if I'm Spotify, how do I rank against Deezer or SoundCloud or others in my space? And what we've seen is that as the score goes up, there's a real, big impact on KPIs such as conversion, organic growth, retention, and ultimately revenue. That was very satisfying for us when we launched: quality actually still really, really matters. And I think we all agree it does, but how do we make a science out of it? That's what we've done. And we were very lucky early on to land some incredible brands. Pinterest is a big customer of ours, we have Spotify, we just signed Nubank and Chime, we even signed BetterHelp recently, and the world's largest Bible app.
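The scoring described above (100 means no quality issues found in the feedback set, 90 means 10% of the feedback is a quality issue) reduces to a simple formula. This is a reconstruction from the description in the interview, not unitQ's actual code, and the benchmarking helper is my own illustrative addition:

```python
def quality_score(feedback_flags):
    """Score in [0, 100] from a list of per-feedback-item flags
    (1 = quality issue, 0 = not an issue). 100 means no issues were
    found; 90 means 10% of items were flagged, as described above."""
    if not feedback_flags:
        return 100.0  # no feedback at all: nothing flagged
    issue_share = sum(feedback_flags) / len(feedback_flags)
    return 100.0 * (1.0 - issue_share)

def benchmark(scores_by_app):
    """Rank apps by score, highest quality first, for the kind of
    competitive comparison described above (e.g. Spotify vs. Deezer)."""
    return sorted(scores_by_app.items(), key=lambda kv: kv[1], reverse=True)
```

So a day with 10 flagged items out of 100 pieces of feedback scores 90.0, and recomputing it daily gives the continuously moving score rather than a monthly snapshot.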
So when you look at the types of businesses we work with, it's a truly universal, very broad field: if you have a digital exhaust of feedback, I can guarantee you there are insights in there being neglected.

So Chris, I've got to...

Because of these manual workflows.

Yeah, please go ahead. I've got to ask you, because this is a really great example of this new shift, right? The new shift of leveraging data. Everything's flipping the script here. So you're talking about what the value proposition is. The board example is a good one: how do you measure quality? There's no KPI for that. So it's almost category-creating in its own way; this is net new, and it's okay to be new. So the question is, if I'm a customer and I buy it, I can see my product teams engaging with this. I can see how it changes my marketing and customer experience teams. How do I operationalize this? What do I do? Do I reorganize my marketing team? So take me through the impact on the customers you're seeing. What are they resonating toward? Obviously, the data is key; that's the holy grail, we all know that. But what do I have to change in my environment? What's the operationalization piece of it?

Yeah, and that's one of the coolest parts, I think. Let's start with your user base: we're not going to ask your users to do anything differently. They're already producing this data every day. They're tweeting about it, they're posting reviews, they're emailing support, they're engaging with your support chatbot. They're already doing it. And every day that you're not leveraging that data is a loss: the data produced today is less valuable tomorrow, and in 30 days it's probably useless.

Unless it's the same guy commenting.

Yeah.
So first we need to make everyone understand: the data is there, and we don't need the end user to do anything differently. Then we ask the customer to tell us where we should listen in the public domain: do you want the Reddit posts, the Trustpilot reviews? Which channels should we listen to? And then our machine starts ingesting that data; we have integrations with all these different sites. To get access to the private data, if you're on Zendesk, you issue a Zendesk token. So you don't need any engineering hours, except your IT person has to grant us access to the data source. Then, when we go live, we build up a taxonomy with the customer. We don't want to impose our view of how to describe the product with these buckets, these quality monitors, so we work with the company to build out the taxonomy. It's sort of a bespoke solution that we can bootstrap with previous work we've done, so that you have these very, very fine buckets of where stuff could go wrong. And then there are different ways to hook this into the workflow. One is just to use our product; it's a SaaS product like any other. You log in and get an overview of how quality is trending in different markets, on different platforms, in different languages, and what's impacting it, what's driving the unitQ Score. But that alone is not good enough. All of these different signals we can hook into Jira, for instance (we have a Jira integration), and we have a PagerDuty integration so we can wake up engineers if certain things break. We also tag tickets in your support system, which is actually quite cool. Let's say 200 people have written into support saying, "I got double-billed on Android," and it turns out there was some bug that double-billed them.
Well, now we can tag all of those users in Zendesk, and the support team can reach out to that segment of users and say, hey, we heard you hit this double-billing bug; we're so sorry, we're working on it. And then when we push the fix, we can email the same group again, and maybe give them a little gift card as a thank-you. So even big companies can have that small-company experience. Whole groups use us; at Pinterest, we have 800 accounts. Marketing has a vested interest because they want to know what's impacting the end user: the lines between brand and product are basically gone. If the product isn't working, then my marketing spend into this machine is going to be less efficient, and the reputation of the company is going to be worse. And the challenge for marketers before unitQ was: how do I engage with engineering and product? I'm dealing with anecdotal data and my own experience, like, "Hey, I've never seen this type of complaint before; I think something is going on." And engineering would be like, well, I have 5,000 bugs in Jira. Why does this one matter? When did it start? Is this a growing issue? And all of this...

You have to replicate the problem, right?

And then it goes on and on. And a lot of times, reproducing bugs is really hard, because it "works on my device"; you're not sitting on the device where it happened. But now marketing can come with indisputable data and say, hey, something broke here. And we see the same with support. For product and engineering, we say: listen, you've invested a lot in the observability of your stack, haven't you? Yeah. So you have a Datadog at the bottom? Absolutely. You have a FullStory on the client? Absolutely. Well, what about the last mile, how the product actually manifests itself to the user? Shouldn't you monitor that as well, using machines? And they say, yeah, that'd be really cool.
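The double-billing workflow just described (tag every affected ticket, capture the user segment, then reach back out once the fix ships) can be sketched in memory like this. The `Ticket` type and function names here are hypothetical stand-ins; a real deployment would go through the support system's API (e.g. Zendesk ticket updates) rather than mutating local objects:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Hypothetical stand-in for a support ticket record."""
    user_email: str
    text: str
    tags: set = field(default_factory=set)

def tag_affected_tickets(tickets, phrase, tag):
    """Tag every ticket matching a quality-monitor phrase and return
    the affected user segment, so support can apologize now and email
    the same group again after the fix is deployed."""
    segment = []
    for t in tickets:
        if phrase in t.text.lower():
            t.tags.add(tag)
            segment.append(t.user_email)
    return segment
```

Running this over the 200 "double-billed on Android" tickets from the example would yield both the tagged tickets and the exact list of users to follow up with, which is what enables the small-company touch at big-company scale.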
There's no way to instrument everything lower in the stack to capture these bugs that leak out, so it resonates really well there. And even for the engineer who's going to fix it, I call it empathy data: I get assigned a bug to fix, and now I can read all the feedback. I can actually see it coming in: oh, there are users out there suffering from this bug. And then when I fix it and deploy the fix, I see the trend go down to zero, and I can celebrate. So that whole feedback loop is very powerful.

And that's real-time; it's usually missed, too. This is the power of user feedback. You guys have a great product, unitQ. Great to have you on. Founder and CEO Christian Wiklund, thanks for coming on and sharing in the showcase. For the last 30 seconds, the minute we have left, put in a plug for the company. What are you guys looking for? Give a quick pitch for the folks out there: are you looking for more people, funding status, number of employees?

Yes, so we raised our Series A from Google, and then we raised our Series B from Accel, which we closed late last year. So we're not raising money. We are hiring across go-to-market and engineering, and we love working with people who are passionate about quality and data. And we're always, of course, looking for customers who are interested in upping their game. And hey, listen: competing on features is really hard, because features can be copied very quickly. Competing on content? Content is a commodity; you're going to get more or less the same movies on all these different providers. And competing on price we're not willing to do; you're going to pay ten bucks a month for music. So how do you compete today? If your competitor has a better fine-tuned machine, then your competitor will have better efficiencies, and they're going to retain customers and users better.
And you don't want to lose on quality, because it is actually a deterministic, fixable problem. So yeah, come talk to us if you want to up your game there.

Great stuff. You know, the lean startup iteration model, some say, took the craft out of building the product, but this brings craftsmanship back into the product cycle: you get that data from customers and users who are going to be happy that you fixed it, that you're listening, and that the product got better. So it's a flywheel: loyalty, quality, brand. If you can figure it out, it's the holy grail.

I think it is. It's a gold mine, and every day you're not leveraging these assets, your user feedback that's sitting there, it's a missed opportunity.

So thanks so much for coming on. Congratulations to you and your startup. The band is back together, up and to the right, doing well. We'll check in with you later. Thanks for coming on the showcase, appreciate it.

Thank you, John. Appreciate it very much.

Okay, that's the AWS Startup Showcase. This is season two, episode three of the ongoing series. This one's about MarTech; cloud experiences are scaling. I'm John Furrier, your host. Thanks for watching.