Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer for Data Diversity. I would like to thank you for joining the most recent webinar in the Data Diversity Monthly Series, Elevating Enterprise Data Literacy with Dr. Wendy Lynch. This series is held the first Thursday of every month, and today Wendy will be joined by Sonny Rivera, the Chief Analytics Evangelist at ThoughtSpot, and Ashish Singhi, founder and CEO of DVSum, to discuss how AI will change literacy and access to data. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. If you'd like to chat with us or with each other, we certainly encourage you to do so. And just a note: Zoom defaults the chat to send only to the panelists, but you may absolutely switch that to network with everyone. For questions, we will be collecting them via the Q&A panel. To find the chat and Q&A panels, look for their icons at the bottom of your screen to activate those features. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of this session, and any additional information requested throughout the webinar. Now let me introduce our guest speakers. Sonny Rivera is the Chief Analytics Evangelist at ThoughtSpot, a modern data stack thought leader and expert with over 25 years of experience delivering data solutions that drive business value and increase speed to insights. He advises customers on their journey to the cloud, data literacy and fluency, data ops, and embedded analytics. He also works with ThoughtSpot's product team and provides thought leadership to the data and developer communities. Ashish Singhi is the founder and CEO of DVSum. DVSum's mission is to enable organizations to maximize business outcomes through data insights.
DVSum offers a comprehensive data insights platform, including plug-and-play data catalog, data quality, data governance, and chat-with-your-data generative AI products. His passion for technology, combined with his strategic vision, has firmly positioned DVSum as a trusted provider in the industry, empowering organizations to effortlessly discover, optimize, democratize, and govern their data. And with that, let me introduce our series speaker, Dr. Wendy Lynch. Wendy is the founder of analytic-translator.com and Lynch Consulting. For over 35 years, she has converted complex analysis into business value. At heart, she is a sense maker and translator, a consultant to numerous Fortune 100 companies. Her current work focuses on the application of big data in human capital management. In 2022, she was awarded the Bill Whitmer Leadership Award for her sustained contributions to the science of corporate health. As a research scientist working in the business world, Dr. Wendy Lynch has learned to straddle commercial and academic goals, translating analytic results into market success. Through this experience, she has created her new book, Become an Analytics Translator, and an online course, which I've heard many people have loved. And with that, I will give the floor to Wendy to start the presentation. Wendy, hello and welcome. Thank you, Shannon. Thank you, everyone, for joining. I'm happy to be here. For those of you joining for the first time, welcome. And for those of you coming back, welcome back. So today is a really interesting question, and I can't wait to hear from our speakers. What we will do is I will do an intro for about 15 minutes, and then each of them will give you their take on the topic of how AI and natural language processing may affect data literacy and access to insights. So as a bit of background, when we think about literacy, we know that it has become a huge topic across most large companies.
90% of business leaders believe that data literacy is going to be critical to their success. So it is pretty much universally thought of as an essential component of education for workers. Now, the basic definition that is often used is that data literacy is the ability to read, write, and argue with data. I've seen this wording many times, and it makes me wonder what that really means, because I think it's a nice, simple way to talk about it. And we've touched on this before: it probably includes a variety of skills, like being aware of what data are available, identifying the parts of the data platform that you need, selecting and perhaps manipulating that data, understanding what you're seeing, and then higher-level capabilities like data manipulation, analysis, interpretation, and application to decisions. So if we think about this as the set of data skills, it's quite involved, and you will hear many people, including some of the guests on this series, say that they believe everyone should be able to do a certain level of analysis. And I think that question remains. Because we've pondered this a couple of times: is our goal to get every single person in an organization all the way to the top? Do we want them all to get these skills? And if so, how do we do that? Well, one of the ways, and we've talked about this a little bit, is to give training to everyone. But training means there's a lot of investment we have to make. We have to designate somebody to own it within the organization. We need to make the business case to leadership that this is essential and that we need to spend time and money on it. We need to develop a curriculum that's relevant to our business. And then we need to scale that, which means: how do you get it to everybody? How often do you have to have a refresher? How are you going to get it into everyone's hands in a systematic and consistent way?
So if we think about training, especially with how leaders perceive data literacy, we have to acknowledge that there are misconceptions. For example, 80% of leaders in a recent Gartner study said that they already give their employees the data skills that they need. However, when employees were asked, only 40% said they're given the data skills they need. So there's a bit of a he-said-she-said, and it gets worse when you look at the likelihood that people are really data literate and comfortable with their data skills. Leaders consistently overestimate how many people in their organization have these skills. One study, not too long ago, found only about one in five people were confident in their ability to find and use data. Another study found that it was closer to 8%. So if we want everyone up here, we don't even quite know where our starting point is. Add to this results from a recent survey that says fewer than one third of C-suite executives are considered data literate. So two out of three of the executives who are making decisions about, and assessing, the literacy of their people are not themselves data literate. So it makes us wonder, when we start to add these capabilities here in yellow: if we're saying everybody has to get to a high level of literacy, we're actually asking that everybody have an orientation toward data that's kind of like getting a data science degree, and that their goal should be to analyze data themselves. So we have to wonder, is it realistic for every person in every organization to try and make it all the way to the top? So one of the questions that we pondered, and I will be asking this of our guests as well, is: is data literacy really our goal? Or what if it's really to develop a company-wide orientation toward intelligent, information-driven decisions and actions?
Isn't what we really want for people to use information in a timely way, to notice when there are problems and identify opportunities? Isn't it to have them ask better questions and make better decisions? Isn't it that you want people to make use of the data by understanding and extracting insights? So what if we really aren't trying to make everybody data literate? What if we just want people to be highly insight-driven, which is slightly different? Because on the one side, we're saying we want people to have these skills so that all of them can climb up to the high level of literacy. But really, these are tools in order to accomplish access to insight. Some of those insights are available to anyone because they're very basic and easily understandable, whether you have a math degree or not. Then there's a certain level that just requires some basic knowledge and understanding and maybe some Excel-spreadsheet kind of skill. Then we get to higher levels where you need to have at least exposure to analytics and experience understanding the governance and use of data. And at the top, you might be talking about predictions and advanced modeling. So if what we're talking about is this, that we want to be able to take advantage of what we can get from data, rather than requiring that people have all of these skills, how might we do that? Well, if somebody with low literacy only has access to these basic insights, moderate literacy gives access to more insights, and it requires a high level of literacy to get these top insights, how can we better give people access without requiring each person to go through the task of learning all of these skills? Well, there are two ways.
The first one is: what if we train team members who have the ability and interest in understanding more of this level of manipulation, knowledge, and insights, and then train more of them who could have basic SQL and analytics skills but are embedded within a team so that they help people understand more? And when it comes to the highest level, we train analytic translators to make that connection, especially for people like the two thirds of executives in the C-suite who do not understand complex analytics but need these insights. So what this means is we could bridge this gap in what kinds of insights people can get by establishing new roles: people who become the keepers of the data information within a team, people who gain more analytic capability, and people who train to be official analytic translators. And I think it's worth asking: is it quicker and more efficient to train interested team members to be embedded translators and part of a team, rather than trying to get 100% of all people up to the top? So that's one way. Now let's think about another way, based on historical trends and what happens in industries over time. For those of you who were training in computing back in the 70s and 80s, like I was: on campus, there was one computer, it was as big as a building, and the only people who had access to that computer were the people who knew how to code in BASIC or Fortran or Pascal. You did not have access to computing power unless you knew how to code. And it took time before you could get a PC, before there were ways you could start to interact without knowing how to code, so you didn't have to go to that building. And now in our hands we hold these amazing computers that do far more than that entire building did in the 70s. So the question is: will there be democratization of analytics the same way there has been democratization of computing power?
So instead of everyone having to know RStudio, maybe we can rely on machine learning algorithms that help clean data, and maybe natural language processing that can give us more access without having to know how to do analysis. And so that's the question we will be asking: can we, through natural language, give access to more sophisticated insights without having to move people higher up the ladder, because those insights will be available to more people? So if we think about this holistically, what we're saying is that maybe it will be more efficient to allow people to interact with the data in more easily accessible ways, so that they can access more insights, without having to educate every single employee in greater literacy. So this is a big question for us. And if we believe that having highly insight-driven employees is the goal, rather than data literacy, then the ways to get there will be not just training, and training people in new roles, but also this third new type of access. So as we think about this, we will be listening to two experts who provide services related to what we just talked about: different ways of accessing insights. I have asked both Sonny Rivera from ThoughtSpot and Ashish Singhi to give us a taste of how they see their products moving further into that space to provide access to insights that people haven't had before. And then we'll have time for a discussion. So Sonny, why don't you give us your input about this, and then we'll turn it over to Ashish after that. All right. Okay, everybody. Let me share my screen here. Hopefully we can all see that. So I'm Sonny Rivera, Chief Analytics Evangelist at ThoughtSpot, and I'll take you through a few things today. When I think about data literacy, one of the things I think about is: why is it so important? Why are all those things that Wendy was just talking about so important?
Well, part of that is that the gap between companies that are data-driven leaders and those that are laggards is getting wider every day, and it's getting worse with AI and generative AI. And of those leaders, 81% say they have higher profits than the laggards. So we can look at that and determine, hey, the use of data, just as Wendy was talking about, has accelerated their growth and increased their profits. And that's why data literacy is so important and is being pushed out to more and more people in organizations. Now, there may be a question about the quality of that data literacy, the training, and the programs going forward, and even, to Wendy's question, whether it's attainable for everyone. I think about data literacy like the process an early reader goes through learning how to read. And it's still a challenge today, in part because of technology. We need to give these users of data easier access to read data. We need them to be able to search through it. And we need to make it fun and accessible and thought provoking. So by way of an analogy, and I'll go through this pretty quickly because I want to get a little more in depth here: we see that early reader learning letters, then words, then turning those into ideas, and then turning those into stories. And those readers become writers and communicators. And when they do, they then write the classics, right? From a data literacy perspective within our companies, we see a very similar thing happening. Business definitions are those letters, the visualizations are the words, and the ideas are KPIs. And data users are telling stories, stories about how their operations are working. Those are the ones that can communicate. They become the critical thinkers.
In much the same way, we see dashboard authors becoming question askers, asking and answering more and more questions, again, like Wendy was saying, and then being able to use and leverage AI-generated insights. And I think therein lies the power of generative AI in the analytics space, especially around data fluency. We take that reader from having one book and we open them up to the entire library of books, which they can then begin to search and find the joy and the thought-provoking ideas in that entire space. So as we look into this, I wanted to give you four things to think about. I'm going to demo a product a little bit and talk about how we think about data literacy, but I want you to think about the user experience. Is it easy? How easy should it be for users to use data? Is it accessible? Can you search through data? Ultimately, I think where this heads is that it becomes invisible that users are using large language models and AI; it becomes embedded in all of the products they're using. You'll see the term LLMs, large language models. That's what GPT is: a large language model that generates, well, language. So most AI tools, and many other tools, are using that as a transformation or translation layer. We're seeing that today. Think about trust and accuracy, and how important it is to have a human in the loop. As a non-technical user, can you trust what you are seeing and hearing and reading from this data? And then, of course, any time you're dealing with data, security and governance are important. So is that process enabling you to have a more secure and governed set of data, or is it creating larger exposures? So as we get into this demo, I want you to think a little bit about those topics. Let me move this toolbar just a little bit.
Many of you may not know who ThoughtSpot is, so I'll give you just a brief intro here. ThoughtSpot is a search and AI-powered analytics platform. And we recognize that not everyone has the level of data literacy that Wendy was mentioning at the top of that ladder; people are spread all throughout that spectrum, and as Wendy said, many are clustered down at the first step. So we start off with a search bar, much like the search bar you use at Google or Bing today. But we also recognize that people may be new to search. So our AI generates common questions that are asked of your data set by people in your organization. So let's take a look at this: what is your total sales by region for Q2 versus Q4? This non-technical user can just say, I want to know that. Now, what we did was take that text, that natural language, and push it through a large language model. In this case, it's OpenAI's GPT; there are others coming down the line, and it's not important which. And we converted it into the terms of our proprietary, patented relational search engine. So these end up being tokens that say, hey, I want you to sum the sales, I want you to group it by region, and I want you to compare Q2 to Q4. So if you think about some of the things Wendy was saying, what you see here is that you didn't have to have those analytic skills, the tech skills, to write these queries. You needed to understand your business so that you could ask that question. And I think, Wendy, that's where I look at this being pushed out to more and more people who can ask those questions on the front line. I recently saw a study that said 87% of executives believe they will be more successful and more profitable when they can get insights, not data, insights, into the hands of frontline workers. And so I think this gives us the opportunity to get insights into the hands of those non-technical frontline workers.
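To make the flow Sonny is describing concrete, here is a minimal sketch of that translation layer: a natural-language question goes through an LLM, which returns structured query tokens, and those tokens are then compiled into SQL. The prompt, the token format, and the table schema are all illustrative assumptions, not ThoughtSpot's actual search grammar, and the LLM call is stubbed with a hard-coded response of the kind it might return.

```python
# Sketch of an NL -> LLM -> tokens -> SQL pipeline (illustrative only).
import json

# Hypothetical prompt asking the LLM for structured tokens instead of raw SQL.
PROMPT_TEMPLATE = (
    "Given the table sales(region, product, quarter, sales, quantity), "
    "convert this question into JSON with keys "
    "'measure', 'aggregate', 'group_by', 'filter': {question}"
)

def build_prompt(question: str) -> str:
    return PROMPT_TEMPLATE.format(question=question)

def tokens_to_sql(tokens: dict) -> str:
    """Compile the structured tokens into a SQL statement."""
    sql = (
        f"SELECT {tokens['group_by']}, "
        f"{tokens['aggregate'].upper()}({tokens['measure']}) FROM sales"
    )
    if tokens.get("filter"):
        sql += f" WHERE {tokens['filter']}"
    return sql + f" GROUP BY {tokens['group_by']}"

# In production, this JSON would come back from the LLM; we hard-code the kind
# of response it might return for "total sales by region for Q2 versus Q4".
llm_response = json.dumps({
    "measure": "sales",
    "aggregate": "sum",
    "group_by": "region",
    "filter": "quarter IN ('Q2', 'Q4')",
})

tokens = json.loads(llm_response)
print(tokens_to_sql(tokens))
```

The point of the intermediate token step is that the generated query stays constrained to a known schema, which is easier to validate than free-form SQL from the model.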
So much like our early reader was given a book, we were given this search. But if we do this enough, if we see those answers, we become a little more accustomed to it, and maybe we can ask our own questions. So let's just ask a simple question, like: what are my top performing products? Again, the large language model converts it into our tokens and gives us back the visualization. Now, what's important here is that this is declarative. It's not a technical skill I have to have. I just say to the system, I want this, and it goes out, gets it, and presents it to me. It understood my question, understood the answer, and chose the best way to visualize that for me. And on top of that, it went and said, hey, there are some other questions that have been asked as well. So here's top-selling products, here's top-selling products for last year, maybe top-selling products this year. And I think what's interesting here, too, if we look closely: our AI looked at this question and had to interpret what "top performing" means. That means something different for different folks. In this particular case, it interpreted it to mean quantity, how many of these things were sold. But maybe that's not right. Maybe you meant sales. So we didn't quite get it right here, and nobody ever does, and it could actually be subjective, right? Because if I'm in inventory control, quantity is what matters; I wanted quantity. So now I'm going to just say, you know what, that's not quite right, let's fix it. I'll come in and say: I didn't want the top product by quantity, I wanted the top sales. I wanted sales here. The system knows my data, and I get what I'm looking for. Now I can look and say, yes, that's what I want, I'll submit this feedback, and it becomes part of the feedback loop. It becomes part of the algorithm going forward, and now the system knows how to answer that question going forward.
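The feedback loop Sonny demonstrates can be sketched very simply: the system starts with a default interpretation of an ambiguous phrase, and a user correction overrides and persists it. The class and phrase names here are made up for illustration; a real system would learn per-user or per-role and feed corrections back into the ranking model.

```python
# Minimal sketch of a learn-from-feedback ambiguity resolver (names illustrative).
class AmbiguityResolver:
    def __init__(self, defaults):
        self.defaults = dict(defaults)   # system's initial guesses
        self.learned = {}                # user-confirmed interpretations

    def interpret(self, phrase):
        # Learned corrections take precedence over the default guess.
        return self.learned.get(phrase, self.defaults.get(phrase))

    def feedback(self, phrase, measure):
        self.learned[phrase] = measure

resolver = AmbiguityResolver({"top performing": "quantity"})
assert resolver.interpret("top performing") == "quantity"  # first guess
resolver.feedback("top performing", "sales")               # user fixes it
assert resolver.interpret("top performing") == "sales"     # remembered
```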
I'm going to ask another question similar to that: how many jackets did I sell last quarter versus vests? So, how many vests versus jackets? I can see a very similar type of process happen, and now I can pull this up. Well, maybe I want to edit this. I can go in here, and all the visualizations are interactive, not static. I can drill in. I can look at, well, what does this look like by a particular region? And I don't have to know the data; I didn't have to be trained on all the data, because the system understands the relationships in the data that you have. I can then come in and begin to drill down, ask the next question, and the next, and the next. Because, as we know, usually the answer doesn't come from the first question. It sparks another idea, and you keep going further and further. So that's one way I think AI is going to change analytics: it's making it easier for those non-technical users to use the products. The other thing I wanted to show you here is what we see a lot of today, broadcast analytics. That will continue to be in place, especially at the top of the ladder. But what if there are KPIs, things I'm interested in, and I want to change the mechanism, the way I interact with data, from broadcast or pull to actually being notified? So I've got these important KPIs: sales weekly, quantity purchased weekly, number of transactions, and I can see the trends. What's going up? What's going down? And with one click, I can say, can you tell me what's driving that trend? Now, I don't need to be a data analyst or a data scientist. We can use our algorithms on the back end and just ask the question, what are the key drivers behind this? And our AI comes up with, here are the five things that were most impactful, and we get a narrative. And this is what I really think is important.
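The "what's driving that trend?" step can be illustrated with a rough sketch: compare a metric across two periods, broken out by a dimension, and rank members by the size of their contribution to the overall change. Real key-driver analysis is considerably more sophisticated (multiple dimensions, statistical significance); the data below is invented for illustration.

```python
# Rough sketch of single-dimension key-driver analysis (illustrative only).
from collections import defaultdict

def key_drivers(prev_rows, curr_rows, dim, measure, top_n=5):
    """Rank dimension members by their contribution to the period-over-period change."""
    totals = defaultdict(float)
    for row in curr_rows:
        totals[row[dim]] += row[measure]
    for row in prev_rows:
        totals[row[dim]] -= row[measure]
    # Largest absolute change first
    return sorted(totals.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

prev = [{"product": "jacket", "sales": 100}, {"product": "vest", "sales": 80}]
curr = [{"product": "jacket", "sales": 150}, {"product": "vest", "sales": 70}]
print(key_drivers(prev, curr, "product", "sales"))
# jacket sales grew by 50, vest sales shrank by 10
```

A generative model would then turn the ranked list into the narrative Sonny describes ("four of these products have the largest change...").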
Generative AI is going to give us narratives in the language that we speak, not in data, in the language that we speak. It says, hey, four of these products have the largest change out of 346; here's what they are. Maybe we're doing something well with this product; maybe we're doing something poorly with these products, and we should take a look at those. So those are a couple of different ways you see generative AI making it easier for non-technical users. They're asking business questions; they're not asking a technical question or writing SQL. And the last thing I'll show you here before we come back: if the new medium is pushing this out to me, I can go ahead and schedule my alerts. Hey, I want to know when this decreases by, I don't know, 1%, and go ahead and schedule that out. And then the AI will pull all of this information and send it out to us as it changes. So the thing I'd like to leave you with as I come out of this is that generative AI is going to be infused into all of data and analytics. It's going to be infused into our data literacy. We're going to begin to talk about AI literacy as well. We've all heard this term ChatGPT. ChatGPT is a chatbot that uses GPT, a generative pre-trained transformer, a large language model. As we go forward, I think GPT becomes known as general purpose technology. It's going to be in everything we're using. And so it's important that we have the right security in place, that we have a human in the loop who can correct and help guide those insights we're seeing, and that we manage our AI ethics. So that's where I see the value, where I see AI going in our space right now. I'm also going to leave you with this; you can take a screenshot, or these will be provided later. We have a free trial if you're interested. But we have a data literacy guide that's on our site. It's not gated.
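The scheduled-alert idea above reduces to a stored rule evaluated against successive KPI readings. Here is a minimal sketch, assuming a simple percentage-drop rule; the scheduling, delivery, and narrative-generation pieces are omitted.

```python
# Sketch of a KPI drop-alert rule (illustrative; threshold and values invented).
def should_alert(previous: float, current: float, drop_threshold_pct: float = 1.0) -> bool:
    """Fire when the KPI falls by at least drop_threshold_pct between readings."""
    if previous == 0:
        return False  # no baseline to compare against
    change_pct = (current - previous) / previous * 100
    return change_pct <= -drop_threshold_pct

assert should_alert(1000, 985)       # 1.5% drop -> alert
assert not should_alert(1000, 995)   # 0.5% drop -> no alert
```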
It's just there for you to take a look at if you want to read it and see how we look at data literacy; we're happy for you to. And then we also run some podcasts. The Data Chief, I would definitely recommend; data literacy begins at home. This is run by Cindy Howson, our Chief Data Strategy Officer. Great podcast and lots of great guests. So with that, I'll slide back to you, Wendy. Thank you so much, Sonny. That was a really nice overview, and the examples were very helpful. So Ashish, why don't you go ahead and take the screen and tell us more about DVSum? Certainly. Let me go ahead and share my screen. One moment, please. Wendy, is my screen visible? Yes, it is. We're ready. All right, great. Okay, excellent. Very good. So what I want to talk about is, again, let's start with the same baseline: what is the typical approach to data literacy, the challenges and pitfalls, some of which Wendy mentioned, and then how AI can help, and one way that we at DVSum are doing it for our customers. Just one moment, please; I'm going to fix the display so it's easier. All right, that's better. So if we go back to data literacy, I'm sure everybody has thought about it: we define, oh, what is literacy? We come up with frameworks. This is the classical approach, and then we define maturity models and maturity assessments. It's all the right objective. The problem is, it becomes too complex, as Wendy was talking about. And if you think about the challenges, I think it's important to understand the challenges with the current way of doing literacy to appreciate how AI could potentially help. Really, there are three challenges. The first is the friction with the users.
If, ultimately, the goal is to make the organization more literate, and a lot of the focus becomes, oh, you need to understand data, you need to understand databases, you need to learn the BI tools, you're essentially taking the focus away from what business users' job is, which is to drive operations and make business decisions. So that's one challenge with the current way of doing literacy. The second is measuring the quantifiable value. We put all those programs in, we invest business users' time; how do we measure it? It's always a challenge. What is the value of more data-driven decisions? Yes, overall, more data-driven companies are more profitable; it's all there. But specifically, what is the value delivered? It's not always easy to quantify, and that again limits how effective data literacy programs can be. And the third is, if you look from a data office standpoint, your goal is more literacy, but if you ask the people who generate data, it's like, hey, why are we not data-driven? Because the business is data illiterate, and they are not becoming literate enough. If we ask the business, the consumers of data, then: oh, it's because we don't have good quality data, so I cannot use data to drive decisions, right? It's a back and forth that continues to happen. And so, again, with the current way of doing it, you put a lot of effort in, and if the net result is that we don't know how to measure and we have this back and forth, then that is basically inefficient. So let's go back to what we really want to solve. Wendy, you asked the question, right: is data literacy the goal, or is there something else? Data literacy is not the goal. The goal is: how do we bring data-driven decision making to every place in the organization?
So if you think about a business user, I think the fact that we can use data to drive better, more productive decisions is something everybody appreciates; now it's undisputed. So it's really about how you enable that business user to find data and insights themselves, when they need it, in the moment, and without expecting them to become data and BI analysts. That is really the ultimate goal we want to achieve. So if we want to achieve that, let's talk about the barriers to that today, and that will lead us to how AI can help. So think about it: companies have data, and there are users. What are the things you need in order to be able to use that data? First, you need somebody, and this is why you need IT and data analysts, who can translate my business speak, my business semantics, into the technical definition of the data. That's number one. Number two is to find where that data exists in the organization: in my data warehouse, in these tables, et cetera. Number three is that you need some skills to interact with the data; of course, SQL for a relational database would be the most common way of interfacing with it. And finally, it is about how you visualize the data: maybe a table, maybe a pivot table, maybe a chart is the best way to communicate, understand, and find those nuggets of insight. So these are the four things that you really need to interact with data effectively, but that's where the barriers are between a business user and the data, because, frankly, this is not their job. So how can AI assist with it? Now, what if we say: okay, what is the thing business users know? They know their business, and they know natural language.
What if we can leverage AI to take care of the remaining things: being able to find the right data, convert those questions into technical queries that are syntactically correct, and then fetch the data from the system and visualize it automatically? Sonny talked about one way to do this with the ThoughtSpot interface. So while we talk about literacy, if you can do this, you're making it easy for the business to use the data. When you make it easy for them to use the data, they are going to be more data driven, more excited, more engaged, because they're not waiting two weeks for a report to be built; they need something right now, they get an answer, they're done with it. That automatically drives more excitement and engagement, and that's what we see. So let's look at how, from a DVSum perspective, we are enabling this. And I saw some questions about, oh, yes, to do all this we need to ensure good quality data, et cetera; those are good questions, but let's look at a practical way of achieving that. DVSum is a data intelligence platform; you'll see it started with more of a data management focus, a catalog, governance, quality, but all of that helps when you combine it with generative AI to bring that ability to democratize data. So let's talk about that. What DVSum's chat-with-your-data solution does is give business users an interface such that they can simply ask questions in natural language, and it will bring factual results from your data in real time, not generated information, and visualize them. Now, to make it work, and this would be applicable for any solution, you really need three things.
Everybody has used the ChatGPT interface; you can ask a question, but it has zero knowledge about your enterprise's data. So you definitely need a natural language interface, and everybody has access to that with generative AI, which is what powers ChatGPT. But then you need to combine it, and this is the critical piece, with intelligence about your data. When I say intelligence, it's not only the actual definition of my data and what the columns are, but how they relate to each other, because this is all the information an analyst has in order to write that query. And you need something else: the semantic-layer translation to the technical data, because tables and columns do not necessarily represent what the business is thinking about. So that intelligence encapsulates both the technical and the business-semantic information about the data. The third thing is the ability to then query the data and build the visualization, which is probably the easiest of all of these. When you combine these, and that's what we do in DVSUM, we are able to deliver an experience such that people in sales, operations, customer service, and procurement are able, just with natural language, to get insights without always having to depend on IT and analysts. Now, a lot of the questions I see in the chat or in the Q&A are: this is all good, but what about all the other things that determine whether you can have confidence that the numbers you're seeing are the right numbers, or that we are following certain principles? Very important point, and that is why having a strong data foundation is key.
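The three ingredients listed above (a natural-language interface, intelligence about the data, and the ability to query and visualize) suggest, for the middle one, a prompt that carries schema and business-semantic metadata to the model. Below is a minimal sketch of that idea; all table names, glossary terms, and prompt wording are invented for illustration, and notice that only metadata, never row-level data, goes into the prompt:

```python
# Hypothetical sketch: combine technical metadata (tables, columns, joins)
# with business-semantic mappings into the context given to a language model.
# Only metadata appears here -- no row-level data is sent to the model.

SCHEMA = {
    "orders": {"order_id": "INTEGER", "cust_id": "INTEGER", "net_amt": "REAL"},
    "customers": {"cust_id": "INTEGER", "segment": "TEXT"},
}
JOINS = ["orders.cust_id = customers.cust_id"]
GLOSSARY = {  # business term -> technical meaning
    "revenue": "SUM(orders.net_amt)",
    "customer segment": "customers.segment",
}

def build_prompt(question: str) -> str:
    """Assemble the context an analyst would otherwise hold in their head."""
    lines = ["Translate the business question into SQL.", "Tables:"]
    for table, cols in SCHEMA.items():
        col_list = ", ".join(f"{c} {t}" for c, t in cols.items())
        lines.append(f"  {table}({col_list})")
    lines.append("Joins: " + "; ".join(JOINS))
    for term, meaning in GLOSSARY.items():
        lines.append(f"Business term '{term}' means {meaning}.")
    lines.append(f"Question: {question}")
    return "\n".join(lines)

print(build_prompt("revenue by customer segment"))
```

A real platform would derive the schema and glossary automatically from its catalog rather than hard-coding them, but the shape of the context is the same.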
So in DVSUM, as I said, we build on that foundation of having a catalog and a business glossary with automatic linking, so you can define metric definitions, and everybody then uses the same definition when a question is asked. Having governance capabilities is just as important, because for privacy and security considerations you cannot allow chat with any data if PII information can be exposed. So making sure there is a catalog that provides strong, rich metadata, the business glossary, and those governance capabilities will increase your ability to expose data to the business directly. To make it work, really three steps are required. One, in DVSUM we connect to the sources, whether on-premise or cloud databases, and the system will automatically build that semantic and technical metadata layer. Two, you define some guardrails: maybe certain data can only be accessed by a particular business unit, and you can train it with your own glossary of terms so that it understands your specific terminology or metrics. And three, you allow the users to chat with that data. So let me quickly show you, not a full demo, but an interface of what it looks like. Again, DVSUM is a platform; it has the catalog and governance capabilities, so all that management and those strong data principles are there. But from the perspective of the final business user, we want to make it simple. It's important to understand it's not about giving them all the ability to create visualizations, et cetera, because that's the job of a business analyst communicating to the business user. The business user wants the final information they can use to drive decisions, so that interface has to be as simple as possible.
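The guardrail step described above, restricting which data a given audience can chat with and keeping PII out entirely, can be sketched as a simple policy check that runs before any query is generated. The roles, tables, and column names below are all invented for illustration:

```python
# Illustrative guardrail check (hypothetical policy data): block tables
# outside a role's scope, and block PII columns for everyone, before any
# SQL is generated for the user's question.

PII_COLUMNS = {("customers", "email"), ("customers", "phone")}
ROLE_TABLES = {
    "sales_rep": {"orders", "products"},
    "analyst": {"orders", "products", "customers"},
}

def can_query(role: str, table: str, column: str) -> bool:
    if table not in ROLE_TABLES.get(role, set()):
        return False  # table is outside this role's business area
    if (table, column) in PII_COLUMNS:
        return False  # PII is never exposed through the chat interface
    return True

print(can_query("analyst", "orders", "net_amt"))       # True
print(can_query("analyst", "customers", "email"))      # False: PII
print(can_query("sales_rep", "customers", "segment"))  # False: out of scope
```

In practice these policies would come from the platform's existing governance layer rather than a hard-coded dictionary, which is exactly why a catalog with governance capabilities matters for this use case.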
So we provide an interface that will be familiar to everybody who has used ChatGPT: a pure chat-type interface. Let's start with what kinds of questions I should be able to ask of the data. Because remember, when a user thinks about data, they think: can I use this? Where did you get it from? To build that trust, you need that in addition to the ability to ask questions. So these are some example questions you can ask of the data. Imagine I'm in retail and I'm doing sell-through analysis. Sometimes I just want sample data. And then you ask the actual questions: where did you get the data from, the source of the information? Who's the steward, so that maybe I can go talk to that person if I don't know something? You need all that information; it cannot be just the charts and the visualization. You need all those other features in order to build the trust that lets people use this. So let me open it up and see what the system does. Let's take one example here: what were my least performing SKUs for sell-through; don't apply any filters on time; filter out SKUs with no sales. It really is natural language; you don't have to phrase it as a query or follow a particular prompt format. And with all that power of generative AI combined with that contextual information about your data, what DVSUM does is convert that into the appropriate SQL: finding the right tables, knowing what the joins are, applying those filters, bringing back the results, and then presenting them in the form most conducive to that particular question. Sometimes it's a simple column chart; sometimes it's a time-series chart because you're asking about time.
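The end-to-end behavior just described (question in, SQL generated against the right table, results back, display form chosen) can be caricatured in a few lines. The question matching and chart heuristic below are deliberately naive stand-ins for the generative model, and the table and column names are invented:

```python
import sqlite3

def question_to_sql(question: str) -> str:
    """Toy stand-in for the LLM step: recognizes one question shape."""
    if "least performing" in question.lower() and "sku" in question.lower():
        # "filter out SKUs with no sales" becomes the HAVING clause
        return ("SELECT sku, SUM(units_sold) AS sold FROM sales "
                "GROUP BY sku HAVING SUM(units_sold) > 0 "
                "ORDER BY sold ASC LIMIT 3")
    raise ValueError("question not understood")

def pick_visual(question: str, n_columns: int) -> str:
    """Toy heuristic for choosing a presentation form from the question."""
    if any(w in question.lower() for w in ("trend", "over time", "monthly")):
        return "time-series chart"
    return "column chart" if n_columns == 2 else "table"

# Tiny in-memory dataset to run the generated query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sku TEXT, units_sold INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A1", 40), ("B2", 5), ("C3", 0), ("D4", 12)])
q = "least performing SKUs, filter out SKUs with no sales"
rows = conn.execute(question_to_sql(q)).fetchall()
print(rows, "->", pick_visual(q, n_columns=2))
```

The real value of the generative step is that it handles arbitrary phrasings and multi-table joins, which the pattern match above obviously cannot; the surrounding plumbing, though, looks much like this.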
And then yes, certainly, as Sunny talked about, there is the ability to interact with it further, which could be in the form of: I want to change the chart a little, or I want to know what my product categories are. You know what, I don't know what the product category values are. So they still have all that catalog information right at their fingertips, such that they can learn about the data. The point here is we want people to learn data. If you tell them they have to learn for the purpose of learning, it's not going to happen. If you give them value, saying, hey, I can give you the data you need, and as part of that you can also learn about the data as you go, that is the most effective way of building literacy, because now you're learning as part of doing your job. And that is the approach we are taking to bringing generative AI to data literacy. To summarize, I would say the value of bringing AI into data literacy with this approach comes down to three things. First, truly the topic we are talking about: if you empower the business users, you're automatically driving more data-driven decisions without needing complex data literacy programs. Second, there's also a benefit for the IT and analytics teams. Their job doesn't go away; there is no scenario in which chat can solve all analytics problems. But if you can offload 60% of your ad hoc analysis and simple BI work to the business, you're allowing your analysts and data scientists to focus on the more complex, predictive, and prescriptive analytics. And third, if it used to take a week or a few days for an analyst to give you the information and you're now able to get it in the moment, that directly has an impact, whether on operational efficiency or on true business value.
So that was, Wendy, my take on how AI helps with, or takes a different approach to, data literacy, and how DVSUM as a company is enabling it with our chat-with-your-data solution that we help enterprises with. Great, thank you. So there are a few questions, and I have a few questions of my own, and they aren't a surprise, because I was wondering about them when we prepared for this. Can each of you summarize, and I can start with you, Sunny: how do these tools change what skills we do want from people? I don't know that natural language completely eliminates the requirement, or the hope, that people get more data savvy. But if you could prioritize, what would you really like your users still to know when it comes to data literacy, Sunny? Yeah, so the first thing these tools should do is enable people to do their frontline business job better and easier, so that they're not doing data and analytics. But the next thing is people need to know how to make inferences from the data. In other words, not how do I build a machine learning model, but: I got this answer back, now what does that mean to my business? I think that is a huge skill where there's a gap in the data literacy world. OK, you've told me this. What do I do with it? And how do I know what to do with it? What's the probability that this will actually be successful? Is it high or low? Those are the types of things I would like to see those users understand. OK, so you're thinking it's more about the inferences you make from the result, rather than training people how to get the result. Yeah, I think if the tools become, like I said earlier, more invisible, embedded in your workflows, then you're not so concerned about doing analysis. You're concerned about: now I got this answer, what does that actually mean to me and my business, and what impact can I have?
OK, all right. What about you, Ashish? What do you still think people need to know? Yeah, so Wendy, my view is that knowing how data should drive a particular function, whether it's marketing or something else, is still a requirement. It's like: I can get the data, but how do I use it? How do I use those insights to actually take action, and then take the action itself? That's not going to change. However, one positive of this disruption is this: executive support for AI, for bringing change management into the organization, for being more data-driven, has always been a challenge. What generative AI has done is bring free mindshare to the executives. Yesterday evening I was speaking to a CDO, and he was saying that even people 30 or 40 years into their careers, close to retirement, are saying, oh, I used ChatGPT to do something. So, let's say, change management is already primed in the organization. So to answer your question, yes, the ability to leverage data to take action is still something that needs to happen. But the positive trend is that executive support is now coming at all levels of the organization, because this has the ability to make people productive and drive those positive outcomes. So you think things are moving in that direction anyway, so there is more momentum going that way, and having people realize they have access to it will be part of literacy itself? I believe so. Yes. Okay. So another question I have is: what are the boundaries or guardrails that are in place? I think we all have concerns about misuse or misinterpretation or having things go wrong.
Is that risk big enough that people should be concerned? Or how do you keep the people who are really embedded in the information, who are doing all the analytics now, from getting freaked out that somebody's going to push a button and come to the wrong conclusion? Ashish. Yeah. I think that's really the biggest question in terms of adoption at a production level that we are seeing as we work with customers, and I have two points. First, absolutely, and these are some of the things I highlighted in the demo, certain guardrails are very important. You need to have a very clear idea of what you're sending to the LLM, to Microsoft or to Google, the people who run the large language models like ChatGPT. Is your data really going to them or not? Our approach is no: data should never leave the customer network, and that's why we only use metadata to generate those SQL queries. The second is privacy, security, the governance capabilities, access controls. Because those policies already exist in organizations, you need to have a very clear idea of them and ensure the system has those capabilities. But then there's the piece where a lot of people say: what about trust if my data is not perfect? That is why IT and analysts don't give users access to data, because they might make wrong decisions. And that is true, I'm not denying it. But what we find is this: if it takes you two weeks to give a report to the business, then they want to make sure that whatever they got is exactly what they need before they drive a decision, because the effort is so large and the cycles are so long. Versus if you say, hey, Mr. Customer, Mr.
User, you are able to find and explore the data yourself, and as long as you have visibility into where you got it, maybe the numbers are not exactly right the first time you ask, but you keep fine-tuning and the answers continuously improve. The users are a lot more forgiving: oh, this data is not perfect, but if I got my answer in five minutes instead of two weeks, I'm willing to be more accommodating. So guardrails, yes; but the notion that our data has to be perfect before I enable and democratize it, no, I don't think that is a barrier the users are really concerned about. Okay. What about you, Sunny? Similar or a different opinion? Slightly different, but I will say I do like the idea of getting answers quickly to overcome some of the risk: the risk of missing the market versus the risk of answering a question in a shorter amount of time. What I would add, though, is: how do you add trust and validation? Validate all the way from that natural-language question into the internals of the system that you're using, and all the way out to the answer that you're giving. Having that through line is critical. We have to be able to trust these systems, and I think what we're going to see is a new trust model in an organization that says: how do I work with Wendy and Sunny and a data team and come up with an answer where we know AI gave us part of that answer, and what does that trust look like? That's kind of a new frontier for us. Got it. There is one question from the audience that I wanted to try to get to: how are you applying AI or machine learning methods to help with data cleaning and data quality? Ashish, I saw that as part of one of your steps, so if there's promise there, what does that look like? Yeah, yeah.
No, that's, you know, of course an application of generative AI. We are talking today about leveraging generative AI for the last mile, but what about the steps before that? One, for people who are familiar with data catalogs: can generative AI help me come up with good definitions for my tables, columns, entities, or glossary? That's one. But specifically on quality: in DVSUM we have a data quality module that lets you observe or find exceptions and then cleanse the data. Here is where generative AI comes in. Typically somebody would write business rules in natural language, and then somebody who knows the tools would translate those into technical data quality rules. Where we use generative AI is exactly that translation step; think of the result as an exception rule. You can define a business rule saying: I expect my data to match this pattern, it should not have values outside a certain range, or it should come from this enumerated list of values. Generative AI converts that into a data quality rule that can then be executed from the tool against the database to find those exceptions and alert the users. So what does that do from a value standpoint? You are engaging the business: hey, you can find exceptions in the data, you can help explore what might be quality issues, and then those become operational, driving an ongoing cycle of cleansing and improving quality. So translating a business rule into a technical data quality rule is one specific way we are using AI in data quality. And Wendy, I know we're probably right up against time. I will say that ThoughtSpot is not a data quality tool in and of itself, so we don't have those capabilities, but I do see vendors, whether data quality vendors or ETL-type vendors, already embedding generative-AI-driven data quality rules into their
tools. My greatest fear right here is that we let generative AI generate a bunch of code for us, and there's probably going to be a cottage industry of people coming to fix AI-generated code. So that's my greatest fear at the moment. Wendy, were you speaking? Oh, sorry about that, thank you. We are right at time, and I really appreciate Sunny and Ashish joining, and thank you to the audience for coming back. Thank you so much. Thank you, Wendy, and great audience and great questions, so thank you so much indeed. Thank you all so much. Wendy, thank you as always for hosting another great webinar, and Sunny and Ashish for joining us. Just a reminder, I will send a follow-up email by end of day Monday with links to the slides and the recording, along with everyone's information. Thanks, everyone. I hope you all have a great day, and thanks to the community, best community ever, you guys are just amazing. I hope you all have a good one. Ciao. Thanks. Bye-bye.
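As a footnote to the data-quality discussion above: the translation Ashish describes, from a plain-language expectation into an executable exception query, can be sketched like this. In DVSUM that translation is done by a generative model; the template function below is only a hand-written stand-in, and the table, column, and rule names are illustrative:

```python
import sqlite3

def range_rule_to_sql(table: str, column: str, lo: float, hi: float) -> str:
    """Business rule 'column should be between lo and hi' rendered as an
    exception query: it returns the rows that VIOLATE the expectation."""
    return (f"SELECT * FROM {table} WHERE {column} IS NULL "
            f"OR {column} < {lo} OR {column} > {hi}")

# Tiny in-memory dataset with two bad rows: a negative price and a NULL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (sku TEXT, unit_price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?)",
                 [("A1", 9.99), ("B2", -3.0), ("C3", None), ("D4", 19.5)])

sql = range_rule_to_sql("prices", "unit_price", 0, 1000)
exceptions = conn.execute(sql).fetchall()
print(exceptions)  # the violating rows: B2 (negative) and C3 (missing)
```

The operational loop then alerts the data steward with these exception rows, which is how the rule authored by a business user "becomes operational" on an ongoing basis.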