Hello everyone. Thank you so much for joining us for our talk today: Great Expectations of Gen AI Versus the Real World. I'm Anna Byers and I'm going to start us off. So how many Marvel fans do we have here in the audience? Yeah, I thought so. You may remember the famous quote from Thanos: "Dread it, run from it, destiny arrives all the same." And that's true for generative AI as well: it's coming whether we're ready for it or not. So today we're going to delve into the contrast between the expectations that we may have for generative AI and the reality of where it is today. And just as a note, we may use "generative AI," "Gen AI," and "AI" interchangeably in this talk; all the recommendations we have really apply to both. So let's start by talking about some of the expectations that people have when it comes to Gen AI. We may see it as the ultimate magic wand that can magically check things off our to-do list. It can be our personal assistant, getting us through our day and taking care of tasks for us. I personally like to think about the possibility of generative AI being able to respond to all my emails for me so that I don't have to. And we can think of it as something that can make us heroes at our job, allowing us to shine in ways that we haven't been able to in the past. We also think of Gen AI as the answer to all our questions, right? We ask it everything from building an itinerary for travel to answering complex medical questions. And so we rely on Gen AI as this potential know-it-all. We also have some negative thoughts about generative AI. We worry that it could be our worst enemy, that instead of making us more productive, it actually adds work to our day. And we wonder: what if it's lacking in transparency, operating in the shadows and keeping us in the dark about how it's making decisions? And finally, the biggest fear: what if it steals our jobs?
What if it makes us irrelevant and displaces us? So, positive or negative, there are a lot of expectations we hold that are often based on minimal or no interaction with actual AI. And as we think about the reality, it's actually a little less perfect than those grand expectations. Kind of like our Hulk cakes here: the expectation of this beautiful thing that can do anything versus a somewhat more simplified reality. So what is the reality of generative AI? Well, first and foremost, it's important to remember that AI is not infallible. It is a really powerful tool that can do a lot, but just like any tool, it has its limitations. In particular, generative AI carries unique risks because it can create new content in a way that mimics human creativity. And that's dangerous when we run into things like hallucinations, where the AI can make up something that is completely untrue but present it confidently. Some of you may have heard of the now famous legal case where the lawyers relied on ChatGPT to craft their case, and it turned out that ChatGPT had completely made up previous legal cases that didn't even exist. So there's a lot of danger there. AI also depends on our maturity as consumers in order to be successful. We can't expect it to just miraculously change things overnight. While it's learning and growing, it's still in its early stages, and we have to learn to adapt and change how we interact with it to make sure we are leveraging it as effectively as possible. And finally, AI has a lot of potential, and we are slowly working our way towards achieving that potential, but it really is still early. That development is ongoing, and we're continuing to push the boundaries of what it can achieve.
So I'm going to turn it over now to Shipra to talk about what can happen if we don't bridge the gap between these grand expectations for AI and the current reality. Here you go.

Thanks, Anna. Am I audible? Is it working? Yes, I think so. Thank you. So what happens when we don't bridge the gap between the great expectations and the reality of AI? AI is a double-edged sword in the realm of user experience: it is capable of elevating our experiences, but also of bringing them down if its limits are ignored or it is not used properly. Let's try to understand this with an example from the world of healthcare, where AI is thought to be a game changer as a radiology assistant. It is made for radiology practitioners, to get them through diagnoses faster and more accurately. But what happens when it's not used as intended? Let's look at the unintended outcomes it can create. First up is misuse. Some practitioners might over-rely on this technology and neglect their own responsibilities, which leads to misdiagnosis and patient harm. In other cases, it might be the other way around: the true potential of the AI is undertapped because of complex interfaces, which leads to disuse, where it is not used to its full potential. And there is another case which is a little scarier. What if patients also have access to these technologies and can self-diagnose? That's where abuse comes into the picture: they might think AI has made them the ultimate radiologist and needlessly burden the healthcare system asking for support and help. This is how it can sometimes lead to unintended outcomes that we never thought would come into the picture. We've all experienced the last one, where we go to Google with a normal cough and cold and always end up with cancer.
You can imagine the intensity of it with AI. So what does one do in this situation? That's where we, UX professionals, come into the picture. We want to ensure that people don't feel afraid of AI, don't see it as a work threat or a potential threat that steals their job. We want to make sure that they see AI more as a friend that gets them through the mundane tasks quickly, so they can be superheroes while managing other things. We as UX professionals recognize that there is a human at the center of each experience, and with the human at the center, we not only recognize this truth but also try to amplify it in our research and design processes, highlighting the human journey and perspective. AI is clearly not here to steal your job at all. So that brings us to the concept of human-centered AI. Human-centered AI is a lot of things, but we will talk about what it means in a nutshell. It goes beyond the boundaries of ethical and responsible AI: it is here to create experiences for humans, taking their capabilities, their experiences, and their thought processes into account. As a research team at ServiceNow, we have broken it down into six common elements that our team sees come up repeatedly in our research and that are critical to our users who interact with AI on a daily basis. First up is explainability: providing enough onboarding and training to our users while they are onboarding onto a new AI system. It can be done through FAQs or "learn more" sections. Next up is transparency, where we want to ensure that people know where the AI is generating its solutions from, where it is creating data from.
It can be as simple as a sentence saying that this was generated by OpenAI or an LLM. Then we have control. We always want our users to feel in control, not the AI. Taking the email-response example that Anna mentioned: if the reply is drafted by AI, you might want to use it when you're replying to your boss, but not when you're just having a chat with your colleague. That's the difference control makes: you get to choose when AI is there and when it is not. The next one is human oversight: combining AI's power with human expertise through a process of review and approval. It's like having an email generated by AI, but wanting it reviewed before it goes to the next level. It's as simple as that. Then we have the power of feedback. You need to tell the AI what it is doing wrong, where it is going wrong, or whether it's accurate and appropriate in your situation or not. That's feedback. And finally, setting up some guardrails: boundaries so that the AI operates within constrained limits. For example, with emails, you can make sure that no foul language is used; we can make sure at our level that the AI is tuned to do that. So those are the six elements at a glance, and there is one thread that connects them all. Let's see what it is: trust. Trust is truly the center, the one pivot that connects all six elements, and the foundation that creates great experiences for our users. Our research has consistently shown that people build trust in AI technologies gradually, over time. So we need to make sure we create experiences that enhance that trust. And as our CEO rightly put it, you would all agree with this point, right?
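To make the six elements a bit more concrete, here is a minimal sketch (not from the talk, and every name in it, such as draft_email and BANNED_WORDS, is hypothetical) of how an AI email-drafting feature might wire in transparency, control, human oversight, feedback, and guardrails:

```python
# Illustrative sketch of four of the six elements in an AI email-draft flow.
# The "LLM" here is a stub; a real system would call an actual model.

BANNED_WORDS = {"darn", "heck"}  # guardrails: stand-in list of disallowed words


def generate_draft(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"Thanks for your note about {prompt}. I'll follow up soon."


def apply_guardrails(text: str) -> str:
    """Guardrails: mask disallowed words before anything is shown."""
    for word in BANNED_WORDS:
        text = text.replace(word, "*" * len(word))
    return text


def draft_email(prompt: str, use_ai: bool = True) -> dict:
    """Control: the user chooses whether AI participates at all."""
    if not use_ai:
        return {"body": "", "ai_generated": False, "approved": False}
    body = apply_guardrails(generate_draft(prompt))
    # Transparency: label the output so the reader knows its provenance.
    return {
        "body": body + "\n\n[Draft generated by AI]",
        "ai_generated": True,
        "approved": False,  # Oversight: nothing sends until a human approves.
    }


def approve(draft: dict, feedback: str = "") -> dict:
    """Human oversight plus feedback: a person reviews, then rates the draft."""
    draft["approved"] = True
    draft["feedback"] = feedback  # e.g. "accurate" or "wrong tone"
    return draft


draft = draft_email("the Q3 report")
final = approve(draft, feedback="accurate")
```

The point of the sketch is the ordering: guardrails run before the user ever sees the text, the provenance label is attached to every AI draft, and approval is a separate human step that also captures feedback.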
Whether it's with AI or without AI, this still stands, and it is more than relevant in the case of generative AI. There are different ways of earning trust, but this is something that is integral. And that brings us to the point where, as UX professionals, it is on us to balance AI in our experiences. It is our responsibility to find ways to play this critical role between the great expectations and the reality of AI. So there are a few things you can do when AI comes to your product. It's not about stifling innovation, but guiding it wisely and responsibly. First, set realistic expectations with your users: make sure they know the AI can be uncertain at times. Have onboarding experiences with guided help so that they find it easy to navigate the system. Then, ensure that the human has control at all times; at those times, the AI should take a backseat. And finally, build trust and confidence in users. This is something really crucial, and it can mean sometimes slowing down a bit to avoid over-reliance; adding a little friction can sometimes be really helpful here. So ultimately, it's all about guiding innovation responsibly. What are you going to do when AI arrives at your doorstep? Thank you.

I think we have a couple of minutes if anybody has questions. We have one over here.

Hi. Obviously, we keep talking about finding the right balance between AI and the experience that we, as humans, are providing. But what do you think is the right way of finding that balance? Because there's a lot of data available out there, and most of it is driven by AI.
Do you think bias could be one of the things that comes into the picture when we start relying on our instincts instead of trusting the AI?

Yeah, I think that's a great question. And I think it really comes back to this idea of human-centered AI, right? How do we position the human at the center of that AI experience and not allow the AI to override that? Even if there are these opportunities, we want the human that's interacting with it to be able to be in control, to know what's going on: those six principles. And if we lean on those, that should help us not over-rely. We can even add things into those experiences, like friction, to say: hey, check this work, this was generated by AI, you should make sure it meets your expectations. So really bringing that human back into the entire experience.

Yeah, that's what I'm asking. When we want to go back to a human for checking the data that we are getting, will bias come into the picture? Because most of our information or knowledge also comes from the same sources, right?

Yes. So I see what you're saying. You're thinking about the bias of the human that's interacting, or those of us who are designing?

Mostly evaluating. Sorry, say that again? Mostly the people who are evaluating the data that we are getting from AI.

Yeah, well, I think it's about allowing them to have that control, and it depends on how that data is being leveraged, right? It could be data from usage; at ServiceNow we think about doing a lot of summarization and things like predictive text. And so the data might be coming separately from the person that's interacting with and leveraging the data. So it's really about making sure your models are trained in a way that makes the most sense.
But then also still allowing the people using that interaction to take control back and use it in the way that makes the most sense for them.

Yeah, makes sense. Thank you, I appreciate it.

Got one in the back there.

Hello. Here, here. Oh, over here. Sorry. So, we are talking about human-centered AI, right? If we put a lot of restrictions on the whole technology, are we going to leverage its full potential? Because I have a doubt about that.

Yeah, can you speak up? We are talking about human-centered AI, and we are putting a lot of restrictions on it, a lot of checks and balances. So if we put a lot of restrictions, are we going to leverage the full potential of AI?

So it's more about guiding it through this journey, right? We know that it has a lot of potential, but we need to be careful; as we say, we should take small steps. I can imagine there is a lot possible with AI, but we need to take those small steps to get there. Putting those restrictions and boundaries in place is just like cybersecurity: we put up guardrails so that we don't abuse the system and the system doesn't abuse us back. It's the same with AI. If you take that analogy, you realize how important and crucial setting up these boundaries is as we move ahead in this journey, especially when we are just starting off. So I hope that answers your question.

Thank you so much for your talk. We just have a small memento for you. Just wait up on stage. Okay, one second. One last question then.

Hello. Hey, Shipra, a brilliant presentation. Thank you for sharing your thoughts. The key essence of your talk was that trust is the central focal point and everything is around that, right?
So how do you make sure that there are some checks and balances in whatever you create, so that trust is not compromised? I would like to hear your thoughts on that.

So you're talking in terms of design, right? How do we check with design, or in general about the entire system?

How is it designed so that the user gets a trustworthy experience? How do you do that?

Okay, so I'll answer it from our perspective. As researchers, we do a lot of research into how people react when they see hallucinations in their system. Does it pull them back when it comes to trusting AI, or is it still okay with them to have one or three of these hallucinations happening in their work? This is one checkpoint for us that gives us feedback on what they actually think, what goes on in their minds, and it depends on different personas. The example that I shared was critical because it is related to healthcare; there you cannot afford even a single mistake. But the personas are different: for an agent it would be a different experience, and the checks and balances would be different for them. So it totally depends, again, on the personas and what is critical for them. For them it might be critical to have the dates, or maybe the case number, in place; if the AI hallucinates those, they would totally lose trust in it. So those are some of the checkpoints that we have from the research perspective, and there are some from the design perspective too. Thank you.