Seeing so many of you makes me nervous. Thanks for coming down for the day in Bangalore; with that, we'll start. We have a quick 20 minutes, so let's make the best of it. The idea of today's talk is to get a reflective sense of the conversations we are trying to have. Since all of us are here, we have a shared love for UX, we are excited about AI, and there's a certain uncertainty about what's really happening. A lot of that is because we have been thinking about a world of tomorrow which, it seems, is already here and catching up to us very fast. The reason I am here is that I want to talk about human-AI collaboration. I think we're finally at the point where we can figure out how humans and AI are going to collaborate, and that seems like the gist of what we should be talking about.

So first: why do I care about all of this? Today's talk, again, is more of a conversation, an attempt to figure out the right way to approach this, not in the sense of tips and tricks, do's and don'ts, or principles to follow. It's more about what you do in the muddled confusion of starting with something totally new. For me it's reflective because I started on this journey a couple of months back, and it was disorienting: trying to make head or tail of it, figuring out how to work with what I know, when to toss it out of the window, and still create something impactful. So bear with me. What will help is that we are going to go very fast through far too many slides, so don't try to take pictures; it'll help if you absorb it and treat it like a conversation. Thank you.

So, first of all, why do we care? There is a promise here; let's look at that. It talks about increased efficiency, productivity, improved decision making, and natural-language-based interactions.
A kickass promise, right? Very interesting, something to look forward to. I'm sold on it too; we want this. But there's a sea of unknowns. With all of these unknowns, new terminologies, and new ways of talking about processes while trying to critique our current process, it's very hard to understand what is going to happen. What will I learn? What will I unlearn? How do I look at my current process, and what do I do about it?

You'll also see that this whole thing has become very consumerist in nature. The expectations users have are changing and evolving; they're experimenting with what to do and what not to do. Tech has changed a lot, and designers are scared of tech. Let's not kid ourselves, we are. At the end of the day, how do we make head or tail of this new thing that is happening? Designers like me, specifically, are confused. I want to understand what to do at this point. You join a fast-paced startup, you're trying to create something, but you have no idea what you're doing at the end of the day.

So let's take a moment and embrace the fuzziness. I think this is the first exercise in centering yourself, where you realize you are in the business of design. It's all about fuzziness. That is where creativity comes through. This is where the fun of design is. That's why I got into design; I don't know about you.

So, moving ahead, let's go a little methodically and try to understand what has really changed. If you track social media, it seems like a lot has. Can we start pinpointing some of it? First and foremost, let's talk about the nature of code. The nature of code has always been fluid; we have been trying to achieve these things for ages. It's just that now, with LLMs and AI becoming mainstream, that fluidity has surfaced into the realm of reality.
While we were used to thinking deterministically, expecting a system to always behave in a specific, preset manner, things have now become probabilistic. A simple math question you put to a chatbot, say two plus three, is answered probabilistically. So what is really happening here? We are used to a model where I ask a question and the system answers; I click on something and something happens. A very ask-and-be-served model. What we are transitioning into is a question, answer, suggestion, question-again model, where the system can probabilistically suggest something to you, or even ask you a question back. How do you make sense of this? On top of that, this cascades: an interaction can potentially turn into an infinite loop, where things just keep moving ahead. How do you deal with this complexity?

Now, though we are designers, we're also users of this. And personally, as a user, I don't want to deal with this complexity (we'll talk about me as a designer later). I want someone else to take care of it; I want this complexity to be offloaded, because I just want to bear the fruits of what this technology, this new advent, promises me. That is a problem for us, because that complexity now has to be taken care of by us.

So let's take another specific case. We are used to talking about point-to-point interactions, CTAs and so on. Because of this shift, they are turning into more compounded interactions. Now you can write a simple command and offload a huge set of tasks to something, an AI agent or bot. Remember how much time it takes to schedule a meeting with someone when you have to do the to-and-fro? In this new case, your point of interaction with the system is only twice, and at the end of the day, all that complexity is taken over.
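To make the contrast concrete, here is a minimal sketch (mine, not from the talk; all names are illustrative) of the two interaction models: a deterministic function that always serves the same answer, versus a probabilistic responder that may answer, suggest, or ask a question back.

```python
import random

# Deterministic model: ask and be served. Same input, same output, every time.
def add(a: int, b: int) -> int:
    return a + b

# Probabilistic model (illustrative): the system may answer directly,
# answer with a suggestion, or ask a clarifying question back,
# so the loop can keep going instead of terminating at one response.
def probabilistic_reply(prompt: str, rng: random.Random) -> str:
    candidates = [
        "5",                                    # a direct answer
        "5. Want me to show the working?",      # an answer plus a suggestion
        "Do you mean integer addition?",        # a question back to the user
    ]
    return rng.choice(candidates)

rng = random.Random(0)
assert add(2, 3) == 5 and add(2, 3) == 5        # always identical
replies = {probabilistic_reply("2 + 3", rng) for _ in range(50)}
print(len(replies))                              # more than one distinct reply
```

The design consequence the talk points at: with the first function you design one screen for one outcome; with the second you have to design for a branching conversation whose next turn you cannot fully predict.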
It seems like, and I'll say "seems like" because I'm not sure yet, it is all about outcomes now. We have been thinking about actions and interactions, but outcomes seem to be more important because the actions are being clubbed together.

Let's take another case. As a designer, my whole and sole job is to figure out what my user wants, so I have a perceived intent of what my user needs. But since the platforms and their capabilities are becoming open-ended, a user's intent is also fuzzy. It's even more applicable today: when I use something like ChatGPT or any other tool, I'm also trying to figure out what it can really do for me. If it was ever true that our users don't know what they want, it's far more true today. Their intents are fuzzy. How do you deal with that?

That's where the sea of data comes into the picture. You have data across preferences, context, and possibilities. What seems to be limiting now is the capability I have as a designer to create these permutations and combinations, to figure out what is really possible. An example: think about going from point A to point B. How many times have we opened Google Maps and been utterly dissatisfied with the result? Though you want to go from A to B, it's not purely utilitarian; you don't just want to reach. While you optimize the intent for speed, you might miss out on joy, you might miss out on safety. These elaborate contexts are something we have to bring back into the picture, and this is where it's finally possible. So the second possible thing I want to talk about is how you model intent into design. We have been doing that; it has just become broader.

Let's take another example. The world has always run on statistical averages.
Your medicine, the TV shows you watch, what your grocery store stocks, and so on and on. Now the system can actually do hyper-personalization: every person can be their own anomaly, and it can be beautiful. How do you deal with that? The reason this matters is that our limitations as designers, and of the systems we work with, have played a predominant role, economically too. We have not been able to design for the infinite use cases we can think of, or make things accessible for everyone, however wholeheartedly we've wanted and tried to. I think it's finally possible to really make these systems more accessible, more personalized. One example is that you can now have agents to do everything for you: health assistants, educational assistants, and so on. That takes us to our third part: we can probably finally stop playing Big Brother and let users take charge of their own destinies.

But let's stop here and absorb this, because the next one is going to be fun. And remember, this is reflective; I'm not trying to give you principles. This is where I was when I realized: have I run myself into a corner now? Is this even the right approach? It seems very interesting, because it takes me to something I call the fragility of choice. There are a lot of fragile choices we have to make, and they are termed fragile because they have huge unintended consequences which we don't understand, while we tend to draw very simple outcomes from them.

So, coming back: is this even the right approach? Let's go to the first fragile choice. A lot of these fragile choices come with very wise-sounding statements. This is one I came up with: design for fluid agency, not agents. What the hell does that even mean? In this case, I do a smart thing: I plot it across two axes, user control and AI capability.
And I try to see what is really happening. I realize there are four types of things possible. In lieu of time, I'll not go through all of them, but let's focus on the new one. The collaborative is the new paradigm which has finally emerged, where high-capability AI and high-agency users can interact and do something together. Seems fun. What to do about it? I have no idea. It seems I have to make a choice here, and it would look like I'm telling you to design for one of these quadrants. But if you go deeper, and at least I realized this in my own journey, every AI-enabled product I'm going to make is going to be a mix of all four of these at different points in time. So it's not about designing for one; it's about figuring out which fits which scenario and how you design for that. So first and foremost, I have to make the right choice. You have to make the right choice. That's the first fragile choice.

This makes sense now, right? But it really doesn't. I realized that was also a pitfall, because who watches the guards at the end of the day? The problem here is that we think of AI as a wise wizard, while it's really a four-to-five-year-old kid which needs parental supervision the whole time. It needs to be told what is right and what is wrong; it needs to surface and talk about what it's thinking so that I can correct it. So you cannot really take the human out of the loop in this process; the guardians cannot be left out. And you cannot design only for human agency; you have to design for agents too. It's our job to make sure they can communicate, so that they don't just try to interpret us but work with us, communicate back, and accept that sort of feedback. That's our second fragile choice: you have to help agents make choices.

Here is another statement.
As soon as I started working on AI and reading sexy Medium blog posts, this is the first thing I read, again and again and again. And I thought, yeah, what's wrong with this? Then I realized there is something: you are not designing for trust, you're designing for credibility. And there's a reason; we'll talk about it. A system has to have a certain sort of expertise to deserve that trust, because that credibility is going to be tested at every interaction, at every point in time that you work with it. How do you do that? A lot of the confusion, for me too, was differentiating, lexically, what trust means versus what credibility means. And this is what I realized. The reason is that AI can be wrong, a lot. We talked about expertise. So assume, for some reason, that you trust me now, and I have no expertise. That trust would lead to a point where I can give you a lot of fundas which are not going to make sense. This is exactly what AI is currently doing, and we have to figure out how to work around it, because high trust and low expertise is a bad recipe.

These are very simple examples, but think about cases where you're creating a mental health assistant or a safe-driving assistant. Here, lives are at stake at the end of the day. You cannot take that trust for granted because of your branding exercises, or because of the cute persona you have chosen for your AI agent, and let it lead to outcomes which are not sensible. So you have to help users make choices. You have to learn to say no. You have to help them understand what's the right decision they can take.

So, a lot of interesting fragile choices. Let's take a pause; let me check how much time we have. Yeah, cool. So, with so many things, what do I do with all of this? We still don't have a tight framework. I still don't understand what to do with all of this.
I have a sea of thoughts and possibilities to run with. Let's take certain examples. At the end of the day, wasn't our holy grail trying to understand what, again, the double diamond did for us? And no criticism there; I think it's a good idea to have a conversation and understand where the current process is falling short. There's a very interesting research paper on this which you can search for. Let's take only the initial Discover-Define phase; we're not going to go into screens at all. There are these issues being highlighted. As you look at the Discover and Define phases, didn't we have our holy grail answers in personas, journey maps, and the things we used to create?

So let's take one of those: personas. This is a typical persona. It talks about needs, wants, and whatnot. What's the problem here? Before we go there, let's categorize it for a second. This is a snapshot of a real person at a real point in time. It's a simplistic slice of their life, which is going to dictate how your product looks at them for the next few years, which is an eternity in tech.

So let's take a step back and go back to creating sexy diagrams. Oh, I didn't give you the context, I'm very sorry: this is for a music learning platform. We were trying to figure out better interactions for a music learning platform. Let's figure out a basis for this. Say there are two aspects which are very important, awareness and drive. Let's just map them. We get four personas now. We started with one, now we have four, but these are still snapshots; we're still at the same problem. Let's focus on the top-left persona, Prima. If you look at where these people are right now, depending on what you do, they can transition into different quadrants.
And understanding this is fun, because there are going to be events which lead them there. The reason we talk about this is that, as a designer, I again have to have an intent: I intend to take that person from this point to that point, against these forces. Now our personas just bloomed. We can see what is happening over time, how people are evolving into different archetypes, and what the cause-and-effect relationships are.

Another example: this was for a fintech platform, a crypto-fintech platform at that point. Similarly, you will see a lot of forces mapped across archetypes: when someone transitions from one archetype to another, what is really happening, and so on. Another example I'll quickly rush through. Let's call these dynamic personas. The reason to define them was that we could define better intents, and since we had better intents, we could also look at better outcomes, and so on.

All fine and dandy, but is that all? In keeping with the tone of this talk, we have to cross-question it again. Why were we limited to two dimensions in the first place? This is something I did a year or so back; we didn't have LLMs and whatnot, and that was all the capacity I had to process. Is that even applicable today? You would want to create a three-dimensional or four-dimensional map. But before you do that, let's talk about something else. It's a very compelling thought to increase those dimensions and look at the complexity that emerges, but I don't think we've done justice to the two dimensions we have. As a designer, I've talked about an ideal goal. That might not be the user's intended goal. We just talked about the control and freedom users now have. That might not even be the real thing that happens.
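The "dynamic persona" idea above can be sketched as a tiny state model: archetypes are quadrant states on the talk's two axes (awareness and drive), and design events are the forces that move a user between them. This is my sketch, not the talk's artifact; apart from "Prima", the archetype labels and event names are hypothetical.

```python
# Dynamic personas as states plus transitions (illustrative sketch).
# Axes follow the talk's music-learning example: awareness x drive.
def archetype(awareness: bool, drive: bool) -> str:
    return {
        (True, True): "Prima",      # the quadrant named in the talk
        (True, False): "Curious",   # hypothetical labels for the
        (False, True): "Driven",    # remaining three quadrants
        (False, False): "Dormant",
    }[(awareness, drive)]

# Hypothetical design events, each shifting one dimension of the user's state.
TRANSITIONS = {
    "guided_onboarding": ("awareness", True),
    "streak_broken": ("drive", False),
    "first_recital": ("drive", True),
}

def apply_event(state: dict, event: str) -> dict:
    dim, value = TRANSITIONS[event]
    return {**state, dim: value}   # events mutate one axis, not the person

user = {"awareness": False, "drive": True}      # starts as "Driven"
user = apply_event(user, "guided_onboarding")   # a force acting on awareness
print(archetype(**user))                        # prints "Prima"
```

The point of modelling it this way is exactly what the talk describes: the persona stops being a snapshot, and the designer's intent becomes an explicit path, which events move a user toward "Prima" and which pull them away.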
So, as much as you're trying to optimize for that single specific thing, what really happens is very different. How do I even model these interactions anymore, and not get stuck in that zone of ideality I've created, with the calls I've taken for my user? Different journeys, different points, similar end outcome; the intents can be totally different. Someone wants to learn at a slower pace. Someone wants to save up on their own schedule without living like a miser. How do you figure those things out now? Then we have to think about all of these things; a lot of new terms, again. How do you talk about AI intent? How do you talk about intent validation, and so on and on?

This is where I've run to as of today: I found there is something called intent personas, which SEO already uses, and we have to figure out a way to bring them back into the mainstream and see how our design process evolves from that point. This is where, actually, I want to leave a cliffhanger, because this is where I am.

With that, there is one more question; I'll just rush through this. Has my role as a designer changed? It's not only me asking; a lot of us are wondering about that. I'll talk about my own reflection: I don't think so, not fundamentally, because I've always thought my role as a designer is to mediate between multiple entities, be it business, tech, and user, or something else. That job of mediation, of trying to bring about that balance, has not really changed. This is what I've always done, and what I always do. I'll still be crafting experiences and journeys. It's just become very complex, and that complexity has opened up a lot of things. But I also now have access to the same technology that I'm making for my users.
And so, in that case, we come back to human-AI collaboration. We are going to design for it, and it's important that we design with it. And that's all. Thank you. We have time for a question, probably. One question, maybe. Ask away. Cool, we can take one question if you guys want. Awesome. Thank you, guys.