to talk about generative AI in the FinTech space, and instead of a conceptual discussion on GenAI, I thought I'd get to some case studies. Let's look at some specific uses of generative AI in the BFSI sector: banking, financial services and insurance. Some of the use cases might be interesting for FinTechs that are trying to create products, and then we can have a Q&A or a discussion around it. Does that make sense? Okay, as soon as I get my deck up and running, we can get started. How many of you are trying to use GenAI in your company? Can you raise your hands? That's quite a few. Okay. How many of you are based in Bangalore? How many of you are techies? Is there anything specific you guys want to talk about in generative AI today? How many of you are using RAG, retrieval-augmented generation? One. Anybody using fine-tuning? You're using that too. Interesting. What do you do? An authentication platform using AI. Okay. Guys, do I have my deck up and the clicker? Can we connect it? I'm running out of questions to ask, and time as well. And by the way, my session was supposed to start at 4:30, so I've been sitting here patiently. I wish they had set this up while I was waiting for the last one hour; that would have been useful instead of the chit-chat we're having right now. How many of you have tried using Claude, Claude 3? It actually just came out two, three days ago, I suppose. I have a couple of slides on Claude 3. Apparently the latest Mistral drop, which got leaked, is supposed to be on par with GPT-4. That's interesting too. Can't we just connect it right here? Do you have HDMI? I waited here for an hour, guys. You could have done all this up front. I have to go there, present from there. Okay. So, we all know what generative AI does.
It's a statistical model that models a probability distribution over a sequence of words, which means if you gave it something like, "I live in Karnataka, I can speak ___ very well," everybody knows the answer is Kannada, right? And so on. So the attention paper in 2017 was a milestone paper, after which, of course, the big milestone was November 2022, GPT-3.5, which showed up in the form of ChatGPT. And then many, many other models came up really quickly after that, within a year. Next slide, please. So the growth has been explosive. As you can see, between Google's Bard, Anthropic, OpenAI, Microsoft, Stability AI, and so on, within a fairly short time we have come up with a whole bunch of models that are quite intelligent, right? Next slide. Okay, but why is this different? Let's spend a couple of minutes. Human beings have always thought we are very special. No other animal can do language; language is incredibly complex. And if you've read Harari's book, Sapiens, he talks about the neocortex: our new brain is the one that is capable of language, of stories, of mythology, and so on, which other animals can't do. So we've always felt very special about it. We are the center of the universe, we are the smartest, we can do all these amazing things, and so on. Until now, when a machine is able to do that. So in some sense this was huge. That's why we were all taken in by the idea of ChatGPT: suddenly it's sort of understanding what we're saying. It's able to do things that only humans used to do. So it's a very big thing from a human evolutionary standpoint. It can do something that no other animal did, only humans did, and now a machine is able to do language. Next slide. The speed of adoption has been crazy.
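The "probability distribution over a sequence of words" idea can be sketched with a toy bigram model. This is a drastic simplification standing in for a transformer, and the tiny corpus here is made up for illustration; a real LLM learns these probabilities over trillions of tokens.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real LLM trains on trillions of tokens.
corpus = [
    "i live in karnataka i speak kannada well",
    "i live in karnataka i speak kannada fluently",
    "people in karnataka speak kannada",
]

# Count bigrams: estimate P(next_word | word) from raw counts.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w, nxt in zip(words, words[1:]):
        bigrams[w][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability."""
    counts = bigrams[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("speak"))  # ('kannada', 1.0)
```

The same fill-in-the-blank mechanic, scaled up with attention over long contexts, is what makes "I live in Karnataka, I can speak ___" come out as "Kannada".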
The Renaissance, back in Europe, took 200 to 300 years before the ideas diffused across Europe. The Industrial Revolution took another 200 years for all those ideas of steam engines and spinning jennies and whatnot to slowly spread around the world. The internet itself took something like 70 years if you go back to the National Science Foundation funding ideas of networks and so on. GenAI? A few months, or a few years if you go all the way back to the 2017 paper, and BERT and others before that; maybe six, seven years. Very fast adoption. ChatGPT, two months, I think. Next slide. I think in two months it had something like a hundred million users and 60 million daily site visits. So it's crazy what's happening with generative AI, and we can see why: it brought a new sort of capability that we'd never seen. Next slide. So look at the cognitive revolution that took place. On the X-axis you have repetitive tasks, where you keep doing the same thing over and over again all day, versus non-repetitive tasks. On the Y-axis you have manual work versus cognitive work. You work with your hands, you chop wood, you build houses: that's manual labor. Whereas cognitive work, you sit at a computer and do your work: record keeping, accounting. Those are a lot more cognitive. That's the Y-axis. So the lower left, repetitive manual tasks, got taken care of by the Industrial Revolution: steam engines, the spinning jenny, and so on. Instead of human beings having to drag things, or animals having to till the land, the Industrial Revolution solved that class of problems, repetitive manual tasks.
But then look at repetitive cognitive tasks. These need human intelligence, but they're repetitive: switches for telephony, or dispensing money the way an ATM does, where people used to dish out cash at cashier counters. They're repetitive, but they require a human brain. You can't put an animal on it, or a young kid, the way it used to happen prior to the Industrial Revolution. Those kinds of tasks the computer revolution largely took care of. Smart things, but repetitive, so they could be automated. Then look at the lower right: manual and non-repetitive. These are things like building cars. It's not the exact same thing every time; you're working on the engine, the carburetor, the seats, and so on. Non-repetitive, but a lot of it is manual. This is not just computers running programs but actually building a car, or robotic automation in Amazon warehouses, where things are moving around. Non-repetitive, but manual. This also got taken care of by robots and automation. Whereas the place where GenAI plays is non-repetitive, cognitive. This is where we thought only human beings could do it. Even now, doctors and lawyers get very upset: GenAI is going to do certain things? No, no, no, it's not really human, it doesn't have the human touch, and so on. Maybe it doesn't, but maybe it can do a whole bunch of things. So cognitive, non-repetitive tasks are what GenAI is looking at: what we thought only human beings could do, GenAI will now be able to do. Next slide. So if you look at what has happened with GenAI, the input is usually text. It could be voice, but voice also gets converted to text.
So you can think of the input as text. When you go to ChatGPT, you say, hey, I need to give a talk today at this entrepreneur conference, can you come up with some ideas about AI and FinTech? That's text input I'm giving to ChatGPT, and it gives me output in text, the blue box on the left. It writes out: you should talk about AI for fraud detection, you should talk about customer service, and so on. So it can give an answer; that's question answering. It can do translation: I ask it in French, it converts to English, or vice versa. It can do summarization: read this whole paper and tell me in five points, at a five-year-old's level of understanding, what this sophisticated paper says. Or grammar correction: you've written a big article and you want to make sure there are no typos, no grammatical mistakes. ChatGPT, Gemini, Bard, Claude, they all do these things. Text output. The second one is image output. It could be a Midjourney kind of image output; these days a lot of folks are generating beautiful images. And for video generation, many of you might have seen the Sora videos from OpenAI. Amazing. Mind-blowing. The underlying amazing thing is that Sora, the model, needs to have figured out physics in order to simulate a lot of what it's doing. It's figuring out a lot about how the universe works in order to make these videos look real. So there's an underlying capability that these models are learning as they generate things like videos. Of course, there's text-to-speech: ElevenLabs. Amazing. You want your video rendered flawlessly with a nice deep accent from the West, it does that. You can suddenly say, no, I want a female Indian accent for this content, and it does that. ElevenLabs does text-to-speech, and there are many other companies.
By the way, I'm just throwing brand names in here so that we are grounded in reality. I'm not trying to sell any of them, and I'm not related to any of these companies. For some of the use cases I am working with some of those companies, but it's more so that you can go and check them out and see what the use case really is, not so much to plug those companies or sell anything. You don't have to buy anything. But I didn't want this to be purely conceptual; then it's sort of boring. Everybody's talking about the hype of GenAI. Let's talk about use cases: what have real companies done, and are they useful to you? Okay, let's move on. Next slide. So there is a vast landscape of applications in different spaces that GenAI can serve. I'm sure a lot of you have used it for marketing: generate a piece of marketing text that you want to send out. Of course, it comes up with bombastic language; you tone it down at times and say, hey, let's be a little more direct, and so on. We've all done that prompt engineering. A lot of people are using it on the left side: text, marketing content, sales content. Hey, can you write a cool email for me about this product, because I want to sell it, and it's targeted at this kind of customer; I want the voice to be salesy, I want a call to action. Make all this happen, and it does. Support, we'll come to support. General writing. Your kid says, I have something to submit, can you help me with this? Well, actually they don't even come to you these days; they just generate it with ChatGPT themselves. And it's a quandary. A lot of teachers are wondering, is this a good thing or a bad thing? Are they learning better, or are they just submitting whatever the computer gave them? Nobody knows yet; the jury is divided on all this. This is where a lot of us are using ChatGPT today. I use it quite a bit.
A lot of times just for ideas. I don't like the way it constructs things; I write a little differently, but the ideas it gives are useful. Sometimes it can construct complex sentences; it would take me much longer to do that wordsmithing, and it does it. So in many ways it helps on the left side. Second, code generation. Anybody here using it for code generation? Wonderful. Three, four. Amazing. I run a project called 10BedICU: we create ICUs in government hospitals, and our team has one of the largest open source communities building software. We built an EMR with 400 volunteers. We're using GitHub Copilot: 3x improvement in coding speed, 5x improvement in testability, almost 8 to 10x improvement in documentation. Engineers hate to document. They hate to write English; they all want to write Java or Python or whatever. But GPT takes care of all that. You put in bullet points; that's what we're good at. Hard to write paragraphs, but we can write bullet points; that's what techies are good at. GPT writes the paragraphs in eloquent language. So code generation is going to be super important. Like the NVIDIA CEO said the other day: for years, from Barack Obama on down, everybody used to say that everybody has to learn to program. Whether you're a techie, a lawyer, or a doctor, you have to learn to program, because it's like math. That was the conventional wisdom 10 years ago. Now the NVIDIA CEO is saying you don't have to learn to code; you just say it in English, and the code comes. I think this is happening. By the way, Claude 3 is also very good at coding. So whether it's generating SQL, because the user is asking a complex query on transactional data and you're creating SQL on the fly, or understanding the context of your environment and generating the code you want.
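The SQL-on-the-fly pattern usually works like this: give the model your schema plus the user's question, get SQL back, and validate it before executing. Here's a minimal sketch; the schema, sample data, and the stubbed model response are all invented for illustration, and in a real system `fake_llm` would be a call to an actual LLM API.

```python
import sqlite3

# Invented transactional schema for illustration.
SCHEMA = """CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    customer TEXT,
    amount REAL,
    txn_date TEXT
);"""

def build_prompt(schema, question):
    # Grounding the model in the real schema keeps generated SQL
    # pointed at real table and column names.
    return (
        f"Given this SQLite schema:\n{schema}\n"
        f"Write one SQL query answering: {question}\n"
        "Return only SQL."
    )

def fake_llm(prompt):
    # Stand-in for a real LLM call; returns a plausible answer
    # to "total spend per customer".
    return "SELECT customer, SUM(amount) FROM transactions GROUP BY customer;"

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.executemany(
    "INSERT INTO transactions (customer, amount, txn_date) VALUES (?, ?, ?)",
    [("asha", 100.0, "2024-03-01"), ("asha", 50.0, "2024-03-02"),
     ("ravi", 75.0, "2024-03-01")],
)

sql = fake_llm(build_prompt(SCHEMA, "What is the total spend per customer?"))
# Guardrail: only allow read-only queries from the model.
assert sql.lstrip().upper().startswith("SELECT")
print(sorted(conn.execute(sql).fetchall()))  # [('asha', 150.0), ('ravi', 75.0)]
```

The guardrail matters in a bank: you never hand model-generated SQL to a production database without restricting it to reads and checking it against the schema first.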
I mean, it's quite amazing if you've tried even the inline completions: you start typing a comment, "I want you to write code for...", and it completes your comment itself. It's guessing what you need because it sees the context, and many times it's right. And you're going, how did it even know? How did it read my mind? So that's quite amazing. Image generation: you see all the Mahabharata characters and everything coming out of Midjourney these days. And voice synthesis, video editing, and so on. There are many, many other applications in general. But now let's get to FinTech, let's get to BFSI, because that's what I was supposed to talk about. Next slide, please. So, you know what, I just threw in this slide; we'll talk about it later if necessary. There's a very powerful paradigm called retrieval-augmented generation. The reason you need it: if you use GPT or Gemini or Claude or whatever, which has read all the content of Wikipedia and the internet and Reddit and so on, it might spit out things that are not relevant to you. It might be blatantly wrong. It might hallucinate; it might make up stuff. This has been one of the biggest issues with LLMs, that they make things up at times, but they say it so confidently that you believe they're right. People have gone and argued cases in court, citing past judgments that GPT just made up. You go and look them up: no such case. GPT made up all those judgments. We've all heard about that. How do you get rid of it? RAG is a very powerful technique for that. You give it your own data, you vectorize that data into a vector database, and then when you ask a question, you first retrieve just the relevant segments of the data you've given it.
And then the LLM is only used to generate a nice, elegant answer based on the pieces retrieved from your documents. You're telling the LLM: don't go out of syllabus. Don't say, I read Wikipedia, I read the internet. I'm giving you a book; only answer from that book. That's the methodology. If we need to, we'll come back to it. Next slide, please. Then there's fine-tuning, which is slightly different. You take the LLM itself, which is a bunch of weights. When they say Llama is 70 billion parameters, that means that many weights have been codified once the pre-trained model is done training. If you want to make that base LLM a little smarter, more specific on, say, financial terminology, banking terminology, you give it your own documents and fine-tune it. Here it's slightly different: it changes the weights themselves. The weights between the neurons, which constitute the intelligence of all that learning, are modified ever so slightly to accommodate the new data you've given it. That's fine-tuning. And this is also a method to reduce hallucination, although many more people have used RAG, retrieval-augmented generation. LlamaIndex is a good framework, and LangChain and others have good frameworks for RAG. Next slide. So let's go through a few use cases in the banking space. Next slide. We're going to talk about maybe three or four of these, depending on how much time we have. Financial decision making through analytics is one use case. Language understanding and translation; I'm going to try to stay in the banking and FinTech space. Customer service. Regulatory compliance, huge, especially in banks; central banks like the RBI impose a lot of regulatory requirements on how to operate. And of course, personalized banking. Next slide. So, ooh, no, I don't want this.
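That retrieve-then-generate flow can be sketched without any framework. In this toy version, word overlap stands in for vector similarity over embeddings, the three "documents" are invented, and the final generation step is just the assembled prompt; in practice you'd embed the chunks into a vector database and send the prompt to an LLM, for example via LlamaIndex or LangChain.

```python
# Your "book": private documents the model must answer from.
chunks = [
    "Savings accounts at the bank earn 4% annual interest.",
    "Home loans require a minimum credit score of 700.",
    "The bank's customer care number is listed on the website.",
]

def score(chunk, question):
    # Toy relevance score: word overlap stands in for cosine
    # similarity between embedding vectors.
    cw = {w.strip(".,?!") for w in chunk.lower().split()}
    qw = {w.strip(".,?!") for w in question.lower().split()}
    return len(cw & qw)

def retrieve(question, k=1):
    """Return the k most relevant chunks for the question."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_rag_prompt(question):
    # The "don't go out of syllabus" instruction: the LLM is told
    # to answer only from the retrieved context.
    context = "\n".join(retrieve(question))
    return (
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer ONLY from the context above."
    )

print(build_rag_prompt("What interest do savings accounts earn?"))
```

Because the answer is grounded in retrieved text from your own corpus, a made-up court judgment or policy simply has nothing to be retrieved from, which is why RAG cuts down hallucination.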
Next slide. Yes, I just saw this Claude 3 video that Anthropic put out three days ago. They took an image from the Wikipedia page on the GDP of the United States, which has a graph. They asked Claude 3: can you pick up and plot the GDP of the US based on that image? So this is multimodal; it's using the advanced vision capabilities of Claude 3. It picked up that graph and figured out, year by year, the GDP of the United States. It wrote a little program, and if you watch the video you can see the program it wrote, and it actually recreated the graph. Next slide. Then the presenter asked it, hey, can you predict the US GDP into the future? And it does Monte Carlo simulations of what the GDP of the US would be in the future, by writing a program. Next slide. That's code generation capability. Then he asked, can you show me the world GDP prediction for 2020 to 2030? Of course, we might disagree about where India sits and all that, but it actually created multiple agents, ran code for each country, and put this pie chart together. And you can see this happening. It's quite amazing. Next slide. So that's one example. The next example is language understanding and translation. I always wonder, can we do banking in this country by just talking? Tomorrow I'm going to be driving to Mysore. In my car I'm always on CarPlay, setting up calendar appointments and so on. Could HDFC Bank or someone wake me up on CarPlay and say, hey, you have to pay your mortgage, can you pay it now? And I just say, go ahead. Wonderful. Imagine you're a farmer in a little village, and a woman wants to buy a buffalo. Can she speak in Kannada and say, can you give me a loan for a buffalo?
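The Monte Carlo idea described above, running many GDP paths under random growth draws and looking at the spread, is simple to sketch. All the numbers here (starting GDP, growth rate, volatility) are invented for illustration, not real forecasts, and this is my own toy version rather than the code Claude 3 generated in the demo.

```python
import random
import statistics

def simulate_gdp(start_gdp, years, runs, mean_growth=0.02, vol=0.015, seed=42):
    """Simulate many GDP paths with normally distributed annual
    growth and return the list of final-year values."""
    rng = random.Random(seed)  # seeded for reproducibility
    finals = []
    for _ in range(runs):
        gdp = start_gdp
        for _ in range(years):
            gdp *= 1 + rng.gauss(mean_growth, vol)
        finals.append(gdp)
    return finals

# Illustrative run: start at $27 trillion, project 10 years, 10,000 paths.
finals = simulate_gdp(27.0, years=10, runs=10_000)
print(round(statistics.mean(finals), 1))   # roughly 27 * 1.02**10, about 32.9
print(round(statistics.stdev(finals), 1))  # spread across simulated paths
```

The mean of the paths tracks compound growth at the assumed rate, while the standard deviation shows the uncertainty band, which is exactly what the simulated forecast chart in the demo visualizes.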
I'm not even literate. Can I just talk to the bank? I know how to talk, I understand numbers, I know a little math, I'm running a little business. Why can't I do banking? Conversational. Can we break everything down to conversation, so that even literacy is not an issue and anybody can do banking? Right now only people like us know how to do banking; it's all English forms. Can we break that down? So for product discovery, customer support, customer acquisition, sales, we can do all of that. Next slide. Okay, I don't have the time; I was actually going to show you a live demo of how this works, a question-and-answer session, but we'll go to the next slide. Next slide. Customer service. There's this company called Klarna. I don't know how much of this is accurate, but about 10 days ago an article came out saying that the AI assistant provided by Klarna had handled 2.3 million conversations, two-thirds of its customer service chats, and did the work of 700 full-time agents. Apparently it's on par with human agents on customer satisfaction scores. You can read all that. What I'm trying to say is that customer service is a fantastic killer app for generative AI. No human is going to go through all your CRM records, understand the real needs of the customer, and then answer the question. Whereas GenAI can, and it gives eloquent answers in your own language. Very interesting use case. Next slide. Regulatory compliance. This company, Klarity, takes revenue recognition in the United States and automates it using GenAI. Go to the next slide. You have complex issues with respect to revenue recognition in the US; there are certain rules about how you recognize revenue.
It can check all your master service agreements, purchase orders, and invoices, and actually figure out how much revenue you can recognize. There are lots of gaps in the data; it can pick all this up, analyze it, and do this. And they're doing an amazing job at it. Next slide. So they work across purchase orders and MSAs and all of this; they've completely automated revenue recognition and a bunch of other financial tasks and processes that needed to be automated. Okay. Next slide. That's it. Thank you so much. Running out of time, but it was absolutely interesting. Thank you so much. We can take a question. Yes. Yeah. I think humans will do more interesting work, more useful stuff. Look at it today: if you're flying on a plane, say from Delhi, 90% of the flying was done by the computer. If the pilot actually did all the flying, I'd be very nervous to sit on that plane. There are just too many parameters; no human can handle that many parameters at any given time, and a lot more people would be dying. In the same way, when you look at healthcare, there are so many parameters. Can AI look at my MRI? Can AI look at the 5,000 new peer-reviewed articles that have come out and specify exactly what I need to do? All of this, I think, will improve the quality of care. So doctors, instead of doing grunt work, boring, repetitive tasks, will probably deliver care at a much higher level using these systems. So I don't see it as: because AI is able to do amazing things that only humans once did, we're all out of a job and AI is going to rule the world. We're going to do more interesting, more useful stuff, I think. Thank you very much.