Okay, going live. Hello everyone, I am Sanjay Gupta, and I welcome you all to Sanjay Gupta Tech School. Today we have one more session in the Salesforce AI Associate Certification Program, and today's topic is Generative AI. We have already done four sessions covering different topics, and to deliver this topic I have Nikita with me. She will be talking about Generative AI, and if you have any doubts, please raise your queries in the chat so that they can all be clarified. So welcome, Nikita, and over to you.

Thank you so much. I think we can start with the session. So the topic we have taken for today is Generative AI. The word "generative" quite literally explains what we are looking at today: how we enable a model to generate content, whether text, image, video, or anything else. That is a very summarized form of what we are going to cover today: Generative Artificial Intelligence, or making a model capable enough to generate whatever text, image, or video content a user wants. And as we know, with great technologies come a lot of responsibilities and a lot of drawbacks. Everything has its pros and cons, so we just have to optimize to get the most out of the pros while dealing with the cons. So let's dive into the session, where we are going to learn about Generative Artificial Intelligence.

What is Generative AI? Generative AI is the field of AI that focuses on creating new content based on existing data. AI was not a very common topic before a tool called ChatGPT appeared. As soon as it popped up, it caught a lot of attention and everybody followed it, and it was also followed by a lot of layoffs.
People thought that artificial intelligence is a stream that can take away a lot of jobs. However, we believe it is going to transition jobs rather than only take them away; there will be a significant transition from one area to another. Of course, we read articles about how a lot of content creators were laid off from different companies, and hence there was a lot of study around ChatGPT. It has now been around a couple of years since it launched, and it has helped a lot of companies produce good textual content. That is how the first big example of Generative AI worked, and why it caught so many eyes. Hence we can conclude that Generative AI is a field of AI that focuses on creating new content based on existing data. What existing data? The billions of text pages out there on the web; that is the kind of content ChatGPT was trained on. The first and foremost thing is to train a model well enough to get the desired outputs that you now see from ChatGPT. There are several approaches to developing Generative AI models, but one gaining significant traction is using pre-trained large language models (LLMs) to create novel content from text-based prompts. Text-based prompts are where we write out what we want, and in response the model delivers content according to the prompt we plugged in. Generative AI is already helping people create everything from resumes to business plans to lines of code and digital art, and what not. And you must be aware of one of the latest OpenAI generative tools, Sora, which creates videos from text.
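To make the idea of a text-based prompt concrete, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder: `build_prompt` is just string assembly, and the commented-out `llm.complete` call stands in for whatever real model client an application would use; it is not a real API.

```python
# Hypothetical sketch of text-based prompting. build_prompt and the
# commented-out llm.complete call are placeholders, not a real API.
def build_prompt(role: str, task: str, constraints: str) -> str:
    # A prompt is just structured text for the LLM to complete.
    return (f"You are {role}.\n"
            f"Task: {task}\n"
            f"Constraints: {constraints}")

prompt = build_prompt(
    "a resume-writing assistant",
    "Draft a two-line summary for a junior Salesforce developer.",
    "Plain text, under 40 words.",
)
# response = llm.complete(prompt)  # hypothetical client call
print(prompt)
```

The point is only that the user's "desired want" travels to the model as plain text, and the model's output comes back the same way.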
That is the latest tool. It is not publicly available yet, but beta testing is going on; the process of optimizing the pros and cons, as I already mentioned, is still on. Now, some AIs perform natural language processing (NLP) on huge amounts of data. What happens inside GPTs is that the language we put in is processed and analyzed, and the desired output is generated based on the extensive training the model has gone through. That training of course required a lot of computing resources and GPUs acquired from different companies, and that is how the training culminated in an entirely new kind of model. So: some AIs that perform NLP are trained on huge amounts of data, which in this case means samples of text written by real people. Those billions of text pages were written by many different people; there are blogs and images on practically any topic you need, all human-created and uploaded to the web, and that is what the training of ChatGPT drew from. The internet, with its billions of web pages, is a great source of sample data. Because these AI models are trained on such massive amounts of data, they are known as large language models, and LLMs capture in incredible detail the language rules humans take years to learn. These LLMs make it possible to do some incredibly advanced language-related tasks. So you should know that what you are using right now are LLMs, and they have been trained on data that was initially human-generated. That is a glimpse of generative AI. Let's move forward and look at the top five tools. Of course, every company varies in its requirements, and based on those requirements there are different AI tools.
If you search for top AI tools, you will find that every company has different requirements and has categorized its AI tools accordingly. However, my top five favourites would be these. The first is Sora, which is now going to be public, and its utility is creating videos from text. Then you have ChatGPT, which allows regular users to create simple AI-generated content without any cost. If I write a prompt saying, for example, "explain this topic to me" or "tell me about the highest waterfall," it is going to generate that sort of information for me from the pool of knowledge it has already gathered. Then you have Fireflies. When we used to go into meetings in college, they gave us a notepad to note everything down; now, with the Fireflies AI tool, you don't need a notepad. It is an AI meeting assistant that records, transcribes, and automates meeting note-taking, and that is its most important utility. Then we have Jasper, an AI-powered content creation tool that helps businesses create high-quality content quickly and easily. It uses natural language processing technology, which we are going to dive into in the upcoming sessions as it is also part of the AI Associate program, to analyze your brand voice and tone and generate content that matches it. That is how Jasper works. The last one is Murf AI, a powerful AI voice generator that helps you create realistic-sounding voice-overs for your projects; it uses machine learning to work its magic.
So those are five chosen AI tools. However, there are ample AI tools out there, and you can use as many as your requirements demand: you can create code, images, anything you wish. As I said, creating videos from text is very interesting, so I'd like to share one example from the Sora website. Let me take you there; please let me know if you can see this.

Yes, it is visible.

So: introducing Sora, a text-to-video model. Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user's prompt. You can see one of the videos here, and the prompt for it was: "a stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots." That is the prompt, and you can give any number of prompts, like "in a desert I find a lion with such-and-such features," give it some features, and it is going to create a video for you. So this is how Sora, by OpenAI, has come into play. The next prompt you can see is "historical footage of California during the gold rush," so you can see the efficiency of this model and how it can transition so many jobs from one sector to a totally different one. Then we have "the camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicking up." You can see the SUV going along the road: the model has read the text, processed it, and produced this particular video based on all the training it has done. And as we know, a video is a collection of images.
So a lot of images were fed in, and hence the videos get generated; that is how it works. All right, let's get back to the slides.

The limitation is one minute, right, for these videos?

Yes. Now, since we have seen how many tools there are for Generative AI, we also have to look at the concerns it has brought. With Generative AI, or any other disruptive technology, it's important to understand its limitations and causes for concern. Here are the main concerns we have chalked out. One of them is misinformation and fake content. There is a real possibility that you train your model on fake content, because how will you verify the content? That is the question. If you are able to verify the content and feed your AI model verified training data, then the model will learn it, execute it with the technologies it has been built with, and give you the output. So it's better to be very confident about the data you are feeding your model, and very sure about the authenticity of the content you use to train it. The second problem is hallucinations. This is when we expect one response but do not get the desired response. Generative AI is just another form of prediction, and sometimes predictions are wrong. Predictions from Generative AI that diverge from an expected response grounded in facts are known as hallucinations. "Diverge from the expected response" means the model is not giving us the desired response; it is giving a prediction that is not in alignment with the expected, factual response. That is when hallucinations happen. Then you have plagiarism.
As you know, that means copying text: using somebody else's content as your own. This is one of the prominent problems and concerns we have to deal with while training our model, because we know the model is going to replicate that sort of data and content. If you want to avoid plagiarism and other concerns like it, you have to be very careful about the training data you give the model. LLMs and AI models for image generation are typically trained on publicly available data, so there is a possibility that the model will learn a style and replicate that style. Businesses developing foundational models must take steps to add variation into the generated content, or else give enough credit to the content creator; that is another possibility. But plagiarism remains one of the major concerns we are facing today. Now, as I told you, generative AI is capable of generating a lot of images, images with a really human sense to them, and you have seen that a lot of deepfakes are in trend these days. I don't want to name any, but you know a lot of deepfakes are coming up. That brings us to user spoofing: illegitimately using an image to create a profile of, or a video related to, a person. Since we can create very realistic images, it is quite possible to create an AI-generated profile to have conversations with real users in real time, and a lot of other things can follow, because obviously emotional and mental harassment can happen. So to avoid such things, you have to be very careful about user spoofing. Let's just have a read: if it is easy to create realistic images, it is even easier to create a deepfake profile, which is an AI-generated fake profile.
Fake users like this can interact with real users, as I already mentioned, in a very realistic way, making it hard for businesses to identify bot networks that promote their own content. That again is a major concern. Then you have sustainability, and I'll tell you how sustainability plays a major role in AI. We are creating heavy models, and they require a lot of electricity along with other technical resources such as GPUs and a great deal of computing capacity. Every piece of electronic equipment has to be electrically powered; there is no alternative to that. Of course there are solar power plants, but the growth of renewable sources has to keep pace with the rise of artificial intelligence. When you are training AI models, you are using a lot of computing infrastructure, and since all of that has to be powered, that is where sustainability comes into play. If renewable sources keep pace with the rise of artificial intelligence, that is fine; if not, we have to work in that area as well. And the last concern is, of course, data security and privacy. If I don't want my data out in the open, or I want to surface it only to a limited number of people, that is another major concern. Either I make my company well protected from external interference by keeping cybersecurity experts with us, or the company needs other measures to detect privacy breaches and secure whatever data I do choose to surface on the internet. These are the main concerns that come with generative AI.
You can have a quick read of these and understand them, because there might be questions on them in the quiz for the AI Associate program you've opted for. Now that we are aware of the concerns, let's also look at the pros: the possibilities of language models. One very familiar language model is ChatGPT. Language models like GPT offer a wide spectrum of possibilities. First, natural language understanding. When you build a model and expect results out of it, it's not a web application: you are not technically programming a conversation, you are just having a normal conversation, like you would with a friend. The model has to understand whatever language you type and generate the output, which means it needs enough knowledge of that language. It should be familiar with synonyms and antonyms, and with the meanings of the words you are plugging in; only then can it answer your queries and your prompts. So natural language understanding is one of the major capabilities we get through these models and programs. Then we have content generation and summarization.

Sorry to interrupt, I have just one query. We are discussing language models, and in the previous two slides we discussed generative AI. Can you relate language models to generative AI?

The large language models are, you could say, the utilities of generative AI: the LLMs we have created use this generative AI technology. They are use cases of generative AI.

Okay, got it. So then we have content generation and summarization.
Say there is a topic we don't know about and we want to learn it, and we also want to create some presentations or notes for a project; that is where summarization helps. The LLMs give us a lot of content, and they can also help us summarize whatever data they have found. As I said, since they are familiar with synonyms and antonyms, they can use words interchangeably and condense many lines. For example, if there is a chunk of 150 lines that I have to present in 50 lines, LLMs are capable of summarizing it and giving me the conclusion of those 150 lines that I don't want to write out in full in my project. That is how summarization happens. Then you have conversational AI. I'm pretty sure that whenever you open any technical site, bots appear wanting to know your requirements. Every technical website now comes with this utility of interacting with users, irrespective of the time. Maybe my company is in the United States and you are somewhere in India wanting to know about my company's profile; you just type in your text, and my bot is capable of answering what we do and what we offer. These bots are again use cases where conversational AI is one of the major possibilities of generative AI. So: language models power chatbots and virtual assistants, enabling natural and engaging conversations with users.
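The request-route-reply shape of such a bot can be sketched with a toy keyword router. The intents and canned replies below are invented purely for illustration; a real assistant would use an intent classifier or an LLM call instead of keyword matching.

```python
# Toy keyword-routing bot; the intents and canned replies are
# invented for illustration, not taken from any real assistant.
INTENTS = {
    "insurance": "Here are our life insurance plans and premiums.",
    "savings": "You can open a savings account online in minutes.",
    "loan": "Our current loan offers and interest rates are listed here.",
}

def bot_reply(message: str) -> str:
    text = message.lower()
    for keyword, reply in INTENTS.items():
        if keyword in text:  # first matching intent wins
            return reply
    return "Sorry, I did not get that. An agent will contact you."

print(bot_reply("I want to know about life insurance"))
```

Swapping the keyword lookup for a language model upgrades the understanding, but the overall flow (user text in, routed response out) stays the same.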
Otherwise, I would have to hire human beings to converse with the different leads I may have a prospective relationship with. My lead may drop off if I give them neither a bot response nor a human response, and hiring a human being just for routine interactions is costly, so I would rather go for a bot or virtual assistant you can get help from. I'll give you one example: take a big bank like HDFC. You must have seen that whenever you reach out to them, there are bots and assistants that ask what you want and what your query is, and for whichever of their services you want to opt for, life insurance, savings bank account, current account, whatever, they give you the information right there and then. These bots are of great importance, and if we improve their capacity and their conversational skill, that means we are making quite good growth in the usage of generative AI. Then you have language understanding in context. Advanced language models like GPT excel at understanding context and generating coherent responses that are contextually relevant; we have discussed this, and I have already made this statement clear. Then comes research and innovation. Yes, in research and innovation too, there are a lot of areas where a ten-minute GPT session can get you the desired information in a minute. You will have a lot of content in front of you, and you can then condense it, put it into the GPTs, and get a brief on whatever your research topic is. So research and innovation again hold a lot of possibilities when you talk about generative AI. And then you come to education. Many of you must be in college right now.
You must be preparing lab records, notebooks, tutorial books, whatever you have. When the last day of submission came, most of us, at least me, used to gather this information from a lot of different sources. ChatGPT has made these tasks much simpler and easier, because we can access so much information in one place: I can type in what I require, get the readings for such-and-such experiment, the execution part of this code, the algorithm for that. It becomes much easier to maintain lab records ahead of time. Of course, this wasn't around when I was in college, which was a huge drawback, but now that it is here I believe you are making good use of ChatGPT in the education sector as well. Language models can support education by providing personalized learning experiences, generating educational content, and assisting students with language-related tasks. Last, you have guided image generation. ChatGPT itself has not been capable of generating images, but there are a lot of AI tools that can, and an LLM can be used in tandem with an image generation model so that you can describe the image you want and the AI will attempt to make it for you. There are a lot of applications where you can plug in your image requirement: "I want an image of Romeo and Juliet," and the model is capable of producing the best possible image. That is how image generation from text happens. Those were the possibilities of language models. Next we come to the part where we learn how generative AI works.

Before we start, can you go to Trailhead so you can show the viewers these topics you are explaining? They are actually part of the AI Associate program.
So guys, there is a module available on generative AI basics, so lots of theoretical information is already available on Trailhead covering what Nikita is explaining to you. If you want to read more, you can go there, and if you prefer listening to someone explain things, you can watch these videos. I just wanted to flag that these topics are actually related to the AI Associate certification: for clearing that certification, you need to know the generative AI basics Nikita is explaining.

There is a quiz we would like to take after we are done with this slide, so let me just finish this up and then we'll take up the quiz on Trailhead. So, how does generative AI work? The first and foremost thing is the creation of the model; after the model has been created, we need to train it with the required data. So we first need to collect the data. The process is called data collection and pre-processing, and we need data that is not plagiarized and doesn't contain fake content; that is what we have to be careful about. Before you train the model, you have to verify the content you are training it with. Then you have model architecture selection. It is very, very important to identify which model architecture is best for the kind of work you want it to do, because there are a lot of models available and huge scope yet to create more; we have a lot of opportunities in this segment, which is why data science and artificial intelligence are such hot topics today. Then you have training the model and generating new content. Once trained, the generative AI model can generate new content by taking either random noise (for GANs) or a specific input (for VAEs or autoregressive models) and transforming it into output that resembles the training data.
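The GAN path just mentioned, turning random noise into output that resembles the training data, can be shown with a heavily simplified, self-contained toy. Here a two-parameter generator learns to map 1-D noise onto a "real" distribution centred at 4, playing against a tiny logistic discriminator, with the gradients worked out by hand. This is an illustrative sketch of the adversarial idea only, not a practical GAN.

```python
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def real_sample():
    # "Real" data: a 1-D normal distribution centred at 4.
    return random.gauss(4.0, 0.5)

random.seed(0)
a, b = 1.0, 0.0   # generator params: fake sample x = a*z + b
w, c = 0.1, 0.0   # discriminator params: D(x) = sigmoid(w*x + c)
lr = 0.01

for _ in range(3000):
    # Discriminator step: push D(real) up and D(fake) down.
    z = random.gauss(0.0, 1.0)
    xr, xf = real_sample(), a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * (-(1 - dr) * xr + df * xf)  # grad of -[log dr + log(1-df)]
    c -= lr * (-(1 - dr) + df)
    # Generator step: push D(fake) up, i.e. fool the discriminator.
    z = random.gauss(0.0, 1.0)
    df = sigmoid(w * (a * z + b) + c)
    a -= lr * (-(1 - df) * w * z)         # grad of -log df, chain rule
    b -= lr * (-(1 - df) * w)

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))  # should have drifted toward the real mean of 4
```

The two losses pull against each other: the discriminator sharpens its real-versus-fake boundary, and the generator shifts its output distribution across that boundary, which is exactly the adversarial training loop the slide describes.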
The generated output can be in various forms such as images, text, music, or even whole desired scenarios. GANs, generative adversarial networks, are a very, very important topic, and that is what we are going to cover in the upcoming lecture, where you will again face a quiz based on it. Then you have evaluation and refinement. As I said, verification is very important: the generated output is evaluated against predefined criteria such as realism, coherence, or relevance. Depending on the task and application, the model may undergo further training or refinement to improve its performance and the quality of the generated content. Refinement means that once the model has produced its content, it is in a beta-testing stage; there, the human job is to spot whatever shortcomings exist in the generated output, and then we reiterate the process in order to get the desired output. And then you have deployment and application. Once you deploy, you need to watch for the issues that come with deployment. Once a generative model has been trained and validated, it can be deployed for various applications depending on the specific task it was designed for. Every task comes with a different utility, and you have to build a model based on that utility; that is the specific thing you can take as the outcome of this session. This could include creative tasks such as art generation and content creation, or more practical applications such as data augmentation, drug discovery, or personalized recommendation. There are a lot of utilities and applications you can dive into here; we have to be concise and crisp, so we include just two or three, but there is a vast pool of possibilities, as we detailed. So let's get on to the quiz on Trailhead.
So do we have anything after this quiz, or is this the last activity for today's session?

There are two quizzes which are the summarized form of this, so we will end the session after the quizzes.

Perfect. So here we go. "What is it called when AI interprets everyday language?" How is AI capable of interpreting everyday language? It is because of the natural language processing it has been equipped with, and hence we can mark that answer, because as I mentioned while explaining the slides, we need natural language processing: every area has its own languages, the model has to process those languages and get the shortcomings rectified, and only then do deployment and application happen. So natural language processing is how AI interprets everyday language. Then: "If you ask a generative AI what its favorite color is and it responds blue, this is an example of what?" Now, models do not have the capacity to hold opinions; they don't come with any opinions of their own. Then you have randomness. Randomness is a guess, and we wouldn't build a model limited to guessing and randomness. Hence we have prediction as the desired answer, and it is also covered here under what generative AI is capable of. Now that you have an idea of what generative AI is capable of, it's important to make something very clear: the text that a generative AI generates is really just another form of prediction. It's very important to know that what is happening is prediction; but instead of predicting the value of a home, it predicts a sequence of words that are likely to have meaning and relevance to the reader. So it is prediction that is happening.
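The "just another form of prediction" point can be shown with the smallest possible language model: a bigram table that, given the current word, predicts the most frequent next word seen in a tiny toy corpus. Real LLMs do the same thing at vastly larger scale with learned probabilities instead of raw counts.

```python
from collections import Counter, defaultdict

corpus = ("generative ai creates new content . "
          "generative ai predicts the next word . "
          "the next word is just a prediction .").split()

# Count, for each word, which words followed it in the corpus.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(word):
    # "Generation" here is nothing more than picking the most
    # likely continuation according to the counts.
    return follows[word].most_common(1)[0][0]

print(predict_next("generative"))  # "ai" always follows "generative" here
print(predict_next("next"))        # "word" always follows "next" here
```

So when a model answers "blue," it is emitting the continuation its training data made most likely, not expressing an opinion.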
So if you ask a generative AI what its favorite color is and it responds blue, this is an example of prediction. Let's see. Yes, both of them are correct. You can also read the reasoning here: natural language processing is the ability of a computer, and in this example more specifically an AI, to understand human languages as they are spoken and written. Then you have the true-or-false: "The text that generative AI generates is really just another form of prediction." As we read, it was detailed that generative AI predicts the sequence of words that are likely to have meaning and relevance to the reader, so that is true. That finishes this unit, and we move on to the next one. We have understood the ecosystem of generative AI, though you can have a quick read of it and then attempt it yourself; let's do it here today. "New AI model architectures and the availability of extensive training data are two factors in the rapid improvement of generative AI." They have mentioned two factors, model architecture and the availability of extensive training data, which are very important. But what is the third, also very important, factor? Let's go through the options. Is it increased parallel computing power? AI optimizing AI code? Larger data storage capacity of servers? Or faster satellite data connections? Larger data storage capacity we already had, so it was not the thing that made the difference; of course we need significant storage, but beyond that there is another major factor, which is the power of computing. If your technical equipment has enough computing power, it can make the process faster, more efficient, and more optimized.
Hence we will choose increased parallel computing power as the factor we can vouch for. Here we go; we'll mark it. Then you have a true-or-false: "Developers must create their own LLMs in order to add natural language processing to their applications." Now, if every developer had to create their own LLM, the problem would not really be solved; you would have created a much bigger problem than the one you set out to solve. We don't think there is a need for so many large language models that developers must each create their own; there are enough LLMs that developers can use off the shelf. So I think we can go with false. Let's check how this goes. Correct. So with these explanations, both quizzes are solved.

Do you want to cover the solutions as well?

Yes. As I mentioned, increased parallel computing power is the other most important factor that generative AI has to be fueled with. It takes a lot of processing power to do the math behind AI model training; we discussed this specific topic before too, that you need a certain mathematical sense for the execution of AI models. The transformer architecture relies on many separate, concurrent calculations, and it benefits from parallel computing power because one processor can do the first calculation while a different processor does the second at the same time. So we need processing units that are fast enough, capable enough, and optimized enough to get the desired output. And the last one, "developers must create their own LLMs": it's not a requirement. There are multiple available LLMs, as I already mentioned, that developers can use to add natural language processing to their applications. And that is how it's done.
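The parallel-computing point can be illustrated in miniature: many independent calculations (here, stand-in dot products rather than real attention scores) can be handed to a pool of workers because none of them depends on another's result.

```python
from concurrent.futures import ThreadPoolExecutor

def score(pair):
    # Stand-in for one of the many independent calculations in a
    # transformer layer: a plain dot product of two small vectors.
    q, k = pair
    return sum(a * b for a, b in zip(q, k))

pairs = [([1, 2, 3], [4, 5, 6]),
         ([0, 1, 0], [7, 8, 9]),
         ([2, 2, 2], [1, 1, 1])]

# Each pair is independent, so the pool can evaluate them concurrently.
with ThreadPoolExecutor(max_workers=3) as pool:
    scores = list(pool.map(score, pairs))

print(scores)
```

In CPython, threads won't actually speed up pure-Python arithmetic because of the GIL; real training frameworks push these same independent calculations onto thousands of GPU cores instead. The point is only that the work is parallelizable, which is why parallel computing power matters so much.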
I think that's it for the day. You can read more from here, and if there is any problem, please mention it in the comment section; we are happy to help. Thank you.

So I think this is part one of generative AI, and it is not yet completed; in the next session you will be covering a few more aspects related to generative AI?

Yes, right. This was just one part of it; next we have generative versus predictive AI.

Okay. So if you have listened to everything Nikita explained, if you liked the content we delivered here and understood it, then I think you will be able to relate to part two as well. Generative AI and predictive AI are both very important, and not only if you are focusing on the Associate program: if you are in the tech industry, you also need to know how they actually work. If you are learning for the AI Associate certification, these are important; if not, they are important for your general knowledge too. So whether you joined this session live or are watching the recording, do watch the other parts on generative AI so that you can understand it end to end. I think we can wrap the session here. Thank you, Nikita, for sharing your knowledge and sparing some time.

You're welcome.

We will have one more session on Thursday, the day after tomorrow, so join that session as well to gain more knowledge about generative AI. With this note, thank you so much for watching this session, and see you in the next one. Thank you. Bye, everyone.