 What we're going to discuss over the course of the next 20-25 minutes is really about how AI is going to make a difference to our work But at the same time how we need to particularly buy, you know, of course There's going to be a lot of governance and government putting in a lot of policies in place for AI But at an individual level we also need to see in our businesses or our organizations How we're going to deploy AI which would be worthy and not put our Organizations and risk are people and risk and our work and risk because It's not just about You know automation, but it is also about automation with care. So I think given that agenda We are going to sort of have this discussion in the next 20 minutes starting with you, Siddesh You know, we are already seeing generating AI capturing pretty much everybody's interest and you know I think his first question was that you know, how many of you are actually replacing your people with AI now That is a reality of our times to come But having said that, you know, if we want to replace even if we want to replace people with AI doing our jobs How can we make sure the quality is just the same? You know, how can You know your your work. I mean, I'm an editor. So I can tell you my content I can only replace a human with an AI if my content is no different from what my Editorial people were writing to what the AI is writing. So how how do we ensure that and I'm particularly, you know, I have a very responsible job I have to get the right information to people So to all my readers to all my audience, how how can I be sure that if I'm getting it written through an AI? What it's it's right and it's absolutely accurate Audience and family Thank you very much Thank you for recognizing our day Thank you So so When we talk about The first piece is Is what Jenny is bringing in It's not about replacing humans, but it is about augmenting human beings Yes, there is a good amount of recycling that can come in as we as we go through this journey But it is about how to automate human intelligence Take it to the next level In terms of Productivity and experience and the various aspects to it Now when you talk about Jenny I, the piece that Jenny I brings in is Your ability to do a lot of things that you could do earlier some some aspects of it But the ability to do it with a minimal amount of data in a very very short time Of course it brings in its its amount of risk as well A foundation model is trained on data which is data available in the open domain It is it is billions of parameters So typically large language model is about 700 billion parameters Now when it comes into my organization my enterprise I don't really know what element of that training is coming in and what is it what is it responding to And how do I ensure that what it responds to is in line with my my business commitments And which is where the whole governance piece comes in with when it becomes extremely important How do I put the guardrails on on Ensuring that this is one is it is it is trained on data which is relevant to my enterprise I do not want things which are the intent as a When I talk about AI for business I'm not about talking about writing beautiful points I'm talking about how can I really apply this to my business and get results out of it So how do I adapt it to my enterprise how do I train it to reflect the reality of the industry I belong to And the enterprise I belong to and how do I get it And put the guardrails in terms of explainability in terms of bias in terms of drift All of those elements that go in one of the biggest challenges that Jenny has seen and we going through the hype cycle everybody in the last few quarters has been a massive hype And I think as we settle down we will go past the hype cycle into adoption from pilots to the real production But as we as we go through that we are realizing that the Explanability part or more than the explorative part the who is accountable part is a piece that people are struggling to answer So is the business owner accountable or is the model accountable who is accountable And which is where you got to make it in a way that the LOB now can take responsibility This is the model I have employed I can be accountable Which we all understand Why this design first approach and you know it becomes AI becomes more mainstream as we go forward How are we going to sort of control things you know ultimately it's a human thing right What we teach our machines is also something that is going to be feeded by humans So in that design first approach as he was saying and just to take that conversation forward You know I was telling him earlier that if somebody is going first through AI And saying that okay look use my model and create an AI fit for me So now the next few companies that are going to come in that sector Let's say insurance or education or let's say any other They are also going to sort of learn from there Their models are going to be also taken a prerogative from the first company that started Now when you talk about a design first how do we also keep security of our information You know all organizations are built on certain practices that make them great And make them good at what they are Now if those practices are sort of on a auto level only going to be feeded into a machine In every organization how do we differentiate organizations I think that's a good question so just to make it easy for everyone to understand that I'll come to the first question which you asked that Is it the AI going to replace any human being who is writing a content for me or maybe How am I going to stop that for the AI So when I was in college my teacher said a very beautiful thing He said that your success relies on what type of questions you ask And I think today I think that he was supposed to say that you should ask the right question for teacher But today when it comes to generative AI it's all about the quality of the question which you ask If there is a very depth in your question the quality of the outcome of that question will be very very useful That's point one Second When it's all about logic rather than the syntax now When we were in colleges and now we have 200 people globally who are working in the company We're a cybers quantification company in SAAS where we use to develop cybers platform So I was having a person who is from Dunbar It's a very ISM Dunbar His father is an auto rickshaw driver He's having a very good salary right now in my office He was having 23 suppleys in his entire BTEC But the guy was very good in C++ one language only The goal of that individual was to become very proficient in one language And understand the logic So when you want to do 2 plus 2 you know the logic right You have to do addition of two numbers But in future now it's all about the logic If you know the logic syntax is not a problem anymore So for a company which is in security like us Our challenge was first is the governance What government will be bringing on the table And second I think as a company we When we adopted LLM model we call it safe GPT Our platform may be safe So we have a product known as safe GPT where we have a very airtight cloud account Where customers entire platforms of security They cloud all the cybersecurity ecosystem in the app And our mobile application which is powered by LLM system is attached to it We have 14 red teamers Red teamers means the people who are expert in terms of breaking the cybersecurity of any product So this is kind of the security of an LLM system They will keep on feeding the wrong information And try to break the logic of LLM So when they will be able to reply something Whether they are deploying in a mannerful manner Or whether they are deploying in a very easy manner In a compliant manner And second one is that we are also ensuring that The ecosystem we are developing in terms of the overall LLM model Which is along with the traditional open AI and other company We have started working on building our own LLM model also So like you mentioned there are so many players who are doing something with their own models So I think at the end of the day In some time you will see that every organization with respect to their industry They will be having their own LLM model launching for the world Like if you see Ola has launched their own LLM model on a grid-driven So with respect to their ecosystem So in a similar manner for example we in cybersecurity Cybersecurity industry will be having their own LLM model Which is specially designed to cater their information Hospitality will be having different and so on and so forth And on top of that every organization will be having certain Compliances to follow Right now it's around the world for everyone But in future you will see it will be very industry specific And maybe geographical specific also as you move forward I think that should probably be a saving grace for a lot of companies who are looking at Deploying AI into their systems But again we will come back to the second round of discussion that we will get into So now given the fact that AI is something that is mainstream And everybody is going to deploy AI at one level or the other Now what do you think the role of a CTO or a CISO is going to evolve to be You know how are they going to make sure that not just do they have to deploy AI But they also have to make sure that they are their organization and their data is safe enough You know so that it doesn't sort of as I was saying earlier also That it doesn't sort of go to the market Because that is sometimes what the organization's core is And you don't want to send that into the market So therefore what do CTOs, CISOs, CISOs need to work on So there will always be a push and pull in the organization I was having a chat with CTO today They are in a business of providing communication technology for smart meters And his one worry was security of data But his contrary worry also was that majority of his talent is very young talent Very keen to use open AI And he is worried that his code base is being exposed then for both testing As well as optimization And my straight out of the box thought to him was that basically your code is now open The open AI has understood whatever that you are doing It now becomes a baseline for everyone else to work on So your own IP and whatever differentiation that you have is potentially gone So the question for you is that as a CTO you have the benefit of a large language model Or any technology that is coming along with it That can potentially reduce your time to market But on the security side of the house you tend to lose a lot So that's one challenge that is coming in And my own thesis with this is I've been now in this business for 10 years In my own company And every 3-4 years something comes up It used to be cloud and then big data and then blockchain And there will be a hike that comes There is a set of the curve of despair as they call it And the settlement that happens somewhere in terms of how a version of a technology becomes applicable to you So let's say we look at us LLM doesn't make any sense We are a very restricted data set that we work off ML probably is a much better way to look at it versus LLM Doesn't work for us So every organization therefore has to look at it just because everyone is saying LLM And charge activities and generalize and whatever else may be there The applicability of that for you as an organization As a CTO or a CO is absolutely a section And a version of that or a subset of that may be relevant But not everything will be relevant So don't get pulled by the hype And there are downside risks from a leaking of information From a leaking of IP perspective that you have to be watching about Absolutely I was going to actually ask you about the downsides He just touched upon it But today you know for you and Fujitsu when you think of AI What are the downsides you are considering And I mean for largely for organizations What are the top downsides they have to sort of look at more closely before They look to deploy AI and they should be ready for it Yeah I think the biggest downside is what you are focusing on To be responsible enough to deploy the AI applications So just to give you a background I am leading one of the largest project of Fujitsu That is Monaka Super Computing Project from India And Monaka is a 2 nanometer chip Which is going to be used to make next-gen data center for Japan So now this is a project where everything comes in Right from HPC to responsible AI and things like that I will focus my discussions on the part that we are here for That is responsible AI So when we talk about developing a technology For something that is going to host a variety of AI applications Not just like LLM We have computer vision, any kind of AI application that we know of I think it's very important that organizations take Ithical AI very seriously And Ithical AI is all about using the right regulations The right set of people who can look into the diversity aspect Of the data set which you are using to train the model Now today because it's a women's day It is a matter of concern that even though we are AI is pervasive with every walks of life But we still have just 22% women worldwide Who are part of this ecosystem And it's not just my opinion and it's not like I am vouching to get more women in AI But yes, even if you see World Economic Forum reports It is said that women bring a different perspective To the table, right, multi-faceted perspective And I really feel that we need to get people Of diverse backgrounds, not just women in AI And that's important to have a data set Which is more homogeneous Because you see if you study about the case studies Of how AI has gone wrong Like I won't name any company But the solution where the credit rating of women Is lesser as compared to that of men Or of a particular race of people They are convicted more easily as compared To other sets of people The recent example of Gemini Of how it went against our own prime minister Everybody might be knowing that story Of how it was projected So it is a big responsibility And I think it is not difficult To manage this responsibility So it's just that as an organization We should have an independent AI governance sector And you should have a right set of people Who can also look into the homogeneity And the diversity that you bring in the data set Which you are using to train the model Most of the time I have seen Like I have worked a lot with the startup ecosystem Even before I had joined for this project And I have seen many times what happens In order to deliver applications We start off with a model Which is already trained, available online And then you try to tweak into A smaller set of data sets Because fetching the real data set Is not an easy job So in order to come out with a quick solution We tend to compromise on This diversity inclusion Which was otherwise brought into the data set And I feel that we will be able to Monitor this more closely If we have an independent AI governance sector Or department or units in companies Whether it's small or big Because when you deliver the solution You are always wanting to cater To a larger set of people No company even starts off a startup And says we just want to be focused On two clients at a time It's never like that We always want to go and gather As much as possible So why to compromise on the base Which is going to lead to the outcome Of applications which are generated Right, so that's the point I think that's a very good take on How you look at AI We like to learn more about What you are supposed to be computing That you are building over there What exactly it's function going to be But Sinesh, you know You touched upon the fact that You know, we've had some erratic Information coming from generative AI And some of it which was not Publicly liked either Now, given that situation And AI reserves will change And then we've also had a governance policy Of the government recently Where every startup in AI So not on a very basic or a foundational level One presumes that AI companies will grow very fast So there is going to be governance In terms of what they're developing Now, what kind of impact will that have In terms of building an AI model Which is probably right, yes And do you think, I mean You know, whether it is to say With credit rating of women Or a certain caste being put in a certain way Or, you know, for that matter Putting our prime minister on the spotlight Will the changed answers Of generative AI need to Lead to some fundamentally different Decisions being taken out So when I talk about Before I answer this question I want to step back on your design first piece And I'll come to this question subsequently So that we can Now, when I talk about A design first approach And monkey alluded to it Typically any model Any organization, any enterprise It's not about A particular use case As for a particular model And that may change Depending on what the need of that Are is and what that particular enterprise Department needs So it could be a simple Python based Program or it could be a Jupyter notebook It could be an ML programs Monkey alluded to Or it could be an LLM as well And it will be a combination of all of these As it goes in So that becomes your first proof Which is it is multi-model It is multi-model So that is a fact That we all need to Establish for our enterprise The next aspect is You typically put this Where the data lies, where your Workflow lies, where your application lies So your models are going to be Across on premise It's going to be on multiple So it is going to be a hybrid scenario Now given that scenario How do I put in this whole governance Whole security piece Across a hybrid The third element which To your question on the governance piece Brianka spoke about Now on the governance piece There are three, four fundamental elements And this cuts across The elements of various models So you will need a governance On your ML models You will need a governance on your LLM models also So you need an ML ops You need an LLM ops as well That cuts across the whole thing Now as you put this What you are trying to do is You are putting a framework That is able to Monitor this Not on a case to basis Case to case basis What were banks trying to do? RBI had mandated that There should be no gender bias There should be no caste bias Banks were putting in a committee So there were 5 GMs who signed Since the committee is approved It is assumed that There is no bias But technology today gives us An ability to measure this bias So how do I put a framework That measures caste bias Measures gender bias But on the bias it brings in How do I bring in technology elements Platforms that measure that How do I bring in elements Of explainability If a loan has been rejected Lady Why has that loan been rejected If I do a swap of her gender Will that loan be still be rejected So how do I bring in explainability And these are coming in as Statutory requirements These regulators making it mandatory Any AI model that puts in So be it RBI Be it SEBI Be it IDRBT IRDA All of them have put in a statutory Framework around how do I What is the explainability of What's coming in The third element which Priyanka Against work of You start with a data set With an intent of doing something But your data set keeps evolving Over time What you would have started with Was probably with a certain intent How do I prevent and measure That is also very Very important The fourth truth These are core principles On which we need to Go and approach any AI model So when we talk about So this is I would say once in a lifetime For all of us As individuals for our enterprises All of us Now how do I scale How do I Embrace this at scale Response Is the piece that comes in Now there is one more element That comes in here One is one part of the governance Is the AI governance The second part of the piece Is the data matters The data quality The data governance piece Aspects of it So how do I have Data that is So if I want to Scale to a level Where I want to Lower the center of gravity of the AI To the lowest possible employee In the organization How do I do it I can't do it with the IT department Deciding that I'll mask this and mask that But I can do it at a policy level At a framework level On deciding what can be shared What needs to be marked What is the PI information That cannot be shared All of those things can come in as a policy And when you implement that as a policy And in the context of the DPDP That has just got passed by the parliament This is becoming extremely critical How do I bring in Consent management into the whole piece You have a consent management You have a withdrawal of consent management When somebody withdraws that consent How do you ensure that Not just your core application But everything that you've shared downstream Is getting withdrawn So putting an AI model Which does a hyper-personalized Marketing campaign Tomorrow with DPDP coming in And somebody has withdrawn his consent And the model needs to be capable Of detecting that And adhering to those So that's a slightly longish answer To your question But again you know I'll go back to my original question That you know I mean for example in a textbook If you go and read a certain answer It's the same answer you get every time And therefore you know as humans We know that this is the answer to things Now if AI is going to The generative AI Is going to change its answer So quickly you know depending On new data information that it receives Where is going to be Authenticity of the question Finding the right answer So when we talk to Jenny and He tries to emulate a human brain So it is about putting things Into a context So the response to a particular question In this context is this In a certain different context The response is something else Which is what brings it to a problem So all of this Why brings in a phenomenal technology on the table It also brings in challenges And which is what you call as hallucination The model with a full conviction Goes and responds to something That it thinks it is right But is nowhere close to the fact that Exists right So how do I have a government's brain Work that prevents hallucination And which is where the LLM Governance comes in And one of the elements Let me go back to the question How has the government changed Between the old ML times And the LLM times right ML times it was Only about explainability, bias And now the LLM world The whole piece is getting back to province What has gone In to train this model Is becoming critical So the whole thing In the ML world the lineage was enough To raise to its origin to find out But in LLM your training will talk Billions and billions of records So what is coming in Ritu Is at which is where There are smaller models That are coming in With a complete explainability Sanity check Something that your finance has Has cleared, your legal has cleared Something that gives you an IP Indemnity on what What you deploy What you want to land up into A legal suit with someone Because it was his copyright In terms of doing this You picked up a foundation What it had done something And it can trap you into a So how do I have that LLM Governance framework It becomes very very critical Just one more aspect And it was what Trunkin spoke about Extremely important point In my large range I think Practically every vendor out there Confirms to that, commits to that The more important aspect Of it is Are the learnings of the data security So I have done this For a We just did this for a large insurance company We Put a LLM model for their data capture And we did a quantum jump On the accuracy level The typical approach that most vendors Out there think is a federated learning Which means What I train here Goes back to the foundation model That is a competitive advantage How do I Protect the learnings of the data Become an extremely important aspect And that to my understanding And you spoke about it briefly In terms of competitive What could you also spoke about it That is also a critical aspect Of your responsible AI How do I guardrail my learnings of the data Yeah You're right, I think That is going to be the biggest challenge And probably also the biggest opportunity That we'll see in the times to come And might just need to A whole new business set To take frame from there And this is now to all three of you What Level of AI Advisory would require human intervention You know, so let's say AI has advised us to do a certain thing I mean, I'm actually honestly Thinking more from healthcare's perspective Now if a doctor puts it and says That look, you know, this is What your course of treatment should be Then where does the doctor put his mind And say, okay now let me cut this out And use only this piece from that And you know, now you look at this At multiple industry levels And multiple roles So what is the human intervention In the AI advisory event So I'll give you a small example By relating it with us How we do that As a company to protect any company You have your cloud You have your cybersecurity tools Which are usually 10 to 20 tools Along with you have your threat intel Which are coming from dark web and what so So there's a multiple layers of information Coming to you as a doctor To fix the Health of a company, for instance Now the challenge for a doctor who is sitting That there is Thousands of this data is coming to you And now you have to prioritize What to fix first Which can really cause a heart attack Or which can really cause something Which can be damaging for that individual Now that is where The agent ID VI comes to picture Where they will see Based on the previous learning for example We have 40,000 Insurance data claims How many companies got hacked They have to claim the insurance And they have to tell them that how we got hacked And how much we have spent to overcome that Along with that we have a database Of almost last 10 years That how many companies got hacked Not again but only got hacked How they got hacked Now thanks to AI Being a doctor for me Whenever there is a loophole Comes into my infrastructure From your radio or from your cloud Data lake that is this vulnerability Or is this problem was There in those companies which got hacked Which previously it was very difficult For me to prioritize Because criminals never change their path Of hacking into something And similar to disease right It comes on a similar way So that will for us For a small chunk of data which we have Not the whole level And we have to understand that this LLM and other thing based Different We have to look this problem in a different manner B2C will have problems You will never have people who Like certified AI security practitioners Will never have this certification soon Because the way the things are changing In real time is very difficult So for a doctor Based on the historical data Of medical claims Or something like that Or based on the reports of the Patient Prioritized for the patient To be fixed first To ensure that we can decrease the Probability of that health consequence That is where genetic AI and other Things will help Where should humans Not become lazy in the In an AI world Because you know that is not what we want That we sort of increase our reliance So much on machines That we don't sort of put our own mind To things But we want to use the machines To sort of then be intelligent If humans stop working on things Creativity and innovation You can't have laziness in either Of the two For example I tried the day if the doctor Would use a chat to give you treatment That's not the doctor I want to go to With already doctor Google has made life difficult for everyone This is the last place I want to be But I would rather use The LLMs and the newer technologies For research I am hoping That larger organizations Have the wealth To verify To go through humongous Amount of data There will be practical verifications There will be field trials There are ways to verify a lot of information That individual doctors will never have I mean there is so much that Happens with an individual when you walk To a doctor's office There used to be time There used to be time The doctor used to know you and your family Personally That's how the family doctors used to be They were in my mind Their ML version of it Because they know what's happening in your family And treat you accordingly But going to a doctor And having a laziness And I don't want to go through Or understand your history And then I won't treat it Probably a very dangerous place to be So laziness is okay in mundane tasks Where even if you go wrong Nothing much significantly will change But creativity I mean writing I was reading I think there is a In New York Museum There is this AI war That has taken Hundreds of thousands of images And it constantly creates new AI images To see all of them are brilliant But there is no wow to it Because there was no human involved in it There is no amount of hours and creativity Spent in creating that piece of art Or for that matter a movie Or for that matter an innovation So there are places where it has a role I still personally believe I'm not a big believer Of using AI for something Which is going to impact humanity in big time Because if things go wrong We are already seeing the impacts of wrong stuff That gets into the biases Very very hard to correct So however much you put guardrails That ship is already sailed It's very difficult to come back to it What happened with social media And trying to put a genie back in the bottle It's not going to happen So there are things that we as organizations Need to be very careful about And the only last thing I'll say Is that unfortunately like what happened With social media very very few companies Are calling the shots in this space It is not as federated as we would like I mean between 5-10 companies The world gets covered In 90% of what's happening from an innovation perspective That's just too much power In the hands of too few And their interests are not aligned to our interests I mean just being blooded out here So we need to be very watchful about What this means to nations And nation security We have right now absolute geopolitical chaos And this Will get fed into a very different narrative That's very hard to come back from So I've already been told on the time And very quickly Dr. Priyanka So what sectors are going to see The first impact of JNI In India And you know Probably we fundamentally see Completely moving to AI in those sectors I think If you see from application Perspective AI has invaded all sectors Right and The guest appreciation And acceptance of AI Happened during Covid times Because Not just AI The way technology kept us connected All across the world Of course that's just one side of the coin But the most important thing was The way AI was used to make Drum molecules For Our vaccines right Covid vaccines If you see how AstraZeneca worked With the clarity of these companies Worked to come up with a drug molecule Within few months It was magical right Because if you see the total Pipeline of drug discovery Typically takes 8 to 10 years And Of course I mean AI was Previously also used for drug discovery But as a It got a public acceptance And people saw the power And the way it can deliver And every now and then we read about that As well And that's what I call as the developing Ecosystem of AI right This will go on So That's why I feel that for example I was involved as an AI advisor to some pharma companies And I helped them Develop their AI pipeline For drug discovery as well And I used to see I mean People are just intersecting the pipeline Okay we want a start Which is just working to identify One particular which is just Like 5% of the entire Drug discovery industry right So what I mean to say is that Whether you talk about how Our AI has penetrated The pharmacy industry or You just look at AI How it has affected healthcare industry How it is affecting The everyday Industry like retail Marketing sector Pre-sale sector The way chat GPT is being used For pre-sale sector So whether we like it or not I mean I think everybody is using Chat GPT in one or the other way Right whether it's through Bing Or the other Microsoft Applications that we are using So what I mean to say is that Just like when I was graduating I did my bachelor's in 1999 Right before 2000 And at that time people were saying Oh there is a dot com revolution It's going to change the job market Then there was some other Revolution in next 10 years And people said this is going to take Our jobs I saw that Nobody ever took jobs It's just that new jobs kept getting created Right the kind of work we are doing That we never imagined 20 years from now that I would be Leading a project and working on a 2 nanometer chip To house so many cores So what I mean to say is that Humans have a humongous amount Of creativity in them And I think it's just not possible For any AI system To completely match Humans That the smarter they become The smarter they become So it's like an ecosystem And I think just like we say Generative adversarial network This is the adversarial network Where we are competing against each other And AI will become Smarter of course But like we are seeing the perils of AI And these perils are because It needs human intervention If you see the way chat activity was Trained it was a human You know inspired Learning mechanism which was used Even for making chat activity So they actually used humans for Validating the way the prompt Used to generate the responses As a part of the training mechanism Which was not very prevalent Before GPT 3.5 came into existence Right? So what this means is that Even humans are now Being used very closely As a part of the AI training ecosystem Which was not very prevalent Few years from now Before the The present gen LLMs came into picture So humans are Very much part and parcel of Even the way these AI systems Are being developed And we just cannot Do anything about industrial revolution Because this is how we have evolved Over so many years And I think our kids will evolve Even faster than them For the lighter note I was telling my daughter to Not look at Alexa for your homework And now she is looking, she stopped that And now she moved to chat activity instead Thank you very much Thank you