Hi. We have adopted AI for its many advantages. Responsible AI is the practice of designing, developing, and deploying AI with good intentions, to empower businesses and fairly impact customers and society. But how do we trust AI, and what are the qualities of responsible AI? In this session we are going to discuss trusted and responsible AI, covering explainability, adversarial AI, bias, and fairness.

Hi, this is Dr. Wamsi Mohan. Coming to my credentials, I am working as the chief technology officer for a company called Hub Technology Private Limited. Coming to my awards and recognitions, I was named one of the top 50 global professionals for 2022 and best scientist of the year for 2021. I also received a CXO excellence award for 2021, was listed in the Next 100 CAOs for 2020, and have been recognized among the top 50 global thought leaders and influencers for RPA, data center technologies, and cybersecurity. Coming to my academics, I did my PhD in computer science and engineering and hold several patents in data transmission and cybersecurity. I did my postgraduation in computer science and engineering and a master's degree in management from IIM Ahmedabad. I have published 40-plus papers in national and international journals, have successfully driven several industry-academia initiatives with various universities and tier-one and tier-two technology institutions globally, and am an industry speaker at national and international conferences.

Coming to the agenda: we are going to discuss the introduction; the principles of responsible AI, which cover ethics and fairness, accountability, inclusiveness, reliability and safety, transparency, and privacy and security; explainable AI (XAI), why XAI is important given the growing adoption of AI, and the principles and benefits of XAI; and then adversarial AI.
We will be talking about different ML defenses, including adversarial training, switching models, and generalized models, and about adversarial attacks such as poisoning attacks, evasion attacks, and model stealing attacks, along with methods of combating these attacks. We will also be talking about bias in AI: how bias is created, what the intention behind these biases is, how we can eliminate them and create an unbiased system, and how to fix biases in ML algorithms. Finally, we will talk about fairness in AI and the different metrics for fairness, followed by the conclusion.

As an introduction: trusted and responsible AI that ensures fairness and benefits for society is vital for future acceptance. Nowadays we are seeing many threats from AI systems, and because of this lack of trust, people are hesitant to adopt them. For AI to be adopted, it should be trustworthy and responsible, and for that we need fairness, interpretability, privacy, and security in these AI systems. Beyond this, we need good governance of AI systems, and regulatory bodies to audit these AI principles while the systems are being built. What is the best way to build fairness, interpretability, privacy, and security into enterprise systems? Let us see how to build responsible AI.

Coming to the AI principles, there are six primary principles we are going to discuss: inclusiveness, accountability, ethics and fairness, reliability and safety, transparency, and privacy and security. Coming to ethics and fairness: artificial intelligence has become widely popular over the last couple of years, and with it the risks of relying on AI have emerged.
AI-powered solutions can sometimes be discriminatory, as we have seen in recent news, and can pose potential risks to individuals: the inability to explain the decisions behind the algorithms, privacy concerns given their heavy reliance on data, and security threats arising from a lack of ethics or fairness in the systems. From an ethical perspective, AI should be fair and inclusive in its assertions, and it should be accountable for the decisions the system has taken.

Coming to accountability: who has to take this accountability? The AI scientists and the builders of these systems are accountable for them, so accountability is an essential pillar of responsible AI. The people who design and deploy AI systems need to be accountable, and enterprises should consider an internal review body that provides oversight, insight, and guidance about developing and deploying AI systems; without good governance, as we discussed on the previous slide, it is quite difficult to monitor AI activities. These guidelines can be flexible, but the right guidelines need to be put in place according to the company's region and the industry segment, such as healthcare, education, or financial services.

Coming to inclusiveness: it is the responsibility of the people who build AI solutions to ensure that AI is inclusive and provides a net positive benefit to society. Inclusiveness means helping the ecosystem where possible; for example, speech-to-text, text-to-speech, and visual recognition technologies should be used to empower people with hearing, visual, and other impairments. When we are talking about inclusiveness, artificial intelligence should consider all human races and experiences, and inclusive design practices can help developers understand and address potential issues.
Coming to reliability and safety: AI systems need to be reliable and safe, as we discussed, and they should be trustworthy. Their inherent resilience should resist intended or unintended manipulation. Regression testing and validation should be established for the operating conditions to ensure that systems are built safely according to the business cases. Continuous monitoring and model tracking are also mandatory to establish safe execution; otherwise the system will drift toward bias and expose new threats.

Transparency is another principle of responsible AI. Achieving transparency helps AI stakeholders understand the data and algorithms used to train the model, the transformational logic, and its associated assets. This gives great confidence to the stakeholders who use these AI systems. The information offers insight into how the model was created, which allows it to be reproduced; transparency thus comes in as part of the principle.

Another principle is privacy and security. Privacy is one of the recurrent concerns people have about AI technologies. Personal data needs to be secured, and it should be accessed in a way that doesn't compromise an individual's privacy. Data privacy is a critical factor whether we are talking about enterprise systems or AI systems, and it is often linked with AI models trained on consumer data.

Explainable AI, or interpretable AI, is the term for AI-derived solutions that can be explained, interpreted, and understood by engineers and scientists. XAI is an artificial intelligence framework in which the results of a solution can be understood by human beings. It aligns with and supports white-box machine learning, where engineers and scientists can explain why an AI arrived at a specific decision.
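As a small illustration of the white-box idea, the sketch below shows how a linear (logistic-style) scorer can explain any single decision by listing each feature's contribution, weight times value. The model, feature names, and weights are hypothetical, not from the talk:

```python
# Minimal white-box explainability sketch: for a linear scorer, each
# feature's contribution to a decision is simply weight * value, so the
# model can justify every prediction it makes. All weights and feature
# values below are hypothetical.

def explain_linear_prediction(weights, bias, features):
    """Return (score, per-feature contributions sorted by influence)."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical credit-risk scorer: positive score leans toward "approve".
score, ranked = explain_linear_prediction(
    weights={"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3},
    bias=-0.2,
    features={"income": 1.2, "debt_ratio": 0.9, "years_employed": 4.0},
)
print(f"score = {score:.2f}")          # score = 0.61
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

For a deep neural network the same question requires dedicated XAI techniques, such as feature attribution or surrogate models, which is exactly the gap described here.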
XAI implements a social right to explanation, and it is relevant even where there is no legal right or regulatory requirement for explanation. Scientists should be able to explain to their stakeholders how they achieved certain levels of accuracy and what influenced the outcome. To comply with enterprise policies, auditors need the right set of tools to validate these AI models, and business decision makers can then make the right decisions when incorporating AI models into their systems.

Now we are going to discuss why XAI is important. It is crucial for enterprises to learn and understand how AI decision-making processes work, rather than blindly depending on AI systems, and XAI helps to understand and explain machine learning algorithms, deep learning, and neural networks. One critical factor is that ML models are like black boxes, so it is quite difficult to understand and interpret the internals of the system; the neural networks used in deep learning are some of the hardest things for humans to understand. Also, bias in the system based on race, gender, or location has been a long-standing risk in training models, and AI model performance can drift or degrade on production data as the data varies from the training environment to the production environment. This makes it crucial for businesses to continuously monitor models to promote AI explainability while measuring the business impact of such algorithms.

Principles for explainable AI have also been introduced by the National Institute of Standards and Technology (NIST). There are four principles. The first is that AI systems should provide explanations backed by evidence, which is the heart of XAI. The second is that explanations should be meaningful, in a way that can be understood by the users of the AI.
The third is explanation accuracy: the clear definitions of the models, which algorithms were used, and what the intent of those algorithms is have to be explained and documented as part of this process, and the explanation should accurately describe the AI system's process. This helps auditors audit the models in line with the corporate governance of AI systems. The fourth is knowledge limits: AI systems should operate only within the limits they were designed for. This is about the scoping of the AI system; a particular scope needs to be applied when designing AI models, and they should work within those limits. These four principles capture a variety of disciplines that contribute to explainable AI, spanning competencies such as computer science, healthcare, psychology, and engineering.

What are the benefits of explainable AI? The first is operationalizing AI with trust and confidence; the second is reducing time to AI results; the third is mitigating the risk and cost of model governance. Operationalizing AI with trust and confidence helps build trust in production systems and rapidly bring AI models into production. Interpretability and explainability of AI models simplify the process of model evaluation while increasing the models' transparency and traceability; this helps to operationalize AI with trust and confidence through deployment, and to systematically monitor and manage models to optimize business outcomes. On continuous evaluation, as we discussed on the previous slide, a monitoring mechanism is needed to track and evaluate the improved models; this helps reduce blind spots and increase the performance of AI systems. The third benefit is mitigating the risk and cost of model governance.
Keeping models explainable and transparent, and managing regulatory compliance, risk, and other requirements per the governance model, helps here.

Adversarial AI matters in cybersecurity, especially in threat intelligence and vulnerability detection. Machine learning offers many benefits to companies, but it can also enhance threat actors' attacks. Machine learning models are complicated to understand, and this poor understanding is exploited by hackers and attackers through hidden weaknesses. They can trick the model into making incorrect predictions or giving away sensitive information, and fake data can even be used to corrupt models. The field of adversarial machine learning aims to address these weaknesses: it studies the tricks by which machine learning models are fooled with deceptive inputs, the vulnerabilities and threats these cause, and how to overcome such attacks by understanding them and building models more robustly.

The most successful techniques for training AI systems to withstand these attacks fall into two classes: adversarial training and defensive distillation. Adversarial training is a brute-force supervised learning method in which as many adversarial examples as possible are fed into the system, so that whenever a threat, or a similar kind of threat, comes into the system, it is detected and eliminated from the queue. This is similar to the approach most antivirus software takes on our systems. The second class is defensive distillation. This strategy adds flexibility to the algorithm's classification process by training a second model on the probability outputs of the first model.
There is an advantage to this distillation approach: it is adaptable to unknown threats, and its robustness can progressively increase. The biggest disadvantage, however, is that while the second model has more wiggle room to reject input manipulations, it is still bound to the general rules of the first model. Even though it is a progressive model, some hidden weaknesses of the first model will still be present in the second, so the efficiency of the second model may be reduced by the drawbacks of the first.

Coming back to adversarial training, the first approach is to train the model to identify adversarial examples, for instance feeding images into a recognition model that classifies different segments; the hope is that with rigorous training, the model effectively handles incoming threats and separates them from the queue. The problem is that it may be difficult to discover these adversarial examples in the first place, so research in this area aims not only at defending against them but also at discovering them automatically. There are many tools on the market for adversarial robustness; one tool, developed by IBM (the Adversarial Robustness Toolbox), is quite effective for adversarial training.
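To make the idea of an adversarial example concrete, here is a toy fast gradient sign method (FGSM) sketch against a hypothetical, already-trained logistic-regression model; the weights, input, and epsilon are made up. Adversarial training then simply feeds such perturbed pairs back into the training set:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of class 1 under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: move every feature by eps in the
    direction that increases the model's loss on the pair (x, y)."""
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]     # d(cross-entropy)/dx_i
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical trained model and a clean input with true label 1.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.2], 1

x_adv = fgsm(w, b, x, y, eps=0.9)
print(predict(w, b, x))      # ~0.86: confidently correct
print(predict(w, b, x_adv))  # ~0.29: small nudges flipped the prediction
```

This is the same mechanism behind the noise-in-an-image results discussed in this session; libraries such as IBM's Adversarial Robustness Toolbox automate generating these examples at scale and retraining on them.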
Switching models protects the system from multiple attacks. The approach is to use multiple models within the system, with the model used to make predictions changing randomly. This creates a moving target, as an attacker does not know which model is currently in use. It is a costlier defense, but at the same time the attacker may have to compromise all of the models for an attack to succeed, and poisoning or finding adversarial examples for multiple models is much harder than for just one.

The generalized-model approach focuses on the data and the models together. It is important to remember that models do not exist in isolation: they are part of larger systems, which means many attacks can be avoided with changes to the system in general. For example, encryption and good password practices can protect the database, making poisoning attacks less likely. Likewise, consider spam filtering: if a rejected mail generates a bounce back to the sender, it helps the attacker trace whether the message reached the inbox, telling him whether his probe worked.

Coming to adversarial attacks, the primary kinds are poisoning attacks, evasion attacks, and model stealing attacks, along with methods of combating them. Machine learning can help us automate more complicated tasks; the downside is that the model introduces a new target for attackers to exploit, and new types of attack can be used against these systems.

Poisoning attacks focus on the data used to train the model. Attackers change existing data, or mislabel data and feed it into the system, and a model trained on this data will make incorrect predictions even on correctly labeled data; the entire system is misguided. As an example, an attacker could relabel existing data and cause fraud, even in the case of autonomous systems. If labels are changed, a driverless car may see a red signal as green, or a green signal as red, which hampers the entire system.

Evasion attacks focus on the model itself. They involve modifying data so that it seems legitimate but leads to incorrect predictions. To be clear, the attacker modifies the data the model uses to make predictions, not the data used to train the model; image recognition and pattern finding will be completely misled. For example, researchers at Google showed how introducing specific noise into an image could change the predictions of an image recognition model.

Model stealing attacks focus on stealing the model after it has been trained. Specifically, an attacker wants to learn the structure of a model that has already been trained; stealing the model lets the attacker learn about it and gain financial benefit or access to the data it was trained on, and the attacker can use this information for further attacks as well. For example, they could find exactly which words a spam filtering model will flag, and then alter spam or phishing emails to ensure those mails are delivered to inboxes.

Coming to combating attacks: the way we defend our ML systems depends on the type of model we use. Many problems can be solved with simple models like linear regression or logistic regression; more complicated models like neural networks are less interpretable. This means we have a poorer understanding of the inner workings of the model, which leads to hidden weaknesses and more opportunities for attack. This is why most research in the field of adversarial machine learning is aimed at combating attacks against these models.

Let us discuss bias: how biases happen, whether they are intentional or unintentional, and how systems become biased with respect to several factors. A bias is an anomaly in the output of a machine learning algorithm due to prejudiced assumptions made during the algorithm development process or prejudice in the training data. There are two primary kinds of bias. The first is cognitive bias, which comes from the stakeholders, scientists, and engineers who work on the AI system: designers and engineers, knowingly or unknowingly, introduce these biases into the models, and the training data sets include some of these biases. A strong governance model and regulators help keep cognitive bias from producing biased AI systems. The second category is lack of complete data: if data is not complete, it may not be representative, and therefore it may cause bias.

How do we make AI unbiased? One recommendation from industry is to clean your training data of conscious and unconscious assumptions around race, gender, or other ideological factors; the system will then be clean, far from bias, and it will produce unbiased, data-driven decisions. This cleaning activity helps AI systems stay unbiased. AI bias can also be minimized by testing data and algorithms and by developing AI systems according to responsible AI principles. Human nature creates bias: when people of a particular race or category train these models, they may intentionally make the AI systems biased, and this can be eliminated, as we discussed on previous slides, with a strong governance mechanism. One common question is how to fix biases in AI and machine learning algorithms; one straightforward approach is removing the sensitive labels from the training data before feeding it into the system, but this may make the entire system clumsy, and the main intention of the AI system may be called into question.

Let us talk about fairness in AI. Fairness is a ubiquitous term in artificial intelligence and machine learning, but it is a generic concept, not restricted to AI: any decision-making system can exhibit bias toward certain factors and thus needs to be evaluated for fairness. Fairness is tested by verifying that the system is unbiased as per pre-established ethical principles. On fairness metrics in AI: there are many different definitions of fairness, and they often conflict with each other; the definition you choose depends on the context in which the decision is being made. Fairness through unawareness means leaving the sensitive factors out of the system and expecting fairness; however, blind spots still exist in such systems and need to be surfaced when building the platform. The second approach is demographic parity, which focuses on equalizing the selection rate between privileged and unprivileged groups. Two metrics are commonly used here: the disparate impact ratio and the statistical parity difference. The disparate impact ratio is the ratio of the rate of favorable outcomes for the unprivileged group to that of the privileged group, and the statistical parity difference is the difference between the rate of favorable outcomes received by the unprivileged group and that of the privileged group. These also create fairness in AI, and equal opportunity and equalized odds are further criteria for achieving fairness in AI systems.

Coming to the conclusion: the primary argument supporting AI is the efficiency and capability of this technology, which surpasses human abilities; the argument against is its uncontrolled development. Technology specialists and scientists argue that, while much is still unclear, by putting the right measures, regulations, and governance in place, AI will become a boon for humankind. We are privileged to have computing machines, which we resisted in the era of the 70s and 80s. Hence, while artificial intelligence and machine learning are rapidly changing our world and powering the fourth industrial revolution, humanity does not need to be afraid, and can leverage the benefits of AI. This ends the session; I hope it was valuable and useful for you. Thank you very much for being part of this session, and thanks to the Linux Foundation and the Open Source Summit.
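The two demographic-parity metrics defined in the fairness discussion can be sketched in a few lines of code; the outcome lists for the two groups below are hypothetical (1 = favorable outcome):

```python
def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(unprivileged, privileged):
    """Ratio of favorable-outcome rates; 1.0 is perfect parity, and
    values below ~0.8 are commonly flagged as disparate impact."""
    return selection_rate(unprivileged) / selection_rate(privileged)

def statistical_parity_difference(unprivileged, privileged):
    """Difference of favorable-outcome rates; 0.0 is perfect parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

privileged   = [1, 1, 1, 0, 1]   # 80% favorable
unprivileged = [1, 0, 0, 1, 0]   # 40% favorable
print(disparate_impact_ratio(unprivileged, privileged))        # 0.5
print(statistical_parity_difference(unprivileged, privileged)) # -0.4
```

Which metric to apply, or whether the equal-opportunity and equalized-odds criteria mentioned above fit better, depends on the decision-making context, exactly as noted in the session.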