Welcome, everyone. I will be discussing fairness in machine learning, from theory to practice. My name is Alex Karsten. I work with the community programs team here at GitLab, where I'm in charge of making sure the operations of the open source, education, and startups programs go smoothly.

So, an introduction to fairness in machine learning. Machine learning has become increasingly prevalent in the tools we use in our daily lives, from personalized product recommendations to healthcare diagnosis. I have heard a quote passed around by many: machine learning should and will be used like a calculator. This suggests that our interactions with machine learning should be similar to using a tool that provides quick and definitive answers. However, unlike with a calculator, there is often no single correct answer in machine learning. Instead, machine learning models provide an answer based on the available data.

The importance of machine learning fairness lies in this data. Where are we pulling this data from? What timeline was used? Who is represented in this data? And how is this presented to the users of the calculator of the future? What is the best way to ensure future generations do not blindly use the tools they are given for ease of use, but rather question that ease of accessibility? Moving forward, this calculator of the future will have impacts far grander than any of us could have imagined, changing the way we interface with the tools we use daily and our capacity to share information and knowledge. This highlights how crucial it is to develop and implement approaches that promote fairness in machine learning, which includes measuring and mitigating the impact of bias, developing techniques for addressing unfairness, and creating policies and regulations to ensure equitable outcomes.

Machine learning begins with the data. However, data is not inherently good or bad. Rather, the function we as humans use to interpret the data controls the outcome of our models. This process of human decision-making often stems from mental shortcuts and patterns of thinking, which leads me to cognitive bias. Wikipedia lists over 100 cognitive biases, and I'm sure that is just scratching the surface. For the sake of both time and sanity, I'd like to discuss just a few of the cognitive biases that have a high chance of being magnified in machine learning.

First is the framing effect. The framing effect is when our decisions are influenced by the way information is presented. I'm sure all of you are very familiar with this: equivalent information can be more or less attractive depending on which features are highlighted. How someone frames an issue influences how others see it and focuses their attention on particular aspects of it. Our choices are influenced by the way options are framed through different wordings, reference points, and emphasis. Framing is the backbone of targeting our communication to specific audiences. As you can imagine, framing has a considerable effect on public beliefs; it is one of the most common methods used in politics and advertising to sway the interpreter into believing the framed issue. People who are deeply versed in the subject at hand are less likely to fall victim to framing effects. This translates directly to how a machine learning model is built.
The more relevant and accurate the data we feed the model, the more likely it is to operate fairly. When building a model, we must remember a few concepts to avoid the framing effect: collect diverse and representative data, surface the biases you yourself might hold, use multiple models, allow for input from diverse individuals, and consistently iterate and search for feedback. The grander and more defined we can paint the picture, the clearer the picture becomes.

Second is functional fixedness. Functional fixedness is a mental bias that can limit our ability to be creative. It happens when we get stuck in a traditional way of using an object or an idea and can't think of new ways to use it. This bias affects our problem solving and prevents us from finding new, creative solutions. For example, I'm sure there have been many times you have searched aimlessly for a flathead screwdriver and couldn't find one, while your brain overlooked the fact that the change in your pocket could unscrew that flathead screw in a matter of seconds. This bias can be overcome by intentionally trying to think of new and unconventional ways to use objects or ideas. There are some obvious connections between this tendency of our brains and machine learning models. Machine learning models can do things the way they have always been done, learning from the information they were given in the past but never being fed new information that changes the framework of how they construct their answers. What if we never took risks on new solutions that completely changed how we approach a concept? Society, institutions, and industries would all greatly suffer if they did not iterate on the frameworks they use daily. Creating fair models lies in our ability to think of new and creative approaches to the same idea.

Lastly, the gambler's fallacy. The gambler's fallacy is the belief that a random event occurring in the future is influenced by previous instances of that event. Our brains often pull memories from the past that seem correlated with the task at hand, yet offer little to no true insight into the current event. I know there has been a point in most of our lives where a professor or teacher has handed out a practice test before the exam. The practice test contains, let's say, 75% of the content of that test. So you study the practice test all week and think you have mastered the course. Then you go to take the exam and quickly realize that the practice test barely covered the content of the entire course. Although these events are clearly semi-correlated, we cannot base our perception on limited sections of data, much like a machine learning model. Creating a model that involves all of the information available and is representative of the problem we are attempting to solve is key to a model's success.

Now that we have looked at biases that exist within our brains, we need to take a look at some biases that exist within data itself. As we know, cognitive bias refers to systematic errors in the way we perceive information or make decisions. In contrast, biases in data are the result of systematic errors or inaccuracies within the data itself. These biases can arise for a multitude of reasons, such as incomplete data, non-representative samples, or measurement errors.
A few of these follow. First, limited features: a model is trained on data whose features fail to capture the sophistication of real-life phenomena, resulting in poor data quality for observations of certain groups. An example is a recruitment algorithm that only considers qualifications such as education and work experience while ignoring factors such as volunteer work, extracurricular activities, or non-traditional career paths. Talented candidates who do not fit the model of traditional candidacy may be missed, leading to poor data quality for certain groups.

Skewed samples. Skewed samples are a direct consequence of a data collection process that is already biased, leading to biased models in a feedback loop where decisions based on the models reinforce the data bias. A voice recognition model trained on speech data collected only from male speakers may not accurately recognize the speech of female speakers, leading to marginalization of those who are not represented in the group.

Proxies. Even if sensitive attributes such as race or gender are not included in the data used to train machine learning models, other features that are closely related to those attributes can still result in the model performing worse on minority groups compared to the overall population. These features are known as proxies for the sensitive attribute. For example, zip code can be a proxy for income, or educational level can be a proxy for socioeconomic status. A model may use these proxies to make decisions, which can lead to unfair outcomes. (There is a small sketch after this section showing one way to check for such proxies.)

Masking. Masking is when bias is intentionally introduced during data collection, either through a tainted collection process, tainted examples, or limited features. This results in models that replicate the bias without it being detected. An example is a natural language processing model trained on text from a particular region that struggles to understand accents or dialects from other regions, because it was only trained on one specific location in the world. The model may appear to represent its data correctly, yet the data collection process does not fairly represent the population we are trying to represent as a whole.
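As a concrete illustration of the proxy problem, here is a minimal sketch of how one might check whether a feature leaks information about a sensitive attribute. This is not from any particular library; the column names and toy data are hypothetical, and a real proxy audit would use more robust association measures.

```python
# Minimal sketch: does `feature` alone predict `sensitive` better than
# guessing the overall majority class? A gap above 0 suggests leakage.
import pandas as pd

def proxy_strength(df: pd.DataFrame, feature: str, sensitive: str) -> float:
    # Accuracy of always predicting the overall majority class.
    baseline = df[sensitive].value_counts(normalize=True).max()
    # Accuracy of predicting the majority class within each feature value.
    per_group_hits = df.groupby(feature)[sensitive].apply(
        lambda s: s.value_counts().max()
    )
    return per_group_hits.sum() / len(df) - baseline

# Toy, hypothetical data: zip code partially encodes the sensitive attribute.
df = pd.DataFrame({
    "zip_code": ["94110", "94110", "10451", "10451", "10451"],
    "race":     ["A",     "A",     "B",     "B",     "A"],
})
print(proxy_strength(df, "zip_code", "race"))  # 0.2: zip code leaks information
```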
Now, daily life and machine learning biases. We all know these exist; probably the most obvious example is healthcare. Health insurance is an area where artificial intelligence bias can have significant consequences. Health insurance companies may use learning algorithms to determine insurance premiums, approve claims, and identify patterns of health risk. However, these algorithms can be biased if they are trained on data that reflects existing patterns of discrimination and bias in the healthcare system. For example, if an insurance company's algorithm is trained on historical data showing that certain racial or socioeconomic groups have higher healthcare costs, the algorithm may disproportionately penalize those groups by charging them higher premiums or denying them coverage. Similarly, if an algorithm is trained on data that reflects existing biases in healthcare diagnosis and treatment, it may perpetuate those biases by excluding certain groups or conditions from coverage. Another way health insurance companies may face machine learning bias is through predictive modeling used to identify individuals who are at high risk of developing certain health conditions.

These models may be trained on data that reflects existing biases in healthcare access or treatment, which can result in certain groups being unfairly labeled as high risk and charged higher premiums or denied coverage. To address these issues, it is important for health insurance companies to carefully evaluate their machine learning algorithms for bias and to take steps to identify and mitigate those biases. This can include ensuring that the data used to train algorithms is diverse and representative of all groups, as well as regularly monitoring and updating algorithms to ensure that they are fair and just. Health organizations and policymakers should proactively use machine learning to advance healthcare equity instead of just avoiding harm, which has been the traditional approach. Structural inequities in healthcare are increasingly recognized as contributors to health disparities, and machine learning systems can perpetuate or even amplify these disparities. A participatory process involving key stakeholders, including marginalized populations, is recommended to ensure fairness in machine learning. Distributive justice should be considered in the specific clinical and organizational context, and fairness should be incorporated into the design, deployment, and evaluation of these machine learning models. Trade-offs exist between different technical approaches, and ethical reasoning is required to decide what is best for a given application. Two clinical applications are discussed where machine learning can harm protected groups: the design, data, and deployment of any model can contribute to disparities. Different approaches to distributive justice in machine learning can advance health equity in various ways, and the appropriateness of each equity approach depends on the context.

Second is credit scoring. Credit scoring is an area where AI and machine learning tools can have negative side effects. Credit scoring algorithms are often used by banks and lenders to assess creditworthiness and determine whether to approve loan applications. However, these algorithms can be biased if they are trained on data that reflects existing patterns of discrimination and bias in lending practices. For example, if a credit scoring algorithm is based on historical data showing that certain demographic groups, such as minorities or low-income individuals, are more likely to default on loans, the algorithm may disproportionately deny those groups credit or charge them higher interest rates, rather than reflecting their recent job history or ability to make payments. Another issue with credit scoring machine learning tools is the lack of transparency and accountability. Their decision-making process is a black box that is very hard for the average individual to interpret, so it may be challenging to identify and correct biases or errors. This can result in decisions that are unfair and unjust, without any clear recourse for the affected individuals, which once again reflects the need for transparency in an algorithm, so that both the user and the creator can accurately identify the injustices.

Lastly, political modeling. Political modeling can be biased due to the data used to train the models. If the data used to train the model is not diverse in its representation, the model can produce inaccurate or biased predictions.
For example, if a political machine learning model is trained on data that reflects existing biases in polling or election results, the model may perpetuate those biases and inaccuracies in its predictions. Similarly, if the data used to train the model is biased toward certain political viewpoints or excludes certain demographics, the model may produce biased results. It is important to carefully evaluate and monitor how these political machine learning models account for bias, and to ensure the data used to train them is diverse and representative of all viewpoints and demographics.

So, defining fairness in machine learning. Fairness in machine learning is a complex issue that requires consideration of a variety of perspectives. Legal frameworks, quantitative social science, and philosophy all play important roles in shaping how fairness is understood and implemented in the context of machine learning. Legal frameworks provide guidance on how to ensure equitable and fair practice, while quantitative social science offers tools for measuring and assessing fairness. Philosophy, on the other hand, provides a framework for incorporating ethical values and considerations into the development and deployment of machine learning. Together, these three areas of study can help shape and create a more equitable and just system for machine learning.

First is law. Legal frameworks provide guidance from our governing bodies on how to establish an equitable and fair set of principles. The problem is that we have roughly 200 countries in the world, and therefore 200 sets of operating principles. As you can imagine, this is where the problem begins. Next, quantitative social science: how we measure society gives us a statistical right from wrong, using fields like economics, political science, and sociology to create a framework for fairness. And philosophy: how do we incorporate ethical values into the algorithms we create? What are the weights of justice and equity? Morality will play a key role in the direction of any machine learning platform.

So how do we measure bias in machine learning? This is a crucial step in creating equitable models. To achieve this, we can use specificity and sensitivity to evaluate a model's performance on different subgroups of the population, identify any biases, and then make the necessary adjustments. By doing so, we can evaluate a model's fairness, choose an appropriate threshold to balance fairness and accuracy, and ultimately reduce the harm caused by false negatives or false positives. Building this framework is essential to creating a fair and unbiased model that benefits everyone in society.

Sensitivity, also known as the true positive rate, is a measure of how well a model can accurately identify positive cases. It is calculated by dividing the number of true positive predictions by the number of actual positive examples. A high sensitivity means that the model is able to identify most positive cases, while a low sensitivity means that the model is missing many of the positive cases. Specificity, on the other hand, is a measure of how well a model can identify negative cases. It is calculated by dividing the number of true negative predictions by the total number of actual negative examples. A high specificity means that the model is able to correctly identify most negative cases, while a low specificity means that the model is misclassifying many negative cases as positive. It is important to note that, for a given model, sensitivity and specificity are inversely related: as one increases, the other decreases. This means that it is generally not possible to optimize both indicators simultaneously, and it is important to consider both in order to select the best model for the task at hand.
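To make these definitions concrete, here is a minimal sketch of computing sensitivity and specificity, both overall and per subgroup. The labels, predictions, and group memberships are toy, hypothetical data.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Compare each subgroup against the overall model.
print("overall:", sensitivity_specificity(y_true, y_pred))
for g in np.unique(group):
    mask = group == g
    print(g, sensitivity_specificity(y_true[mask], y_pred[mask]))
```

In this toy data, group "a" gets sensitivity and specificity of 0.5 while group "b" gets 1.0, exactly the kind of subgroup gap the talk describes.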
Let's take a look at a few ways these indicators can help us create fair machine learning models.

First, addressing biases. One way to create more equitable machine learning models is to use specificity and sensitivity to evaluate the performance of the model on different subgroups of the population, so we can identify whether the model is biased against certain groups. For example, if a medical diagnostic model has a high sensitivity but a low specificity for a particular demographic group, it could mean that the model is over-diagnosing that group, leading to unnecessary treatments and higher costs.

Evaluating fairness. Specificity and sensitivity can also be used to evaluate the fairness of the model. For example, if we want to evaluate the model's fairness for a specific subgroup of the population, we can calculate the sensitivity and specificity for that subgroup and compare them to the overall sensitivity and specificity of the model. If there is a significant difference, it could indicate that the model is not treating that subgroup fairly or equitably.

Threshold selection. Specificity and sensitivity are affected by thresholding, which determines the cutoff point for what counts as a positive prediction. In many cases, the choice of threshold can have significant implications for the fairness of the model. For example, if a credit scoring model has a high threshold for what counts as a positive prediction, it could lead to disparities in lending to certain groups of people. By using sensitivity and specificity to evaluate the model's performance at different threshold levels, we can choose a threshold that balances fairness and accuracy.

Mitigating harm. Specificity and sensitivity can also be used to mitigate the harm done by machine learning models. In applications such as medical diagnosis or predictive policing, false negatives can lead to dire consequences; by choosing a threshold that favors higher sensitivity, we can reduce the number of false negatives and potentially prevent harm to individuals. Similarly, in applications such as fraud detection, false positives can have negative consequences, such as freezing legitimate accounts; by choosing a threshold that favors higher specificity, we can reduce the number of false positives and the harm to individuals.
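To illustrate threshold selection, here is a minimal sketch that sweeps a few cutoffs over synthetic scores and reports overall accuracy alongside the sensitivity gap between two hypothetical groups. Everything here is made-up toy data; the point is only the shape of the trade-off.

```python
import numpy as np

def sensitivity(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn) if (tp + fn) else 0.0

rng = np.random.default_rng(0)
scores = rng.uniform(size=200)                          # stand-in for model scores
y_true = (scores + rng.normal(0, 0.3, 200) > 0.5).astype(int)
group = rng.choice(["a", "b"], size=200)                # hypothetical subgroups

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    acc = np.mean(y_pred == y_true)
    gap = abs(
        sensitivity(y_true[group == "a"], y_pred[group == "a"])
        - sensitivity(y_true[group == "b"], y_pred[group == "b"])
    )
    print(f"threshold={threshold:.1f} accuracy={acc:.2f} sensitivity gap={gap:.2f}")
```

Printing accuracy and the between-group sensitivity gap side by side is one simple way to pick a cutoff that balances the two.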
So how do we prevent unfairness in machine learning models? There are a few different ways to ensure fairness when creating a model. For simplicity, we can divide them into three main categories based on when we intervene in the workflow.

Pre-processing. Pre-processing methods try to remove bias from the data before it is used to train the model. This can be done by relabeling the target variables of previous decisions or by removing the association between model features and protected variables. One downside of pre-processing is that it can make it harder to interpret the model's features.

In-processing. In-processing methods adjust the machine learning algorithm itself to consider fairness in addition to accuracy. This can be done by changing the cost function or imposing constraints on the model's predictions. One challenge with in-processing is that it can be difficult to implement and requires adjusting well-established algorithms.

Post-processing. Post-processing methods change the predictions made by the model without adjusting the data or the algorithm. One approach is to apply different thresholds to privileged and unprivileged groups, which can help achieve fairness in certain situations; a small sketch of that idea follows below. Ultimately, the goal of fairness work is to detect and mitigate fairness-related harms in machine learning models, rather than to create perfectly fair models. By involving experts, using quantitative and qualitative approaches, and ensuring transparency and appeal options, we can work toward more equitable machine learning models.
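As a minimal sketch of the post-processing idea, here is one way to pick per-group decision thresholds so that each group receives positive predictions at roughly the same rate. The group names, target rate, and data are all hypothetical; libraries such as Fairlearn implement more principled versions of this technique.

```python
import numpy as np

def group_thresholds(scores, group, target_rate):
    """Pick a per-group cutoff so each group's positive rate is about target_rate."""
    return {
        g: np.quantile(scores[group == g], 1.0 - target_rate)
        for g in np.unique(group)
    }

def predict(scores, group, thresholds):
    # Apply each example's group-specific cutoff.
    return np.array([s >= thresholds[g] for s, g in zip(scores, group)])

rng = np.random.default_rng(1)
scores = rng.uniform(size=100)                                  # toy model scores
group = rng.choice(["privileged", "unprivileged"], size=100)    # hypothetical groups

thresholds = group_thresholds(scores, group, target_rate=0.25)
y_pred = predict(scores, group, thresholds)
for g in np.unique(group):
    print(g, round(float(thresholds[g]), 2), round(float(y_pred[group == g].mean()), 2))
```

Note that equalizing one rate this way can move other rates apart, which is why the threshold choice is an ethical decision as much as a technical one.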
So, creating equitable machine learning models. In order to create equitable models, it is essential to take a moral approach that incorporates fairness into its considerations. While it may be impossible to create a perfectly fair machine learning model, detecting and mitigating harm is a mandatory goal. Documenting processes and considerations, as well as utilizing quantitative and qualitative approaches, can help ensure transparency and accountability. By taking a holistic approach that prioritizes fairness, machine learning models can become tools that are truly equitable. I have outlined a framework that can provide guidance when attempting to create a machine learning model.

First, identify fairness considerations and involve relevant experts. When designing a model, it is important to think about fairness from its origin. This involves identifying potential fairness issues and involving experts who have the relevant domain knowledge and a diversity of backgrounds and perspectives.

Detect and mitigate fairness-related harms. Rather than attempting to create a perfectly fair system, the goal should be to detect and mitigate harms affecting as many people as possible. This involves asking questions such as: who is represented in the system, and who is impacted by it?

Document processes and considerations. Fairness considerations can be complex and do not always have clear-cut answers. To ensure transparency and accountability, it is important to document processes and considerations, including priorities and trade-offs.

Use quantitative and qualitative approaches. A range of quantitative and qualitative tools can be used to facilitate fairness considerations. It is important to remember, however, that these tools do not guarantee fairness and should be part of a larger, holistic approach to mitigating bias.

Lastly, ensure transparency and appeal options. Fairness considerations do not end when a machine learning system is developed. It is important to ensure that users and stakeholders can see and understand how decisions are being made by the system, can appeal those decisions, and that we continue to iterate.

So how do we apply an open-source framework to machine learning in practice? First, identify the sensitive attributes, select the fairness metrics, develop collaboratively, and set up for reproducibility. Applying the open-source framework is essential for an equitable future. By identifying the sensitive attributes that may cause bias in a model, we can then select the appropriate fairness metrics to evaluate its performance. Collaborative development, which involves diverse and inclusive teams, can help ensure that ethical considerations are prioritized throughout the development process.

Now, tools for creating fairness in machine learning. Here are a few of the open-source tools that people are currently using to create fairness in their algorithms. Fairlearn is an open-source Python package used to assess and improve fairness in machine learning models. Being open source allows for contributions from a large community of users, which can bring different perspectives and different ideas, ultimately resulting in more effective solutions for addressing bias in machine learning. Then we have the Seldon fairness tooling, another open-source project used to detect and mitigate discrimination and bias in machine learning models. It allows for transparency and accountability in the development of machine learning models: by making the code available for review and scrutiny, users can verify that the models are fair and unbiased. Then we have TensorFlow Fairness Indicators, another open-source Python package used to assess and improve the fairness of learning models; it promotes collaboration and the sharing of best practices among data scientists and machine learning practitioners. Lastly, we have InterpretML, another open-source Python library, used to understand how machine learning models make predictions. It allows for transparency and interpretability in the model itself by making the code available for review, which can lead to more effective and equitable machine learning models that are transparent and accountable to users.
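As a small example of the kind of assessment Fairlearn supports, here is a sketch using its MetricFrame to compare metrics across subgroups. The labels, predictions, and sensitive attribute are toy data, and `pip install fairlearn scikit-learn` is assumed.

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score, recall_score

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
sex    = ["f", "f", "f", "f", "m", "m", "m", "m"]  # hypothetical sensitive attribute

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "sensitivity": recall_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.overall)        # metrics on the full population
print(mf.by_group)       # the same metrics per subgroup
print(mf.difference())   # largest between-group gap for each metric
```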
So what happens when the quantitative approach fails us? This leads us to qualitative approaches to fairness in machine learning. How can we focus on incorporating ethical and societal considerations into the development of these models?

Beginning with inclusive design. This approach involves designing machine learning models that are accessible and usable by a diverse range of users, taking into account factors such as language barriers, cultural differences, and physical abilities.

Participatory design. This approach involves engaging diverse stakeholders, including community members, in the design and development process to ensure that the model reflects the values and priorities of the community it serves.

Ethical foundation. Incorporating ethical principles such as fairness, accountability, and transparency into the development process can help ensure that the model is built in a responsible and equitable manner.

Multifaceted collaboration. Collaboration between experts in different fields, including computer science, law, and social science, helps ensure that the model is developed with a comprehensive understanding of the relevant ethical, legal, and social issues.

Stakeholder involvement. Involve the stakeholders who may be impacted by the system, so that represented groups both take part in the design and include the individuals who will be affected by the data.

Education and awareness. Educate the developers, stakeholders, and users of the machine learning system about the potential impacts of bias and discrimination, and raise awareness about the importance of fairness and ethical considerations.

Adversarial testing. Test the machine learning system with adversarial examples, which are deliberately designed to exploit vulnerabilities or biases in the system, to identify and address potential sources of unfairness.

Open source has the potential to shape AI and machine learning policy in several ways. I think we're all hoping that open source drives policy rather than policy driving open source, as the dangers of the latter to machine learning could be dire. Open source software has played a significant role in driving the adoption of machine learning in recent years. The availability of open-source tools and libraries has made it easier for developers and organizations to experiment and build machine learning solutions without significant financial investments. This has helped democratize access to machine learning and has allowed smaller companies and individuals to compete with larger companies in the big tech field.

Accessibility: open source democratizes access to machine learning tools, making it easier for people to experiment with and develop these technologies. This increased accessibility can lead to a more diverse range of voices and perspectives shaping machine learning policy. Transparency: transparency in the use of open source in machine learning can help increase accountability, as the code is open for inspection and review. This can help ensure that technologies are being developed and used in an ethical and responsible manner. Collaboration: the collaborative nature of open-source development can encourage cross-sector collaboration and knowledge sharing, which can lead to better policy outcomes and push policymakers to work with open-source companies rather than only the large existing tech behemoths. Innovation: open source can drive innovation by providing a platform for experimentation and iteration, which can lead to the development of new, innovative approaches to policy related to these technologies.

Now, the dangers of policy pitfalls. I'm sure these are all things we are currently worried about. The dangers of policy pitfalls in machine learning cannot be ignored. Without transparency, there is a risk that biases and inequalities will be amplified, exacerbating societal issues. It is critical to recognize the significant individual and societal implications of these technologies and to approach them with caution. Machine learning has the potential to widen existing inequalities and further divide society, which could have catastrophic consequences. Moreover, a lack of societal trust in these technologies can lead to a breakdown in public confidence, leaving us vulnerable to malicious actors. Ignoring the perspectives of developers and failing to consider ethical responsibilities can lead to unintended consequences that could have been avoided with better collaboration. It is crucial that we take a proactive and responsible approach to policy development in the realm of machine learning, to ensure that we don't create new problems while trying to solve existing ones.

So how do we act on this? As we strive for more equitable and just machine learning systems, there are several progressive actions we can take to support fairness. One crucial step is to foster diverse and inclusive machine learning teams, bringing together individuals with a wide range of backgrounds and experiences to ensure that biases and blind spots are identified and addressed.
Additionally, it is vital to integrate ethical considerations throughout the development and deployment of machine learning models, with a focus on mitigating harm and maximizing benefits for all members of society. Transparency and accountability are key factors in promoting fairness. By making the decision-making processes of these models more transparent, we can build greater trust and confidence in their outcomes. Similarly, encouraging accountability at all levels of development, from individual developers to corporate entities, can ensure that machine learning systems are being used ethically and in the best interests of society. Adherence to ethical principles is another important aspect of promoting fairness in machine learning. By establishing and following clear ethical guidelines and principles, we can ensure that the development and deployment of these systems are aligned with societal values and goals. Finally, the use of adversarial testing can help identify and address potential sources of machine learning bias, ensuring that these systems are fair and equitable for all contributors, individuals, and communities.

Fairness in machine learning will become one of the most important factors in shaping how we interface with technology moving forward. There is no single framework or standard procedure that will lead us to a world that is fair and just for all who interact with it. Fairness rests on the shoulders of those who care about how these tools are built. Those who are willing to take on the uphill battle of defining fairness and incorporating it into machine learning models will ultimately be instrumental in shaping a future of technology that is not only efficient but also ethical and equitable, ensuring that the benefits of innovation are accessible to all members of society regardless of their race, gender, or socioeconomic status. I will leave you with a few questions to ponder as we enter a new era of technology. I don't know if anyone has any questions.

Hello, just speak clearly for everybody here and everybody watching. Thank you.

Hi, are there any governments around the world that have implemented or thought about fairness, or has nobody done anything yet?

I saw something in Europe, actually; I just read something a few days ago regarding this. It seems like policy is definitely lagging behind, and one of the key priorities is to get alignment around the world on what fairness means. Everyone has a different interpretation of what they want for the machine learning future, and that's the biggest problem: aligning on what these ideologies are.

Thank you. First question: hopefully you will share your slides? Yep, yep. And possibly some of your speaking notes; I thought it was incredibly informative, so thank you very much for the talk.

Yeah. I wanted to give a perspective on how policies can be created. The Ford Foundation is an example of a non-governmental organization in the United States that works to develop ideas and policies iteratively, kind of like we develop software, and then helps people in the U.S. government determine what laws ought to be passed. Recently they've come to Cerebral Valley, to the ML changes that are happening in Silicon Valley, asking for help. So what we're starting to see is that there are non-governmental organizations asking for help from technologists who understand these systems. If anybody wants to know more about that, I'm happy to connect you; I'm super curious about it.
I don't represent the Ford Foundation; I'm actually starting a new ML application infrastructure company. So, awesome background. Thank you so much, Alex.

Yeah, hi. You mentioned educating stakeholders and developers using ML technologies. Do you have any go-to resources that someone could rely on if they were in the process of educating stakeholders and developers, like organizations that have more information about this, or people who have been doing research?

I don't have any off the top of my head, but I definitely have a long list of articles I referred to that I can send you if you give me your contact information or something along those lines. Sure, that sounds great.

Thanks, thanks so much for that talk, it was great. Although, as a professor, I have to take issue with your sample exam example, so we'll need to talk about that later. But, as an annoying professor, I want to ask about definitions. You're using bias and fairness somewhat interchangeably, and I want to get from you whether you think bias is only one form of unfairness, or whether you think it's the main form. It seemed as though all of the technical solutions were really focused on this definition of fairness as making sure there are equally accurate predictions for all subgroups, but in the bigger picture, if it's a system that was built unjustly or is being used unjustly, we don't necessarily care whether it has these differential abilities to predict. So, in your mind, is bias only one form of unfairness, or is it the main form to attack?

I think bias is just the easiest form to identify when working with fairness, so I don't think it incorporates the entire identity of fairness in any way; it was just the easiest thing for me to explain, elaborate on, and use to identify the unfairness that exists when trying to create these models.

That's what I thought. So maybe in future slides, clarify that that's just the easiest place to take it on or something, so we know what's going on. I appreciate that, thank you.

No, thanks. Thanks for the talk, it was very good. Okay, thanks everyone.