Hello, today I'm going to be talking about ethics at the edge: IoT as the embodiment of AI, or rampant intelligence actuation. So let's dive in. For today's agenda, I'll cover the individual vectors of AI ethics and their implications for the Internet of Things; evolving principles and governance for IoT devices; definitions of trusted entities and key ethical principles in the context of IoT security; and finally an interesting formalization of moral machines and some unconventional considerations for human-centered design at the intersection of AI ethics and the Internet of Things.

Let's get started with the what and why of AI ethics for IoT. We're seeing a lot of headlines in the media addressing top concerns in the AI ethics space. Recent headlines include important ethical concerns raised by uses of facial recognition and surveillance technologies, in addition to hacking of surveillance systems from the IoT security standpoint. More generally, we're seeing coverage of the impact of AI systems on our society: AI's carbon footprint problem, the evolving regulations attempting to handle many of the concerns arising in this space, and the tie-in between AI and power dynamics. There are positive developments happening here, but also important concerns to consider in the context of IoT security and the impact of these solutions on societies.

This is a great segue into the way we're thinking about AI ethics. It's a pretty broad field, and I would propose we can consider it a socio-technical lens on the design and impact of AI solutions on our societies. Now, what is the implication of this for the Internet of Things? IoT devices are, of course, consumers and producers of data. But they're also enablers of actions made by intelligent autonomous agents. That's the reason for the phrase "rampant intelligence actuation" in this presentation's title: AI algorithms and smart analytics are being leveraged by IoT devices to make intelligent decisions, and we're starting to see a lot of emphasis placed on the actuation component of the pipeline, on understanding and applying ethical and trustworthy guidelines there to prevent rampant intelligence actuation. What we're seeing is that the commissioning, deployment, and maintenance of AI systems, and compliance with ethical governance, is an Internet of Things problem.

Considering the differentiation between ethical and trustworthy AI, the line is pretty blurry. I would propose an analogous correlation with the ideas of alignment and intent. You can have a system that is trustworthy, with support mechanisms that allow its users to trust its decision making, transparency, and so on, but that is not necessarily designed with ethical intent. Likewise, you could have a system designed with good intent, with good ethical implications for society, but without any trustworthiness mechanisms or metrics to help users build trust with it going forward.
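To make that point about gating actuation concrete, here is a minimal sketch of what an "ethical actuation gate" in front of an IoT actuator could look like. This is purely illustrative: the thresholds, the Recommendation structure, and the human-approval hook are my own assumptions, not an established API or framework.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str            # e.g., "adjust_thermostat", "unlock_door"
    confidence: float      # model confidence in [0, 1]
    impact: str            # "low", "medium", or "high" (assumed labeling)

# Illustrative threshold -- in practice this would come from
# governance policy, not a hard-coded constant.
CONFIDENCE_FLOOR = 0.9

def actuation_gate(rec: Recommendation, human_approves) -> bool:
    """Decide whether an IoT device may act on an AI recommendation.

    Low-impact, high-confidence actions pass automatically; anything
    else is escalated to a human -- the opposite of 'rampant' actuation.
    """
    if rec.impact == "low" and rec.confidence >= CONFIDENCE_FLOOR:
        return True                      # safe to actuate autonomously
    return human_approves(rec)           # escalate for human review

if __name__ == "__main__":
    rec = Recommendation("adjust_thermostat", confidence=0.97, impact="low")
    print(actuation_gate(rec, human_approves=lambda r: False))  # True
```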
Now, given this, let's dive into the individual vectors of ethics for IoT. What I'd like to propose here is a framework, a way of thinking about the dimensions of AI ethics that we're starting to see. These include transparency, which also encompasses auditability and accountability; sustainability; fairness; security, safety, and privacy; and finally vulnerable target populations and varying use cases. That last vector intersects with all of the others; it's influenced by each of them but deserves its own standalone consideration, particularly as regulations, policy, and technical implementations change depending on the type of use case.

Diving into this further, let's go into each of these. The transparency component includes the right to understand the how and why behind AI decision making throughout the development and deployment of AI systems. Sustainability encompasses energy efficiency, where for AI algorithms we're considering low-resource implementations in IoT ecosystems. As an interesting statistic for this audience: an off-the-shelf AI language processing system reportedly produced 1,400 pounds of carbon emissions, roughly the equivalent of flying one person round trip between New York and San Francisco; in a moment I'll show a quick back-of-envelope of how such estimates are typically computed. So there are alarming implications for AI systems, and in the context of IoT devices and ecosystems it's important to start considering them. I'm leaving this at a pretty high level; a lot more research and investigation needs to be done in this space.

For fairness and bias, there are two interesting framings. From the social context, we want to mitigate and prevent data sets and methodologies that allow AI to reflect manipulative power dynamics. From the technical context, it's interesting to think about this from the point of view of an IoT device, where representative allocation of resources and prioritization across edge devices also falls under these definitions of representativeness, fairness, and bias. This latter point is fairly trivial in the context of prioritizing processes on edge devices, but viewed through these AI ethics definitions it is nevertheless interesting to consider. For security, safety, and privacy, we're looking at ethical review and enforcement of end-to-end AI safety and privacy in the cloud and at the edge, including capabilities such as sandboxes and isolation of model capabilities for testing. Finally, vulnerable target populations and varying use cases intersect with the other vectors; this is really about the customizability of the system and the overall pipeline, including during deployment, to accommodate regulatory feedback, user feedback, and societal norms.
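Here is that back-of-envelope on the sustainability statistic: the ML carbon-footprint literature commonly estimates emissions as energy drawn by the hardware, scaled by data-center overhead (PUE) and grid carbon intensity. Every number below is an illustrative assumption, not a measurement from any specific system.

```python
# Back-of-envelope CO2 estimate for a training run (illustrative only).
# emissions ~ power_draw * hours * PUE * grid_carbon_intensity

POWER_KW = 1.5          # assumed average draw of the training hardware (kW)
HOURS = 72.0            # assumed length of the training run
PUE = 1.58              # data-center power usage effectiveness (assumed)
GRID_KG_PER_KWH = 0.43  # assumed grid carbon intensity (kg CO2e per kWh)
KG_PER_POUND = 0.4536

energy_kwh = POWER_KW * HOURS * PUE
emissions_kg = energy_kwh * GRID_KG_PER_KWH
print(f"{energy_kwh:.0f} kWh -> {emissions_kg / KG_PER_POUND:.0f} lbs CO2e")
```

The point of the exercise is that emissions scale linearly with compute: scale the assumed hours or hardware up by an order of magnitude and you quickly land in the 1,400-pound range quoted above.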
Now, looking into this further, let's go into some interesting thoughts around risk management for machine learning, taken from a wonderful summary review that examines this in detail and proposes assets that should be considered as part of risk management; you can find links to the relevant materials in the slides.

For the input data assets, there are a few key considerations, and I'll call out a subset here. We've got documenting model requirements, which is key from the documentation and transparency standpoint, plus assessment of data quality and encapsulation of models as two additional components necessary from the security and trustworthiness standpoints. We've also got monitoring of the underlying data and testing and monitoring of data drift, which involves being able to characterize the behavior of data and its impact on AI systems. Finally, there is the idea of making alerts actionable, and I'll show a tiny sketch of what this could look like in a moment. This again places emphasis on the actuation point of the pipeline: you detect a violation of some trustworthiness or ethical threshold; what are you going to do about it, and can the system act autonomously on it or not? And again, this is just a subset; there are many more interesting implications that come out of this work.

Similarly, the output assets bring us to considerations around the deployment of AI systems, for example in IoT ecosystems. Here we want to think about exposing biases throughout the development and deployment of the system, at all stages of model design and implementation. This may include determining the model's reliance on sensitive features, for example sensitive demographic data, and on any features that could act as proxies for this information that we hadn't initially anticipated. We also want clear documentation and methodologies established for continuous monitoring and detecting feedback loops, so we're able to identify algorithms that are influencing each other in ways that let bias trickle into decision-making processes. And we want documentation of all testing and the ability to pull models from production, including consistent validation checks, which can be done by deploying trustworthiness or ethics measures, manual or automated, that keep us up to date about the behavior of our model relative to these guidelines.
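Here is that sketch: a minimal drift monitor that compares incoming data against a reference window and routes the alert to an action, autonomous or escalated. The statistical test, thresholds, and the quarantine hook are all illustrative assumptions on my part, not the methodology of the review I mentioned.

```python
import numpy as np
from scipy import stats

DRIFT_P_VALUE = 0.01   # assumed significance threshold for drift

def check_drift(reference: np.ndarray, incoming: np.ndarray) -> bool:
    """Two-sample Kolmogorov-Smirnov test on one feature's distribution."""
    _, p_value = stats.ks_2samp(reference, incoming)
    return p_value < DRIFT_P_VALUE

def on_drift_alert(severity: str) -> str:
    """Make the alert actionable: decide what the system does next."""
    if severity == "high":
        return "quarantine_model"   # pull from production pending review
    return "notify_human"           # escalate; do not act autonomously

# Example: simulate an incoming window whose mean has drifted.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)
incoming = rng.normal(0.8, 1.0, 500)

if check_drift(reference, incoming):
    print(on_drift_alert(severity="high"))  # -> quarantine_model
```

The interesting design choice is in on_drift_alert: detecting the violation is the easy part; deciding what the system is allowed to do about it autonomously is the ethics question.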
Now, some interesting sample open research questions I'd like to pose to the audience. In the context of the Internet of Things, some interesting points are starting to arise. One of these is: how can AI balance cultural values while considering individual personalization? We're starting to see some good, robust definitions of fairness and bias emerge; they're still works in progress, things like group versus individual fairness, societal versus statistical bias, and similar. But the premise of my thought process here is: will the edge and the cloud redefine some of these definitions, taking fairness in distributed environments as just one example?

An example might be personalization versus generalization for recommender systems, where you might have a case for local bias in, say, interactive kiosks. It might be nice to have personalized recommendations based on information collected about diversity and societal norms, reflected in the interactions a user is making, in order to better align with their preferences. This does raise interesting ethical implications that are increasingly relevant given the regulations coming up in this space. An interesting add-on question is whether biases should be limited to edge nodes instead of impacting the AI model hosted in the cloud; I'll show a small sketch of that split in a moment. A lot more work needs to be done here, but these are interesting questions for discussion.

Next, let's talk about governance. I'm going to give a very high-level perspective on the implications of the Internet of Things for AI governance; more due diligence is needed on the viewer's side to dig into these regulations and figure out their applicability for business outcomes, in addition to the way we're developing and thinking about AI systems. In terms of IoT in the context of AI governance, I've taken a definition from the European Commission's Communication on AI, which describes how AI-based systems can be purely software based, such as voice assistants and similar, or can be embedded in hardware devices, and this includes Internet of Things applications. A general comment from surveying some of the regulations and policy in this space: IoT ecosystems and applications are typically considered only for the sensor, or input, component of an AI pipeline in the formulation of these guidelines. But hitting back on the point I mentioned previously, the commissioning, deployment, and maintenance of AI systems at the actuation component of the pipeline, where AI makes decisions by itself and leverages the IoT ecosystem to do so, is an Internet of Things problem. We talked about output assets as part of risk management for machine learning; revisiting that, some upcoming actuation considerations for AI ethics include piloting regulatory compliance consistently during deployment, evaluating the user experience, and so on, and these have interesting implications for the Internet of Things that we should start to consider. One of these is the implications of the ethical AI vectors for model splitting and partitioning, which is still being examined. That goes back to the idea of biases being perceived differently on the edge versus the cloud: is that relevant, does it make sense, and how does it align, in terms of user experience and the societal impact of the solution, with what our users want?
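Here is that sketch of the edge/cloud split: a shared cloud model produces base scores, and a small on-device re-ranker adapts them to the local user without sending those adaptations back to the cloud, so any "local bias" stays on the edge node. The structure, the moving-average update, and the blending weight are assumptions for illustration, not a published architecture.

```python
# Edge-local personalization: local preference bias never leaves the device.
CLOUD_SCORES = {"news": 0.6, "music": 0.7, "sports": 0.5}  # from shared model

class EdgeReRanker:
    """On-device re-ranker; its learned biases stay on the edge node."""

    def __init__(self, blend: float = 0.5, lr: float = 0.3):
        self.blend = blend          # weight of local preference vs cloud score
        self.lr = lr                # moving-average rate for local updates
        self.local_pref = {}        # item -> preference in [0, 1]

    def record_interaction(self, item: str, liked: bool) -> None:
        prev = self.local_pref.get(item, 0.5)
        target = 1.0 if liked else 0.0
        self.local_pref[item] = prev + self.lr * (target - prev)

    def rank(self, cloud_scores: dict) -> list:
        def score(item):
            local = self.local_pref.get(item, 0.5)
            return (1 - self.blend) * cloud_scores[item] + self.blend * local
        return sorted(cloud_scores, key=score, reverse=True)

kiosk = EdgeReRanker()
kiosk.record_interaction("sports", liked=True)
kiosk.record_interaction("sports", liked=True)
print(kiosk.rank(CLOUD_SCORES))  # ['sports', 'music', 'news'] on this kiosk;
                                 # the cloud model itself is untouched
```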
I'm showing here two interesting case studies that reference the Internet of Things in the context of AI governance. One is from the OECD public consultation on a framework for classifying AI systems, which describes an interesting case study around managing a manufacturing plant. That system takes as input multiple data streams of different types; it's a composite system composed of different models that interact with each other and are responsible, at a very high level, for several different outputs related to the decision-making process. Similarly, to the right, we're seeing a diagram from the FDA's proposed regulatory approach, an AI/ML workflow for software as a medical device. As part of this, they have started to investigate regulatory practices and good machine learning practices during the development and deployment of these systems. This ties back to the vectors I was discussing earlier, emphasizing transparency, security, safety and reliability, privacy, fairness, and so on.

Now, I also want to address the interesting concept of accountable anonymity, which is starting to emerge as an important user preference. According to recent governance in this space, and the practical implications of AI systems, trustworthiness and transparency of data usage and governance are key considerations. Accountable anonymity is an interesting idea; a couple of encryption schemes and methodologies have been developed around it, but the definition I'm applying here is the ability to offer levels of privacy while ensuring accountability, which can include capabilities such as data provenance. I'm also referring to methodologies such as differential privacy in learning that enable users to track their interactions with AI systems over time, retract their data if they choose, or influence the way the AI system reacts based on the data they're providing. There's an interesting news article, a snippet of which is shown at the bottom of the slide, highlighting important implications of IoT gadgets and the toxic legacy they're leaving. This, of course, has implications for AI systems, where algorithms are constantly consuming and producing data, performing evaluations, and forming understandings and representations of our data inputs. So accountability plus anonymity is an interesting upcoming concern to think about.

Next, let's dive into trust identities, which is a deeper look at the levels of autonomy of AI systems. Here we're looking at definitions of trust identities and human oversight based on initial guidelines proposed in policy frameworks, governance models, and similar. Four primary levels of AI system autonomy have been identified. First, human support, where an AI system is not able to act on its recommendations or output. Second, human in the loop, where the AI system can act on its recommendations and outputs if the human agrees; this is where the primary emphasis in the field today is being placed, from what I've seen. An example of this might be a medical diagnostics application, where a medical professional leverages the AI system to gain insights about a particularly complex problem that fits the representation and overall structure of the AI algorithm for the task, but the professional makes the decisions and can choose whether or not to use the outputs of the system. Third, human over the loop, where the AI system can act on its own recommendations and output unless the human vetoes. I personally see this as an interesting evolving consideration of user experience: the AI system is not constantly pestering the user to validate its outputs every time, but is able to identify critical outputs that need to be brought to the user's attention, and not proceed, or determine some threshold accordingly, in order to get important or salient user feedback at the right time. Finally, there is the intriguing level of human out of the loop, where the AI system is able to act on its recommendations and output without human involvement at all. We're seeing modifications of these autonomy levels leveraged in the European Commission's guidelines on trustworthy AI, the OECD public consultation framework I mentioned earlier, the Singapore government's Model AI Governance Framework, and additional documents.
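As a minimal sketch of how these four levels could be encoded in an IoT actuation path (the enum names mirror the levels above; everything else, including the default-allow-unless-vetoed behavior, is my own illustrative assumption):

```python
from enum import Enum, auto

class Autonomy(Enum):
    HUMAN_SUPPORT = auto()      # system may only inform, never act
    HUMAN_IN_LOOP = auto()      # act only with explicit human approval
    HUMAN_OVER_LOOP = auto()    # act unless a human vetoes
    HUMAN_OUT_OF_LOOP = auto()  # act autonomously

def may_act(level: Autonomy, approved: bool = False, vetoed: bool = False) -> bool:
    """Return whether the device may execute the AI's recommendation."""
    if level is Autonomy.HUMAN_SUPPORT:
        return False
    if level is Autonomy.HUMAN_IN_LOOP:
        return approved                 # default-deny without approval
    if level is Autonomy.HUMAN_OVER_LOOP:
        return not vetoed               # default-allow unless vetoed
    return True                         # HUMAN_OUT_OF_LOOP

assert may_act(Autonomy.HUMAN_IN_LOOP) is False
assert may_act(Autonomy.HUMAN_OVER_LOOP) is True
assert may_act(Autonomy.HUMAN_OVER_LOOP, vetoed=True) is False
```

Notice how the user-experience point shows up as a default: human in the loop is default-deny, human over the loop is default-allow, and the whole design question becomes where the veto threshold should sit.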
I also want to briefly touch upon the stakeholders in the AI supply chain, the people who interact with the system and are influenced by the levels of autonomy of AI systems. These include model developers, whose role might include debugging, performance improvements, addressing biases, and so on; business owners, who evaluate the fit of the model and its suitability for their use cases, which may also include the roles of data controllers and data processors proposed as part of the EU Commission's guidelines; model risk evaluators, who are responsible for checking the robustness and deployment readiness of AI systems; regulators, who are responsible for reliability and impact assessments; and finally our end users and consumers, for whom transparency is an especial emphasis, so that users are able to understand the comprehensive AI pipeline and interact with it in a meaningful way that is best aligned with their interests.

Next, I want to quickly go through some critical issues revealed by the Verkada hack, which was a large-scale privacy breach. I recommend reading the news article to learn more if you haven't heard of it already, because it's a very interesting IoT security use case, and I want to generalize some of its insights to AI ethics as well. The importance of authentication and authorization is shown by these kinds of recent large-scale privacy breaches. As a very quick, high-level summary of the hack: employees and/or third-party hackers were able to gain access to a super admin account that was designed for debugging, and they were able to use it to access sensitive data, giving themselves permission under the guise of testing the system. Three critical issues I'm pulling out of this for our discussion are: lack of review of justification or documentation for access; no time-bound access rights; and the ability to override customers' privacy modes due to misplaced trust. That third point especially intersects with AI ethics and trustworthiness considerations. While the first two are related to transparency and the mechanisms we put in place to ensure trust, the third also highlights ethical dilemmas and the interpretation of the vectors I mentioned earlier in the context of trustworthiness and IoT security overall.
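Two of those three issues, missing justification review and missing time bounds, are directly addressable in code. Here is a minimal sketch of a time-bound, justification-logged access grant; the structure and field names are my own illustration, not drawn from the incident writeups.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AccessGrant:
    user: str
    scope: str                     # e.g., "debug:camera-feed"
    justification: str             # required, reviewable reason for access
    granted_at: float = field(default_factory=time.time)
    ttl_seconds: int = 3600        # access expires automatically

    def is_valid(self) -> bool:
        return time.time() < self.granted_at + self.ttl_seconds

AUDIT_LOG = []  # every grant is recorded for later review

def request_access(user: str, scope: str, justification: str) -> AccessGrant:
    if not justification.strip():
        raise ValueError("access denied: justification is required")
    grant = AccessGrant(user, scope, justification)
    AUDIT_LOG.append(grant)        # reviewers can audit who asked, and why
    return grant

grant = request_access("eng-42", "debug:camera-feed", "repro for a bug ticket")
print(grant.is_valid())  # True now; False once the TTL elapses
```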
Bridging off of those points, we're also seeing transparency as a concept that is starting to serve as a double-edged sword. Some proposed attributes of human-centric IoT trust models include trusted attributes and delegation. When we start to use transparency as a mechanism for enabling these particular attributes, we start to see some contention. For example, explainable components of a machine learning pipeline, included for debugging purposes, can overlap with or expose security and privacy vulnerabilities. An example, taken from a very interesting academic paper looking into this, covers the different types of explainable AI methods and the potential security risks they can pose. At a very high level, naming these methods and the associated risks: explanation-by-example interpretability methods require access to sets of reference data, so they pose privacy risks precisely because of that required data access. Model-transparent methods reveal model weights, which can make it easier for attackers to infer the underlying data. And differential privacy is an interesting and useful methodology for anonymizing training data, but it can impact the quality of the explanation and the overall accuracy; I'll show a tiny sketch of that tradeoff in a moment. So there are a lot of different contentions that need to be explored, where these vectors can serve as double-edged swords, and here we're using transparency as the case study.
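Here is that differential privacy tradeoff sketch: a Laplace mechanism on a simple counting query, where shrinking epsilon (stronger privacy) visibly degrades the answer's utility. This is a textbook-style illustration under assumed parameters, not the specific scheme from the paper I referenced.

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: noisier answers at smaller epsilon (more privacy)."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

TRUE_COUNT = 1_000   # e.g., users whose data matched some query (assumed)

for eps in (10.0, 1.0, 0.1):
    answers = [dp_count(TRUE_COUNT, eps) for _ in range(1_000)]
    err = np.mean([abs(a - TRUE_COUNT) for a in answers])
    print(f"epsilon={eps:>4}: mean absolute error ~ {err:.1f}")

# Smaller epsilon -> stronger privacy -> larger error, i.e., worse utility
# for any explanation or metric built on top of these noisy answers.
```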
I'd like to conclude with some intriguing and unconventional considerations for human-centered design, around social reasoning and moral machines. At a very high level, we're starting to look into what social reasoning means for AI systems: how do we instill value alignment in AI systems and in IoT ecosystems, setting aside for a moment the practical implications, risk management frameworks, and governance and policy frameworks? What does this mean from the implementation and research perspective, looking forward? One of the key ideas we're seeing in this space is techno-solutionism, where, as a high-level description of the trend, technology is leveraged to solve problems that were created by technology. This has interesting implications for AI ethics. An example is in the value alignment space, where we need to understand that setting principles and concepts like non-discrimination and fairness in stone, or in code, without transparent processes can violate democratic ideals; this was very nicely noted in an excellent survey paper linked here. Taking fairness as the case study: it seems important to be able to automate fairness while respecting and enabling a contextual approach, and the extent to which we automate it, in relation to the trusted entities we design, the stakeholders involved in the pipeline, and the types of mechanisms we impose for trust, all needs to be considered. We also need to consider what societal proposals we can make from the governance and policy perspective to support this, because one of the key challenges we keep encountering with technical or practical solutions is: just because you can do it doesn't mean you should.

On this idea of incorporating social reasoning into AI systems, I also want to mention some very interesting research on creating a dataset for visual commonsense reasoning, where researchers are attempting to incorporate commonsense reasoning, or reasoning in general about images and supporting data from different modality types, into AI systems. This is still a work in progress, and I'm exaggerating here for the purposes of a forward-looking perspective, but what I want to drive home is the idea of embodied AI: bridging the gap between tasks such as language generation, which is the task investigated by the researchers in the paper linked at the right (you can see the image here), and actuation at inference time. Embodied AI can be considered the cool terminology for AI embedded in IoT ecosystems, where we're giving AI a body, a methodology or way of acting based on its evaluation of data. Being able to evaluate at exactly which points in that pipeline it is okay for AI to act, and where we may be substituting too many technological solutions for concerns that should be addressed through more of a societal lens, are key considerations in the development of these systems and in instilling social reasoning in AI.

Now, some additional interesting considerations on how versus why, and the ideas of explainability and causal methods. I'm outlining some thoughts in this space. We're seeing causal methods in the AI space focusing on why the AI system made a decision, or inferring why other parties or entities have been making theirs. I see AI explainability as an interesting answer to the question of how the system is making its decision. And self-improving, self-guiding AI is a key factor starting to come into play, where AI systems try to understand how they can improve their own decision making, defenses, or role in a particular framework, in addition to how other entities could do so, and provide recommendations for improvement. Ethical considerations here include embedding human intuition into AI algorithms, reliance on automation and the human-in-the-loop concept, and neutralizing human mis-intent.

On that last point, I want to present an interesting use case: an unconventional concern for human-centered design around when users are a threat. At the left of the screen you can see a GIF showing a robot deployed in a mall, where essentially it's trying to navigate through a crowd of people, and we're seeing some interesting behavior from children: they play around with the robot, but it sometimes escalates to more violent behavior, like kicking and hitting. In this case, users we planned and designed for have suddenly become an unexpected threat. Interestingly, what the researchers did in this paper is build an abuse-evasion simulation and planning model, making the robot change its original destination when it predicted that children were about to approach and make trouble. So here the AI system, and the engineers developing it, account for the fact that users can be a threat and design accordingly, rather than trying to influence the people themselves.
They're influencing the robot's behavior, but in a way that also indirectly influences the entities it interacts with. That's an interesting safety model, and it raises a concern: when should humans be out of the loop? How do we differentiate between regular and malicious users? And explainability, or transparency, is again that double-edged sword: as we continue to provide a lot of information about AI systems in the name of transparency, how do we protect against potential malicious manipulation and circumvention by users who can exploit this information? A very simple, high-level example might be users jailbreaking or hacking AI models provided through an API for their own use. For an IoT case study, think of a healthcare user trying to fake the number of steps they've taken through a smartwatch. Smartwatch users may have noticed, for example, that if you wave your hand around, do some typing, or play the piano, it will log some steps for you beyond what you've actually accomplished toward your exercise goals. That's a fairly trivial example, but the ability to manipulate systems, especially when given a lot more insight into them, is interesting, and it has implications for users in, for example, the medical space: say you have a user who maliciously tries to game the system to claim they've taken so many steps per day, or taken their medications, when in reality they haven't. This could be malicious, or it could be neutral intent, where the user is not even aware they're interacting with the system in this way. So it's really an interesting balance of transparency versus security, and of anticipating these different case studies and different profiles of users: those with a malicious, hacker's angle; those who aren't aware they're influencing the system in this way; and those who need some help understanding the system's outcomes so they can improve accordingly. I'll show a simple plausibility check for that step-count case in a moment.
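Here is that check: a minimal sketch of the kind of server-side plausibility filter that could catch the step-count example above. The cadence bounds and window length are assumptions of mine, not any vendor's actual heuristics.

```python
# Flag step-count windows whose cadence is implausible for walking.
# Assumed bounds: normal walking cadence is very roughly 60-140 steps/min.
MIN_CADENCE, MAX_CADENCE = 60, 140

def plausible(steps_in_window: int, window_minutes: float) -> bool:
    cadence = steps_in_window / window_minutes
    return MIN_CADENCE <= cadence <= MAX_CADENCE

windows = [(110, 1.0), (400, 1.0), (70, 1.0)]  # (steps, minutes) samples
accepted = sum(steps for steps, mins in windows if plausible(steps, mins))
flagged = [(s, m) for s, m in windows if not plausible(s, m)]
print(accepted)  # 180 -- only plausible windows count toward goals
print(flagged)   # [(400, 1.0)] -- e.g., vigorous hand-waving or spoofing
```

Of course, a motivated user who knows these exact bounds can simply spoof within them, which is precisely the transparency-versus-security tension I just described.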
Finally, I want to address another unconventional concern for human-centered design of AI systems: the idea of AI systems hacking into, or beating, themselves. Here we have an interesting case study on adversarial examples for reinforcement learning; I highly encourage watching the video for an entertaining but also intriguing and potentially alarming application, in which reinforcement learning agents try to fight and essentially outsmart each other. This is interesting, or funny, in the context of humanoid agents in constrained 2D or 3D environments. But if we consider it for, say, a setup of drones or similar, we're starting to see AI agents figuring out and exploiting their own methodologies, and this brings me to the last point of my presentation: when is AI a threat to itself, and what are the implications for Internet of Things ecosystems? Is AI exploiting itself possible? Going back to that drone setup, take a reinforcement-learning-based drone that needs to reach a destination in a given time, where the quicker the delivery, the higher the reward. For the purpose of path optimization, is it possible the drone could try to exploit other drones, or even digital traffic signs or other connected IoT devices in its environment that it has control over, again emphasizing that actuation capability, in order to get to its destination quicker? This ties back to value alignment, and while these are interconnects occurring from a technological perspective, we nevertheless still see the societal and ethical implications. A final interesting point is on adversarial perturbations in general: there has been interesting research showing they can influence time-limited humans to choose incorrect classes. Beyond that research, we're starting to get into interesting ethical territory related to deepfakes and similar, where simulated input or content tricks AI systems or humans. So, can AI be a threat to itself? Will the Internet of Things ecosystem enable that, or will IoT instead enable proper security, trustworthiness, and ethical measures to prevent misuse and abuse of AI systems? Only time will tell, but the positive outlook here is that while IoT, including its security, trustworthiness, and ethical implications, has a long way to go, we can build a secure and ethical future for AI systems and the humans leveraging them. Thank you so much for your time. On this slide you can also find Intel's code of conduct and a snippet of our responsible AI principles, which inform our development processes.