Hi, everyone. You don't need to hear again today why artificial intelligence is so important: it plays a vital role in your decision-making processes on a day-to-day basis, from what movie you're going to watch to whether you're going to get a mortgage or a home loan. For that reason, it's extremely important for us to be able to understand what goes into these models and to have the ability to influence them. Gartner has named explainability one of the top 10 trends in artificial intelligence for the coming years. As researchers and machine learning practitioners, we understand what goes into these models, but for day-to-day users they're black boxes. They don't understand what's behind the decision-making processes, and they don't necessarily understand how they can try to influence them. Realistically, explainability is just the first step in giving users the ability to influence these systems. Think for a moment about how our AI pipelines work: you have data, that data goes into an algorithm that generates a model, and then you get an output from that model. That output could be a recommendation, a prediction, or some kind of class identification. But your models are only going to be as good as the data you have as input or the rules you have in your system. Realistically, none of these systems is going to have all of the data needed to make the right decision all of the time. You're going to have users and experts with contextual information that has an impact, or with novel information that the system was not aware of. So we have started looking at areas where we can let a user influence the system. Ideally, you should be able to control the entire pipeline and let users bring domain expertise and knowledge in from the ground up, right the way to the end.
But in reality, most existing solutions in industry, or already out there in the wild, have certain key pieces in place. So we've been looking at the different intersection points in the pipeline where we can let the user have an influence: at the data level, at the model-generation level, or at the output, with some post-processing. We'll see one of the solutions we have at each of these points. Once you have explainability, and someone understands some of the aspects that go into a decision, a recommendation, or a prediction, then you can try to let the user influence it in some shape or form. That's what we're trying to do in these situations. Being in IBM Research, we have the opportunity to work with a couple of different domains, try out a couple of different use cases, and experiment in different ways. The first one I want to discuss here is opportunity team building. We had the opportunity to work with our sales division to understand the sales pipeline and figure out areas where artificial intelligence and decision-support systems could help the sellers make better decisions. One of the areas we focused on was a recommender system for sales teams. One of the reasons sales teams, and teams in general, are an interesting problem is that it's not about recommending a single item: it's a composition of items. And I shouldn't be referring to people as items, but it's a composition of people and skills. I like this chart here because it shows one of the trade-offs that has to happen when putting people together to work as a team. If you have people with low overlapping knowledge, they tend not to be able to communicate: they don't speak the same language, and they don't necessarily have enough overlapping goals. So team performance is quite low.
If you go all the way to the other side of the spectrum, where everyone has the same overlapping knowledge, you get an echo chamber. They all understand the same things and have the same ideas, and when they come to a challenge, a hurdle, or some novel issue they need to tackle, they don't have other areas to draw on to inspire innovative solutions. What you actually want is a trade-off between these two extremes. So we started working on a recommender algorithm to help sellers build these teams and compositions. One of the things we did was not just provide recommendations, but also provide evidence as to what attributes a person brings to the team, so that sellers could see the skills involved. In this case, you can see attributes such as experience with that client, experience working in that country, and experience working with deals of that specific size. Another thing we did was train a machine learning model to predict, based on the current composition of your team, the probability of success of the opportunity. This is a strand we've been investigating that we refer to as recommendations with consequences. We don't just show a recommendation to a user; we try to show them what the impact is going to be of consuming that recommendation, of making that decision. That was one of the first projects where we tried to go beyond just showing someone a recommendation, and even beyond giving them an understanding of the underlying reasoning, to showing the impact of taking the recommendation. We had a similar project that had to do with recommending business partners.
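To make the idea concrete, here is a minimal sketch of recommendations with consequences: each candidate is shown alongside the predicted change in the opportunity's success probability if they join the team. All names, attributes, and weights here are hypothetical; in the real system the success model was learned from historical deal data, not hand-specified like this simple logistic scorer.

```python
import math

# Hypothetical per-attribute weights for a toy logistic success model
# (in practice these would be learned from historical opportunities).
WEIGHTS = {"client_experience": 0.9, "country_experience": 0.5,
           "deal_size_experience": 0.7}
BIAS = -1.2

def success_probability(team):
    """Predicted success probability given the team's pooled attributes."""
    pooled = {a for member in team for a in member["attributes"]}
    score = BIAS + sum(WEIGHTS[a] for a in pooled if a in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

def recommend_with_consequences(team, candidates):
    """Rank candidates by how much they raise the predicted success rate."""
    baseline = success_probability(team)
    ranked = []
    for cand in candidates:
        delta = success_probability(team + [cand]) - baseline
        ranked.append((cand["name"], cand["attributes"], round(delta, 3)))
    return sorted(ranked, key=lambda r: r[2], reverse=True)

team = [{"name": "A", "attributes": ["country_experience"]}]
candidates = [
    {"name": "B", "attributes": ["client_experience"]},
    {"name": "C", "attributes": ["country_experience"]},  # redundant skill
]
for name, attrs, delta in recommend_with_consequences(team, candidates):
    print(name, attrs, f"{delta:+.3f}")
```

Note how candidate C, whose only skill the team already has, contributes no lift: the pooled-attribute model naturally penalizes redundant knowledge, echoing the overlap trade-off discussed above.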
We would make these recommendations to the sellers, and sometimes they would choose not to select the business partner we had recommended. So we had to build in a feedback loop to understand why they didn't consume the recommendation, because according to our data, in our supposedly complete view of the world, we think these are the best people for your deal. But the sellers had real-world information beyond our knowledge, information our system was never going to have, because it would be things like: this customer met this business partner at a conference, and that's who the client wants to work with. It's knowledge outside the scope of the data we had access to. So we enabled users to give feedback when consuming recommendations, saying why they were making that decision, and we used NLP to extract which attributes were the key criteria and added them to our evidence model. That's an example of letting users influence the data going into the system in order to make different decisions going forward. This work was actually responsible for a Stevie Award for innovation in sales that we won at the beginning of last year. The other project we started working on tried to push this notion of interacting with recommendations even further, again in the area of recommender systems. We had the opportunity to work with one of our product teams that does career recommendations inside the industry, and we decided this was an excellent avenue for exploring the notion of interactive recommendations. There were a couple of reasons for this. One of the reasons people don't always want to interact with recommender systems is that it's not worth their time.
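The feedback loop above can be sketched as follows: map a seller's free-text reason for rejecting a recommendation onto known evidence attributes, then record those attributes against the partner. The attribute vocabulary and trigger phrases are hypothetical, and the real system used an NLP pipeline rather than this simple phrase matching.

```python
# Hypothetical mapping from trigger phrases to evidence attributes.
ATTRIBUTE_PHRASES = {
    "existing_relationship": ["met at", "already works with", "knows the client"],
    "client_preference": ["client wants", "client asked for", "client prefers"],
    "pricing": ["too expensive", "cheaper", "cost"],
}

def extract_feedback_attributes(feedback_text):
    """Return evidence attributes whose trigger phrases appear in the text."""
    text = feedback_text.lower()
    return sorted(attr for attr, phrases in ATTRIBUTE_PHRASES.items()
                  if any(p in text for p in phrases))

def update_evidence_model(evidence, partner, feedback_text):
    """Attach the extracted attributes to the partner's evidence record."""
    attrs = extract_feedback_attributes(feedback_text)
    evidence.setdefault(partner, set()).update(attrs)
    return attrs

evidence = {}
attrs = update_evidence_model(
    evidence, "partner_42",
    "The client wants to work with a partner they met at a conference.")
print(attrs)  # ['client_preference', 'existing_relationship']
```

The point of the design is the last step: the extracted attributes flow back into the evidence model, so the next round of recommendations is made with data the system could never have inferred on its own.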
If it's just picking a movie, there's a low cost to getting it wrong: you've wasted two hours, and you can just stop midway through. But your career is extremely important, and you're going to invest the time to understand it. So when we were trying to figure out how to captivate people and get their information into the system, this was definitely a problem area where we felt users would be willing to interact. Another interesting aspect is that we were inferring knowledge about the users, by doing things like processing their CVs or looking at their job records inside IBM. But this is a kind of recommender system where we don't have rating data. You took a job; we don't know whether you liked it. The job had certain attributes associated with it, like being a manager or taking a client-facing role, and again we don't know whether you liked those attributes. So we built a dialogue system that could interact with users, not just to present recommendations but to let users probe and understand the recommendations and give feedback on the information we, the system, were able to give them. If we made a recommendation for an item, we let them say, "Tell me about that item. Tell me about that job." We would return things like: it has a management role, or it has to do with supporting networks. The user was able to say, "Well, I like working in management, but I don't like working in the area of networks." They were able to do what's called critiquing the individual recommendation items, and tune the recommendations in real time. Now let's say the person has got through the flat set of recommendations we presented, but hasn't really noticed one that grabbed them. We also did a notion of preference elicitation.
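A minimal sketch of that critiquing step: the user says "more management, less networks", and the candidate jobs are re-ranked in real time against those liked and disliked attributes. The job catalogue and the simple additive weighting are hypothetical, not the deployed product's model.

```python
# Hypothetical jobs and their attributes.
JOBS = {
    "Network Support Lead": {"management", "networks"},
    "Delivery Manager": {"management", "client_facing"},
    "Network Engineer": {"networks"},
}

def critique_rank(jobs, liked, disliked):
    """Score each job: +1 per liked attribute, -1 per disliked attribute."""
    scores = {job: sum(+1 for a in attrs if a in liked) +
                   sum(-1 for a in attrs if a in disliked)
              for job, attrs in jobs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "I like working in management, but I don't like the area of networks."
ranking = critique_rank(JOBS, liked={"management"}, disliked={"networks"})
for job, score in ranking:
    print(job, score)
```

Because the critique applies to attributes rather than to a single item, one statement reorders the whole list: the networks-only role drops to the bottom immediately, without the user rating any item directly.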
So: we know we have these other items we could show you, but if you answer these particular questions, we'll be able to get to the answer quicker for you. We would auto-generate these preference elicitation questions to divide the data in the most telling way and hopefully get you to a more appropriate recommendation. And finally, we would let users ask, "Why did you recommend this role to me?" We would surface the attributes we understood about them, because again, we're inferring from their job roles and from their CVs what we think we know about the user, and we could be wrong. So we tell them: you have this kind of history, you have this kind of relationship with the job we're recommending to you. And we give the user the opportunity to correct those assumptions and tune the information that comes back to them. One thing I want to point out is that when we first started this project, we built a recommender system with the understanding that we could control the entire pipeline I showed you at the beginning. But in reality, we found there were lots of APIs and lots of existing recommender systems within the domains we were working in, and the teams didn't want to rewrite them, and they didn't want us rewriting them. So one of the things we developed was a platform, a framework, that would sit on top of an existing recommender algorithm. It would create APIs on top of it, build an understanding of the attributes of the items, build these preference elicitation questions, and allow an out-of-the-box recommender algorithm to become something you could interact with through dialogue. This is something we deployed as a pilot inside the organization.
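One way to auto-generate such a question, sketched below under hypothetical data: ask about the attribute that splits the remaining candidate items most evenly, so that either answer eliminates as many items as possible. The talk doesn't specify the exact splitting criterion, so this balanced-split heuristic is an assumption standing in for it.

```python
# Hypothetical remaining candidate jobs and their attributes.
JOBS = {
    "Delivery Manager": {"management", "client_facing"},
    "Network Engineer": {"networks"},
    "Account Executive": {"client_facing"},
    "Team Lead": {"management"},
}

def best_question(jobs):
    """Pick the attribute whose yes/no split is closest to 50/50."""
    attributes = {a for attrs in jobs.values() for a in attrs}
    def imbalance(attr):
        have = sum(1 for attrs in jobs.values() if attr in attrs)
        return abs(have - (len(jobs) - have))
    return min(sorted(attributes), key=imbalance)

def apply_answer(jobs, attr, wanted):
    """Filter the candidate set according to the user's yes/no answer."""
    return {j: a for j, a in jobs.items() if (attr in a) == wanted}

q = best_question(JOBS)
print(f"Do you want a role involving '{q}'?")
remaining = apply_answer(JOBS, q, wanted=True)
print(sorted(remaining))
```

A balanced split is the "most telling" question in the sense that, whatever the user answers, roughly half the candidates drop out, so a handful of questions narrows a large catalogue quickly.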
The final piece I want to mention is very much a work in progress. We've talked about modifying the data that goes into the system, and about a layer that sits above the output to bring in this interactive element. This is a case where the model itself is generated through something we can deem interactable. Here we're dealing with health insurance policies. You have a policy, and that policy has its rules dictated in text. We have a team working on building ontologies on top of these documents and creating a semantic graph of rules that represents the system: things like, you're entitled to six physiotherapy visits a year, and these are the age requirements associated with that. The document matters because this is an example where machine learning algorithms would have a big problem. Historically, your data says that if a person meets these criteria, with this history, they're approved to consume this resource. The problem is that something like a health insurance policy is potentially going to change on a yearly basis, and you can't wait for your machine learning algorithm to catch up.
You need a mechanism to change it and take these business rules on board as they arise. So here, the model is driven directly from the text. The team then builds a Markov logic network on top of it, and they're able not only to infer whether a claim should be supported or not, but also to provide an explanation tree showing which clauses and which aspects of the policy are being triggered to make that decision, along with the supporting information. Once again, you can give the explanations for why a decision is being made, and you can allow for correction if mistakes have been made. This is work in progress. Finally, I just want to thank my collaborators. I have a great team that I work with, and I want to thank them for the projects I've shown you today. This is just a taste of some of the pieces we've been working on, where we're really trying to look at the different stages of the AI pipeline and understand where a user can bring influence to the process. Thank you.
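The explanation-tree idea can be sketched in miniature: each policy clause is a named predicate, and the decision records which clauses fired. The clauses and limits below are hypothetical, and this uses plain boolean rules as a stand-in; the actual system reasons probabilistically with a Markov logic network over the semantic graph extracted from the policy text.

```python
# Hypothetical policy clauses as named predicates over a claim.
RULES = [
    ("age_requirement", lambda c: 18 <= c["age"] <= 65),
    ("annual_visit_limit", lambda c: c["visits_this_year"] < 6),
    ("covered_treatment", lambda c: c["treatment"] == "physiotherapy"),
]

def evaluate_claim(claim):
    """Return (approved, explanation: per-clause outcomes)."""
    explanation = [(name, rule(claim)) for name, rule in RULES]
    approved = all(ok for _, ok in explanation)
    return approved, explanation

claim = {"age": 42, "visits_this_year": 6, "treatment": "physiotherapy"}
approved, explanation = evaluate_claim(claim)
print("approved:", approved)
for clause, ok in explanation:
    print(f"  {clause}: {'satisfied' if ok else 'violated'}")
```

Because the rules are driven from the policy text rather than from historical approvals, a revised policy only requires updating the rule set, and the explanation immediately points at the violated clause, giving the user a concrete target for correction if the extraction got a rule wrong.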