Oren, how are you? Great to see you. Good, how are you? Thank you for having me. Very well, welcome. Thank you. If you're all set, the stage is yours, Oren. Take it away. Excellent, thanks. Can you see my screen? Cool. Thank you.

All right, so nice meeting you all. As an introduction: the role of product manager, whether held by a product manager or by an engineer or any other builder acting as a product manager, is evolving once AI and machine learning are embedded into the product, and that is the topic I'm going to talk with you about today.

I'm going to start with an overview of what ML is. This is not an educational session; the "what is ML" part is just to make sure we're all on the same page as far as terminology. Then I'm going to dive deeper into the use case methodology: finding the use case for AI and ML using Amazon's PR/FAQ, the working-backwards methodology. We're then going to look at how you qualify use cases: are they a good fit for machine learning or not? We're going to talk a little bit about the POC: how do you decide which element you want to de-risk in the POC, and can you do machine learning in a lean-methodology approach? People used to think not; today, with AutoML and pre-trained models, this is doable.

We're going to talk about the team: how do PMs and data scientists collaborate in an empowered team structure along the different stages of the company's ML journey, whether you're starting with the first use case or all the way through to an ML platform that can support an abundance of machine learning models, some in training, some in deployment, and so on. Data is a very big piece, and we've heard a lot about that today; it is a very big piece of any machine learning process, and product managers play a key role in ensuring data quality and in preparing and labeling the data for any machine learning model.

The important thing to remember is that a model is not a product. Taking a model into a product requires earning the user's trust, and there are multiple mechanisms and tools you can apply in order to do that. Some of them are called human in the loop, some of them are called explainable AI, and we'll talk about them. We'll also talk about the ML flywheel and the feedback loop: how do we take the data into a model, how do we change the behavior, and then the model will drift because we successfully changed the behavior, so we may need to identify new data, relabel it, retrain it, and so on. And finally I'll talk a little bit about how AWS can help you with your ML journey, whether you're just starting it or at an advanced stage.

So let's dive in. What is ML, at a high level, just so we all know we're speaking about the same thing today? There are three terms that are often used interchangeably. Artificial intelligence is mimicking human behavior, traditionally with rule-engine-based methods. Machine learning is where we let the machine learn from the data by itself. Deep learning is a subset of that, where we have hidden layers: we have an input layer and an output layer, and everything happening in the middle is hidden from the human, though today with explainable AI we can often interpret and understand a lot of the reasoning behind the machine learning.
Another way to look at the classification of machine learning types is supervised learning, where we have a label, where we tell the machine what we want it to learn. We say: here's an image of a cat, here's an image of a dog, or a million of each, and then for the two-million-and-first image we want the machine to identify and classify it, a binary classification, by itself. Or unsupervised learning, where we do not tell the machine what we want it to detect; we just put the data into the machine and expect it to identify patterns in the data, whether that's anomalies or clusters, attributes of the data that can help say this group is more similar than that group. But what the meaning of that group is will require an SME, a subject matter expert, to later look at the clusters, maybe adjust them a little bit, name them, and give them meaning in context.

So supervised learning is the place you want to start, it's easier to start with, but it requires labeling. Unsupervised learning can support it in a way: if we have a lot of data with high dimensionality, we can reduce the dimensionality using unsupervised learning and then feed the result into a supervised learning model. Reinforcement learning is where we have a lot of iterative games, or opportunities for the machine to learn and design a policy based on a reward; we tell the machine, hey, this is good, this is bad, now you can learn from that.

A recommendation engine, for example, might learn from how users interact, and we can decide, as product managers, not as data scientists, whether we want to exploit what we've learned about the user so far and give them recommendations based on what we know about them, or whether we want to explore new things they may be interested in. This balance of exploration versus exploitation is, I think, more of a product manager decision than a data science decision, and we'll talk about those decisions and your role in them going forward.

Machine learning is probably the most disruptive technology for product managers, in that traditionally we would look at programming in the following manner: you have data, and we have a program that the computer scientists and engineers build in order to get the outputs that we want. This usually carries a high price and a lot of effort, and we as product managers often have a desire to do a lot of features, new products, many ideas, but then we hit the wall of "hey, this costs a lot to implement, the software is very complicated," and so on. In the machine learning world, we actually work in reverse: we say here's the data, here's the output that we want, and the machine takes care of designing its own program. This makes it a lot faster and easier for product managers to actually test, iterate, and develop new products, provided that you have an ML platform to support this fast iteration of data scientists and engineers ideating together and trying things out. You still need the data, and you still need to label the data, but in the long run, not having to heuristically code everything makes it easier to design, develop, and roll out new products and new features.
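To make that reversal concrete, here is a minimal, purely illustrative Python sketch (the "risky order" example, the data, and the threshold are invented, not from the talk): on one side a hand-written rule, on the other a model that learns its own rule from labeled examples.

```python
# Minimal sketch: hand-coded rules vs. letting the machine learn its own "program".
# The data, threshold, and "risky order" example are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: data + hand-written program -> output.
def is_risky_rule_based(amount, past_orders):
    return amount > 500 and past_orders < 3

# Machine learning: data + desired outputs -> the machine derives the "program".
X = [[50, 10], [700, 1], [650, 0], [30, 4], [900, 2], [20, 12]]  # [amount, past orders]
y = [0, 1, 1, 0, 1, 0]                                           # labels: 1 = risky

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[800, 1]]))  # the learned rules applied to a new example
```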
So, machine learning has been around for a while. Why is it now so successful? Hmm, something's not working here. Can you guys still hear me? We can hear you, Oren. Okay, for whatever reason my deck... Your presentation is stuck. I guess you can see that as well. Okay, let me just start that over. I apologize for this; I'm having a glitch with the computer.

So machine learning has been around for a while, and the reason we're now seeing machine learning take off so fast comes down to three main reasons. Compute is now available to basically everybody. Data has been accumulating at a fast-growing pace. And research is showing that machines can see better than people in computer vision, can analyze language almost as well as people in NLP, natural language processing, and can detect patterns in data probably better than people. All of this is available in the cloud today.

So what does that mean for you, the product managers? What is your role, and how does it change in a machine-learning-powered product management process? First, you can now productize and build features for complex situations, situations that were originally too complicated to capture in conditional logic. When data scales fast, you don't have to redesign your heuristics; machine learning can support that. When the value is a personal, hyper-personalized path or journey for a user rather than a segment: say, instead of looking at an age group or age cohort, you want to look at each user individually along their digital journey. When data changes in an unpredictable manner in real time, machine learning can cope with that much better than traditional heuristics. If you have a lot of data and you're just looking to generate some insights from it, you start by visualizing it, and if you want to go deeper and identify things that visualization cannot detect easily, machine learning helps. And when you want to automate things, like computer vision, BPM processes, NLP chatbots, and so on.

Traditionally, people used to say that we need millions of images or data points in order to train a model, and that doesn't really support the lean approach. Actually, today that's not true. In the last couple of years we've seen a significant uptake of AutoML capabilities, we call this Autopilot, and of AI services. These are pre-trained, pre-built models that can be tweaked to a specific use case or a specific data set in any domain or vertical, and they can get you started with a machine learning model in a matter of hours or a few days, with a POC that has machine learning embedded inside it. It might not be the best or most optimized model for scaling your own use case, and, like I said, you can tweak it later as you go forward, but you can get to a POC really fast with pre-trained and pre-built models. And today you can do that with Autopilot as open-box machine learning, not a closed box: if you have data scientists who just want to save their ramp-up time, they can start with AutoML and then get, under the hood, all the ETLs that were designed by the machine and all the algorithms it checked and selected, and then take that and continue to improve on it going forward.
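As a hedged illustration of what "getting started with AutoML in hours" can look like in practice, here is a rough sketch of launching a SageMaker Autopilot job on tabular churn data with boto3. The job name, bucket paths, target column, and IAM role below are placeholders I made up; adapt them to your own account and data.

```python
# Hypothetical sketch: kicking off a SageMaker Autopilot (AutoML) job on a tabular
# churn dataset with boto3. Bucket names, the target column, and the IAM role are
# placeholders -- not real resources.
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-example-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",  # the label column we want predicted
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-example-bucket/churn/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
)
# Autopilot explores ETL steps and candidate algorithms and produces notebooks that
# your data scientists can open, inspect, and keep improving on their own.
```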
So what does the role of the PM in a machine learning process look like? First we have the problem, use case, and KPI definition. We then want to de-risk something through a POC. Then comes the modeling, which usually includes data selection and labeling.

Feature engineering takes the raw data and turns it into features: longitude and latitude can be raw data, but saying that this person in that location is near a ballgame, or near the highway, or a shopping mall, that is feature engineering; that is taking two attributes of location and organizing them in a way that lets you see whether things are 50 meters apart from each other.

Hyperparameter optimization stands for all the elements you can control inside the black box, meaning you can control how many hidden layers are in there, or whether the data moves in one direction or in a circular fashion. If you look at the example of product recommendations, which Amazon has been developing for the past 20 years, you can look at what data we want to consider historically and how we want to sample or weight that data. If you're looking at data from before COVID, maybe it's not relevant for this specific use case, so you want to tone down the weights of the older data; or, as we're coming out into a vaccinated world, maybe the data from during COVID is actually what you want to weight down in the model. These are not science decisions, they are product decisions that are derived from the data we put into the model, and they impact how the data scientists will tweak the hyperparameters of the model.

Metric selection and optimization, again, is where you decide what you want to optimize the model for. Let's stay with the recommendation engine example. Is your workflow, the way you embed the model into the product, built so that the user sees one button, the next best action, and clicks to approve it? In that case you have one recommendation that needs to be either good or bad. Or is your user journey such that you show three recommended items, and any one the user chooses is good? In that case you're just looking to have one good recommendation out of the three, or out of the five, shown. All of these are inputs into deciding which metrics we want to optimize for, and, as you can see, that is a product decision, not a science decision.
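To make that metric-selection point concrete, here is a small, hypothetical sketch: the same recommendations scored two ways, hit-rate@1 for the single-button flow and hit-rate@3 for the shortlist flow. The data is invented for illustration.

```python
# Sketch of how the product's UI flow changes the metric you optimize.
# Single-recommendation flow -> measure hit-rate@1; three-item shortlist -> hit-rate@3.
def hit_rate_at_k(recommended, actually_chosen, k):
    """Fraction of users for whom the chosen item was in the top-k recommendations."""
    hits = sum(1 for recs, chosen in zip(recommended, actually_chosen)
               if chosen in recs[:k])
    return hits / len(actually_chosen)

recommended = [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]  # invented data
actually_chosen = ["b", "d", "x"]

print(hit_rate_at_k(recommended, actually_chosen, k=1))  # 0.33 -- single-button flow
print(hit_rate_at_k(recommended, actually_chosen, k=3))  # 0.67 -- shortlist flow
```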
The model is not a product; in between there is earning user trust, and we'll talk about that, about human in the loop, about explainable AI, and about measuring feedback and changing behavior. Of course, the beginning and end of this process are led by the product manager: you're the decision maker on the problem and the use case, and you're the decision maker on analytics, feedback, and how the product behaves in the market. But you're also a shared decision maker in productization, in the POC, and even in the modeling process. Unlike traditional engineering, where sometimes we write the PRD or MRD, hand it over to engineering to build, wait for the outcome, help in the QA phase, and then just help to productize and launch it, here you as a product manager need to be involved in the data selection and the labeling; this is just as much a product decision as it is a science decision. I gave some examples on the HPO and feature engineering side and on metric selection and optimization. So for the next 20-25 minutes I'm going to talk about each of these chevrons, dive a little deeper, and give you some insights, and hopefully you can start to ideate and lead these processes internally.

First, how do we design a product? At Amazon we have what we call the working-backwards mechanism. When we design a new product or feature, we first write the press release, with the customer quote inside, and the FAQ: the external FAQ is what our customers are going to ask, and the internal FAQ is what our stakeholders are going to ask about this product. Only then do we go design the visuals. Then we bring together the business owner, which could be you, the product manager, but could also be the head of marketing, head of sales, head of operations, and so on; someone who knows where the data resides and how to access it; and someone who's going to build it, and we take them through a process, in a workshop manner, that answers the following three questions: if you knew X, you could do Y, in order to get Z. It's not enough to say "I really want to do this." What can you drive from that? Well, maybe if you're a publicly traded company you could use it to report to regulators. Okay, and what does the ROI look like? How much do your analysts today spend, in time or money, to design these forecasting models, and is that worth replacing? Keep asking all these questions. Sometimes we tend to fall in love with machine learning because it feels like an exciting and actionable thing to do, and even when we do it, we don't always have the KPI in place. Remember that the machine learning process can be costly, because it's iterative and because data scientists are a relatively scarce and expensive resource, so make sure you are choosing a use case with a strong ROI.

Here are some examples of how this looks with customers we've helped in the past: if we knew which prospects to target, we could get the right message to the right person at the right time; or, in the last example here, if we knew which customers will churn, we could proactively mitigate that and reduce our churn. There are more examples of course, this is an endless list, but these are generic BI-type examples. And here are some real-life customer examples: business tools, like in the top left corner here, forecasting, churn prediction, support routing, and so on; consumer applications, sentiment analysis in the bottom left; top right, cybersecurity, anomaly detection on logs; and other BI examples, letting your analysts use AutoML under the hood to generate insights about the business that leverage machine learning, without necessarily having a data scientist on the BI analyst team.

Once you have a use case, try to be critical in the thinking process about whether it's really an ML use case or you're just falling in love with the idea of doing ML. If you can really solve this in ten lines of if-then rules, you don't have to ML this. If the data is only gradually scaling, you don't have to ML it. We talked about the hyper-personalization example. If the data is unpredictably changing in real time, like user journeys in an online setting, that's usually good for machine learning. It should be confidence tolerant: if it's a life-or-death decision, well, you need some time to make the decision, and you want to know why the model recommended the surgery or not; it's usually easier when there is some tolerance for either false positives or false negatives, as I'll explain in the next few slides. And you need access to the data; this is a critical element. Typical use cases for product managers to dive into are either computer vision, language, or the entire realm of patterns in data, whether that's for ranking, forecasting,
personalization, anomaly detection, and so on.

So, designing a POC. I like the Marty Cagan approach from Inspired: you want to de-risk one of these four elements. Is it valuable: will customers want to use it, or be willing to pay for it? Is it usable: will they understand how to use it? Is it feasible: can we build it, from a data science perspective? And is it viable: should we build it, is it profitable and ethically right for us to build this product? Now, you want to time-box the POC to four to six weeks, you want to decide together with the data scientists which assumption to de-risk, and you want to share your insights with the rest of the stakeholders in the company. It's not always obvious that the POC needs to be machine learning: it could be a Wizard-of-Oz sort of pilot, where a person manually imitates the machine you'll build later, just so you can check whether it's valuable enough for your users.

Today at AWS we also have a site where you can use pre-trained models: you don't have to go through the AWS console, you don't have to use the engineering team; a product manager can go to this link and start testing some of our language and vision services. This is an example from one of our customers that used Rekognition, a pre-trained computer vision service we have, to match people with art they like; they were able to get a prototype up in four hours and into production within a week, based on machine learning they didn't have before.

As you grow through this process, you will see that as a product manager you may first get some help from a data scientist who is a broad company resource, or maybe even one of our data scientists can come in and help you with prototyping, an ML Solutions Lab engagement, ProServe, and so on. But as you grow you'll see that you want to create an empowered team of a product manager working alongside a data scientist, and these people will want to focus on iterating on the ideation, the use cases, and the math of things, not the IT and DevOps that support it. You don't want them to spend a lot of time getting the right machines up and running, or setting up multiple GPUs just for distributed training, and so on. All of that you want to put into an ML platform, so you start seeing platform engineers, and as you grow more and more you have these core teams: multiple empowered teams with a lot of support around them.

Now, the modeling side of the work, like I said, is not just for the data scientist or engineer; this is something product managers can, should, and need to own and be a shared decision maker on, especially around data. Is the data available internally, or is it public? Do we have the permission, the IP, to use it?
GDPR, HIPAA if you're in the healthcare space, privacy. Is the data fresh? We talked about the example of using data from before COVID or before vaccination. Sparsity of data: if we're in a time-series use case and trying to forecast, but we have a lot of missing data in the historical data, that might not support our forecasting model well. Very often in anomaly detection we have very few labels, because it's an anomaly, right, it doesn't happen a lot, so we may want to double down on those few anomalies, kind of oversample them, so we can really train a model properly. But we need to remember that data will confess to anything if we force it enough, and the way to overcome this, what's called overfitting, is not to train on everything we have: we usually train on 70-80% of the data and keep a holdout set, roughly 15-30% of the data kept out of training, so we can test and validate that the model we trained can generalize; the data we kept on the side is a good place to validate that. And then, in deployment, we usually don't deploy a machine learning model from 0 to 100 immediately; we do some canary testing, A/B testing, and deploy gradually, to make sure the model we trained on certain data behaves the same on real-world data.

Very often the problem is that we have missing data, and within that, missing labels is the most common scenario, because we need to manually label, manually tell the machine "this is a cat, this is a dog," and sometimes we need a professional to do this. We at Amazon have a tool called SageMaker Ground Truth which helps you manage the workforce of labelers themselves, for example if you want to send a certain labeling job to three different people and accept the label only on a consensus or majority vote. But these people might be expensive, real experts in their field, and you have a lot of labels to generate, so we have what's called active learning: while they start to manually label the data, we build a machine learning model under the hood that uses the data that is already labeled, and checks whether, in the remaining unlabeled data, the model can already assign labels with high confidence, reducing the amount of manual labeling needed. As it advances, the model gets smarter and better at reducing the amount of work for those labelers.

You can also leverage third parties, or, to use the recommendation example, you can start by recommending the most popular items just to get going, and as data accumulates under the hood you can replace the engine. It's seamless; think of a hybrid car: it's seamless for the user whether it's running on electricity or gasoline. Same here, going from the popularity engine to a machine learning engine. You don't have to have a lot of data: Amazon Personalize, which is our pre-trained, pre-built AI service for recommendation engines, requires about a thousand interactions between items and users to generate the first model, and 25 unique users each having at least two interactions. So instead of the hundreds of thousands of examples people talk about, relatively much less data is needed, and you can start, like I said, with a popularity-based engine and transition into machine learning.
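Here is a rough sketch of that popularity-to-ML transition. The interaction threshold and the `ml_model` object with a `recommend` method are assumptions for illustration, not a specific AWS API.

```python
# Cold-start sketch: serve the most popular items until enough interactions have
# accumulated, then swap in a trained recommendation model behind the same interface.
from collections import Counter

INTERACTION_THRESHOLD = 1_000   # assumption, roughly the scale mentioned for Personalize

class Recommender:
    def __init__(self, ml_model=None):
        self.interactions = []      # (user_id, item_id) pairs collected so far
        self.ml_model = ml_model    # becomes available once a model is trained

    def record(self, user_id, item_id):
        self.interactions.append((user_id, item_id))

    def recommend(self, user_id, k=3):
        # Seamless to the user: the engine behind the call changes, the API doesn't.
        if self.ml_model and len(self.interactions) >= INTERACTION_THRESHOLD:
            return self.ml_model.recommend(user_id, k)
        popularity = Counter(item for _, item in self.interactions)
        return [item for item, _ in popularity.most_common(k)]
```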
You can also use synthetic data, you can use reinforcement learning, which we talked about before, or you can use a generative adversarial network, a GAN, which is two networks training each other. This is how fake human beings, for example, are usually created: one network is trained on all the real images, and another network generates images and tries to trick the first one; when the generator is able to fool it, the generated person is considered good enough, because it was able to trick a machine.

Airbnb is an example of a customer using SageMaker Ground Truth to manage their labeling workforce, and it's a good reminder that some labeling can be sensitive, where the workforce you manage needs to be internal, or within your customers and users, and some can be external because it doesn't involve any sensitive data; you can use Amazon Mechanical Turk for low-cost labeling of generic or generally available data.

Now, the outcome of the model comes in the form of a confusion matrix. In this example it could be healthcare, it could be cyber: you have a true state, which could be malicious or benign, and a prediction. If you're predicting something to be malicious and it is malicious, that's a true positive, and the same for benign. But if you're predicting something is malicious when in fact it's benign, that is a false positive, a false alert. There is also the situation of false negatives, where you're actually letting something slip by: you're saying this is benign when in fact it was not a benign action.

Now, that's the science. What's in it for the product manager is deciding whether you want to optimize for precision, which basically says "I want fewer false alarms, but I can live with missed events," or optimize for recall, which says "I don't want to miss any of the true-state situations, but I'm okay with living with some false alarms." You can see how this is a product decision, not a science decision, and it really depends on how the product is embedded in the workflow of the user. If you're talking about a real-time CISO war room that needs to make decisions now, you need to reduce the number of false alerts, the alert fatigue, and so on, so you really want to focus on precision; but then you have to remember that all the data that was not alerted on is not necessarily clean, you are having some missed events in there. Whereas if, for example, you're doing manual forensic investigation offline and a person needs to review 100% of the cases, then if you can identify the 50% that are very likely not to contain any malicious activity, so that person needs to review only 50% and not 100%, you've just created a lot of value for your company, and you've done that by optimizing for recall, because you're not looking to have fewer false alerts, maybe 30% of them are false, yes, I have a lot of false alerts, but you're making sure you don't miss any real true-state scenarios.

It also feeds into how you as product managers set the customer's or user's expectations. What is the microcopy you use? Are you saying "here is a suggested action," a decision-support system, or "here are probably all of the cases, but there may be some errors"? There are different ways to position this within the product. That's why it's important for a product manager to understand what precision is, what recall is, and whether you want to change that balance over the life cycle of the product: maybe you want to start with one, and as trust is gained through using the system, change to the other.
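Here is a minimal sketch of that trade-off with invented numbers, just to show how the same confusion-matrix counts yield the two metrics you choose between.

```python
# Invented counts from a hypothetical confusion matrix (malicious vs. benign).
true_positives  = 80   # flagged malicious, really malicious
false_positives = 20   # flagged malicious, actually benign  (false alarms)
false_negatives = 10   # flagged benign, actually malicious  (missed events)

precision = true_positives / (true_positives + false_positives)  # fewer false alarms
recall    = true_positives / (true_positives + false_negatives)  # fewer missed events

print(f"precision = {precision:.2f}  (optimize this for the real-time alert room)")
print(f"recall    = {recall:.2f}  (optimize this for offline forensic triage)")
```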
So, I said earlier that a model is not a product; something needs to happen in between, and that is user trust. There are multiple ways of earning user trust, and one of them is adding a human in the loop. Human in the loop is just like the labeling in Amazon SageMaker Ground Truth; here we call it Amazon Augmented AI. This is where you take the outcome of the model, the inference as it's called, the recommendation or prediction of the model, and you decide whether you automatically plug it into the application, into the user interface, or you first pass it through a human who will take a look at the data. You can do that when the model is not confident enough: you say, below 95% confidence I want a person to look at this. You can also say, hey, I'm screening candidates now, and even if the model is very confident, if the candidate has a PhD I want the recruiter to look at this personally. And you can also say, you know what, take 5, 10, 20% of the data at random and just pass it to the human in the loop, and over time maybe reduce this number as the product earns trust.

There is also a tool called Amazon SageMaker Clarify that helps you detect bias in the data. I mentioned earlier that you may have oversampled data in some areas, or you may have imbalanced data across the different classes: say you're an insurance company trying to build cohorts of age groups, but some age groups contain very few people because you're focused on older drivers and not young drivers, for example. This tool will alert you and say, you know what, this data is not balanced, so you may want to pay attention and maybe either synthesize more data or adjust the cohort groupings; there are different mechanisms you can apply, but this tool can detect the issue for you. It can also detect whether you have bias after the training phase, and during model monitoring, when the model is running in production and depends on data coming in: we create a baseline of the data when you first train the model, and once there is a drift from that baseline, and you can configure what counts as a drift for you, and even design your own metrics into it, you can get an alert.

But sometimes what you need is to actually explain why: why did that person not get a loan from this bank? Sometimes it's not good enough to say "the model said so." This is what's called individual, or local, explainable AI. We use SHAP under the hood; we basically took away the undifferentiated heavy lifting for you. SHAP is published in research papers, we just coded it so you can use it embedded in the SageMaker platform, and you can provide explainable local predictions to your users. You can also provide that to regulators: say the regulator wants to know whether you used gender, or what weight the gender feature received in your overall underwriting or scoring model for this group, because it may be okay to use it, but not too much, not to over-rely on it. That is global explainable AI, and it is also supported by SageMaker Clarify.
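For illustration, here is a hedged sketch of local explainability using the open-source SHAP library directly (the talk describes SageMaker Clarify wrapping this kind of technique under the hood); the loan-style data below is synthetic and the feature meanings are made up.

```python
# Sketch: per-prediction (local) explanations with SHAP on a toy "loan" model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))                 # pretend columns: income, debt ratio, age (scaled)
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy "loan approved" label

model = RandomForestClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # contribution of each feature to this prediction
print(shap_values)                          # "why did *this* applicant get this score"
```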
Now, lastly, once you have the model out there, it's very important to remember, especially for us as product managers, that if the customer is saying something, we need to listen: how does the model behave in real-life situations? We call this the ML flywheel. We start by asking: do we have the data and labels we can use for modeling? Great. Can we generate a model meaningful enough in its predictions? Great. Do those predictions, the way they are embedded in the product with all the trust envelopes I mentioned before, the bias detection, the human in the loop, the explainable AI, drive the behavioral change we wanted to achieve with the product? If so, great. But that, by definition, is going to change the data, because we just generated the behavioral change, and that will be manifested in the data. So ask yourself: if my model is successful in generating behavioral change, how will I detect that in the data? Will I need to relabel? Will the people who successfully altered their behavior need to be cascaded to another model that provides more granular predictions, on a different data set with new labels? These are different decisions; some can be made before you get started, and some need to be made on the fly, you just need an alert that, hey, there's a drift in the model, now let's take a look at the behavioral data and rethink and retrain the entire process.

Okay, so how can AWS help you along the ML journey you're at? We have what we consider a three-tiered ML stack. In the top tier is what I referred to before as pre-trained models: if your use case fits one of them, you can go very easily and very fast from idea to POC, like in the example we saw before, in hours or a few days. Some of them can be a great fit for your use case and you may never need to do anything beyond that, and these do not require any data scientists; some of them can just be a good starting point for your POC. For some of them you can also tweak or customize the model for your own language. Take the example of transcription with Amazon Transcribe, in the speech section of the top tier: say you want to transcribe calls to your call center, to detect sentiment, to detect what's going on, to train your people and manage the entire process. You may work in a unique domain, say the legal domain, and want the model customized to the legal area, or to manufacturing or the airline industry; then you can actually create a custom language model for transcription, so you don't have to start from scratch. You do need to provide some of your own data, and then you get a model more customized to your needs.

We also have Rekognition for computer vision; Lex for chatbots; Comprehend, which is natural language understanding, to understand the language within a call or within a document, which can be extracted using Textract, which handles columns and tables and can extract entities even from radio buttons if it's a form you fill in. We also have business tools: the example I mentioned before, Amazon Personalize for recommendation engines, or Forecast for time series, and Fraud Detector for fraudulent activity on transactions, where you want to ask: this IP, this email, this credit card, how do they impact the score of how likely the transaction is to be fraudulent? That one actually benefits from Amazon: if Amazon has had some exposure to these data points, that can influence the score you get from the model, so you get more than just a model there. There are also verticalized solutions for healthcare, for industrial, and so on.
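As one concrete example of the top tier, here is roughly what calling a pre-trained AI service looks like: Amazon Comprehend sentiment detection via boto3. The text is a made-up call-center snippet, and region and credentials come from your own AWS configuration.

```python
# Illustrative call to a pre-trained, top-tier AI service (no data scientists needed).
import boto3

comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="I waited forty minutes and the issue is still not resolved.",  # made-up snippet
    LanguageCode="en",
)
print(response["Sentiment"])        # e.g. NEGATIVE
print(response["SentimentScore"])   # confidence per sentiment class
```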
Then, if you do have data scientists, you can start in the middle tier, which is SageMaker, where you can pick and choose. We want to take away the undifferentiated heavy lifting, so if there are things you consider really differentiating for your business, like building your own feature store, fine, and you can use ours for distributed model training, or use the other elements that are non-differentiating for you. This is a complete IDE with CI/CD that you can manage, and it also has Autopilot, which I mentioned before: if you have tabular data and you want your data scientists to get started with some recommendations on ETLs and algorithm selection, but have it as open AutoML, in the sense that the data scientists then get the notebooks, see what the AutoML did, and continue to expand and improve it on their own. And if you are an expert and you don't want to use these managed services, you want to build everything from scratch and manage it yourself, we have that bottom layer of the stack as well, optimized for TensorFlow, PyTorch, MXNet, and so on.

So, kind of recapping, and we have about five more minutes, I want to leave a couple of minutes at least for questions. First, we talked a little bit about product management powered by ML. I recommend running discovery workshops with internal stakeholders from business, data, and technology, so you can get a long list of potential use cases and qualify whether they're a good fit for ML or not. Then we can help you prioritize those use cases and choose how to get to the first POC or MVP. We have multiple mechanisms: a prototyping engagement, where we work hand in hand with your team and your team learns the process; ProServe, where we can build for you; or the ML Solutions Lab, where you give us the data, we build it, and hand it back over to you, different modes of engagement, and I'll soon show you the people you can contact if you want to explore the different programs. If you already have several use cases and you want to get to the next level, an ML platform, where you have empowered teams that don't have to worry about MLOps or DevOps, they just want to build the models, iterate on them, and improve them, and all the DevOps and IT work, let us worry about that, that's our expertise, your expertise is your domain, your products, your services, then we can help in that phase of the journey as well. And if you want to build everything on your own and just have us help train your team and get certifications, we've got the right side of this slide as well, and we'll be happy to support training and certification for your team.

So with that, I want to thank you. We've got a few more minutes for questions, and here are the people you can contact, in different languages, since we have a very varied and diversified audience today. I encourage you to take a screenshot of this screen so you know who to contact if you have any questions or want any follow-up or to work with AWS on any of your use cases.

Oren, thank you so much, what a fantastic talk, we enjoyed that very much. You emphasised early on in your talk the importance of developing trust, so maybe you could expand a little bit on that: in particular, what happens when human intuition, that gut instinct, comes into conflict with what the machine learning is telling us?

That's a great example. I think the question is how we as product managers communicate that to the user. The user may have an intuition; let's say you're a professional, a doctor, and now you have a decision support system in healthcare that recommends a certain action for that patient.
But the doctor has an intuition. If you position the machine learning recommendation as "I'm the machine, you should follow my recommendation," that can create a clash with the doctor's opinion or intuition. But if you say, hey, this is decision support, here is my recommendation and here is why, meaning: I am a model that was trained on data of type X, this is the type of patient I've seen, it was labeled by a community of doctors who have all been certified by this board and that group, and so on, then the doctor either identifies that this is not relevant for his patient, because the training data didn't include patients or illnesses like the one he is treating, or he sees that the people who labeled it are strong physicians he trusts, and he says, you know what, maybe I will start to trust this, because I see that other physicians, the ones I would have called for a consultation, are enabling this as a tool. So by being able to provide insight into how the model was built, what data was used, and how the labeling was conducted, and also with that local explainability, "I recommended surgery here because the tumor was about this big and not that big, and because the texture was this and not that, this is why I recommended it," the doctor can say, okay, I understand, that's a valid point, but you haven't seen, or haven't given enough weight to, in my opinion, this thing that I've seen on the side. In this way, together, a machine and a human can actually make a better decision, because either of them can accidentally miss something or not give enough weight to a certain feature, and together, if it's exposed, explainable, and open, that decision can be improved.

Okay, that makes sense. But beyond conflict, let's consider possible pushback, because humans are not all that rational. Maybe we can accept that ML is providing a better solution than we might come up with on our own, but then perhaps we're fearful that we're not needed so much, that this is encroaching on our role, our job. Is that something you're seeing?

Yeah, I think it's always a question with innovation; it depends on how you use the innovation, right? Innovation by itself is not good or bad. That's why, talking to product managers, I am encouraging an ethical approach to AI, in the sense that you think: should we build it? How do we communicate to the user what to do with it? Do we force it, or do we say, hey, this is a machine, this is how it created its recommendation, and these are its limitations; you can decide to overrule it if you want to, hey, I want to go wild, I want to do something different. Netflix can recommend the next series for me to binge; it doesn't mean I have to take that recommendation, right? It's nicer if it says, hey, I'm recommending this series because of that, and then I can understand, and maybe I'm friendlier toward that recommendation because I understand where it's coming from. So I think that's where this world of machine learning is headed: a collaborative approach of humans and machines openly making decisions together, better, and yes, the humans can still have the control, but the machine can support, can detect things the human may not have the time or capability to, so it can actually augment what the human is doing and not replace it. That's the way I see things heading today.
Yeah, you're very optimistic, Oren. I wish we had more time to expand on this, this is absolutely fascinating, but we are sadly out of time. Anyone out there who wishes to ask Oren more questions, please do so via the website, via the forum. Oren, once again, thank you so much for your time, and best of luck for the future.