How's everyone feeling? Happy Holi, right? I've been greeted with that so many times today. I woke up not even knowing it and feel really excited to be wearing bright colors for you today. Who's excited to learn about artificial intelligence? Raise your hand. So everyone's excited. Who's not excited to learn about artificial intelligence? Great. I want to first introduce myself, and we're going to see if this video decides to play. Great. We're just going to have to deal with that play-pause. Hello. My name is Allie Miller. That is a very large photo of me. I came here from San Francisco, California. Has anyone been to San Francisco, Silicon Valley? Nice. That's actually a lot of you. I am very active on social media. I encourage you all to take photos of slides that you really enjoy, slides that you want to take home back to your team. My username is Allie K. Miller on pretty much every platform. A bit about me: I am a previous lead product manager at IBM Watson, running product development for our computer vision work, and I'll be starting a brand new job at Amazon working in artificial intelligence in two days. So I'm very excited to be here. I'm going to fly back, immediately get started, and can't wait to take everything that I learned from this room back. I studied computer science as well as psychology at Dartmouth and also have my master's in business from Wharton. And so that's how I approach AI: I think about it from design, from technology, and from business, and try to combine them all. So we're going to, there we go. Amazing. I do want to give just a nice foundational starter in artificial intelligence. Who's worked in artificial intelligence? OK, not that many. Who's read an article about artificial intelligence? Who feels like it's overhyped? Too much noise going on. Not that many. If I asked the same question in San Francisco, everyone's arm would go up. So, a bit about artificial intelligence and how we define it. AI is really just the idea of a computer performing as if it's a human. So when you think about a human, we have vision, touch. We have ears that can hear. And that's exactly what AI is trying to do. It's trying to mimic what a human would do. In the world of visual recognition, that might be something like color recognition or depth perception. And I'll be giving you a few examples throughout this presentation of use cases that I've worked on. I've worked with over 200 clients in 10 different industries, from manufacturing to agriculture to retail to health care. And we'll be talking today about a few of those that I've seen. Artificial intelligence is kind of this big, big term. The subset of that that we'll really be focusing on is machine learning. And I'm going to demystify machine learning a little bit and make it less magical. All it is is giving the computer a lot of examples of something. It learns that as a pattern, as a model. You show it a brand new something and it decides how it compares against the million things that you've given it before. So, one example: you've given it lots of emails that you've gotten from customers. And you've said, these are really angry emails. Everyone hates us. These people are about to fire us. And then you give it nice emails. Everyone loves us. We did so well. And you teach the computer these two different things. It creates a model. You give it a brand new email that it has never seen before and it compares against the two. And it's able to say something like, I'm 83% confident that this is an angry email.
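(Editor's note: a minimal sketch of the angry-versus-happy email model described above. The talk names no tooling, so scikit-learn, the toy emails, and the pipeline here are illustrative assumptions, not Allie's actual implementation.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: emails you have already labeled by hand.
emails = [
    "Everyone hates us, we are about to fire you",
    "This is unacceptable, cancel my account",
    "Everyone loves us, you did so well",
    "Great job, the whole team is thrilled",
]
labels = ["angry", "angry", "happy", "happy"]

# Turn the text into features and fit a simple classifier: this is the
# "model" that the labeled examples get compressed into.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a brand-new email the model has never seen before.
new_email = ["Your product broke and nobody is answering my calls"]
for cls, p in zip(model.classes_, model.predict_proba(new_email)[0]):
    print(f"{p:.0%} confident this is a {cls} email")
```

With only four examples the confidence numbers are meaningless; the point is the shape of the workflow: labeled examples in, a model out, a confidence score on anything new.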
And it's up to you to decide what to do with that. This is gonna be really fun. I'll sing my own background music as it's going. All right, so now we know a little bit about what artificial intelligence is, what machine learning is. And I want to teach you, in a very, very quick manner, seven steps of how to build AI and how to consider it. So I'm gonna be walking through the following seven: problem, scope, team, et cetera. And here's the great news: 80% of what I'm about to say is exactly how to build a non-AI product. So if you've built anything in your entire life, and raise your hands if you've built something amazing, you're already 80% a master of building an AI product. And so I'll try and call out a couple of the differences so that as you're looking at each of these different categories, you'll be able to take home what the differences are and figure out how to map that to actually building an AI product. All right, none of this slide is new. Every single one is exactly what you already know. So, first step: whenever we're talking about design thinking, agile work, right, it's always, what is the problem we're trying to solve? And I'm gonna tell you right now, the worst thing that you can do for your organization or team is believe that AI is cool and that you just wanna use AI. That is the worst thing you can possibly do. It is always starting with a problem and then deciding whether AI or non-AI is the best way of solving that problem. So always, always start with the user pain point. Doesn't matter if it's AI or not. The second thing: what is the outcome? What is the result that you're craving? And it could be numbers-based. Generally it is: you're looking at a spreadsheet and you decide there's a number you're unhappy with that you want to improve. An example: I worked with a retail company and they were actually having a lot of store theft issues, and their store theft in one city was much higher than every other city. So the outcome that they wanted was to reduce theft by 10%. They decided to use AI for that. Another example might be in manufacturing. So you have a lot of items coming off the assembly line. Some of them are damaged. You don't wanna sell those. So an outcome that you might be aiming for is improving exactly what product comes off the line and reducing quality assurance time by, say, 15%. So decide that outcome ahead of time. And the last, again, nothing new, is meeting your users. I do wanna ask, who works in user experience? Who's a UX person? Okay, design, general design. Okay, how about software engineer? Wow! You guys should all work with me. Okay, software engineer, what about business leader? Couple hands. What am I missing? Anyone that I didn't get? Great. All right, so the last would be meeting those users and speaking to them. And that's really where the design comes into play, developing those user personas. I think we might have lost it. I'm happy to make it all up from here. We already know the next six categories. Just died for a second. Great, I will talk less. All right, so problem, that is step one. We're already done. Item two is scope. Number one, again, not AI specific: is there a timeline that you absolutely must adhere to? That will usually drive constraints that you need to play within. For example, if you're in agriculture, do you want to get something finished by the time the next crop season rolls around?
If it's retail, and you know that a lot of people buy items on Saturday, do you want to release on a Tuesday to make sure that it's up and ready for them? So, thinking about timeline: what constraints do you have to build within? In general, and we're talking about resources and budget, those are kind of combined. AI is not necessarily more expensive than non-AI, but one thing to consider is that the upfront cost might be higher as you're training people, as you're taking time away from that normal development cycle. So think ahead about what these budgets might be. If your budget is higher, do you want to bring in an outside group? If your budget is lower, do you want to teach in-house? So think about where that flexibility has to come into play. And always, I know it's blocked right now, add a buffer. I'm sure you guys have seen that. Whenever you're coming up with a budget, add 20%, be safe. All right, team. Here is where we really start to get into the differences between AI and non-AI. And I see a lot of phones out, that's really good. Number one, machine learning is a skill. It's not a random talent where I woke up and was like, oh, now I know machine learning. It is very much something that you need to learn. And so whether you're hiring outside, which again might be more expensive, or teaching machine learning in-house, you absolutely need to have that expertise on your team. Generally, as I've worked with several teams, the ratio tends to be one machine learning engineer to about three to five full-stack software engineers; the more difficult the architecture and infrastructure, the more software engineers needed. So I've seen ratios of one to three, one to five. Diversity and bias: this is a huge topic happening in Silicon Valley right now as we're building artificial intelligence products. One of the worst things that you can do is have your entire team be 25-year-old white males who all went to the same school, who all work at the same company, who all worked at the same company before that. So as you're thinking about how to build these teams, think about things like age, gender, what city, what country, and a lot of pieces around this might be where you want to release the product. If the product is 100% focused on Bangalore, you should absolutely have people from Bangalore on that team. If the product is for the US, you should have Americans on that team, right? And as we think about more global, larger products, that is why it is so important to have diverse teams. For example, the last team that I was on at IBM, which was building multimodal artificial intelligence, our software team alone was from five different cities. So these are things that we have to think through, right? How do we bring remote teams together to be able to enable that diversity to help fight bias? And the last, and this is kind of why I had you all raise your hands: every single one of you should be on this team. It's not just engineers, it's designers, it's product owners, business, even things like finance, legal, marketing, sales. They should all be in the room right at the beginning, building that product and knowing what is going into it. Data. This is another big one, super different between AI and non-AI, because frankly, when you're not building an AI product, you might not actually need data going into it. You might be testing hypotheses, getting out there and testing, but it is very different to build an AI product.
You need to have this constant pipeline of data coming in. There are two questions you should ask yourself about where you're getting this data from. Let's take a popular use case that a lot of AI people on YouTube are talking about. Let's say cars are driving by, there are security cameras on the street, and you just need to be able to read a license plate; maybe every time a car passes a city boundary, you wanna be able to see what that license plate is. Pretty easy, right? But as you think about this, you're thinking: what if a car is dirty and I can't see the license plate? What if they go 20 miles an hour? What if they go 90 miles an hour and my camera can't catch that fast enough? What if there's a motorcycle blocking the front of the license plate? So these are all things you need to think about to build a big enough data repository to be able to know what you're about to see. So, one: what is the source of your data? In the example that I just gave about traffic, maybe it's those traffic cameras, right? If all you had was someone standing in front of a car like this taking the photo, that would look nothing like what you're getting from those security cameras and from those traffic cameras, so your model would not perform well. The second question you need to ask: is it owned? Is it proprietary? Do only you have access to this pipeline? Because if someone else has access to it, they're gonna be able to build your exact product in probably half the time that you just did. So when you can, think about how to bring in a unique pipeline, something that only you own. So maybe you sign a contract that gives you exclusive rights to the traffic cameras. Maybe you send out a team of 10 people to take a million photos each, and that's so much work that no one else would do it. So start to think about how you can build a unique pipeline. Second is good data, right? So sourcing, that's like where do we actually get the photos from, and then cleaning up the data. So if you're thinking about that traffic camera, a lot of the time it's taking photos when there's not a car there; an animal could be walking by, right? And that might be noise, photos that you don't need. Or it might be noise that you actually need, because as something is coming through you wanna know when it's a car and when it's a human and when it's whatever. But if there's noise like a bird in front of the camera blocking it, maybe that's stuff that you wanna clean out. So think through: who is actually gonna be annotating this data? Who's running quality assurance? How are we making sure that these million photos are perfectly labeled? Because if there's even one small error, that can compound when you're actually building your models. The last one is, generally, the more the better, right? We always want good data. We want data that mirrors what we're actually gonna be seeing. That's why cell phone pictures wouldn't mimic traffic cams. But you're always going to need more than you think. And depending on what type of model you're building, whether it's that email system that I was talking about in the beginning, right, angry versus happy, maybe you only need like 30 emails of each. If you're talking about traffic cameras that have dogs and humans, and it's day and it's night, and it's foggy and it's not, you might need eight million photos, and you won't know right when you start the project. So more is always better.
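(Editor's note: a hedged sketch of the cleaning step just described, filtering a traffic-camera feed before annotation and training. The Frame fields and brightness thresholds are made-up assumptions for illustration.)

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Frame:
    image_id: str
    brightness: float        # 0.0 (all black) .. 1.0 (completely washed out)
    label: Optional[str]     # e.g. "car", "person"; None if not yet annotated

def clean(frames: List[Frame]) -> List[Frame]:
    """Drop unusable frames and hold back anything missing an annotation."""
    kept = []
    for f in frames:
        if f.brightness < 0.05 or f.brightness > 0.95:
            continue                          # noise: too dark or blown out
        if f.label is None:
            print(f"needs annotation: {f.image_id}")
            continue                          # never train on unlabeled data
        kept.append(f)
    return kept

frames = [
    Frame("cam1-0001", 0.02, None),           # night frame, nothing visible
    Frame("cam1-0002", 0.60, "car"),
    Frame("cam1-0003", 0.55, None),           # real content, still unlabeled
]
print([f.image_id for f in clean(frames)])    # -> ['cam1-0002']
```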
Who knows the term edge cases? Is that a new term? Okay, edge cases just means something you didn't really think about in the beginning. So again, maybe you never thought that something could block the camera, but that's a brand new edge case that comes up, and so you're gonna wanna constantly feed that back into your model. We are halfway done. Who's excited? Now you get to sleep. All right, but also, how are we halfway done if there are seven? That's because, surprise, I have a UX bonus. Because this is a design conference and I wanna make sure that you guys are getting the information you need. So here is exactly how you need to think about user experience for building AI products. First question: is it AI? Right, and obviously if you're building an AI product you're using AI, but does it actually matter if you tell the end user whether it's AI or not? Here's an example, right? We called an Uber yesterday to head into town. I wasn't told by Uber, oh, we recommended this amazing ride for you using this machine learning algorithm, it's the perfect ride, it's the cheapest, the fastest, the best, the coolest. I don't care, I just need a car and I need to get somewhere. Something like a doctor recommendation, like what medicine to take, I would rather know as a user, maybe, whether a doctor thought about it, whether AI thought about it, or whether the two combined thought of it. And so you always need to think about when do I need to tell the user that it's AI, and when does it actually waste time or make it a worse user experience? The second is thinking about interface, right? Are users involved? Who's used Netflix? Lots of Netflixers, nice. How many times have you thumbed up and thumbed down something? Like, everyone in this room has done it at least 50 times. This room has got, I don't know, how many people in it, and with that multiplying factor across all of the Netflix users, you guys are basically training Netflix's algorithms. You're doing their job. So the most common way of getting users involved is this thumbs up, thumbs down, right? We gave you this answer to this chatbot question; was that the answer you wanted or did we completely mess it up? There are less common ways of getting users involved, and I'll let the genius designers in the audience think about those, but some new ones are coming up. And then the last one is: more transparent, always better. So let's say that Uber decided to tell everyone that they were using AI in their car recommendations. Is it really important for you to know that it's AI? Yes, no. Is it important for you to know why they picked it, the way they picked it? Yes, no. Is it important for you to see the algorithm? Yes, no. And is it important for you to see the training data, or how the AI model came up with it? So there are different levels of transparency that different products require. Generally, the scarier the recommendation is, like doctors telling you what medicine to take, the more transparent the better. Things like, is this a dog or a cat? Don't care, right? So think about what level of transparency your product's going to require. The UX bonus is done; there are no more surprises. I'm so sorry. All right, testing, right? So you've built the right thing, you've decided what your data pipeline is, you've decided you have the right team to go after it, and now we're actually gonna put it into market. We're gonna see how it performs, see what people think, see if it works. Who knows the term MVP? Pretty much everyone.
All right, minimum viable product, right? So you wanna be able to prove small, and note that I said prove small and not test small. One of the hardest things about starting an AI product is that everyone wants to say no. Everyone wants to say: AI is not gonna work, we don't have the right team, we don't have the right product, I don't wanna innovate. And so you want to be able to say: my first AI project out the gate was successful. So how do you prove small? That's researching competitors, that's building the right team, and that's making sure that everyone, on the team and off it, is aligned to what you're building. Success metrics, right? This is a term that everyone's heard of, but there are AI metrics and non-AI metrics. So like I mentioned before about store theft, those are common; you guys have heard of those. But AI metrics, right? How well is that model performing? How many thumbs up, thumbs down are we getting when a user is giving us that feedback? And I'll give you kind of a rule of thumb that I like to use. Generally, when we would launch a product on my past teams, we would spend about four weeks, get to about 60 to 80, 85% accuracy, launch that, test, test, test, more data, more data, and then within the next four to eight weeks get to about 80 to 99% accuracy. So that's why it's so important to start small. And then the last, which I kind of gave a heads-up to: don't aim for perfection. AI is so new, the worst thing you could do is say, when I launch this, it will be perfect, it will be 100% accurate, it'll be better than humans, it'll be the best, everyone's gonna love me. Aim for 70% accuracy, aim for 60. Just try and get something out there and see if it actually works for what you're trying to build. So after you test, what is the next thing? Optimize, right? Get that feedback loop going. How are we understanding from the market what works and what doesn't? How are we understanding from the team whether people like it or not? Are we building more AI projects or not? The biggest thing that you need to do is treat mistakes with care. A lot of AI projects right now are imperfect, like I said, so when you're wrong, make sure that you're building that back into your algorithm, right? So if it was deciding between a dog and a cat, and it gets a picture of a cat and it's 99% confident that it's a dog, bring that back in as training data, teach it that it's a cat, and keep optimizing. Something that people might have heard of is a continuous feedback loop; that's when every single new image coming in is updating the model. You also might have heard of batch updates; maybe at the end of every single day, or at the end of every single week or month, you're doing a full retrain, and it totally depends on cost and how your team's aligned. Data lifecycle. A big topic in Silicon Valley is that data is king. It used to be that content is king; now it's data is king. We wanna have that constant stream. So if I was talking about those traffic cameras and you decided to have people manually go out and take those photos, if you don't have that happening all the time, your data stream is gonna dry up, someone new is gonna come in, someone's gonna take over that product, make it better, and they'll have set up some sort of constant stream. So you can't just bring it in and let it die. You always have to be thinking of how to refresh it. Can it be automated? If you're manually sending people out, that's gonna be really hard to automate and you're always gonna have that cost.
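(Editor's note: a minimal sketch of the batch-update feedback loop described above, reusing the toy email classifier from earlier. Function names like retrain and nightly_batch_update are illustrative, not any particular library's API.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The training set grows over time as corrections come back in.
texts = ["everyone hates us", "cancel my account", "we love this", "great work"]
labels = ["angry", "angry", "happy", "happy"]

def retrain():
    """Full retrain on whatever the training set currently contains."""
    return make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

model = retrain()
corrections = []  # (text, correct_label) pairs flagged by users or reviewers

def record_feedback(text, predicted, actual):
    # Treat mistakes with care: keep every wrong prediction so the
    # model can learn from it on the next retrain.
    if predicted != actual:
        corrections.append((text, actual))

def nightly_batch_update():
    # Batch update: fold the day's corrections into the training set and do
    # a full retrain. (The continuous-loop variant would instead update the
    # model on every single new example as it arrives.)
    global model
    for text, label in corrections:
        texts.append(text)
        labels.append(label)
    corrections.clear()
    model = retrain()
```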
So how can we automate that flow? And the last is: don't silo it. You always wanna have it so that when AI makes a decision, it then drives something else, right? If we decide that an email is angry, I'm not just gonna sit there and be like, that's amazing, it's angry, AI got it right, now I can go on with my job. No. What you wanna do is take those angry emails and get them dealt with faster, right? If someone's happy, I don't need to reply within two seconds saying, oh my God, thank you so much, thanks for your nice email. No, you wanna deal with the person whose car is on fire. You wanna deal with the person whose crops just died. You wanna deal with the person whose manufacturing line is broken. So figuring out what to do with those angry emails, or the results of really any AI model, and deciding how to triage it, is really important. We're on our last step. How are we doing on time? Good? Great. I wanted to leave ample time for questions. The last step is scale. And a lot of people, when they think of scale, they think about more money, right? All I want is more money. How do we grow it? How do we go to more cities, more countries? How do we get more users, right? Scaling AI is not always about more money. It could be about a more accurate model. It could be about making something more robust, less brittle, so that competition can't just come in and steal your idea. So, things like: are there new features that you should be thinking about? Should you combine computer vision with natural language? Should you combine facial recognition with emotional sentiment, so that I can tell, sorry, that you've been smiling most of the time, right? Do I want to know that you are more engaged than someone who was on their phone? Thinking about how to add those new features will only make your product better. And again, always use new features to solve a problem. Combining tech, that's like the next big topic. Everyone's talking about, how do I combine IoT, the Internet of Things, with AI, and make sure that everything's running on the edge so that we're not sending everything up to the cloud? So that is a really popular topic that's happening right now: how do we bring more things together, hardware plus software, to be able to build something even better? And the last topic of scale is to not be lazy. You gotta check in, see how it's going. You can't just put AI in the corner and say, I've done my job, I did AI, everyone said to do AI, I did it, and now I get to quit, right? Check in every few months, I'd say. In the beginning it'll be every day, then every couple of weeks, then every few months; at no point should it be every year or every few years. So think about: is it still valuable, right? A year ago, someone might have built something that no longer makes sense today. And that could be that it needs new data, needs new features, it's not robust enough, someone came in and beat us. But think about how it can constantly be improved. And the last one is that all of these big tech companies, and all of you as well, are building new products all the time. So make sure that you always have at least one person on your team keeping a pulse on what is new in the market, so that you're always using the best and brightest tech out there. So we finished all seven steps, cough, cough, eight steps. And this is the summary. So if you have been sitting on your phone the entire time and you just wanna make sure you have everything, this is that final summary slide.
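(Editor's note: before the recap, a small sketch of the "don't silo it" point from the start of this passage: the model's output should drive a next action, here routing likely-angry email to the front of the support queue. The 0.80 threshold and the queues are assumptions for illustration.)

```python
from collections import deque

urgent, normal = deque(), deque()

def triage(email_text, model):
    """Route an email based on the classifier's confidence that it is angry."""
    proba = dict(zip(model.classes_, model.predict_proba([email_text])[0]))
    if proba.get("angry", 0.0) >= 0.80:
        urgent.append(email_text)   # car-on-fire emails get handled first
    else:
        normal.append(email_text)   # happy mail can wait a little

# Usage, assuming `model` is the angry/happy classifier sketched earlier:
# triage("my crops just died and nobody is responding", model)
```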
So again, thinking about problem: who are you helping? Why are you helping them? And making sure that everyone's on board with solving that one problem. Scope: what are the constraints? Thinking about timeline, resources, budget. When you're thinking about team, how do you align everyone to get the job done? How do you keep it diverse? How do you keep it multifunctional? Data, we call this the fuel of the rocket engine. The rocket won't take off unless there's fuel, and unless you're constantly giving it fuel. So where is that data coming from? What is the source? Is it unique? Is it yours? How long does it take? Can it be automated? Testing: will it work? I need to prove that it will work. Start small, start imperfect. Optimizing: how can we make it better? How can we grow it so that it's something that is valuable and robust? I know I've said the word robust about six times, I'm sorry. And lastly, how do we scale it? How can we grow this project, both for more revenue, more users, or, frankly, just a better AI product? Those are all the steps. I'm gonna throw up this strangely large photo of myself one more time just so that you get my LinkedIn and Twitter information. And I'm so grateful. I've never been to India before. Kannada gottilla, I don't know Kannada, so we're gonna have to go in English for the questions, but I really appreciate it. You guys have been so engaged, and I welcome 15 minutes of questions, which is exactly what I wanted. This is the only AI session, so I wanted to make sure that you guys got your questions answered. But thank you very much for your time. Hello. Hi. Thanks, that was a good talk. I come from the healthcare sector, work with a product that we are building in. Can you put it a little closer? Embracing AI. Perfect. The question I have for you, this being an Agile conference, is: more often than not, we want to release the product every two weeks, which is a sprint for us. And what I hear from the guys on the AI part of my team is, AI doesn't work that way, Rajiv. It's not gonna be two weeks. We need more than four to eight weeks for us to train the model, get the utterances, and blah, blah, blah. And then we can make it into a release cycle. So you can go and let the dev team or the software team do two sprints while we prime this up, and then maybe we'll converge. How do you take care of that, in whatever experience you had? First of all, blah, blah, blah sounds really important. Here's what I would say, and it's a flaw in what they believe a two-week sprint is. And I know we have some of the other speakers in here, so I'll duke it out. In my mind, every two weeks does not mean that a new product has to be released. So I think that the process that you were talking about, around four weeks, training models, what I was saying about accuracy, right, after four weeks maybe you'll get somewhere around 70% accuracy; that's the build phase. Optimization can absolutely happen every two weeks. On the build side, what I would say is: break down these steps. How do you make sure that data pipeline is set? Maybe that's a two-week sprint. Then when you're talking about, how do we clean it up? Maybe that's another week, and then the next week can be QA on the pipeline. So design sprints and software sprints within AI should absolutely be two weeks. Your AI cool guys should not be off in some other room with the door closed saying, Rajiv, I'll talk to you in two months. But every two weeks, they don't need to be delivering a full-on product.
The last thing, and just to make sure, I know a lot of you have not yet worked in AI: training a model takes an hour, right? Once you have the actual data, and once you've decided here's how to annotate it and here's what we need, the actual hit-a-button-that-says-train-and-have-it-be-done, depending on how big your pipeline is, could be somewhere between a couple minutes and a couple hours. In no way should training the model take two weeks. I would say go back to them and say: AI cool guys, we need you at the table, you're not super cool, you're part of this, we are one team, and you need to make sure that something is delivered every two weeks, even in the build phase. Thanks, Allie, that was a great, great talk and gave a good overview of what AI is. So, for the software engineers in the room, one of the technical practices that we rely on heavily is test-driven development: you build a test before you build the piece. How does that work with training data? So, a couple pieces to unfold. The question was: how do we make sure that we're testing appropriately? Do we kind of come up with a test ahead of time, and what are the processes for maybe testing intelligently? Is that a good-ish summary? Couple pieces to unpack in that. One is the bias piece, and one thing that a lot of people believe is that AI is unbiased as long as it completely mimics what a human would decide. And one example, right, Shane's already nodding no because he knows that that's wrong, but a lot of people think that that's how you do it. So what I would say, and this is kind of two-stage. So, foundational: what you should always do in that annotation phase, let's say that you had dog versus cat and you have 1,000 images of each one, is set aside 20% of your annotated data to be able to test, right? So you already know that this cat is a cat, this dog is a dog. Set aside 20% that is already annotated, train the model on the 80%, and use that 20% as test, right? Not all at once; one by one, or 5% at a time, and see how well the model performs. If you realize after going through 100% of your data that you're 30% accurate, that is abysmal and you should absolutely be 10xing your data. So that's foundational: always set aside annotated data to be able to test against your algorithms and figure out how well they perform. The second piece, which is kind of this mimicking-humans piece which I just said you want to be able to test against, is that humans don't always pick the right thing. One of my favorite examples is in the courtroom. I believe this was a US study: in the courtroom, judges decide more favorably, they let people off more, after lunch, right? So if you're getting arrested, maybe you're gonna go to jail, maybe you're gonna go to prison for 15 years. If your appointment, if your court date, is at 2pm, what's up? Yeah, that's awesome. I mean, it's not awesome that you're going to court in the first place, but if you're gonna go to court, go at two o'clock and bring a sandwich. There you go, and throw out crackers into the crowd. So that is something where a human's decision is not always what you want to mimic. And one way to build against that is having multiple humans see what the output of the algorithm is and say whether it's thumbs up or thumbs down, and frankly, that's why Netflix so often will ask you about recommendations, because they're doing it with millions of people, if that makes sense.
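(Editor's note: a minimal sketch of the holdout approach Allie just described, assuming scikit-learn; the random stand-in features are purely illustrative.)

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in features for 1,000 annotated images per class; in a real project
# these would be image features, not random numbers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (1000, 16)), rng.normal(3, 1, (1000, 16))])
y = np.array(["cat"] * 1000 + ["dog"] * 1000)

# Set aside 20% as a holdout BEFORE training; the model never sees it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0
)

model = KNeighborsClassifier().fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.0%}")
# If this number came back around 30%, that's abysmal: time to 10x the data.
```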
My question is particularly about predictive policing, if we can talk about that. Say that one more time. Policing, predictive policing, the federal department. Okay. So what is your approach to the systematic biases which come from the algorithms? Because there is a limitation: whatever data you provide, that is all the machine has to go on. Which is the case whether it's for the federal government or not. Yes. So with cats and dogs, you can still generalize that this is a particular breed. What do you do about felonies that have been committed by people of different backgrounds? I don't want to sound racist, but if there's a white man or there's a black man, how does that work in this case? Yeah, absolutely. There's a term called the risk of imperfection. So in the dog-cat world, if you're wrong about whether it's a dog or a cat, depending on what you're trying to do with that data, it's not a huge deal, right? Because a human will probably catch it. Like, if you're trying to split up dogs and cats into the right kennels or daycare, and that seems insane, why would you send a cat to daycare? But you know what I mean. The fact that it's wrong, a human's gonna see it and be like, I'll fix it. The risk of imperfection in things like, should I arrest that person, is much higher. So step one, when you're getting your whole team together, and this is with executives, finance, legal especially, everyone, is deciding the risk of imperfection. So that use case where I kind of generalized, saying you can launch with 80% accuracy: if it's the police and you have a body camera and you're deciding whether to arrest the person in front of you, 80% accuracy is not gonna cut it. Or you can have the AI decide and always give it to a human, no matter what the AI says. So one thing that we do with our clients a lot is deciding what are the MVP metrics that we have to hit, and usually that's a conversation around risk of imperfection. The second piece, and this is to Shane's point around kind of gathering the testing plan ahead of time: getting baseline metrics for how a human would perform in that exact same situation is really important. So if a policeman picks out and recognizes the right person 50% of the time and an AI is at 85, suddenly 85% sounds fantastic. The protection against it, again, is always giving it back to a human, having a human in the loop, so that we're making sure that the best, safest AI is getting out there. Yeah, so, hi, one question. We liked this talk a lot, I wanna make sure. Just one question; I think it's similar to what Rajiv asked. So we are also a team building AI solutions, and often the data scientists plan certain experiments, and we struggle a lot with estimation. During the sprint it can't be planned, or people say that it requires research and can't be time-bound, and so we moved from sprints to kanban, because in kanban you don't have a fixed time frame as opposed to a sprint. But was that the right approach or was that an escape? Your guys' questions are so funny, I love it. I'm not gonna tell you whether it was the right or wrong decision. But typically, what methodology is being followed? Because everyone building AI would be having the same challenges: when you plan experiments, we don't know exactly how much effort is required or how much data analysis is required. Only when you dig into it does the small thing become bigger. So how do you handle that and deliver in a more predictable, more productive manner? Here are two things that I can tell you.
One is that my software engineers, right, at my most recent job I was running a team of 30 engineers, they hated estimations. They're like, Allie, why do we do this? It's always wrong. I can't even get it within 10%. Everything that we're doing is new. I have no idea how to estimate this. So I can tell you that the pain points that your team is feeling are felt by a lot of AI teams. The second is what to do with that pain and what to do with that problem. There are some teams that decide to only plan out AI projects, or sprint planning, for 80% of their time allotted, right? I was talking about buffers in budgets; people do buffers in time estimates as well. But again, that only gets you so far. So I know teams who have not moved to kanban, who keep those estimates and just only plan for 80% of their time, and they just get better and better at estimating. That's one option. And it could be that you are just more mature and that you guys have moved to kanban. I can't say whether or not that's the perfect decision because I haven't worked with your team, but come work with me at my new job and we can figure it out. No. Thank you. Sure. Yeah. Are you under arrest right now? Is that something that we should know? Not exactly. So, that was quite an expressive talk on artificial intelligence, and I liked it. So my question might sound a bit science-fiction-y or Black-Mirror-y. Are you concerned about the dangers of AI? Like, I was watching the other day Joe Rogan's talk with Elon Musk, and he was quite concerned. Do you, as a young professional in this field, are you concerned? So, Elon Musk is a very interesting example. Slight brag moment: I met with him in Dubai to talk about AI. I know, right? Who knew? He talks about, how are we gonna send people to Mars? We're gonna do it in two years. His timeframe for a lot of things is easily one-tenth of anyone else's. So I'm gonna put Elon Musk to the side, amazing as he is, and say that his is gonna be like a minority opinion. That doesn't mean that he's wrong. I'll give you one example. The head of Google Assistant and the founder of Siri made a bet, it was a year and a half ago, on when we would reach AGI, artificial general intelligence. The founder of Siri said it's over 50 years out. The head of Google Assistant said 10 to 20. OpenAI, anyone heard of OpenAI? Very popular, right? If you had asked them a year ago when they thought AGI was gonna occur, they would have said over 50 years. If you asked them today, they've changed their mind and said 10 years. And they're the ones working on it, right? They have some of the best, brightest PhDs working on this. My answer to your question, which was, am I worried? Question one is, when will it actually happen? And the answer is, who knows? The best and brightest minds completely disagree, by decades and generations. Where I start to get worried is on the policing of it and the regulation around it, and whether law can keep up with artificial intelligence. We start to see things like self-driving cars. Are we ready as a society for that? And Elon Musk has said it'll take at least about 35 years for every car on the road to be replaced by self-driving cars, just based on how long cars last and how long the technology transition takes. So my biggest worry is actually on the law side, and that's one reason why I participate, and why I encourage you all to participate, in law and ethics talks. So I go to a lot of government meetups.
I've addressed the European Commission about AI and the future of the workforce and the future of education. So the way that I feel okay about it is by getting involved in those conversations and making sure that safety and policy work is happening right now, and not waiting until it's too late. Oh, hi. Yeah, the microphone doesn't help, because it's coming from everywhere. No problem. So you said that you'd worked on about 100 AI projects till now? Yeah, maybe it's like 250. Okay, okay. So can you just give us a couple more examples of some interesting AI applications? Sure. Other than the ones you've mentioned during the talk? One of my favorite examples, because it's hilarious, is a cognitive nose that IBM originally built just kind of as a fun research project. They were testing it on pesto and seeing if we could pick up all the scents, and it's now being used to track air pollution in Beijing, right? So even things that are kind of fun and researchy can move into great use cases. Another one that I love was the measurement of pigs: trying to figure out whether pigs were ready to be slaughtered, using cameras and computer vision to predict how big the pig was so that you didn't have to individually weigh every single pig. The last one is damage analysis, which is incredible. So we had one about cell phones coming off of product lines, using an automatic camera to be able to see whether there was a scratch that was a millimeter long, and so knowing whether that phone had to be fixed or whether the phone could be immediately thrown out. Yeah, banking is an awesome use case. One of the best is around pre-fraud detection. So if you're using, you know, I know exactly, what's your name? Say it one more time. Vivek. So I know how Vivek has been spending; all he buys are t-shirts, right? Vivek is all about t-shirts, and now all of a sudden he's bought a car. Not that weird, because people only buy cars once every 20 years. But if all of a sudden he's buying 1,000 Macs, that would really trigger something. And so fraud prevention is one of the biggest use cases in banking. Two more that I'll quickly mention: one is ATM vestibule safety, trying to pre-predict whether someone is going to detonate a bomb inside of an ATM vestibule, using computer vision on the video feeds to figure out whether that might happen. And face recognition at ATMs is kind of what everyone tries to talk about, so you don't have to type in your PIN or scan your card. We've run out of time now, so you'll need to take the questions outside. Yeah, I'll be standing outside and I'll be wearing this exact outfit, so you can come find me. Thanks a lot. Thank you so much.
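(Editor's footnote: a toy sketch of the spending-pattern idea in the banking answer above. Real fraud systems use far richer features and models; the z-score rule and numbers here are purely illustrative.)

```python
from statistics import mean, stdev

past_purchases = [20, 25, 18, 22, 30, 24, 19, 27]  # Vivek's usual t-shirt spend
new_purchase = 1000                                 # suddenly, a very large order

mu, sigma = mean(past_purchases), stdev(past_purchases)
z = (new_purchase - mu) / sigma
if z > 3:                                           # far outside the usual pattern
    print(f"flag for review: z-score {z:.1f}")
```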