I am very happy this afternoon that we have five excellent speakers here with us who will explain to us all we need to know about machine learning in this connected world. S. "Soma" Somasegar has been an angel investor and is a managing director of Madrona Venture Group; through his work he has years of experience in machine learning and other emerging technologies. Mr. Gareth Keane is an investor in that field and an investment manager at Qualcomm Ventures, where he mostly invests in hardware and robotics companies. Karl Iagnemma is the CEO and co-founder of nuTonomy; out of his research here at MIT he has built one of the world's leading developers of self-driving car software. And Saikat Dey is the co-founder and CEO of Guardhat, which develops wearables that improve the safety of workers in heavy manufacturing industries. Ari will be leading the discussion this afternoon. She's the director of emerging technologies experiences at IBM and spends her time identifying and enhancing scientific breakthroughs. I'm extremely excited about this panel, and I hope you are as well. I will ask our five speakers to take the stage. Thank you.

Thank you. Hi, welcome everybody, and good afternoon. I'm also excited about the panel today. I'm going to start by asking our panelists: given that machine learning is really everywhere, almost a commodity at this point, how do you, in your own jobs, help others understand what intelligence means in this context? Who would like to take that? Hard question.

So am I going to start? Great. Whenever I think about intelligent applications, the thing that comes to my mind is that it's a way for application developers, or applications, to take advantage of both historical data and real-time data, whether structured, semi-structured, or unstructured, and then process that data to make decisions and predictions that let you deliver personalized, rich, adaptive experiences for users, whether you're a consumer or an enterprise user.

Cool. I think from an investor perspective, we look at it almost through the lens of: what problem is this company, this entrepreneur, going to solve for an enterprise or a consumer? Being an investor for a large technology corporation, we also have the advantage of a lot of very smart people internally, many of whom are working on either hardware implementations of classic machine learning models or the software stack that sits on top of those. In particular, application fields like computer vision and sensor fusion are where these software pieces on the machine learning side add a lot of value. So we have some internal competency we can use to peel the onion and understand whether a team truly has something unique and differentiated, or whether it's a vaporware-type approach to things.

Well, if I were to leave the stage right now, the average IQ of the stage would increase. Given that my background is in steel, mining, and metals, I often find machine learning, artificial intelligence, and deep learning to be words where people confuse each other and trip up quite a bit. And often enough, at least from my end-user perspective, it's hard enough to teach human beings; think about teaching machines. But let me get back to the sense of machine learning itself.
I think there's so much talk and so much hype around this term now. Artificial intelligence has been here for at least 60 years, and machine learning, to be precise, is a portion of artificial intelligence, a part of it. There are elements of it we can apply today, elements we can apply 10 or 20 years from now, and elements that we may never be able to apply. So a lot of what I do, my background being from one of the larger steel companies in North America in a former life, is to distill what's going on in academia, where technologies are often looking for use cases, and to come at it from the other side: what's the use case, and what's the right way to address that use case? That's my perspective on machine learning.

Right. Yeah. We also think about machine learning from a practical user perspective: how can we deploy it in our system? A useful discriminator for us is to take the problem we're facing, whether that's helping a car detect a pedestrian on a sidewalk or helping a car decide when to travel through a yellow light, and ask ourselves: can we model this using other tools? Can we write down equations that precisely describe this phenomenon? If we can, that's usually a good way to approach the problem; we just describe it mathematically and off we go. If we can't, it's often because the phenomenon we're talking about is so complex, or rich, or inherently unmodelable, that there's just no good way to write down a closed-form expression that describes it. In that case, a useful way to tackle the problem is often to look at data and use methods like machine learning, which are data-driven and can, through some magical process, generate complex models that help us explain the phenomenon. So we, again, think about it more from the end-user side. And I come at learning methods with an air of skepticism, because at the end of the day I don't understand what this thing is, but I do know that it works. Sometimes that's good enough.
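To make Karl's discriminator concrete, here is a minimal, purely illustrative sketch (not something shown at the panel; it assumes Python with numpy and scikit-learn, and the pedestrian features and weights are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Case 1: the phenomenon has a closed form, so just write the equation down.
def braking_distance(v_mps, mu=0.7, g=9.81):
    """Stopping distance from basic kinematics; no learning needed."""
    return v_mps ** 2 / (2 * mu * g)

print(f"{braking_distance(13.4):.1f} m to stop from 13.4 m/s on dry asphalt")

# Case 2: no tractable closed form (e.g., "will this pedestrian step out?"),
# so fall back to a data-driven model fit on observed examples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))  # invented features: gaze, speed, curb distance
y = (X @ np.array([2.0, 1.5, -3.0]) + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("P(steps out), first example:", round(model.predict_proba(X[:1])[0, 1], 2))
```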
Right. That's great. From the user perspective, that's the interesting situation, because what we call it doesn't really matter to the user anymore; what matters is the interaction and what they get out of it. So how do you see the current trends in machine learning in particular? How are they affecting how users interact with machines?

That's a great question. We track a couple of things very, very closely, and there are some interesting developments we've seen in both consumer and enterprise over the last 18, 24, 36 months that we think are indicators of how technologies like this are starting to percolate into stuff that we all touch and interact with every day. I think self-driving cars are a perfect example. If you had talked to a computer science professor here at MIT 10 or 15 years ago, they would have told you that problem is probably intractable and will never be solved. But we've seen companies like nuTonomy and others make great strides, and Uber and you guys are driving autonomous cars in various jurisdictions now. So one place we'll all see a lot of interaction going forward is in the way we deal with machinery in general, whether it's a very tactile, physical thing like transportation or something like voice UX and UI. You know, Amazon has stolen a march on a lot of people with this amazing Alexa platform they pulled out of their hat in the last two years, and it's fascinating to think about how that ability to interact with technology in a very seamless, natural way becomes this ubiquitous, magical technology layer. It's pretty exciting.

I would say when you go to amazon.com, for example, or to Netflix, or to a dating website, any of these consumer-facing properties you access on a day-to-day basis, you are starting to see smarter decisions being made on the back end, because they know you, or they get to know you. They learn what you like and what you don't, they understand your buying patterns and your habits, and they try to be smarter about the recommendations they give you. That's one place we all encounter it day to day. The autonomous car is a great example too, but it's probably going to be another three to six years before many of us here get a chance to directly experience the benefit of it. Or take an enterprise scenario right now: I flew Alaska Airlines to come to Boston, and I like Alaska Airlines. But whenever they surface offers for me, I want them to be smarter about knowing my travel patterns and my likes and dislikes, and to give me offers that matter. There's a lot of machine learning that goes into delivering that to me. So those are the day-to-day experiences where we're all starting to see what the power of machine learning could deliver for us.

That's a great point, Soma. From a user perspective, as I watch the adoption of machine learning across different technologies and use cases, the way I distill it into something somebody even as dumb as me can understand, simplistically, is with two axes. On one axis is the downside risk of things going wrong: is somebody going to lose their life, or am I just going to get a pepperoni pizza instead of cheese? On the other, vertical axis is the element of human control over the decision-making. Is it giving me a recommendation of five people I could go out on a date with (you're young Sloan folks, I'm sure you're all dating at this point), where I keep the ability to control the decision and say, this is the person I choose? Or, at the other extreme, and I'm sorry for making it really absurd: this is the person you have to marry, and your wedding date is tonight. That's the extreme where you have no control over the system; it just gives you the answer. If you distill every application onto those two parameters, those two axes, it becomes extremely easy to understand where adoption of machine learning is going to happen and where it's not. I'll give you a simple example. I know Karl's here and Karl's the expert in autonomous driving, but I'll take my personal case. If I were to be driven in a driverless car, or if my kids were being driven in one, I would think twice about stepping into that car, because the downside risk is pretty high for me: an accident, and whatever follows from there.
But having said that, think about a city like Bombay or Manila or Mombasa for that matter, with chronic parking issues. People keep drivers in these cities, employ drivers, because they want to get out at their destination, let the driver take the car to a parking lot, and then call the driver to say, bring the car back, I'm done with my appointment. For something like that, the system is autonomous, you keep control over it, and the downside risk is low, at least to yourself; forget about the societal part of it. That's the most apparent way, as I see it. If you can map the application you're trying to build onto that matrix, and map against it whether your user is ready to adopt it on that same matrix, you get to see where you can actually get user adoption for machine-learning-type applications. And that's the apparent universe.

The invisible universe is technologies like fly-by-wire or autopilot. These have existed for a long time. The only difference between a car being driven autonomously and a plane is that in a plane, when you get in, you know you're out of control. The pilot is up there guiding the plane, there's a door between you and him, and if you try to get through that door you land up in Guantanamo, right? On the other side of that door. Since you've already lost that control anyway, maybe that's a place where adoption has become easier. So what I would encourage the students to think about is this: as you consider applying ML or AI, deep learning, neural networks, genetic algorithms, whatever it is, think about where you fit in those squares, whether it's applicable, whether it's usable, and what sense of control, or lack of it, you give to the end users.

You're bringing up a great point that we've discussed earlier: the technology might be ready, but are we ready to use it in a way that feels safe and enhances our experience, as opposed to freaking us out? Self-driving cars are a great example. What other really good use cases do you see that are just perfect for the current state of the art of machine learning?

Well, this is a really important point, and I don't know that I could make it any better, so let me just go back to it for one second. We have competitors in this space of automated cars who would argue that you can solve the entire problem through a machine learning approach: you can basically go from sensor inputs, through some learning process, to control outputs. I strongly believe the real fallacy there shows up when something goes wrong, and this is, I would say, one of the weaknesses of machine learning: it's very difficult to interrogate the system to understand why it made a particular decision. The end result of this machine learning process is typically a black box. So it gave you an answer. Why did it give you that answer? Well, it just did. It's hard to know why, and in many application domains that may be insufficient. So, to paint with a broad brush: safety-critical domains and machine learning methods, I think that's probably a fairly complex and delicate marriage.
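One common direction in the interpretability research mentioned later in the discussion is to approximate the opaque model with a small, readable one. A minimal, hypothetical sketch (assuming scikit-learn; the data here is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# The opaque model whose decisions we want to understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Surrogate: a depth-3 tree trained to imitate the black box's own outputs.
# Its if/then splits give a rough, human-readable account of its behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```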
For non-safety-critical applications, convenience applications, applications that require adaptation and customization for the end user, those are potentially fantastic applications. That's where I mentally draw the distinction. I hinted at the outset that I'm inherently suspicious, and this is one of the reasons why: I think about the prospect of deploying machine learning at the heart of our driving stack and I get very nervous. We don't do that today. Do we deploy it in other parts of the stack, where we can tolerate this inscrutability? The answer is yes.

Just to build on that a little: we do a lot of investing in what we think of as applied robotics companies, meaning some hardware-software combination with some machine intelligence on top, solving some interesting problem. And if you reduce the world down to two-by-two matrices, like all the consulting firms like to do, you can think about low-cognition versus high-cognition jobs on one axis, and low versus high repeatability on the other. Where activities, enterprises, and value-creation points fall in that matrix drives a lot of the use cases around machine learning, we feel. Take a very low-cognition job that is highly repeatable, like accounting: accounting has essentially been automated by these smart software back ends. Then you get to something very complex, like a hotel maid folding up a hotel towel in a bathroom; that's a really challenging thing for a machine to do right now. So I think there's going to be a continuum of opportunities as entrepreneurs look at these fields, with good places and bad places to put machine intelligence to work.

That reminds me: maybe it would be worth talking about the difference between what machine learning and systems that use artificial intelligence can do today, versus what we keep seeing in movies and keep getting excited about and dreaming about, but really won't see in industry for another 10 or 20 years. What's your take on that?

I think the nuance here, which has been brought out very importantly, is whether something can be done versus whether it should be done; whether machine learning should be used for a particular application, not just whether it can be. There are fairly recent lines of research into developing methods for actually understanding the thought process, if you will, behind a machine learning algorithm: interrogating that black box and, in some quasi-forensic way, understanding why it made the decision it did. If researchers, people down the street here, can make progress in that domain, I think it unlocks potential in the safety-critical environments I mentioned earlier.

Right. Yeah, I think one distinction we draw is this. The perfect example of a use case that I would love to see, but don't think I will in my lifetime, for anybody who's a fan of the Jetsons, is the robot maid who did everything in the house, from picking up the laundry to making the dinner to hoovering the floor.
That's a super challenging problem, because it is so unstructured, with so many different kinds of interaction with the physical world. Machine learning algorithms can probably handle going from atoms to bits, but going from bits back to atoms again is a really challenging thing to do. In artificial intelligence research terms, it's the concept of specific intelligence versus general intelligence: specific problems will be solved, but general-purpose, helpful machines will be a lot more challenging.

Another challenge machine learning will have to overcome is the environment of irrationality. Think of yourself trying to hail a cab in New York City at 5:30 in the evening, down in Tribeca, trying to get up to, say, the Upper West Side. I have two choices: an autonomous car, or one of the yellow cabs. Obviously the cab is really dirty, but guess which one I choose? In this hybrid environment, where humans exist alongside autonomous driving technology, I would choose that yellow cab any day, especially if I'm going to miss a flight or an important appointment. Why? Because he drives irrationally. Think about the playing-chicken thing: a pedestrian crossing the road, you're looking at him and saying, let's see who plays chicken here. My autonomous car is always going to take the safe route. So it may take me two hours with the autonomous car where it'll take me 30 minutes with the yellow cab, and yeah, I might have a heart attack on the way, but I'll still get to my appointment. I'm deliberately drawing out the absurdities to make a point: what matters is the environmental conditions surrounding the problem you're trying to solve. As long as we live in a hybrid environment, the complexity required of our machine learning techniques has to be even higher than in a world where everything is connected and everything is rational. That rational world is much easier to design for, but unfortunately we're starting with the much harder problem and scaling down.

Right. And Karl, you touched on something we've discussed too that's very interesting: should we build these systems even if we can? That's where ethics comes in, and Soma was talking about that earlier. Clearly there's a technology piece, which either will be ready or not; then there's the human piece, can we tolerate it; and then there's the ethics. So maybe you want to say what your take is, and what companies should be thinking about when taking these on?

One of my fears is that the technology is advancing at a pretty fast pace, and the policy issues related to ethics and other things are not keeping pace with the technology advances. Take the classic trolley problem. In an autonomous driving scenario, the system is going to make a decision, and somebody is going to feed data to train the system to act in a particular way. What is the right decision there? And if I use a system built by nuTonomy, will it work the same way as a system built by somebody else?
Should it be the same? Should it not be? Where do you want some level of policy control over what action should be taken, and where do you let the different technology providers, and the different models being trained, make the so-called right decision according to what they think is right? I think there's a tremendous amount of work that needs to happen here, and I don't think we're spending enough time on these aspects of what it means to live in a world where machine-learning- and artificial-intelligence-driven systems play a critical part in our fundamental day-to-day activities.

Yeah, I think that's a really valuable point from Soma. As humans, we struggle with exponential change. It's very hard to internalize that technology is accelerating every year, and as capabilities are put into play by infrastructure vendors like Qualcomm and Intel and NVIDIA and others, a whole host of applications become solvable that maybe weren't solvable 12 or 24 months ago. It's on this constant 18-to-24-month cycle, and it's just unstoppable. One way to get around those thorny issues might be the concept of augmenting humans. So it's not a purely standalone machine intelligence making life-and-death decisions; it's a massive super-intelligence that can spot patterns and make inferences from incomplete data, helping humans make better decisions.

Yeah, we certainly adhere to that. We call it augmented intelligence, as opposed to artificial intelligence, for exactly that reason: our point of view is that it shouldn't be artificial and independent from the human, but should be supporting and augmenting whatever we're trying to do. And I think that's an ethical point of view we're taking right there. Karl, you're building autonomous cars, so what is your take?

Well, it's a really interesting topic for discussion. At the moment it's something of a parlor discussion, because we don't really have good techniques for reasoning about complex ethical scenarios from a machine intelligence point of view. It's not even clear that machine learning per se would be the right tool in those cases; machine learning is good at observing, imitating, and adapting, and ethics is something of a different topic. A naive approach might be to codify a set of rules for a machine to follow in a complex scenario. The difficulty is that this implies we have sufficiently rich information to make a complex judgment. Take the car and this trolley-problem example. For those who aren't familiar, it's the scenario where, if you had to get into an accident, should you turn right and hit the elderly grandmother, or turn left and hit the school bus full of children, and how do you weigh the two options? The fact is, we typically lack the inputs required to make such a decision. When we look out at the world, we can't tell whether that person is an adult, male, female, elderly, a good person, a bad person, an atheist, a Christian, any of the inputs we might somehow want to have. And even if we could, it places a very high bar on the machine.
To be totally frank, the whole conversation makes me a little nervous from a technical perspective, as an engineer, because this is a really high bar to place on the technology. As humans, in a scenario like that, virtually none of us would be able to make a reasoned, principled, ethically correct choice in the fraction of a second available to make that judgment. If, as a society, this conversation implicitly puts that burden on machine intelligence, I think we're setting ourselves up as a community for disappointment. I do think it's a valuable conversation to have; I just think we have to be realistic about how this will be deployed in products over the coming years.

And it comes back to the point about the sense of control, or loss of control. I saw one of the startup pitches, drones for humanity, a very interesting idea. And the thought going through my head was: if it's going to do vision-based recognition of how many people are in distress in one place versus another, and one place has five people on top of a roof needing help while another has 500 people in far more challenging conditions, which one does it choose for the payload drop? Does it just drop at the first one? Either you've given up control to the machine to make that drop, or there has to be control somewhere in the back end, somebody to make that judgment call. That's what I mean by the sense of control on that one axis: where should I be dropping that payload?

I'll give you a simple example. We come from union environments; some of our unions are the UAW and the United Steelworkers, and that's Detroit, that's us, Muscle City, Motor City, whatever you want to call it. The big question when we were designing our product was an ethical one. They said: Saikat, it's great that you can warn people when somebody gets into a restricted-access area they're not supposed to be in, or somebody has high gas levels in their environment, or somebody's just had a fall. But I don't want my supervisor to be able to know how long Ben has been smoking, or how long a break he's been taking in the restroom. So where do you draw the ethical lines in designing the system? Because you're capturing that data anyway. How do you handle it? How do you suppress it? How do you bring it out? Those are questions that, from an engineer's perspective, are hellish to determine. Now imagine adding a machine learning layer of complexity on top of that. It becomes a whole different level of hairball to deal with.

That's a great point. To follow on, and to circle back to another point discussed earlier: the worst-case scenario is when something happens where the machine didn't behave as it should, and injury results, or whatever the case may be. There will be a need, a human instinct, to want to understand why it happened. In the case of a person, you ask them: why did you do the thing you did? In the case of a machine, you want to do the same thing. You want to understand why it did the thing it did.
We live in a world where all of us fly on airplanes, in our case every other day, or frequently, and planes crash; they fall out of the sky. But we fly anyway, because the risk is below some threshold, and, I think, also because when accidents happen there's an international effort and interest in understanding why. As people, we can take comfort in the fact that if we can understand why it happened, we can correct it so it doesn't happen again. With machine learning, in the absence of that ability to really interrogate these systems, I think there will be a lot more resistance to putting them in positions where they have to reason, judge, and make safety-critical decisions.

Right, exactly. That's what I was trying to get at when I asked what kinds of use cases, what kinds of intelligence, we're really going to delegate to machines. We don't want to delegate the decision; we've made that very clear. But can they do the understanding of huge amounts of text for us? Can they reason over that text and provide the evidence, so that we, as humans, make the decision? I think we more or less all agree that that would be a pretty good set of use cases. So maybe back to the technology now, since we've been talking about the human side and the ethics: what are the existing limitations of the machine learning methods we currently have at hand?

I'll take a stab at this, and it may be interesting for you too, Karl. One question we think about a lot is the system architecture split between what happens in the cloud, with all the processing available there, and what happens at the edge of the network. Think about a self-driving car: with the best will in the world, connectivity to cars is okay, but a lot of the time it's not great. If I'm relying on centralized command and control to drive and make decisions for me, and my radio craps out, that's a bad situation to be in. So the ability to build lightweight, smart, possibly resource-constrained processing models that sit in very distributed computing architectures is something we think about a lot.

Interesting, given that Qualcomm makes the chips. Yeah, yeah, there's self-interest right there.

The other thing to think about in that context: beyond cars, think about a mine, a steel mill, an aluminum refinery, or a petroleum refinery for that matter. Think of the amount of metal in these environments. A lot of the traditional IoT big-data models fail there, because, first of all, you're not allowed to carry your cell phone: these environments are intrinsically unsafe, meaning an electronic device can cause a spark and hence a bigger explosion. If the traditional channel is missing in these big mining and steel environments, then your design is automatically a hybrid model, where a lot of the ML components, or lightweight versions of them, have to reside on the edge device: on the wearable, or on whatever the front-end device is.
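A standard way to fit ML components onto a constrained edge device like a wearable is to shrink the model, for instance by quantizing 32-bit weights down to 8-bit integers. A minimal numpy sketch of symmetric int8 quantization (illustrative only; real edge toolchains automate this):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)   # one layer's weights

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

w_hat = q.astype(np.float32) * scale                 # dequantized at inference time
print(f"{w.nbytes} -> {q.nbytes} bytes (4x smaller), "
      f"max weight error {np.abs(w - w_hat).max():.4f}")
```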
So while there's been a lot of talk of late about taking everything back to the cloud, I'm of the firm belief that there are certain industries where latency and network requirements demand that ML-light versions get pushed back out to the client. It's like the old IBM versus Microsoft-Intel debate (I'm an ex-IBMer): push everything back to the mainframe, versus the powerful Wintel platform out front. And now I'm at the other end, saying: no, push a bit more of the smarts out to the front end. Because the other interesting thing about ML is that at some point we're going to run out of pipe capacity. Just to give you a sense: our hard hats generate 17 megabytes of uncompressed data, without audio and video, every eight-hour shift. Now assume a sample refinery has 3,000 people working a shift, with another 4,500 contractors; at 17 megabytes each, that's 7,500 times 17 megabytes, roughly 127 gigabytes per shift. Think of the amount of data hitting you. ML can actually work under the radar here, invisible to the human interaction, figuring out how to optimize what goes back and forth so you can conserve pipe capacity, for lack of a better word. So those are other areas to think about, where the limitations are and what you need to design around. But I love what Karl said: if at the end of the day you cannot do the post-mortem on why things went wrong, it remains a black box. Decision Modeling 101 in business school: if you can't explain your model, I'm sorry, mate, it doesn't make any sense.

Right. Yeah, to this black-box nature I would add scalability. We want to apply machine learning to increasingly complex problems, and the more complex the problem, the higher the dimensionality and the longer it takes to train models. We've got a lot of compute power in the world, so these days we can tackle quite complex problems in a reasonable amount of time. But there's another dimension, which is the scope of the problem: a limitation inherent in machine learning by the very nature of the fact that it's data-driven. By that I mean machine learning can be very good at learning models from the data it has seen; it's not very good at generalizing to scenarios it hasn't encountered or data it hasn't seen. And that can be tricky. What you'd love to do is give the technology a few examples and have it learn models general enough to apply to scenarios it was never trained on. More commonly, the case is: if you want it to recognize dogs, you give it examples of every type of dog you can find, and you're confident that when it sees those types of dogs again, it'll recognize them as dogs. But today there are real limitations there; you can't expect the system to generalize robustly.

Right, perfect. So I'm going to make sure you all have time to ask questions, since I've been asking a lot of them. Anybody? There's someone in the back. Okay. I'll repeat the question for the video: he's asking about bias, and what the implications are for systems that use machine learning.

Right. I would say it's the classic problem of the person feeding in the data: what bias do they have or not have, and how does that translate? As you feed in massive amounts of data to train the models, any bias that is advertently or inadvertently introduced is going to show through; it gets replicated, and the decisions that get made end up reflecting it. There have been classic reports about this: when Google surfaces ads, somebody did a study and found that 80% of the ads surfaced for high-paying jobs are surfaced to men. Why did that happen? Did somebody consciously decide to do that, or was it some inherent bias injected into the system without people realizing it? I think about these in the same context as the ethics we talked about earlier. How do you make sure the data used to train the model is as unbiased as realistically possible, knowing there are a lot of human beings involved in getting that data fed into the machine learning models? Exactly, yeah.
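The kind of skew Soma describes can at least be measured in the training data before a model ever sees it. A minimal, hypothetical audit sketch (assuming pandas; the column names and counts are invented to mirror the ad study he cites):

```python
import pandas as pd

# Invented numbers shaped like the study mentioned above: who was shown
# a high-paying job ad in the data a model would be trained on?
df = pd.DataFrame({
    "gender": ["M"] * 800 + ["F"] * 200,
    "shown_high_paying_ad": [1] * 640 + [0] * 160 + [1] * 60 + [0] * 140,
})

rates = df.groupby("gender")["shown_high_paying_ad"].mean()
print(rates)                                    # F: 0.30, M: 0.80
print("disparity:", rates.max() - rates.min())  # 0.50, a red flag before training
```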
Anybody else? There's a question here.

Hi, I have more of a macro question, based on the upsides and downside risks discussed today. How do you see this trend toward machine learning and AI playing out in emerging markets, where the infrastructure is of course less developed than it is here in the US, and where regulations will adapt even more slowly than they could be adapted here?

That's a great question. You've stumped me on that one, mate. But when you think about it, emerging markets are always the place to leapfrog, right? Think about connectivity issues in the mid-90s, and how mobile medicine is now so prevalent in sub-Saharan, Indian, Pakistani, and South Asian markets, with mobile connectivity becoming the bridge. So I think you'll see these markets evolve organically, on their own. It would be very difficult and very presumptuous of us to say: this is the model that fits these emerging markets. That goes back to the example of the Whirlpool washing machine; I don't know if you've ever heard it. Back in India, when Whirlpool was launching its washing machines, they thought the market was going to be on fire, since it conserves water. Then they suddenly realized people weren't using it to wash clothes. Up in the northern part of India, people were using it as a continuous yogurt stirrer for lassi (you know what lassi is). Sugar and yogurt went in, and the next thing coming out of the pipe was lassi. That's exactly what you've got to be aware of: with any machine learning technique where you think you can sit here and outthink people there, they'll outthink you better, because they know the ground conditions better than anybody else can presume. You have to let it come up organically on its own. You may well see a lot of heavy machine learning on the client side because of the lack of good connectivity; there are certain presumptions you can make. But believe me, you'll always be surprised when the end results come out.

We have an eager question over there.

So, like you said, machine learning can encompass quite a few different use cases.
So I had a question on the competitive landscape for machine learning right now, because you have really large enterprise companies like Google and IBM making their play in this space, and you have a bunch of small startups. Who do you think will ultimately lead in this space? Is it more of an enterprise play, or the tiny startups?

If you had asked me two or three years ago, I would have said the data aggregators, the big guys, the Googles and the Facebooks, had an inherent advantage just because of the amount of data they could gather. My view on that has changed in the last two years. I believe startup companies can be very successful, because techniques have evolved to the point where you don't need Google's hundred zillion hours of training data; you can build very effective models with much sparser datasets. We've funded a number of machine-learning-based computer vision and image recognition companies that are competing directly with the Facebooks and Googles, and we think they'll be just fine.

Yeah, I agree. There are techniques for synthesizing data, for example, and techniques for working with sparse datasets. The currency is really the smarts behind the algorithm, the team. What we're seeing across the space is that the team is often valued as much as or more than the technology. Whether that persists is a little hard to say; if the technology all starts to plateau, maybe not. But for the foreseeable future, I wouldn't expect it to change.

Just to round out one last thing on that: I agree with both these guys. I'd also say there are probably going to be a smaller number of machine learning platform players, and then a lot of people who use those platforms, models, and algorithms, feeding in their own data for specific use cases or industry-specific things. There's a tremendous amount of opportunity for all kinds of companies, including startups, to have a meaningful impact in this space.

Exactly, I was going to say: the platforms are lowering the bar for entrants, so it's much easier for anyone to get to these machine learning models, and even to the data. So there are a couple... Thank you very much, but we... Is that it? I'm not allowed to take any more questions. Okay, I guess that's it; they've asked us to finish up. All right, thank you. Thank you very much, everybody. My pleasure.