All right, well, we've covered Kubernetes and cybersecurity, so it wouldn't be a good morning if we didn't include artificial intelligence in the mix. We're very fortunate to have a set of experts here this morning to discuss the AI landscape and how it intersects with open source. So I want to welcome all of them to the stage. Why don't you all come on up and sit here, and I'll introduce each of you. Deepak Agarwal is the VP of Engineering and Artificial Intelligence at LinkedIn. How many people here use LinkedIn? A few of you. He's responsible for AI efforts across the entire company. Please have a seat. Mazin Gilbert is Vice President of Advanced Technology at AT&T Labs; he's heading up AI initiatives there as well. Let me make sure I've got the right list here. Terry Singh is an author and AI and machine learning expert, working in both deep learning and AI at Coursera. And finally, Rachel Thomas, who's the co-founder of Fast AI. Please give them a warm welcome. So I want to kick off by having each of you quickly talk about what you and your organization are doing in AI, and what kind of business imperative it is for each of you. Deepak, I'm going to start with you. So I'll probably start with the mission and vision of LinkedIn. For those of you who are on it, thank you; for those of you who are not, I'd encourage you to sign up. Our mission is to connect talent with opportunity at scale. And I don't mean opportunity in the narrow sense of just finding a job; opportunity could be any professional opportunity. AI and machine learning are embedded in all our products. In fact, we often refer to them as the oxygen of our products: a horizontal layer that permeates everything we build. So if you're on LinkedIn and you're looking for a job, all the job recommendations you get are powered by machine learning and AI.
If you're a recruiter trying to source a candidate, the search results you're getting are powered by AI. If you're on the news feed consuming content, that's all powered by AI. If you're trying to connect with someone, which you should, for professional opportunities in the future, those recommendations are powered by AI. So it's ubiquitous, and we've been doing it for a long time now. It's very mature; we're no longer in a state where we have to think about it. It's become an integral part of everything we do, embedded in everything we do. I think we're now at a state where we're thinking about what we can do with AI to power the next generation of user experience on the platform. Maz, you guys are running some huge networks out there with hundreds of millions of users, and 5G is just around the corner; I see you're a couple of cities in already, rolling it out. So AT&T, and I'm hoping 95% of you are AT&T customers today, thank you very much. AT&T's mission is really to connect people with their world, where they live, where they work, and to do it better than anybody else. And when you think about a company whose sole business is communication and entertainment, AI is a foundation not just to drive one application but really to drive how we live and how we work as a society, globally. So if you're thinking about how we drive 4G and go into 5G, how we scale cloud technologies all the way to the edge, how we operationalize our network, or, we talked about security earlier on, how we address the tens of millions of attacks we get every day on our network, really AI is the foundation of pretty much our business from the get-go. So I'm gonna start with the mission and the vision. No, I'm just kidding. So my name is Terry.
I am a founder, CEO, and neuroscience researcher, studying the brain and that kind of stuff, at DeepKafa.ai. I'm also a mentor at Coursera, working with a couple of smart people, Andrew and a bunch of others, who are developing the Deep Learning Specialization. This year we have a plan to get a couple hundred thousand to about a million people trained in deep learning and AI, with an amazing training setup from Stanford and Andrew. And I know you do a lot of that as well with Fast AI. Very briefly, what I do is work with enterprises, helping people, they could be engineers and software developers, or they could be PhDs and postdocs, training them to work with deep learning techniques: convolutional neural networks, vision, or text kind of stuff, for those who aren't familiar. So for me, the goal is to convert a whole lot of people to adopt artificial intelligence and deep learning. The other part of what I do, when I'm not working with enterprises, is work in the most difficult parts of the world. I go to Tunisia; I even go to Syria. And we work with women, young girls having all kinds of problems in the world. For instance, after I'm done with this conference I'm going to Tunisia, and then we go to Turkey. We have people who have been totally distracted by everything that life can take out of you, and we put those kids together and say, let's go work with code. Let's work with TensorFlow. This is what I do. So I guess this is a bit of an introduction. That's amazing, that is incredible. Rachel, I checked out your book, Practical Deep Learning for Coders. So, lots of coders in the audience here. Yeah, so I'm Rachel Thomas. I'm co-founder of Fast AI, which is a non-profit research lab, and I'm also a professor at the University of San Francisco. And with Fast AI we're trying to make deep learning easier to use.
And so we do this both through building tools, we have an open source library called Fast AI that's very high level and encodes best practices, and through a free course, Practical Deep Learning for Coders. Over 100,000 students have taken it. We've had students get jobs at Google Brain and have their work featured on HBO. But we're particularly trying to reach coders; you don't have to have any advanced math background. And we're especially interested in people who are working on projects outside the mainstream, things they very much care about, without access to a lot of resources. We've had students improve agricultural loans in India, try to stop illegal deforestation of endangered rainforest, and help patients with Parkinson's disease. So a lot of interesting applications. Amazing. Very, very cool. Well, I get these questions all the time. A lot of people in the audience find the AI landscape totally confusing: there are all these different tools and different ways to deploy, and how do you do it? I think you're all kind of the exception, in that you're in roles where you're already far down that path. But if you're talking to someone who is maybe confused about it, what are some of the exciting areas, some of the things people should be looking at right now in AI? I'll just go around again, starting with you, Deepak. Yes, I think, first of all, AI before 2012 was very different than what it is today. In 2012, something great happened: a bunch of professors and researchers figured out a new way of computing things at scale by using GPU cards. And this is what ushered in the era we are in. So I would definitely want you to pay attention to everything that's happening in deep learning.
So what has happened is that, with the availability of cloud computing, data management has become commoditized. And with the availability of deep learning tools, things like Fast AI and what Andrew is doing, slowly that's also becoming commoditized. So if you have a problem where you want to predict something using a lot of input signal, that is slowly getting commoditized, and this alone can be transformative. Look at what has happened in computer vision, in natural language processing, in speech. These things have become way more accurate than they were 10 years ago. And the impact of that is pervasive. In every area we're now able to use AI technology to do things we couldn't do before. For instance, in the old days, a radiologist would send the MRI image to India at night, someone there would read the image, and in the morning you'd have it on your desk. You don't have to do that anymore; you can have AI software do that for you. And I can go on and on with examples. So definitely supervised learning, this class of problem where you're predicting some output based on input, is slowly getting commoditized, and this alone can have a very big impact on many different things that you do. You have this paradigm where the data can learn patterns through algorithms; you don't have to write rules to program a computer, you can have the computer learn. There are other areas that are not very well researched, and we have a long way to go. If you look at unsupervised learning, or human-level intelligence that a machine can learn, that's still an open research area. And I think in the next 10 years or so, maybe we will be there.
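The paradigm Deepak describes, letting the data determine the input-to-output mapping instead of hand-writing rules, can be sketched in a few lines. This is a toy NumPy illustration of the idea, not anything LinkedIn-specific: the "rule" (a linear relationship) is hidden inside the examples, and the program recovers it from the data alone.

```python
import numpy as np

# Toy supervised learning: learn an input-to-output mapping from examples
# rather than programming the rule by hand.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.01, size=100)  # hidden "rule" plus noise

# Fit a linear model: find the weight w and bias b that minimize the
# squared error over the labeled examples.
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(w, b)  # close to the hidden 3.0 and 2.0 that were never written as a rule
```

The same shape of problem, labeled examples in, learned predictor out, is what is getting commoditized; deep learning swaps the linear fit for a neural network but keeps the paradigm.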
And it may become as commoditized as supervised learning techniques are. How about you, Maz? You've been involved in some pretty cutting-edge stuff in terms of getting this technology out there and deployed. I'll plug Acumos a little bit for you, in terms of building a marketplace where these things can be reused. What's exciting to you? So, for those who don't know me, I did my PhD in the 80s in neural nets for speech. I was excited about AI because of the concept of getting a computer to use neural nets, which at the time had this resemblance of artificial neural nets to biological neural nets; still today there's that sort of association and confusion. To have a machine actually articulate sounds and speak, just like humans do, was amazing and completely fascinating. And that's how we started. At the time we were a few hundred people; we would go to workshops, we would go to conferences. Over the past 20, 30 years, there have been phases of interest in AI: it went from buzz to hype, then it went down again. But if you look at the literature, deep learning and neural nets and AI never actually stopped being worked on in the research communities for at least three decades. It is different now. And I agree with Deepak here that distributed computing, GPUs, and the influx of larger data sets have really made a big change. But what I actually think is a bigger driver, we started with a few hundred people, and today, thanks to you guys, we have hundreds of thousands going on millions, really the big driver, the big revolution, is open source. Twenty years ago, when I was working on this, there were a few of us who could write some code; we would join very large companies with deep pockets, with big computers and so on, to be able to run the kind of jobs that require a lot of data.
Today, if you've gone to CES, every company is now an AI company. Because to be in that business now, it's nothing more than downloading some software; you can just get going, take a course at Coursera or somewhere, and pretty much get going in a very, very short period of time. And that has really created a huge revolution in the industry. However, even with that excitement and revolution, a company like AT&T, and probably most of you are the same, is hitting big bottlenecks. And we believe those bottlenecks are tremendously large; AI cannot move to the next step of scale without addressing them. The first one is that there is still a lack of understanding, training, and learning about what AI is and where it can be used. There are a lot of tools out there, and the question is which tool do you use, and are they interconnected with each other? We need to figure out a way to harmonize those. Number two, when you ask your team, I need to build an AI solution, they go and start pretty much from scratch. There is no reusability in AI. These are very expensive things to build; they take months, maybe more than a year. So what Acumos is trying to do, which we have announced under the Linux Foundation, is to create a distributed marketplace for AI. Think of the app store, with a difference: in this app store, the applications are built with many different tools. It's agnostic to the tool being used. That's number one. Number two, the applications you build in this marketplace interconnect and interoperate. Think of them as microservices that interconnect. So you could be using TensorFlow, and she could be using scikit-learn, and you can actually connect the outputs of those to create new solutions.
The third thing is that when we talk about machine learning, a lot of people talk about data scientists and machine learners and so forth. But when you start thinking about what it takes to move that into production at an AT&T, it's sometimes months, a year, two years: you have to figure out funding, you have to get prioritization, you have to put a team together. The developers don't think the same way as the data scientists; in fact, these are completely different organizations that maybe report to the same senior VP at some point. What we're trying to do with Acumos is really to streamline the process from a data scientist building a model to that model being fully in production, and do that in a matter of minutes as opposed to what it takes today, and have that as part of a marketplace where you can just download and run on any cloud, agnostic to the cloud. So this is a very big revolution; it's a community all trying to get together to change that. I actually believe that without doing that, it's going to be very hard for us to move from where we are today, deploying AI for some applications, to making AI mainstream, where practically a 12-year-old kid who can design, build, and deploy a website can do the same thing with AI. Well, Terry, it sounds like you, to some degree, are doing that, right? You're working with young kids and they're able to take this open source tooling and do real things with it. What are your experiences there? So yeah, I think the point you raised on open source is something we just assume. It's like fresh water: you pick it up and you start building stuff. So the first thing I want to say is, I wouldn't know how to take these technologies either to enterprise customers or into Syria, where I'm going to be on the 19th, amazing, you know, we're going to be getting coverage from CNN and BBC and a couple of others. That's amazing.
But I almost forget that it's all thanks to open source. I've contributed to TensorFlow as well; anyone can just go and contribute as a developer. It's all open source. It's free, you know? Free as in super free. That's great. Really, it's something I can just get on my hard disk and go implement; you can set up your servers, you can set up virtual machines, also free; you can spin it up on VirtualBox, you know, free, okay, I think Oracle has bought it, but it's still free, you can download it. So that is something I realized when I got a question: Angela sent a list of questions to all of us, and I said, hey, this is the revolution. It's a silent revolution. Guys like Richard Stallman, all these people, everybody's been working on it. We should be super thankful to all these millions of developers who make this happen. So I can just pick up my laptop and go to Tunis, where I'm going to be, and there are people pulling me all over the place. There are political parties saying, why don't you talk about AI, because we need to clear the air. I said, okay, fine, I'll make a business-y kind of presentation, as long as you don't ask about that stupid robot called Sophia. But everything else is free. You can take it and just implement it, guys. And the practical example is this. I said, okay, I have free stuff, so what do I do? And then, I can go into detail, but I'll keep it a little high level. For example, skin cancer detection is something where you have human bias and then you have technology. And that stuff is free as well. You can download it; I have all those data sets. In fact, it's something like 40 or 50 gigabytes; I can put it on my flash disk and just go anywhere.
And we can train on those data sets provided by ISIC, the International Skin Imaging Collaboration, here in the US. There are a whole lot of data sets of skin cancer images, identifying whether your moles are malignant or benign or nebulous; people have been teaching me different things, I'm not a surgeon. But okay, so we have free software, and we have a problem we can solve. And then what that led to: I gave a bunch of lectures on, you know, we are researching capsules, there are just like 10 people in the world researching capsules, building stuff. And capsule networks are like the next convolutional-neural-network thing, or the evolution of that. So we take that, and then I said, okay, we're going to take a step further: who wants to develop an app on Android? Free stuff again. And Core ML from Apple is also free; you can download it and build an iOS app as well. And people are building apps right now. I'm just back from Finland, I don't know if Martin's still in the room, being in Espoo, and there were really smart researchers, and we are building apps. So it's all possible. I didn't have to go to anybody to ask for money, or permission from a manager who would say, I need to talk to an account manager because this big corporate company needs to give you software, and when you have software you need licenses, you need this, you need that. It's just available. It's super amazing. Rachel, I want to ask you a question that I get a lot. You know, I agree, I think we're standing on the shoulders of giants, folks like Richard Stallman who came up with this concept of sharing and the critical open source licenses, and all the folks who followed, whether from Apache or other organizations. But one question we keep hearing is: all right, that's code. What about data?
Right? You know, is data the new proprietary asset? One thing, so I think there are some misconceptions, in that a lot of people think you need Google-sized data sets and millions of dollars' worth of GPU power to do deep learning. And that's not the case. So, getting to your question about what if you don't have the data: a lot of people are releasing pre-trained models. If someone has a large data set and trains a model, they release that model, and then you can do something called transfer learning, where you fine-tune it on a much smaller data set. So we had a student download, I think it was just 20 pictures of people playing baseball and 20 pictures of people playing cricket, and train a classifier to tell cricket from baseball using just 40 images. And that worked because they were using this pre-trained net and just fine-tuning the last layers. So there's really amazing potential there, I think, in terms of getting deep learning to work on smaller data sets. And it's part of why it's so important that people share their weights and their models openly through open source. Yeah, and we worked on a data sharing license agreement, both a copyleft one, which would be a share-back license, and a permissive license, where you just don't need to do that. We're trying to get ahead of this ability to share. Yeah, and there are issues, because data is an important part of how models are trained, but there's also so much around privacy, and true anonymization is almost impossible. There have been several high-profile cases of people thinking they've anonymized data and then it's been de-anonymized. So I think there is a bit of tension sometimes between wanting to protect privacy and, yeah, it is really important to be sharing models and your training process.
Yeah, Maz, I want to jump in here, because the telecommunications market is very, very competitive. But to her point, what are data sets that you would want to share? I don't know, maintenance data or whatever, things that just aren't competitive per se, but kind of follow that open source philosophy of, hey, this is just data, or models, that we want to share. Do you see patterns there? So I think the idea of having companies share data is not new; people have been talking about this for several decades. We've never cracked it; we've never created an open, shareable infrastructure and community where people can easily and securely share data that meets HIPAA requirements and privacy rules. I think we're starting to get almost there, with you guys and a lot of the policies you're putting in place, and that's what I'm hoping we're going to do with Acumos. Acumos cannot really succeed without a really clear understanding of the data behind it. From an AT&T perspective, there are obviously data we can't share, no doubt about that. We carry data and we track data about cell coverage; that kind of data we cannot share. But there are other data where, as part of Acumos, we want to consider whether we can open a community to look at it. Just like you mentioned with transfer learning: we're a very capital-intensive company, so you can imagine one of the things we do is send people out. We own millions of poles; we own thousands of cell sites, macro cells, small cells, et cetera. We send people literally up a pole to go check a cell and see if there's something wrong with it: if a wire is disconnected, if it's rusty, if there is dirt, et cetera. What we're trying to do now with transfer learning and AI is send a drone instead. That drone has visual capabilities; it detects what the object is that it's looking at. And we couldn't do that before; we don't have enough data to do that from scratch.
We've only collected a few hundred of those data points. The drone looks to see if the object has rust, and whether that's an issue we can do something about, whether we need to send somebody there or not; 95% of the time we don't need to send somebody. So there's a safety aspect to what we're using this for. We could never do that by collecting a significant amount of data; we can only do it with a small set of data, thanks to the open source community having models available. It's not just data. People think of data as raw data, and I think raw data is very important, and there are situations where we cannot share raw data for many reasons. But there are derivatives of data, and that's probably an area we need to talk more about. When you build these models, the models reflect weights and capabilities that resemble the data. You probably cannot reverse-engineer them; they preserve the privacy aspect. But those models can be shared, and those models, with some additional new data, can do something remarkable. And that's exactly the kind of thing you're talking about. Deepak, how about you? At LinkedIn, are there similar patterns where you're finding these commodity components? How do you make those decisions? How are you doing it? Yeah, I agree with everything. So one thing I would like to mention: if you're a developer, we provide developer APIs where you can get LinkedIn public profile information. For instance, you can use the APIs to get a person's job title, or more information about a company, like how many employees work there and how many people have changed jobs. These are available through the developer APIs. There is some other information that we provide to companies; that's not available for free, so you have to talk to us, and based on the use case we can still provide it to you. So that's already happening.
Now, one example I can give you right away: one of the challenges we face on our news feed is that there is a lot of non-professional content that gets posted, things like hate speech and porn. We don't build those models from scratch. There have been a lot of nice models built on ImageNet data using ResNet, and we use the same technique: we chop off the last two layers of the neural net and then customize it for our use cases. We have a lot of different teams working on different problems, so we have a notion of what we internally call a feature marketplace. Features are signals that go into your machine learning models, and we don't want every team to be building the same signals over and over again. So there is a framework where, if you create, say, a user interest vector, a signal that captures the user's interests, you can share it with every other team. It goes into the feature marketplace, and anyone can then grab it and start using it in their model. That's how we are scaling it. And we are in a world where we don't have only experts doing machine learning anymore; we have opened up machine learning to every single software engineer in the company. We have training programs where every software engineer can get trained in machine learning, and we have the feature marketplace. Think of these as cookie-cutter, prefabricated features, if you will. So if you're a developer: you take a course, you have prefabricated features available in the marketplace, and if you have a problem you want to solve in the product, you can take these prefabricated features, build a model, and deploy it in your product. You don't even have to talk to an expert in many cases. Okay.
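The feature marketplace Deepak describes can be pictured as a shared registry: a team publishes a feature-producing function once, and any other team's model assembles its inputs from the registry by name instead of rebuilding the signal. A minimal Python sketch of that design; the feature names and the toy user record here are invented for illustration, not LinkedIn's actual system:

```python
# Minimal sketch of a shared feature registry: producers register a
# feature once; any consuming team can then look it up by name.
FEATURE_MARKETPLACE = {}

def register_feature(name):
    """Decorator that publishes a feature function under a shared name."""
    def decorator(fn):
        FEATURE_MARKETPLACE[name] = fn
        return fn
    return decorator

@register_feature("user_interest_vector")
def user_interest_vector(user):
    # Stand-in for a real signal: interests as 0/1 indicators.
    topics = ["jobs", "news", "recruiting"]
    return [1.0 if t in user["interests"] else 0.0 for t in topics]

@register_feature("connection_count")
def connection_count(user):
    return [float(len(user["connections"]))]

def build_feature_row(user, feature_names):
    # A consuming team assembles its model input from shared features.
    row = []
    for name in feature_names:
        row.extend(FEATURE_MARKETPLACE[name](user))
    return row

user = {"interests": ["jobs", "news"], "connections": ["a", "b", "c"]}
print(build_feature_row(user, ["user_interest_vector", "connection_count"]))
# [1.0, 1.0, 0.0, 3.0]
```

The point of the pattern is that the feature's definition lives in one place: if the team owning `user_interest_vector` improves it, every downstream model picks up the change.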
I had a question this morning that I want to put to you. I'll start with you, Terry, then go to Rachel and Maz, and come back around. It's not on our list of questions, so I'm going to surprise you on this one. There was a list of questions? Yeah. There was a list of questions. I have a lot of stories, don't worry. It's the surprise one. So you have a lot of developers and people who care about coding out here. And Terry, you'll remember this: on the 25th anniversary of Linux, I made a toast to the kernel community, saying congratulations on 25 years of Linux, and I want to announce our next big project after Linux, which is in artificial intelligence, and it is actually a self-coding platform. So drink up; this is your last night of employment, all of you. But I get this question all the time. And there's a startup in Spain called source{d}, which has sort of cached all the code that's on GitHub and in a lot of other repositories. Just as experts, and I'll start with you, Terry: when are we going to get to self-aware coding? The ability either to use AI as you're coding to improve the quality, or to actually have self-coding systems? Crazily, I get this question all the time. You know, what you're probably never going to get, and I hope you're not going to ask about these general AI kind of things, because I kind of shut down. I get it. I totally get it. The evil robots taking over. Yeah, yeah, yeah, I get that. No, no, this is a specific question. So I think the most beautiful thing is that there's a whole lot of code out there.
The intuitions behind the way the code has been written are not something you can encapsulate in software, creating a kind of automated software-development library that says, these guys are great at convolutional neural networks and image classification, these guys are great at recurrent neural networks and advanced text analysis and making predictions. What you will not get is the intuition of what is coming next. For instance, just an example: I've studied astronomy as well, so I really follow a whole lot of things, and I code on the side to understand how we're learning about gravitational waves and all that. We have Webb going into space next year, sometime in June, and it keeps getting delayed, which is really sad. But anyway, we're going to get a huge, humongous amount of data coming out of the universe at us. And for that, the software we are so excited and self-congratulatory about is not going to help. To a certain extent it will definitely help, in best practices and unit testing and all those things; there's definitely huge scope for making those things work and automating them. Really, I think we should automate that. So from that perspective, I don't think we should worry too much about jobs going out the door, because there's a whole lot of beautiful things we need to do. Colonizing Mars, for instance, which I wanted to do as well as a kid, so when this gentleman from South Africa who came to the US and set up a bunch of companies says it, hey, okay, it makes sense. There's a whole lot of beautiful things we need to solve in the universe. So from that perspective, 60 to 70% of the things we do that we shouldn't need to do anymore can be grabbed literally from GitHub, from best practices, from Stack Overflow, whatever.
And you can put these together and provide guidance so people don't lose time. We spend a lot of time doing a whole lot of stuff we should not be doing. So from that perspective, I totally agree. But my cognition as a human being, a single human being, I can already envision a universe; I don't need 100 people to do that. I can do that already; that power has been given to me. The other thing is my intuition for solving a problem. It could be any problem, right now, a physical problem, an object detection problem; that doesn't exist at all in any software library. What does exist is best practices, which I definitely search for and seek out. So yeah, I guess 60 to 70% of that work can definitely be automated. We can call it self-coding, but basically it just grabs information, understands the logic you want to follow, and eventually throws it into your algorithms or your software library. And the other 30%, hold on to it. Your intelligence and your beautiful cognition and your power to grab the universe and make it your own are yours; they're not going away. All right, a quick last word from Rachel: what advice do you have for the coders out there, and how can they use ML and AI to improve their projects? Yeah, I would say, one, just to know that it's possible: if you know how to code, you can learn to use deep learning, and domain expertise is still incredibly valuable. Something I hear a lot from companies is, oh, it's so hard to hire, like, a Stanford PhD, and that's just not what you need at all. The people who are already working with and for you are the right people; they understand your problem and your domain.
And so again, and I know a lot of us here are coders, having specialized knowledge of a domain is still super crucial. This came up recently: MIT released a deep learning course, and I don't know anything about the course itself, but the image they led with was something like, see why the algorithm predicted pneumothorax, over a picture of the lungs. A radiologist who is also a machine learning specialist responded: that doesn't make sense, that model must have been overfitted. So that kind of domain expertise is going to remain very valuable for a long time, and one of our goals at Fast AI is to take domain experts and teach them deep learning, as opposed to trying to teach deep learning specialists your particular domain. Yeah. Maz and Deepak, last word on that. Deepak? Yes, so I think in order to do machine learning or AI you need three things. One, you need to know what your objectives are, what you're trying to build the algorithm to do. Two, you need data, labeled data. And three, you need algorithms that can learn from that data. If you have a simple objective, like identifying a cat in an image, that's easy. That's something you can put together very quickly, with a few lines of code, because all the other materials are available, right? As long as you have labeled data. But say I have to solve a more complex objective, like: I want more users to come to LinkedIn every day. That's a very complex objective. Users can come to LinkedIn because they like the news feed, because they want to connect with people, because they want to search for someone. How do you formulate a series of machine learning problems that can actually solve that objective? That is very difficult. You cannot encapsulate that in software, at least today. So that's where you have to go and understand the domain very well and do some data analysis.
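[Editor's note: Deepak's three ingredients, an objective, labeled data, and an algorithm that learns from the data, can be sketched in a few lines, as he says. This is an illustrative sketch only, not anything shown on the panel: the objective here is a toy binary classification, and the labeled data is synthetic, standing in for real image features.]

```python
import numpy as np

# Ingredient 2, labeled data: 200 two-dimensional feature vectors;
# label 1 ("cat") when the two features sum to a positive number, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Ingredient 3, an algorithm: logistic regression trained by plain
# gradient descent on the log-loss objective (ingredient 1).
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient step on the weights
    b -= 0.5 * (p - y).mean()               # gradient step on the bias

# With all three ingredients in place, the model separates the labels well.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

With a clean objective and clean labels, a few lines really do suffice; the hard part, as Deepak goes on to say, is formulating an objective like "more users come back every day" in the first place.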
And once you're able to formulate those objectives, then again the coding is easy, right? So I would say that in a few years everything else will get commoditized, pretty much. Not completely, but pretty much. And we'll have to spend a lot more time understanding what we're really trying to solve. If you want to do job recommendations without caring about diversity, well, if your algorithm does not take that into account, it's not going to optimize for it; it's just going to optimize for the number of applications. But if you tell the algorithm, no, I care about that as well, and put it into the objective, then it's going to do something about it. That's probably what's going to happen over the next ten years as things get commoditized: we'll be able to solve more complex problems than we can today. Maz, last word? Coders, it doesn't matter what your background is: absolutely get involved, do some training, and learn this field. In my organization, every coder, every programmer, whether they have a PhD or a master's or whatever, 100 percent compliance this year: they all have to learn it. They all have to be able to build machine learning and AI capabilities. And at AT&T in general, six months ago we put out a program for everybody in the company, all 300,000 employees, to go through ML and AI training, even if you're in marketing or legal or whatever. This is not just about coders; it's about everybody. The people on the legal team need to understand it, the marketing team needs to understand it, the finance people; everyone really needs to get on the same page. That's how we move away from the hype of "for every problem, AI is the solution" toward the key problems we actually need to solve. So for AT&T, and for the coders here: look where the problems are, where the challenges are. I have always told my team three things. One, look where we're spending a lot of money.
Two, look where there are revenue opportunities we could capture with these technologies. And three, safety: there are places where we can apply these technologies, and I mentioned earlier the example of poles and 5G for safety. If you can start with a real problem, just as Deepak said, a real problem that really needs a solution, that goes a long way. Hear, hear. If you're not training everybody in the organization, you're going to be caught up in a never-ending hype cycle, and that's a pretty interesting, practical way that AT&T is handling the problem. So thank you, everyone. I really appreciate you coming here today. Thank you all for joining us. Thank you. Thank you.
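[Editor's note: as a closing illustration of Deepak's point that an algorithm only optimizes what you put in its objective, here is a hypothetical re-ranking sketch. The job list, scores, categories, and diversity weight are all invented for illustration; nothing here is LinkedIn's actual system.]

```python
# Candidate job recommendations: (job id, relevance score, category).
jobs = [
    ("j1", 0.95, "engineering"),
    ("j2", 0.94, "engineering"),
    ("j3", 0.93, "engineering"),
    ("j4", 0.80, "design"),
    ("j5", 0.75, "marketing"),
]

def rerank(jobs, k=3, diversity_weight=0.0):
    """Greedily pick k jobs; the objective is relevance plus an optional
    bonus for categories not yet shown in the list."""
    chosen, seen = [], set()
    pool = list(jobs)
    for _ in range(k):
        best = max(
            pool,
            key=lambda j: j[1] + (diversity_weight if j[2] not in seen else 0.0),
        )
        chosen.append(best[0])
        seen.add(best[2])
        pool.remove(best)
    return chosen

print(rerank(jobs))                        # objective: relevance only
print(rerank(jobs, diversity_weight=0.2))  # objective: relevance plus diversity
```

With relevance alone, the top three are all one category; add the diversity term to the objective and the ranking changes, exactly the behavior Deepak describes: the algorithm does something about diversity only once it is part of the objective.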