Good morning everybody — good morning, or afternoon already. How are you doing? Good. How are we doing on energy? Only that? How are we doing today? Now that's better. That's way better.

My name is Pablo Carlier and I take care of partner engineering for Iberia and Italy with Google Cloud. I'm gonna ask you to come with me on a journey from zero to automated machine learning, using our experience and our findings on this journey we started so many years ago.

I'd like to start with what people usually think when they hear about AI. Most likely some of you are experiencing these things in your pockets right now: things that can tag the pictures of your daughters or your kids, things that can suggest the smart reply you should send for an email or a chat. And you're probably thinking about image models, or CNNs, or recurrent networks and things like that. But many people afterwards just go back to their companies and start thinking: how do I start with AI? What's my starting point?

So what I'd like to do is share what we did when we had the same question, because we had this same problem, and our perception is probably not going to be very different from what you're gonna see. For us, machine learning is an algorithmic approach to making predictive decisions from data. That part is important: extracting insights and, hey, taking decisions. And for that you first need access to the resources you're gonna need. First, we're gonna need access to data — that's for sure, and you probably know that already. Then we're gonna need access to algorithms to get the most out of that data. Check: we also have algorithms. And we're gonna have to know what kind of predictions we want to make.

But most likely you're thinking about how to use all this already, and that's the key part, because in the end we're gonna use these to make predictions, and those predictions need to be applied to decisions. And we're not talking here about a couple of decisions a week, or a couple of decisions a day — we're talking about millions of decisions. For example, machine learning is a great technology for things like predicting whether a shopping cart is gonna be abandoned or not on your e-commerce website. It's probably not the best technology for choosing where to plant a physical store, because you do that what, once a year? So first you need to understand the use case and what decisions you're gonna be making, and if you're gonna make a lot of decisions, that's where you start to think about machine learning.

So what did we learn at Google on this journey? As you can imagine, we've had this challenge. We work with billions of users every day — actually we have eight products, eight solutions, that each have over one billion unique users every day. That's scale right there. And for that we first had to understand how to use machine learning to address things that we were previously addressing through rules.
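To make that shopping-cart example concrete: a decision like that boils down to a binary classifier over behavioral features. Here's a minimal sketch — the feature names and data are invented for illustration, scikit-learn is assumed, and this is not anything from the talk's actual systems:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical behavioral features per session:
# [minutes_on_site, items_in_cart, cart_value_eur, pages_viewed]
rng = np.random.default_rng(0)
X = rng.random((1000, 4)) * [30, 10, 200, 50]
y = rng.integers(0, 2, size=1000)  # 1 = cart abandoned (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each call is one of those "millions of decisions" —
# e.g., trigger a discount email when abandonment risk is high.
risk = model.predict_proba(X_test[:1])[0, 1]
print(f"abandonment risk: {risk:.2f}")
```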
So: look for rules in your business. When we took a look internally — and almost everything we do at Google is now based on machine learning — we took a look at the models that we were using, and we got a bit surprised. We were expecting to see LSTMs, a lot of popularity for deep neural networks, but in the end most of the networks that we have are actually multi-layer perceptrons: very, very simple predictions based on structured data. The vast majority of what we do is based on structured data. It's not very fancy, it's not very complicated, and most likely you have the opportunity to do something very similar.

For example, the challenges that we had at this scale: you can imagine it would be very hard to hand-code what to reply to someone who's searching for "Giants" on Google. Depending on whether they're coming from San Francisco or from New York, I'm gonna have to answer with the baseball team or the football team, and if the question is coming from somewhere else, they probably expect a response about tall people. In the end it's very hard to hard-code this, and it explodes very quickly. And these kinds of problems — is someone having a beer already? That's nice — these kinds of problems are the ones we like at Google. We like problems that explode. Things like these are the ones that motivate us.

This is another example of a great application of machine learning: when you need to understand how to best recommend content to someone on your streaming video platform. You may be tempted to take a look at the viewership data, then start with a model, start with the assumptions you make, try to extract which parameters are gonna be relevant — take some assumptions, extract data like age, gender, location, neighborhood income, past preferences, put that in your cocktail shaker, and have an outcome. But once you start collecting enough data, you can turn it around. You can actually start from the data, and if you start with the data, you don't leave anything behind: you can extract new correlations and discover patterns that were invisible to you before, when you were making those assumptions. That way you can gather all the viewership data of everybody, and condense it, through a model, into the actual predictions you want to make for that user. That changes the perception of your users completely. So "start with the data" is a great strategy that we learned inside of Google.

Another one is that this gives us the opportunity to reach customers far beyond our regular customer — to reach that long tail of users that still constitutes a lot of the market, but that you may not be reaching right now. And to do that you need to create new experiences. You know Andrew Ng — he can't be suspected of not liking models. He was one of the founders of Google Brain, and the founder of Coursera afterwards, and he's, in my perception, the best ever at communicating how machine learning works and explaining these algorithms and the math behind them. But even he says that it's not gonna be the one with the best algorithms, but the one with the most data, who's gonna win.
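A note on those multi-layer perceptrons on structured data from a moment ago: in code, that's nothing more than a couple of dense layers. A minimal Keras sketch — the column count and layer sizes are arbitrary, and this is not one of Google's actual models:

```python
import tensorflow as tf

# A plain multi-layer perceptron over, say, 20 structured-data columns.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                     # 20 tabular features
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # e.g. watch / don't watch
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()  # nothing fancy: dense layers over tabular features
```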
And there's the opportunity right there: to start collecting data from other places, from places that were not necessarily the ones you were expecting to collect from. For example, no one would assume initially that it's relevant to gather tornado information to recommend movies on Netflix, and somehow it might make sense for you to do that. So don't cut yourself off from opportunities to collect different data. Try to accommodate diverse factors, because those are gonna enhance the experience of your users and bring new opportunities for you — and now you have the chance to do that. We found that only last year, eight billion devices were connected to the internet. That's an opportunity to gather more and more data from the experiences of our users, to have a different impact on them, to actually personalize their journey. And this is transforming everything — IoT is transforming everything — and we need to build solutions that can accommodate that future. It's changing every single industry dramatically, and we see it because we are doing it with customers all over the world right now.

So the second finding is: try to reach the long tail through personalization. But if I'm talking about getting more and more data, the challenge is that eventually you have to plan for that data. So our third finding was: plan for the data you will have, not for the data you have right now. When you do that, you start gathering more data and building the in-house practices that help you do it. But what happens when you reach a certain amount of data — the amount of data you're gonna need to do these things — what happens when you reach the petabyte and exabyte level? What happens is that, again, the problem explodes, and you end up investing so much effort and so many resources in building the infrastructure so your data scientists and data engineers can actually work, that you spend very little time extracting value from the data and a lot of time provisioning infrastructure, deploying it, operating it — and a lot of money as well, which is not very efficient.

So another recommendation was to change the way we were doing it internally, and that's why we have been investing for 20 years.
We've been investing in a lot of technology around data problems, and now, from Google Cloud, what we do is package that into managed services that provide an end-to-end portfolio: from the ingestion of data, through its transformation, preparation, cleanup and engineering, then feeding it into analytics or into real-time platforms where it can be consumed by your applications, and then into those machine learning cycles where we augment that data. We've built a comprehensive set of solutions that are all serverless, so you minimize the time you need to invest in provisioning, and that's a key differentiator as well. You need to invest in the right platform. What we think we've done is make it very easy and very smooth for you to spend your time on what matters to you — your problem — and delegate everything else to someone else, which incidentally could be us. So you can free up those resources, and the most valuable resource in the world, which is human imagination, and put it to the task of solving your business problem, which is what your company expects from you anyway.

To do that, Spotify has brought all the music of the world into Google Cloud. Twitter is moving 300 petabytes of data to bring their whole Hadoop practice into Google Cloud. These companies are leading in their markets, and they're seeing the value of abstracting themselves from all the tinkering in the lower layers and bringing value to their business in the upper layers, which is where data science is actually gonna make a difference and where you're gonna build your differentiation in the market.

So the fourth finding was: use a platform that lets you focus on your models, that offers great infrastructure ready to use without configuration, and great pre-built models. And for that we need to accommodate many different runtimes and many different experiences, because there are use cases where people want to develop on their laptops and experiment with Jupyter notebooks, to accelerate the pace at which they make hypotheses and predictions. Other people want to deploy at scale in the cloud, training and doing inference in the cloud. Other people may want to migrate between environments, and that's why something like Kubeflow exists.

By the way, who's heard of Kubeflow? Who knows about Kubeflow? Can you raise your hands? All right, so who's heard about Kubernetes here? Yeah, you know Kubernetes, right? So the problem we were seeing is not very different from the problem we saw when the DevOps movement and the agile development movement started: developers need portability; they need modularity to compose different blocks. But everybody was building their own snowflake, their own special ivory tower of a combination of solutions that only they knew how to integrate, that was very costly to maintain, almost impossible to deploy in production, and in the end every change between environments — from your laptop to production at scale — meant downtime. Enter Kubernetes, and we transformed the way application development was done; application development was accelerated. We are doing the same thing with machine learning right now.
Thanks to Kubeflow, it's the same portability, the same composability, but now for your machine learning pipelines. That allows you to work on your laptop one way, and then deploy across any environment, across any cloud, at the edge, or wherever you want, using the same set of abstractions. That's what Kubeflow offers. But of course there are people who just want to do it in the cloud, do it at scale, and use our technology to do that. We need to accommodate that, and we need to provide that flexibility across those different runtimes.

And when we need to scale across all those runtimes, we find the challenge of doing it with the right capability to accommodate the storage of the data we've been gathering, and with the right compute power. For that you need to divide and conquer, as the professors in university used to tell us when we started programming. Divide and conquer, which means: distribute the load, and then use bleeding-edge compute. That's what we're doing here: more distributed solutions, using better hardware.

To illustrate what we mean by more powerful hardware: you know this is all about multiplying tensors and multiplying matrices, right? Almost any one of us could do it with paper and pencil; it would just take us a long time. But computers don't do it that way — computers do it in what we call the ALU, the arithmetic logic unit, in a processor. How many ALUs do we have in an Intel Xeon core, for example? Roughly eight. We use GPUs because, instead of eight, we have around two thousand ALUs in a typical GPU. Well, we built the Tensor Processing Units because we have 32,000 ALUs in one TPU. So we're going from eight, to two thousand, to 32,000 ALUs in one chip, in one ASIC. That's cutting-edge power, and that's the kind of power we're gonna need, because we're moving into higher levels of abstraction and there are more and more use cases. We're seeing it across every industry: image recognition for cars; predicting behavior; better customer care through chatbots; it's changing the way we filter content when people upload images to the internet; it's chatbots built almost for free, without a single interaction with the machine learning layers, only modeling the conversation you want to have, to improve your retail experience. All these use cases are going up the stack and demanding more and more from the bottom layers, and what they demand needs to be accommodated through an end-to-end portfolio.

So if we go back to what we were saying about power, and we take a look at the kind of storage we're gonna need for this: storage is gonna grow exponentially when you try to decrease your error rate linearly. And that again is a problem, because for every incremental point you want to drop your error rate, the curve for the data you're gonna need explodes. So you'd better be ready for that, and you'd better have the right big data practice in your organization to accommodate it. And the other change is the compute. Why are you gonna need those 32,000 ALUs anyway? Because the compute you need is also growing exponentially over time.
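A side note on what all those ALUs are actually busy with: the core operation really is just matrix multiplication. A toy illustration in NumPy — the shapes here are arbitrary:

```python
import numpy as np

# One dense neural-network layer is essentially a matrix multiply plus a bias.
x = np.random.rand(1, 512)    # one input example with 512 features
W = np.random.rand(512, 256)  # layer weights
b = np.random.rand(256)       # layer bias

activations = np.maximum(x @ W + b, 0.0)  # ReLU(xW + b)

# That's 512 * 256 multiply-adds for a single tiny layer. An ALU does one
# multiply-add at a time, which is why chips with thousands of ALUs
# (GPUs, TPUs) matter so much for this workload.
print(activations.shape)  # (1, 256)
```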
To illustrate that compute growth, take a look at this curve, which covers only the last six years of cutting-edge algorithms. The amount of power you need to run cutting-edge machine learning algorithms has gone up not tenfold, not a hundred times, not a thousand times — ten million times in six years. From the cutting-edge algorithms and models of six years ago to the cutting edge of today, we need ten million times more compute. It's impossible to follow that trend if you're investing in traditional ways of doing things. That's why all these use cases, all these businesses that are demanding this, expect a platform that abstracts this complexity from them, that helps them focus on their problem, that industrializes a solution for it. And that's the portfolio we've built.

We've gone from totally personalized experiences — which you can build, shape and create in our Machine Learning Engine, for the models and the parts of the problem you need to customize for yourself — all the way up to the packaged capabilities we developed internally, because as you can imagine we also had this problem: image recognition, video analysis, translation, natural language, speech-to-text and text-to-speech. You're seeing all of that in the services you use today from Google. So what we've done is package these capabilities as APIs that are ready to use, and those APIs are now exposed so every developer can use them, leverage our infrastructure, leverage our network, leverage our technology, and only worry about writing against the right API.

And the first rule of the machine learning club — it's just like Fight Club — the first rule is that you don't build a model if you don't need to. The second rule of machine learning club is that you don't build a model unless you need to. So basically: don't build a model unless you absolutely have to. Try to split up your problem, address the pieces of it with the prepackaged APIs you can find, and then focus your custom model only on your specifics, only where you need to. That's what we do with the building blocks we've created; that's what we do with the integration of the collaboration tools and the community that's feeding datasets, feeding models and fueling innovation through things like Kaggle, for example; and it's what we're using to build vertical solutions that are ready to use for the market. Because we need to accommodate needs ranging from the folks who are gonna go deep, with deep ML expertise and the ability to do it, to those who have never thought about using AI or machine learning for anything, but have a business need and want to build on top of this platform with minimal machine learning expertise. We need to open this up to the world.

You know, Sundar Pichai, our CEO, is on the record saying that he believes machine learning is going to be as transformative as electricity or fire. I believe that's a bit ambitious, but don't tell him. It's not that I don't trust it — I just think it's a bit ambitious.
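About those prepackaged APIs: calling one really is just a few lines. A sketch of label detection with the Cloud Vision API — this assumes a recent google-cloud-vision client and credentials already configured, and the image path is made up:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # uses your default GCP credentials

with open('shoe.jpg', 'rb') as f:       # any local image
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Generic labels like "Footwear" or "Sneakers" — good at shoes in
    # general, but not at *your* specific catalog; that's where AutoML
    # comes in below.
    print(label.description, round(label.score, 3))
```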
But if we want that to happen, we need to democratize it. We need to open it to everybody. That's why we've not only invested in machine learning technology, but also in building that serverless platform underneath — for the machine learning to sit on top of a big data practice that's serverless and lets you dedicate your time to bringing value on top. Because who, today, actually has the expertise to build complex ML models? It's in the thousands of people in the world right now, and that's not very democratic. That's not everybody. There are maybe two million data scientists around the world — that's what we estimate. But there are ten times that number of people who are able to use an API and build something of value to their business with it. Those are the people we need to reach. That's how you win: when you expose your capabilities to everybody, and not only to the Gandalfs and wizards of the world who can extract the magic riding on unicorns and all that stuff. I couldn't do it myself. So we need to democratize this technology for it to have the impact we expect it may have.

That's why we've built something like AutoML. Who's heard of AutoML before? All right, you're gonna be bored for this part then. AutoML is basically a technology that builds a model for you, so you don't have to think about all those things: you don't have to think about pre-processing your data, you don't have to think about how the model is gonna look, you forget about the process of hyperparameter tuning, you forget about how properly you've evaluated your labels, you don't have to build your own confusion matrix, and of course you don't have to deploy it in production or keep it updated, because we do that for you. Basically, what AutoML does is put machine learning to the task of creating machine learning models. And this is opening up the applications of those APIs I shared before for use cases that are not generic, but personalized and tailored to the data of our users. Because we are actually very good at detecting shoes, but there's no way we can detect that specific shoe from your catalog if you're a retailer. We are good at detecting hardware pieces, but there's no way we're trained to detect your hardware pieces. But we have the technology for you to do it yourself. So that's what we're gonna see right now in a demo. Let's hope everything's all right.

Yep, so what you're seeing here is my Cloud Console; I'm logged in. I am going to honor the Simpsons today. Who likes the Simpsons here? I like the Simpsons. Who likes Apu here? I like Apu. Come on, what's wrong with Apu now? There was a problem with Apu in the past week, so we're gonna try to honor Apu here. And what we tried to do — as you can see, there's a very simple UI, no coding required — is build an image recognition model that detects characters from the Simpsons. In order to do that, I first need tagged pictures from the Simpsons, right? Who wouldn't want an image recognition model that recognizes characters from the Simpsons? Everybody likes that. So, assuming someone would have done it already, I went to Kaggle. You know Kaggle, right? If you don't know Kaggle, you should sign up right now. On Kaggle there are many, many datasets, and there is indeed a dataset of 20,000 images of characters from the Simpsons, which is crazy. So I basically downloaded it. And what did I do? I created one dataset. How? I just took that zip file and made sure it had the right hierarchy, grouping the characters in different folders: one folder for Marge, one folder for Homer, one folder for Grandpa Simpson, one folder for Mr. Burns, and so on, with all the images inside. Actually, I didn't have to do it, because the person who packaged it on Kaggle had done it for me.
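For reference, that folder-per-label layout is all the structure the import needs. If you ever had to assemble it yourself, a throwaway sketch with the Python standard library would do it — the paths here are hypothetical:

```python
import zipfile
from pathlib import Path

# Expected layout: one folder per label, images inside, e.g.
#   simpsons/homer/img001.jpg
#   simpsons/marge/img001.jpg
#   simpsons/mr_burns/img001.jpg
root = Path('simpsons')

with zipfile.ZipFile('simpsons_dataset.zip', 'w') as zf:
    for img in root.rglob('*.jpg'):
        # The parent folder name acts as the label,
        # so no manual tagging is needed after upload.
        zf.write(img, img.relative_to(root.parent))
```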
So I just got that zip and uploaded it. How do I do that? You basically create a new dataset and upload it, and if it follows the right hierarchy, there's no labeling you need to do. That's what I did. It took me, what, ten minutes maybe, for it to upload the dataset and then process the labels and so on. Once I did that, I had a dataset ready to be trained. So I went into the model — you can see it was last updated some days ago — and I can see the images I have: indeed, 20,000 images, with lots of characters. For some of them I only have a few pictures, so it's asking me for at least 10 pictures per category, and it works pretty well with 10 pictures per category. This is another breakthrough of AutoML: you can actually get great results with very small amounts of data, though it's also going to depend on your dataset.

So in the end, I just came here, I made sure everything was properly labeled — Troy McClure, there you go — and then I clicked the train button, which is basically coming here and clicking this button. What's gonna happen? AutoML is gonna take care of it: it lets me go for a coffee and sends me an email when it's done. AutoML is gonna explore all the different possible strategies, different combinations of hyperparameters, to converge on the optimal way of training on this dataset. It took something like 35 minutes — less than one compute hour, as you can see — to create it. And it created it with pretty nice precision; you can see the area under the curve, the recall, and so on. You get a full evaluation after the process finishes, with all the different parameters you may be interested in, and who is being confused with whom — one character is getting confused with Flanders, weirdly; I don't know why. But anyway, after that happens, you take a look at your training, and if you're comfortable with it, you're done. That's it. You don't need to do anything else: the model is ready to be used, and we serve it.

So how do we actually put it to use? You can go to Predict, where you have a very basic UI for testing it, but you can also just copy, paste and integrate it into your application, through either the REST API or the client libraries, and just call the API.
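Calling the trained model from code looks roughly like this — a sketch assuming the google-cloud-automl Python client of that era (v1beta1); the project, region and model IDs are placeholders:

```python
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()
# Placeholders: your own project, region and AutoML model ID go here.
model_name = client.model_path('my-project', 'us-central1', 'ICN1234567890')

with open('apu.jpg', 'rb') as f:
    payload = {'image': {'image_bytes': f.read()}}

response = client.predict(model_name, payload)
for result in response.payload:
    # e.g. "apu_nahasapeemapetilon 0.993"
    print(result.display_name, result.classification.score)
```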
So let's call the API and see what happens. I'm gonna choose an image. My model is idle — it's waiting there, expecting to be used. If it's been a while, I mean if it's been some hours since it was last used, it'll probably have gone to sleep, so it may need a little warm-up time. If you work with serverless functions, cloud functions and things like that, you'll recognize this. I was testing it some minutes ago, which is why it responded like this the first time; otherwise you may expect the first request to be a little slower, but all the rest are gonna work like a charm. And indeed, there we have it: that is Apu Nahasapeemapetilon, with some 99.3% certainty. So I think we should give a hand to Apu, right? I think Apu deserves it. We like Apu. Thank you, my friend. We love you. I don't know what's wrong with him.

So what did we see there? I actually had to click four or five times — four or five times from dataset to trained model in production. And of course this is not only thanks to magic: it's because we've been investing in higher-quality models, and in ways of serving those models that scale and that actually need less data. So AutoML is a great starting point for those who don't have the expertise. And it's not only for vision: it's for natural language, it's for translation, and it's coming for more and more solutions.

Now, this is all well and good for the parts of your problem that you can commoditize. But that specific thing, that special sauce, you're gonna have to build for your own problem. How do you make sure you build it while staying open, staying portable, and avoiding lock-in? Because that's a great challenge we have ahead of us: to innovate, differentiate and build something that only you can build, but stay open along the way. And that's something we actually had to do ourselves, because we recognized that AI is a team sport. This is all about leveraging everybody in the organization, or across organizations, and this is the magic of open source: having ten times the impact. We love having ten times the impact; we love growing ten times, and this is what we needed to do. So how do you have more impact across the world with this technology? There are two ways: first, you allow for reusability of content, and second, you allow for collaboration. This is, again, not unlike what happened with the DevOps movement and everything that sparked Kubernetes. This is about leveraging innovation inside your organization, across teams, and across organizations, and allowing you to build on others' work instead of reinventing the wheel all the time.

How can we do that? That's why we're introducing two new solutions this week, two breakthroughs that are going to take this vision to the next level. The first one is Kubeflow Pipelines. Those of you who are familiar with Kubeflow, I invite you to go take a look at it. Those of you who hadn't heard of Kubeflow before: first go and take a look at David Aronchick presenting Kubeflow. Kubeflow is a project that started within Google, but it's open source, and the rest of the community is going crazy behind it. Kubeflow Pipelines is a workbench — a workspace for you to compose, integrate, tie together, and then share, productize and deploy your pipelines, in a way that others can reuse them and that lets you move them from environment to environment simply. So you're not building a complete work of art every time: you package a pipeline when it's ready, and others can plug into it and reuse it. It's got a very, very nice UI and a very simple way of working.
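To give a feel for it: a pipeline in the Kubeflow Pipelines SDK is just annotated Python. A toy sketch — the container images, paths and step names are invented, and this assumes the ContainerOp-style kfp package of that era (since superseded):

```python
import kfp.dsl as dsl
import kfp.compiler as compiler

@dsl.pipeline(name='train-and-deploy', description='Toy two-step pipeline')
def train_and_deploy(data_path: str = 'gs://my-bucket/simpsons'):
    # Each step is a container; outputs of one step feed the next.
    train = dsl.ContainerOp(
        name='train',
        image='gcr.io/my-project/trainer:latest',   # hypothetical image
        arguments=['--data', data_path],
        file_outputs={'model': '/output/model_uri.txt'},
    )
    dsl.ContainerOp(
        name='deploy',
        image='gcr.io/my-project/deployer:latest',  # hypothetical image
        arguments=['--model', train.outputs['model']],
    )

# Compile to an archive you can upload, share, and rerun anywhere Kubeflow runs.
compiler.Compiler().compile(train_and_deploy, 'train_and_deploy.tar.gz')
```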
And what it does is basically prevent you from reinventing the wheel in every single team in your company. We want you to work on one problem once, and then reuse the solution for other problems. This is going to tremendously accelerate how efficient you are with this: you build one recommendation engine across your company, and then reuse the same pipeline, packaged in a nice way, for different problems. Then regular developers, without the specific expertise, are gonna be able to use it, apply it to problems you never thought of, and utilize solutions and technology that was forbidden to them before — that was impossible for them to leverage. Because when you create something like this, that red part was limited to the data scientists and ML engineers who actually knew how to do it; but the blue parts, many of us are able to do. Once you package the pipeline you've created — once you package that recommendation engine, with its moving parts and pieces together, and bundle it into an artifact that others can reuse — the impact you can have grows exponentially.

And then you can share it with the world, and this is the second breakthrough we're announcing this week: AI Hub. The same way that Git, repos, and sharing code on GitHub have changed the world, we now want to do that with artificial intelligence, and that's what AI Hub was built for. AI Hub is basically your one-stop AI catalog. It has a platform for you to contribute, and we are going to be contributing — in fact, we're starting today — sharing pre-built recommendation engines, ready for other companies to use, not only from Google but from Google Research, from DeepMind, and from more groups inside Alphabet. Our partners, the companies out there, are also going to contribute to AI Hub. But you can also contribute your own solution so others can build on top of it. It's like a marketplace of ideas for AI solutions that you have built using Kubeflow Pipelines, or notebooks, or other ways of working with ML that you may be familiar with. It also has enterprise-grade, role-based access control and privacy mechanisms, so you can use it as your own private repo for your own company — an internal repository of assets that's closed to the world and that only you can reuse, if you want to work that way, to share information confidentially and securely inside your organization. And then, with one click, you can deploy to production on GCP or across hybrid environments, using the magic of open source. This is how you stay open.

So this is how it looks, basically. This is a fragment of the introduction presentation from the other day, but this is the experience you're gonna get: you're basically gonna browse a store, and you're gonna look for pipelines.
First, you can check whether you have something to contribute: you just click Add, give it a name, upload the tarball, upload your documentation, and then it will be there for someone else to use. You can make it public or not, and that's it — it's that simple to make it available in a standard format that everybody can now leverage. And this is something that's gonna be open and remain open, so everyone can use it. After that, you can explore what others have put there: browse what pipelines are available to you, or what notebooks people have found interesting and published, so you can reuse them. I've seen so many people building recommendation engines in the past year, and they're all doing the same thing. It's wasting CPU cycles; it doesn't make a lot of sense. Just go there, grab one, and iterate on top of it. That's the magic of offering a one-stop AI catalog for enterprises: they can work with the type of content they're looking for — pipelines, notebooks, TensorFlow modules, services for deep learning, whatever — using the framework they want (it's not limited to TensorFlow, the same way Kubeflow is not limited to TensorFlow), for different parts of the organization and different development paths, across the entire AI workflow.

So this is how you combine the power of sharing with the power of modularity and composability, to have the greatest impact, and that's how you spin this flywheel, this virtuous circle of innovation in artificial intelligence in your company. And we know that because that's the way it's worked inside of Google. Basically, you start with search and discovery. What was the rule of machine learning club? You don't build a model unless you need to, right? Well, we've packaged some solutions, and you saw them working. But for those that haven't been packaged, most likely someone has already worked on that before — the same way I didn't have to label 20,000 pictures of the Simpsons, most likely someone has already built that pipeline for you. Just go search; discover whether your problem is already solved. Then deploy it, fine-tune it, customize it to your needs, deploy it again in production at scale — whatever you want: in my cloud, in your cloud, everywhere, at the edge, using tensor processing units the size of a fingernail, so you can have hardware-accelerated inference inside a light bulb or on a moving train. And then contribute back: publish your improvements and your fine-tuning so others can build on top of them. This is how you stay open. This is how you make a difference.

So let's recap, and then we'll open it up for questions for a few moments. What were our findings? Going back to the start: we found that machine learning can be used for many things for which today we are building rules. If you find yourself writing a lot of rules to make decisions, it's probably a good idea to explore machine learning for that use case. But it has to be a lot of decisions.
Don't do it for one decision a month. The second finding is: start with your data. Personalize the experience your users are gonna get, and start from the data to do that. Gather as much data as you can, from as many different sources, even sources you may not have thought of before. And when you see that data start to explode, make sure you have the in-house practice to handle it: prepare for the data you're gonna need in years, not for the data you have right now, because that data is gonna come. And then bet on a platform that abstracts the complexity and frees up resources, so you can focus on your specific part of the problem and delegate everything else to someone else. That's the way you innovate, that's the way you stay open, that's the way you scale, and most of all, that's the way we make it easy together. So thank you very much, everybody.

Now I'll be more than happy to take a couple of questions. Afterwards I'm gonna be sitting in that armchair just around the corner, so we can also have a discussion later on — for a quarter of an hour, I think, I'll be there. But I'll be more than happy to address one question; we have to choose one question. All right — can I ask the question? Okay — no, no, we have time for one question, they tell me. "Can you give us more details about AutoML for natural language processing?"

Sure. I think I actually have a demo on that — and this wasn't staged, by the way. Hold on. What Natural Language does is a bit different in the application, but very similar in the process. What we're going to do here is take samples of text. The Natural Language API by itself is already very smart: if you go to cloud.google.com slash — I don't know, I think it's ML or AI or something like that — you can test it yourself, and you can see that we can do syntax analysis; we can analyze sentiment, so we know from the text whether the sentiment is positive or negative, and the magnitude of that sentiment; we can tell what intent you have in a sentence, what you're talking about — is it a question, is it a complaint; and we know what entities are in your sentence as well: are you talking about objects, is it a verb, is it a subject. So it's very smart by itself, but it's limited to generic text.

For example, the news industry has a problem categorizing news. I like football, for example, and when I go to marca.com and see how they write a report on a game, they may not mention the word "soccer" or "football" at all, maybe not even the names of the teams — they'd say "los madridistas" or "los merengues" or something like that. They may not even say the name, but you still need to categorize that. So with AutoML Natural Language, what you can do is fine-tune that engine so it starts to recognize patterns that are specific to your needs — so it learns how to categorize news into sports, politics, international, whatever.
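For reference, calling that generic Natural Language API looks roughly like this — a sketch assuming a recent google-cloud-language client, with an invented example sentence:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The delivery was late again and nobody answered my emails.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

# Sentiment: a score in [-1, 1] (negative to positive) plus a magnitude.
sentiment = client.analyze_sentiment(
    request={"document": document}
).document_sentiment
print(sentiment.score, sentiment.magnitude)

# Entities: the objects and subjects the sentence talks about.
for entity in client.analyze_entities(request={"document": document}).entities:
    print(entity.name, language_v1.Entity.Type(entity.type_).name)
```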
What I'm doing here is a model I built to predict categories for questions asked in the parliament of India. There are many people in India, and many questions come to the parliament; they need to be categorized and routed to the department that makes sense, and when a question comes in, it's not necessarily evident which ministry it should go to. Of course, the question is asked to one minister in particular, but when you're indexing the questions afterwards and routing them through internal systems, you need to understand whether it was intended for the minister of agriculture, or the minister of food — which may not be the same one — or the minister of industry or economy. So, again, I went to Kaggle, and apparently there's a dataset with thousands of questions from India. So obviously I just built a model on top of that. You can see that the tags, the labels I'm using here, are the categories I want to use for classifying that content. Then it takes the layers of the model we've been perfecting over years for natural language processing, adds an extra layer, and uses transfer learning to build a model that's tailored — specializing that last layer on your categories. And now it's detecting the trends, the patterns, and the constructions that make one question go to human resources rather than housing.

If I predict on something like "I would like to understand the strategy for wheat and corn crops," you might assume this is related to agriculture with a very high certainty — right. It's not unlike how the Vision one works; it's just applied to a different thing. The technology behind it is very similar: take an API that's working, take a model we've perfected, then add an extra layer, use transfer learning, and fine-tune it to your use case.

"Can you repeat that with the microphone?" Yeah, thank you. "He's going to be around as the expert if you have any more questions, and you can also go to the app and read about the talks. Thank you."

And also, I'm available on Twitter and LinkedIn — Pablo Carlier, again. It was very nice to meet you; it was a pleasure. Thank you very much.