The next panelist is Dr. Graham Williams. You may have seen him on the panel on the first day. He will talk about ML Hub and accessible machine learning. OK, thank you for joining us at the end of this fairly long, and I must say pretty exciting, day. It's always fun at these conferences: there are lots of idiosyncratic projects that we hear about in the open source community, as well as updates on some of our favorite tools, like TensorFlow from our previous speaker. I want to talk about accessible machine learning, and this continues a theme over a very long career in open source. Most of us in open source are very keen to share what we learn and what we build with the community. My day job is as a machine learning and artificial intelligence researcher. I did my PhD in the 1980s in AI and machine learning, developing new algorithms for something called decision trees, and I actually developed the concept of ensembles of decision trees. I've been an ensembles person ever since, building multiple models and combining them. I've also been an educator, teaching in universities for many years. I still teach: I'm an adjunct professor at the Australian National University and the University of Canberra, and I do some teaching here in Singapore as well. So I'm really keen on demystifying many of the complexities that we often fear and often see in machine learning and AI. What I want to talk about today is a continuation of that theme: how do we make AI and machine learning more accessible to everyone? How do we empower everyone with this kind of technology?
Some of you may know of some of my earlier work, as I was discussing with some in the audience earlier, around toolkits, or products effectively, that I developed, such as Rattle in the R community. It's a graphical user interface for building your very first machine learning model in four clicks or thereabouts: loading in some data, building your very first model, a decision tree, and exploring and understanding what decision trees are in a machine learning context. More recently I've developed the ideas further, particularly in the context of teaching. My most recent book, The Essentials of Data Science, introduces a template approach that practicing data scientists like myself and my team use on a day-to-day basis for the projects we get involved in. We publish openly on GitHub a collection of templates, essentially scripts in R and Python, that we use as the starting point for any data science project we do. And that leads to what I want to introduce today, which is mlhub.ai, a repository of pre-built machine learning models. Today we are probably in the fourth surge of artificial intelligence, maybe the fifth. I wasn't alive for the first, the 1956 Dartmouth meeting that created the whole field of artificial intelligence, but over my career I've seen three or four surges of interest in AI. The surge we're in at the moment is characterized by massive amounts of data combined with massive amounts of compute that is available to everyone today. The supercomputers we were using in the 90s are now readily available to all of us in the cloud, and relatively cheaply. Yes, if you're getting into massive GPUs, and some of the compute that we need runs for weeks on end, that does get very expensive.
But for smaller projects, firing up massive compute on the cloud for short periods of time to do some analysis is becoming much more accessible to everyone. In this current surge of AI we are building some fairly complex and incredibly useful models in computer vision and in the language and audio areas, areas characterized by massive amounts of essentially numeric data. An image is numeric data, audio is numeric data, and so on. That's where neural networks are really doing some fantastic work. We don't actually understand what the neural networks are doing underneath, and my purist AI background says, hey, we're not really discovering new knowledge here. But what we are doing is something incredible. It looks like magic, and it is producing really good models. Some of these models take weeks of GPU time to build. In Microsoft, some of the language translation models, the text processing and speech-to-text models, and some of the image processing models that we've built, where we are seeing above-human results on some of these tasks, take multiple weeks on many GPUs to compute, using TensorFlow and CNTK type technologies underneath. So the question is: should we be sharing these models, and how can we share these pre-built models more freely amongst the community? There are a number of efforts underway to figure that out. My background in the open source community goes back to the 80s. If you were at the panel, you would have heard me talk about how I explored ideas for packaging Emacs packages and making them freely available as tar files on the internet, developing a repository. Then with LaTeX and TeX and the CTAN repository, I got involved in that in the early days as well. And then of course Debian developed its concept of packaging systems. At the time it was really, wow, this is fantastic stuff.
The Debian folks really got on top of how we should be packaging and sharing open source products in an easy way through a repository, and of course we've seen that repeated over and over again. So now I come to this: how can we do something similar for pre-built machine learning models? How can we get them out there for people to access, and share the models that we build as data scientists and machine learning researchers really easily amongst the community? Make them accessible and freely available so that others can take those models and build upon them. And we have technology now that allows us to extend and build on the models that we've published and provided. So that's part of the motivation of ML Hub. Another part is that, as an educator, I often want students to come up to speed with technologies very quickly. The whole world is not really about neural networks. A lot of the work we do as data scientists still uses traditional machine learning algorithms. Decision trees are still widely used in a lot of enterprises today, whether as random forests or gradient boosting type algorithms. It's good technology: decision trees have been around since the 1970s, neural networks since the 1950s, and the technology is solid and widely used. We wanted to communicate that kind of technology very quickly as well, and to show that, while the mathematics behind it may have some complexities, there's a level of understanding of how machine learning actually works that we can gain fairly quickly through hands-on experience. And we wanted that experience to be a five-minute experience. I guess this also reflects my lack of staying power. I often see new projects; somebody shows me their GitHub repository. Often at a conference like this I'm sitting in the audience while they're presenting a new algorithm.
And they say, hey, we've got this on a GitHub repository. During the presentation I go to the GitHub repository, download it, try to compile it; oh, I need this and that, and I chase the dependencies. And I give up if it takes more than five minutes. Maybe that says more about me than about the software. But gee, I really like the experience of being able to just take something and try it. This is true in the Linux world with package managers: I can apt-get install a package, try it in five minutes, and if I like it I'll keep it; if I don't, I move on. It's that kind of concept I want to capture in ML Hub as well. There are a number of efforts around already to create repositories of pre-built models. There's a couple I've got on the screen there. ModelDepot.io is a really nice open source attempt to do this, with a nice graphical user interface. At Microsoft we have our own gallery.azure.ai, with a whole bunch of pre-built AI-type models, some of which you can access via APIs to the cloud. The first one, ModelDepot.io, gives you some API interfaces, pretty much aimed at developers. The extra thing we wanted to do with ML Hub was to make it accessible at the command line to anyone, not just developers. Maybe it's another sign of my heritage that it is, at its basis, a command line tool at the moment; everything I'm doing here is controlled through the command line. Now, I would like to encourage anyone who's got a computer in front of them to go through this as I'm going through it here, and it would be a nice experiment to see how much difficulty you hit. The first difficulty is that the tool is currently fairly focused on running out of the box on Ubuntu, particularly Ubuntu 18.04 LTS. However, you can install virtual machines and so on running Ubuntu; we test this regularly on macOS with Parallels running Ubuntu 18.04 there.
How many in the audience are actually running Ubuntu? OK, a good number. And how many are running 18.04? What are you running, 18.10, 19.04? Not quite. OK, it probably runs out of the box on 18.10 as well. So I would encourage you to try it, particularly if you've got Ubuntu. It's not going to run out of the box on macOS, unfortunately, sorry, but if you've got Parallels or VirtualBox or even a container for Ubuntu 18.04, it works on all of them. To install ML Hub, it's a pip install: pip3 install mlhub will download the latest version of ML Hub onto your local machine. Another way we often run this, which I should mention, is through the cloud, on Ubuntu servers. You can fire up an Azure Ubuntu 18.04 machine, or in particular an Azure Data Science Virtual Machine, which is an Ubuntu server we have published on Azure that contains, by default, all of the open source software that AI and machine learning researchers use. So in five minutes, push a button, and you have a Data Science Virtual Machine with Python, R, TensorFlow, CNTK, every open source package that you can imagine a data scientist using, plus a few add-ins from Microsoft. Very easy to stand up out of the box. The nice thing about the Data Science Virtual Machine is that all of the dependencies are already there for most of the models we publish with ML Hub. As for pip3 install mlhub: depending on how familiar you are with pip and the Python ecosystem, a user-level pip install will put the ml command into ~/.local/bin, so you need to make sure that's in your path. After you've installed it, you should be able to run just ml, or ml available, which will go to our ML Hub repository and tell you the models that are available at the moment. ml installed will show you the list of models that you've currently installed. Now, the installation process is fairly simple: it takes the files of the package.
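The installation steps just described can be collected into a short shell sketch. The `--user` flag and the guard around the `ml` calls are my additions so the script degrades gracefully on a machine without network access or without mlhub installed:

```shell
# Install mlhub for the current user; assumes Python 3 and pip3 are present.
# Network steps are best-effort so the sketch is safe to paste anywhere.
pip3 install --user mlhub 2>/dev/null || true

# A per-user pip install puts the 'ml' entry point under ~/.local/bin,
# which is not always on PATH by default.
export PATH="$HOME/.local/bin:$PATH"

# Query the repository and the local installation, if 'ml' is available.
if command -v ml >/dev/null 2>&1; then
  ml available || true   # models published in the default mlhub.ai repository
  ml installed || true   # models installed on this machine
fi
```

On the Azure Data Science Virtual Machine the dependency story is simpler, but the same three commands apply.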
You can think of this like installing a Linux package. It takes the contents of the package, unpacks it, and puts it into a ~/.mlhub folder under the package name. A very, very simple mechanism; we wanted to keep it really simple to avoid too much complexity. The ml command then works with what's been extracted into that package folder. So, when you typed ml available, did anyone actually have success just then? Great, we've got a few hands coming up; do feel free to give it a go. ml available lists the collection of packages available on this particular repository. You can point ML Hub at any repository and it will list the packages available there. And there's a collection of sample models that we use to illustrate ML Hub. Now, the hello-world example is the rain package. So let's go through that: ml install rain will go to the repository and install the rain package. The first trick to explain here is that we actually decided not to create packages. I'm installing a package, but we don't go out and create one; the packages are created dynamically from GitHub or GitLab or Bitbucket, whatever your favorite Git repository is. And it's all based on YAML configuration files that specify what has to happen for ML Hub. At first we were packaging stuff from GitHub into a zip or tar file and storing that in a repository. That turned out to be quite a losing game: why take stuff out of GitHub and put it somewhere else, rather than taking it directly from GitHub and building the packages dynamically? So you can see here it's actually getting the code from my Git repository, a package called rain, downloading that GitHub repository and unzipping it into ~/.mlhub/rain. That's the default behavior of ml install; that's all it does. Another thing we've been careful to do is to always give you a guide on what to do next.
I never like tools where, OK, we've done something, now what do I do next? Go look it up in the manual? So we've tried to make this as user-friendly as possible in the sense of telling you where to go. One of the nice things about Ubuntu these days is that if you mistype a command, it will say: maybe you meant this command, or this command is not available, install it with this apt command, and so on. It's that kind of thing. So here it suggests ml readme: if you say ml readme and the name of the package, it will show you a little introduction to that package, and in particular it should give you a link to the actual GitHub repository as well. For those who are online, have a look at that GitHub repository and you'll get a sense of what it takes to turn a GitHub repository into an ML Hub package; we'll talk a little about that as time permits. The next command it suggests is ml configure. Dependencies are a real issue and a real struggle; that's the real problem we're trying to solve effectively: how do we make sure we've got the proper dependencies so that we can run this? We have a system for specifying those dependencies, whether they are Python dependencies, operating system dependencies, and so on, and whether you're using pip install, a conda environment in Python, or an R environment with a local library of R packages. We're looking at simplifying, or handling, all of that. So the next command you would run is ml configure. Once that's gone through: this package actually uses Atril, which is just a PDF viewer, plus a collection of packages that it needs from R, so it's an R model in this example. The configure step ensures that all the dependencies are actually there, available and installed. The next command it suggests is ml commands rain, which tells you what commands are provided by this package.
So we've got four commands here. Demo: every package should have a demo command, and to be honest, it's just a demo.py or demo.R script that it finds in that GitHub repository; that's all the command is. Then we've got print.R, display.R, and score.R from that repository. The next thing I want to do is run the demo for rain: ml demo rain. I might just swap over to actually showing this, if that's visible; hopefully you can see that as well. I've already installed it, so I won't install it again. Let's jump right to ml demo rain. Now, a little bit of description. I would use this to explain to people what a decision tree model is: a very simple machine learning model, predicting whether it's going to rain tomorrow based on historic data. Very simple data set, very simple example, but this is a pre-built model. The concept is that we have a pre-built model here that predicts whether it's going to rain tomorrow, and we apply that pre-built decision tree to some actual data, getting the actual results and the predicted results. You can see it's mostly getting the right answer, so the model is doing OK. Then I introduce the concept of a confusion matrix, a measure of the performance of the model. We can see it's getting an overall error rate of 25%, and an average class error of 25%. It's kind of OK; not too bad, a 75%-accuracy type of model. Here's a performance evaluation chart, and we might explain that a little bit. And that's the end of the demo. However, there were some other commands, and it says next we might want to print. So, ml print, and maybe the choice of names could be better: print gives a textual explanation of the model. I won't go into any detail, except to say that the textual structure you see there is just a text representation of a decision tree model. It discovered that model; that's the model we're using. Some people prefer a more visual version of it.
So that will pop up a graphical version of that textual model. Here's the model that we've pre-built on some historic data. It says: if the humidity at 3pm today is greater than or equal to 67, then the chance of it raining tomorrow is 73%. Similarly, down the other path: if the humidity is lower but we've got a lot of sunshine today, the chance of it raining tomorrow is only 16%, so we predict that it won't rain tomorrow, with 84% confidence, and so on. That's a decision tree model; it built that model on some historic data and made those decisions. Often we explain the knowledge discovery, or what is most impactful on the decision here of whether it rains tomorrow. We have these plots, and they say humidity at 3pm is the most important variable. Not so interesting for a single decision tree, but very useful when we have something called a random forest or other ensemble approaches. And then this particular package also has a score command. So we say ml score rain, and that just extracts the variables from the decision tree and asks me for the values of some of those variables. Let's put some random numbers in; I've got no idea what I'm doing here, it's probably a bit too big. And it says: I predict the chance of rain tomorrow to be 43%. If I run that again and put in some different numbers... it's the same. Let's try something dramatically different. OK, it's not really showing the effect; there are other pathways through that tree to get down to 16%. Different numbers will take you down different pathways in the tree: this one predicts a 43% chance of rain tomorrow, this one 16%. And then a nice message at the end: thank you for exploring the rain model, there are no other commands to explore here. So if we quickly go back, we've just gone through all of that. That's a fairly simple model; that's the hello-world model.
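The whole rain walk-through above can be collected into one script. The command names follow the talk; every step is best-effort so the script is safe to paste on a machine without mlhub installed:

```shell
# The rain "hello world" sequence, end to end.
if command -v ml >/dev/null 2>&1; then
  ml install rain   || true  # fetch the package into ~/.mlhub/rain
  ml readme rain    || true  # short introduction plus the repository link
  ml configure rain || true  # install the R package dependencies
  ml commands rain  || true  # lists demo, print, display, and score
  ml demo rain      || true  # apply the pre-built decision tree to sample data
  ml print rain     || true  # textual representation of the tree
  ml display rain   || true  # graphical version of the same tree
fi
```

ml score rain is the one interactive step, prompting for variable values such as the 3pm humidity, so it is left out of the batch script here.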
The next one is colorize, a pre-built TensorFlow model and one of the traditional examples of deep learning: it takes black-and-white photos and colorizes them, turning them into color photos. The model itself was trained on thousands of photos; the readme will probably tell us. In this case you would do ml install colorize, and then ml readme colorize gives a little bit of an explanation. This was a model built by one of my colleagues in China. There's not a lot of extra detail there, but if you go to the website he's probably got more of the details. It's a TensorFlow deep learning model: thousands of black-and-white and color example photos, used to build a neural network model that can take any black-and-white photo and colorize it. So, ml install colorize, which you should be able to do, and then ml configure colorize. Now, I say that within five minutes you should be able to get the demo up and running; if you don't have all the dependencies there, it may take more than the five minutes. If you don't have TensorFlow installed, configure will install TensorFlow, and that's not a trivial process. So it installs the dependencies, and then we go: ml demo colorize. The aim here is: what could wow you within five minutes? This goes through a collection of black-and-white photos that it provides in the package and colorizes them immediately for us. This is not canned; it is taking those black-and-white photos and colorizing them as we're running it here. I'll close that one and bring up the next one. You can see it's doing a reasonable job of colorizing a bunch of black-and-white photos. I think this particular model has a bias towards water and green scenery, I must admit, but it does a reasonable job on most of these.
I'm just pressing Control to close each graphic window, and it goes on to the next one; it has hence colorized a whole bunch of images for me using that pre-built model. An ml install of this actually downloads his code, basically his demo.py script, from his GitHub repository, and the configure step also downloads the binary model that he has already built in TensorFlow; it all runs on this local machine here. That model could actually come from anywhere, and there are TensorFlow-based model repositories; it could come from the TensorFlow repository. Similarly, if you know the ResNet models, we've got those to download as well; we've got some examples of that. We'll probably run out of time to see them, but we'll see how far we go. Now, I won't do print colorize, it doesn't really give us much information, but if I do ml readme colorize, let's just open this link in the browser. This goes to the GitHub repository, and you can see it's got a bunch of the usual stuff, but there's this mlhub.yaml file, and that's the key for turning any repository into an ML Hub package. The aim is to be non-burdensome and simple: anyone with a GitHub repository can create an mlhub.yaml file to specify what is needed, possibly also creating demo.py and particularly score.py, and that turns it into an ML Hub package. ML Hub will look at that GitHub repository, and in fact on the ML Hub command line, with ml install, you can give it the path to the GitHub repository and it will look for the mlhub.yaml file, download that file, and follow the instructions in there. If you're online you can have a look at this yourself, but if you have a quick look, it's a specification, a configuration.
You can see there are the dependencies, if that's big enough to see, for this particular package. He is using TensorFlow, so we have to make sure we've got TensorFlow installed, plus a whole bunch of other system dependencies, plus the files from the repository that we also need to download. We don't have to get the whole repository, just what's required to be able to demo this particular package. So you can see a list of the files and folders there, plus you can specify downloads: here's the model that we're fetching. He's got this in a store somewhere; that's the model that he's built, an HDF5 model, and we just store it locally on this machine once we download it. ML Hub will try to be a little bit clever about these downloads when you get a new version from his repository: it won't necessarily re-download the actual binary model file unless it needs to. So that's what's required to create an ML Hub package: have an mlhub.yaml file in your repository. Now, notice another command he's got there: score. Actually, let's make sure I'm not skipping over too much. So that's the colorize package; we've seen the demo and we've talked about building the actual package, going to GitHub and having a look at the mlhub.yaml file. Now, some GitHub repositories have multiple pieces of functionality, or multiple commands if you like. Before that, let's look at ml score for colorize. You can provide an image here: any black-and-white photo that you might have, or anything you might find on the internet. I think we've got an example here of a file to colorize, yes. So I'll just grab some picture from the internet and do that. It downloads the picture, colorizes it, and pops up the result of that colorization. It's not dramatic, but it does do the task.
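As a rough illustration only, an mlhub.yaml along the lines described might look like the following. The field names here are my guesses reconstructed from the talk, not the verified ML Hub schema, so check the actual colorize repository for the real layout:

```yaml
# Hypothetical mlhub.yaml sketch for a colorize-style package.
# Field names are illustrative, not the verified ML Hub schema.
meta:
  name: colorize
  title: Colorize black-and-white photos with a pre-built TensorFlow model.
dependencies:
  system: atril          # viewer used to display the results
  pip: tensorflow        # deep learning runtime the model needs
  files:
    - demo.py            # entry point for 'ml demo colorize'
    - score.py           # entry point for 'ml score colorize'
download:
  - model.h5             # the pre-built binary model, fetched from a store
commands:
  demo: Colorize the sample black-and-white photos shipped with the package.
  score: Colorize a user-supplied photo, folder, or URL.
```

The point of the file is exactly what the talk describes: list only the files and downloads needed to demo the package, so ML Hub never has to clone the whole repository.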
So in effect you can think of it as a command now; it's actually a small tool. If I've got a folder of black-and-white photos that I want to colorize, I just point it at the folder and it will colorize them for me. The aim, again, is a common infrastructure that we can use to get this working as quickly as possible. Another example I might illustrate with is objects: you ml install a package called objects, another demonstration of using neural networks on images, this time to identify objects in photos. You do ml install, ml readme, ml configure objects, and then you get to ml demo objects, so let's jump straight to the demo. This uses the ResNet-152 model. I won't go into the details of what that is; we can go to the GitHub repository and see what ResNet and the 152 mean, but it's a model that will take a photo and identify the primary object in that photo. There's been competition over the years, maybe over months, between Google, Microsoft, and others, and we keep leapfrogging each other on how accurate these types of computer vision models can become. So in this example, here's a computer vision model. It's a pre-built model; I haven't built the model here, it's already built. If I remember right, this is a CNTK model, which is another deep learning framework, and it's recognizing in that image that that's an African crocodile. The green there is just the strength of that recognition. It could have been an American alligator or a Komodo dragon, but if you see the text here: 99.9% African crocodile, 0.07% American alligator, and 0.02% Komodo dragon. So that's accurately identified. And we've got a collection of examples here. That's a lynx, apparently, rather than a leopard or a snow leopard. This next one is a brambling rather than a partridge, and you can see it has slightly higher probabilities that it might have been a partridge or a water ouzel.
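The objects sequence follows the same pattern as rain and colorize, so it can be sketched as another guarded script. The image path in the score step is a placeholder of mine, not a file shipped with the package:

```shell
# Identify the primary object in photos with the pre-built ResNet-152 model.
# Best-effort: every step is skipped or tolerated without mlhub installed.
if command -v ml >/dev/null 2>&1; then
  ml install objects   || true
  ml configure objects || true
  ml demo objects      || true   # runs over the bundled sample photos
  ml score objects "$HOME/photos/crocodile.jpg" || true  # hypothetical image
fi
```

Scoring accepts your own images or URLs from the internet, which is what turns the demo into a small reusable tool.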
That's a liner; it could have been a dock or a planetarium, but that's pretty unlikely. This one's a sports car, or a racer with some degree of probability, or a convertible, and so on. And that's a so-called indri; it could have been a Madagascar cat or a koala, but fairly unlikely. And there's a summary of them all. So there's that five-minute wow, or hmm, looks interesting but I'm not particularly interested, so I move on having only spent five minutes of my time. That's the intent of publishing the model. And of course, this also has an ml score function. Again, if I do the readme for objects and open up the GitHub repository, I think we've got further examples there, yes. So I can score some random images, your own images if you like, or images from the internet: copy that, paste it in there, whoops, lost the M. That downloads the image, whatever it is, from the internet, runs the ResNet model over it, and it comes up and tells us: that's a damselfly, could have been a dragonfly with a little bit of probability in that instance, but in this case it's clearly identified as a damselfly. OK. One last example. I was saying that we look for the mlhub.yaml file in GitHub and that defines an ML Hub package. We can have multiple packages within the one repository. At Microsoft we're developing a collection of recommendation tools, recommendation packages, to support the Netflix-type recommenders, food recommenders, and so on. We're collecting together a framework for dealing with recommendations. It's all open source, and it's built on our experience working with a large number of customers, some of the big customers that you know of, who are developing this type of technology. We take the algorithmic side and the process side of that and turn it into GitHub repositories of what we call best practice, shared openly on GitHub.
One of them you will find in my repository, called recommenders; the original is Microsoft/recommenders, and I've got a fork of it. Now, there are multiple recommender algorithms: there's SAR, the Smart Adaptive Recommender; there's RBM, the restricted Boltzmann machine; there's ALS, alternating least squares; and so on. So there are multiple packages, if you like, in this single repository. All that I've done to the Microsoft recommenders repository is add a folder, in this case with two YAML files, rbm.yaml and sar.yaml. So I can now install either an RBM model or a SAR model from this one repository, and indeed we can do ml install sar, which will download the appropriate files. If we look at one of the YAML files, rbm.yaml for example, you can see that it specifies the actual files that it downloads: rbm.py becomes demo.py in the package, hopefully you can see that, and rbm.md becomes the readme in the package, things that ML Hub knows about. And then there's some data that it also downloads. So if we run this demo: this is a movie recommender using public data, the MovieLens data set; if you're in the recommender space, that's the data that is often used. We're loading the data now; it takes a little while, it's a fairly large data set, 100,000 ratings. It's loaded, and it's showing us the first few data items: you can see a user, the movie they watched, the rating they gave, and the title of the movie. In this case I'm actually constructing the model now and applying it to this data set. Then, as an example, I use it for one particular user: these are the top five movies that that user has watched and rated. The images are just downloaded dynamically from IMDb.
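Since each YAML file in the repository defines its own package, the two recommenders described above can be installed independently. A sketch, with the same best-effort guard as before:

```shell
# One Git repository, several ML Hub packages: rbm.yaml and sar.yaml each
# define a package, so either model can be installed on its own.
if command -v ml >/dev/null 2>&1; then
  ml install sar || true   # Smart Adaptive Recommender
  ml demo sar    || true   # loads MovieLens, builds the model, recommends
  ml install rbm || true   # restricted Boltzmann machine, same repository
fi
```

The YAML files do the renaming (rbm.py to demo.py, rbm.md to the readme), so the one repository can serve as many packages as it has YAML files.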
And these are the movies being recommended by the recommendation system, the model we've just demonstrated; again, the images are downloaded from IMDb for the demo. So again: wow, that's great, or not very interesting, and move on to something else. We also report model performance in the framework: if you're familiar with recommenders, MAP is the usual discriminator, but we've got precision, recall, and so on there. So that's basically what we wanted to see in a little bit of time. We've also got face detection models there; again, a lot of the traditional examples of the more complex neural network, deep learning models are included in ML Hub. You can use it live: ml live with the face-detect package starts up the camera on your computer and it will follow your face around live. And there's a variety of other packages available. So, just to finish: mlhub.ai is where you should be able to find all the information. It's still a little experimental at the moment, and I'm really keen to get feedback and comments, whether it's "this is a crazy idea, stop it now and move on to something more interesting", or "hey, this looks interesting, we'd like to contribute a model; how can we do that?" I'd love to work with anyone who's interested, and contributions are very much welcome. Thank you very much. So, some questions. Hi, I have two questions. First: other than doing demos of various models, do you have a use case for doing ensembles of models somehow, to leverage all of the underlying models? In terms of the tool itself, it can be used as ml score and then a model name, and you can provide a data file to apply the model to. You could orchestrate multiples of these, save the output, and ensemble it that way.
Or you can package it up to demonstrate the concept of an ensemble of different types of models, or an ensemble of the same type of modelling approach, such as an ensemble of decision trees. So it's not excluded, but it's not directly supported. And sorry, just a quick second question: are your slides online somewhere? They're not at the moment, but I will put them up; keep an eye on mlhub.ai and I'll put them there. Thank you for your presentation, just a quick question. You said that contributions of packaged models are welcome, but what are your criteria for curating the models? So currently, and this is still being developed, we moved away from the concept of curation when we moved from taking the code out of GitHub and putting it into our own zip file, to taking it from GitHub itself. Now, in terms of our package repository: in ML Hub, when you say ml available, it gives you a list of the packages, and that list is curated; that's where our curation takes place. However, you can install models from anywhere you like, with ml install and the GitHub location of the YAML file; anyone can install a model that way. The ones that we put into our repository we are curating: we're ensuring that they look OK, they work OK, and hopefully there are no security issues in those models. We review the code, so we are confident in the ones we're curating in that list. Underneath, when you do ml available, you'll have seen a list of models; I don't know if I've got that on the next slide, no. Each of them is just a map to a GitHub repository where the model actually lives, a shorthand for the command line that says ml install and then the whole GitHub path. That's what we're curating, but there's nothing to stop anyone creating their own ML Hub model and pointing anyone else to that model through the ml command. I think that's the end. Thank you very much. Thank you.
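The curation answer above boils down to two install styles, which can be sketched as follows; the direct repository path is illustrative, not a real package:

```shell
# Two ways to install an ML Hub package:
#   1. a curated short name that ML Hub maps to a GitHub repository;
#   2. a direct path to any repository containing an mlhub.yaml file.
if command -v ml >/dev/null 2>&1; then
  ml install rain                               || true  # curated short name
  ml install https://github.com/example/mymodel || true  # hypothetical path
fi
```

Only the short names in the ml available list are reviewed; anything installed by direct path is the publisher's responsibility.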